
Researchers Build Uncertainty Calibrator For AI Models


Researchers have developed an improved version of a method to predict uncertainty for AI models making decisions.

Written By
David Curry
Jun 26, 2023

Due to the rapid growth in the size and complexity of artificial intelligence models, it has become harder to judge why a model made a decision and whether that decision was correct. The challenge is even greater for researchers working on these models, because most AI systems have no good sense of how accurate their own decisions are.

Generative AI has exposed this deficiency most clearly, with chatbots regularly “hallucinating” information to provide an answer to the end user. Most of the time, the chatbot is unaware it made up the information, and user input is required to correct the mistake. Self-driving cars are another example, with algorithms sometimes failing to recognize pedestrians or objects that humans reviewing the video footage can see clearly.


Researchers at the Massachusetts Institute of Technology and the University of California, Berkeley have developed a new method for developers building AI inference algorithms, one that can propose multiple explanations for the data and estimate the accuracy of each explanation.

The method improves on sequential Monte Carlo (SMC), a mathematical approach widely used as a tool to calibrate uncertainty in AI models. The issue with SMC is that, because its proposal strategies are so simple, it is not an accurate solution for multimedia models, such as generative AI platforms and self-driving cars, that incorporate images and video.
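To see what standard SMC does, the following is a minimal bootstrap particle filter: a population of candidate explanations (“particles”) is moved forward under a simple prior, weighted by how well each explains the data, and resampled. This is a generic textbook sketch, not code from the paper; the model (a 1-D Gaussian random walk with noisy observations) and all names are illustrative.

```python
import random
import math

def smc_bootstrap(observations, n_particles=500, obs_std=1.0, step_std=0.5, seed=0):
    """Minimal bootstrap sequential Monte Carlo (particle filter).

    Tracks a 1-D latent state under a Gaussian random-walk prior,
    weighting particles by how well they explain each observation
    and resampling to keep computation on plausible explanations.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    for y in observations:
        # Propose: move each particle under the simple random-walk prior.
        particles = [x + rng.gauss(0.0, step_std) for x in particles]
        # Weight: likelihood of the observation under each particle.
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample: duplicate high-weight particles, drop low-weight ones.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    # Posterior-mean estimate of the latent state after the last observation.
    return sum(particles) / n_particles

estimate = smc_bootstrap([0.9, 1.1, 1.0, 1.2])
```

The weakness the researchers point to is visible in the proposal step: particles are moved blindly under the prior, without looking at the data, which works poorly when the data are as rich as images or video.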

The researchers extended SMC with probabilistic program proposals (SMCP3), which allows any probabilistic program to serve as the strategy for proposing explanations of data. Because arbitrary programs can be used, AI researchers can estimate the quality of explanations in more sophisticated ways and chain together multiple stages of explanation.
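A toy sketch of the general idea: instead of moving particles blindly under the prior, the proposal is itself a small program that looks at the data and proposes states near it, with the importance weight corrected by the prior-to-proposal density ratio so the estimate stays properly calibrated. This illustrates program-defined proposals in spirit only; it is not the paper's SMCP3 algorithm, and every name here is an assumption of this sketch.

```python
import random
import math

def gauss_pdf(x, mu, std):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def smart_proposal_step(particles, y, rng, obs_std=1.0, step_std=0.5):
    """One SMC step whose proposal is a data-driven program.

    The proposal peeks at the observation y and proposes states near it.
    Weights are corrected by the prior/proposal ratio, so replacing the
    proposal program does not bias the resulting estimates.
    """
    new_particles, weights = [], []
    for x in particles:
        # Proposal program: blend the prior prediction with the data.
        mu_q = 0.5 * x + 0.5 * y
        x_new = rng.gauss(mu_q, step_std)
        prior = gauss_pdf(x_new, x, step_std)        # dynamics density
        proposal = gauss_pdf(x_new, mu_q, step_std)  # proposal density
        lik = gauss_pdf(y, x_new, obs_std)           # observation likelihood
        new_particles.append(x_new)
        weights.append(lik * prior / proposal)
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(new_particles, weights=weights, k=len(particles))

rng = random.Random(1)
particles = [rng.gauss(0.0, 1.0) for _ in range(300)]
for y in [1.0, 1.1, 0.9]:
    particles = smart_proposal_step(particles, y, rng)
posterior_mean = sum(particles) / len(particles)
```

In SMCP3 the proposal program can be far richer than this one-line blend, for example a deep neural network or a multi-stage procedure, while the weighting correction keeps the uncertainty estimates honest.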

“Today, we have lots of new algorithms, many based on deep neural networks, which can propose what might be going on in the world, in light of data, in all sorts of problem areas,” said Vikash Mansinghka, principal research scientist at MIT and senior author of the paper. “But often, these algorithms are not really uncertainty-calibrated. They just output one idea of what might be going on in the world, and it’s not clear whether that’s the only plausible explanation or if there are others, or even if that’s a good explanation in the first place. But with SMCP3, I think it will be possible to use many more of these smart but hard-to-trust algorithms to build algorithms that are uncertainty-calibrated. As we use ‘artificial intelligence’ systems to make decisions in more and more areas of life, having systems we can trust, which are aware of their uncertainty, will be crucial for reliability and safety.”

Having an AI model that is better able to explain why it came to a decision will be critical for the next generation of commercial AI apps, which will operate alongside humans in workplace environments. In industries such as healthcare and transport, an algorithm may face a life-or-death situation, and proper analysis needs to be available to all parties regardless of whether the model is correct.

David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
