Researchers Build Uncertainty Calibrator For AI Models

Researchers have developed an improved version of a widely used method for estimating the uncertainty of decisions made by AI models.

Due to the rapid advancement in the size and complexity of artificial intelligence models, it has become harder to judge why a model made a decision and whether that decision was correct. For researchers working on these models the problem is even more challenging, as most AI systems have no reliable sense of how accurate their own decisions are.

Generative AI has shown this deficiency most clearly, with chatbots regularly “hallucinating” information to provide an answer to the end user. In most cases, the chatbot is not aware it has made up the information, and user input is required to rectify the mistake. Self-driving cars are another example, with algorithms sometimes failing to recognize pedestrians or objects that humans reviewing the video footage can spot easily.

SEE ALSO: Generative AI Model Speed Up Drug Discovery

Researchers at the Massachusetts Institute of Technology and the University of California at Berkeley have developed a new method for developers building AI inference algorithms, one that can propose multiple plausible explanations for the data and estimate the accuracy of each explanation.

The method is an improvement on sequential Monte Carlo (SMC), a mathematical approach widely used to calibrate uncertainty in AI models. The issue with SMC is that its strategies for proposing explanations of the data are too simple, which makes it an inaccurate solution for models that incorporate images and video, such as generative AI platforms and self-driving cars.
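For readers unfamiliar with SMC, the following is a minimal, illustrative Python sketch of its basic propose-weight-resample loop. The toy model (a hidden value that drifts and is observed with noise) and all names are our own assumptions for illustration, not the researchers' code; the spread of the surviving particles serves as the uncertainty estimate.

import numpy as np

# Minimal sequential Monte Carlo (bootstrap particle filter) sketch.
# Toy model for illustration only: a hidden value drifts with Gaussian noise
# and we receive noisy observations of it.
rng = np.random.default_rng(0)

def smc_filter(observations, n_particles=1000, drift_sd=1.0, obs_sd=2.0):
    particles = rng.normal(0.0, 5.0, n_particles)  # initial guesses of the hidden value
    for y in observations:
        # 1. Propose: each particle guesses how the hidden value moved.
        particles = particles + rng.normal(0.0, drift_sd, n_particles)
        # 2. Weight: score each guess by how well it explains the observation.
        log_w = -0.5 * ((y - particles) / obs_sd) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # 3. Resample: keep plausible explanations, drop implausible ones.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    # The spread of the surviving particles doubles as an uncertainty estimate.
    return particles.mean(), particles.std()

estimate, uncertainty = smc_filter([1.1, 2.3, 2.9, 4.2])
print(f"hidden value is roughly {estimate:.2f} plus or minus {uncertainty:.2f}")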

The researchers improved on SMC with sequential Monte Carlo with probabilistic program proposals (SMCP3), which lets developers use any probabilistic program as the strategy for proposing explanations of the data. Because all types of programs can be plugged in, AI researchers can estimate the quality of explanations in more sophisticated ways and build proposals that refine explanations over multiple stages.
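In practice, the change is roughly that the proposal step in the loop sketched above stops being a fixed random walk and becomes a pluggable program, with the particle weights corrected for whatever proposal is used. The sketch below is our own illustration of that idea under the same toy model; names such as smart_propose are hypothetical and do not reflect the authors' actual API.

import numpy as np

rng = np.random.default_rng(1)

def smart_propose(particle, y, obs_sd=2.0):
    # A data-driven proposal: pull the particle toward the observation.
    mean = 0.5 * particle + 0.5 * y
    new = rng.normal(mean, obs_sd)
    # Log-density the proposal assigned to its own guess, needed for weighting.
    log_q = -0.5 * ((new - mean) / obs_sd) ** 2 - np.log(obs_sd * np.sqrt(2 * np.pi))
    return new, log_q

def log_drift(new, old, drift_sd=1.0):
    # Log-density of the model's own transition (random drift).
    return -0.5 * ((new - old) / drift_sd) ** 2 - np.log(drift_sd * np.sqrt(2 * np.pi))

def log_likelihood(y, state, obs_sd=2.0):
    return -0.5 * ((y - state) / obs_sd) ** 2 - np.log(obs_sd * np.sqrt(2 * np.pi))

def smc_with_custom_proposal(observations, n_particles=1000):
    particles = rng.normal(0.0, 5.0, n_particles)
    for y in observations:
        new_particles = np.empty(n_particles)
        log_w = np.empty(n_particles)
        for i, p in enumerate(particles):
            new, log_q = smart_propose(p, y)
            # Importance weight: model transition * likelihood / proposal density,
            # so a smarter proposal still yields calibrated uncertainty.
            log_w[i] = log_drift(new, p) + log_likelihood(y, new) - log_q
            new_particles[i] = new
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = new_particles[idx]
    return particles.mean(), particles.std()

print(smc_with_custom_proposal([1.1, 2.3, 2.9, 4.2]))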

“Today, we have lots of new algorithms, many based on deep neural networks, which can propose what might be going on in the world, in light of data, in all sorts of problem areas,” said Vikash Mansinghka, principal research scientist at MIT and senior author of the paper. “But often, these algorithms are not really uncertainty-calibrated. They just output one idea of what might be going on in the world, and it’s not clear whether that’s the only plausible explanation or if there are others, or even if that’s a good explanation in the first place. But with SMCP3, I think it will be possible to use many more of these smart but hard-to-trust algorithms to build algorithms that are uncertainty-calibrated. As we use ‘artificial intelligence’ systems to make decisions in more and more areas of life, having systems we can trust, which are aware of their uncertainty, will be crucial for reliability and safety.”

Having an AI model better able to explain why it came to a decision is going to be critical for the next generation of commercial AI applications, which will operate alongside humans in workplace environments. In industries such as healthcare and transport, an algorithm may face life-or-death situations, and a proper analysis of its reasoning needs to be available to all parties, regardless of whether the model is correct.

David Curry

About David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
