Three Methods Researchers Use To Understand AI Decisions


Making sense of AI decisions is important to researchers, decision-makers, and the wider public. Fortunately, there are methods available to ensure we know more.

Deep-learning models of the kind used by leading-edge AI companies and academic labs have become so complex that even the researchers who built them struggle to understand the decisions they make.

This was shown most clearly to a wide audience during DeepMind's AlphaGo matches, in which data scientists and professional Go players were regularly bamboozled by the AI's decision-making, as it made unorthodox plays that were not considered the strongest moves.


In an attempt to better understand the models they build, AI researchers have developed three main explanation methods. All three are local explanation methods: they explain one specific decision rather than the behavior of an entire model, which can be challenging to characterize at full scale.

Yilun Zhou, a graduate student in the Interactive Robotics Group of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), discussed these methods in an MIT News article.

Feature attribution 

With feature attribution, the explanation identifies which parts of an input were most important to a specific decision. In the case of an X-ray, researchers can see a heatmap highlighting the individual pixels the model treated as most important when making its decision.

“Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted,” said Zhou.
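As a rough illustration, the sketch below computes a gradient-based saliency map, one common form of feature attribution. The tiny classifier and the random image standing in for an X-ray are invented for this example; they are not the model or tooling Zhou refers to.

```python
# A minimal sketch of gradient-based feature attribution (a saliency map).
# The toy classifier and random "X-ray" are hypothetical stand-ins,
# not the specific model or data described in the article.
import torch
import torch.nn as nn

model = nn.Sequential(             # toy image classifier
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),               # e.g. "tumor" vs. "no tumor"
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # placeholder X-ray

scores = model(image)
predicted = scores.argmax(dim=1).item()
scores[0, predicted].backward()    # gradient of the predicted score w.r.t. pixels

# Large gradient magnitudes mark pixels the prediction is most sensitive to;
# plotted as a heatmap, they show whether the model focused on the tumor
# or on something spurious such as a watermark.
saliency = image.grad.abs().squeeze()
print(saliency.shape)              # torch.Size([64, 64])
```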

Counterfactual explanation 

Counterfactual explanations address a different question: the person on the receiving end of a decision may not understand why the AI decided one way rather than another. As AI is deployed in high-stakes settings such as prisons, insurance, or mortgages, knowing why a model rejected an application or appeal can help an applicant gain approval the next time they apply.

“The good thing about the [counterfactual] explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn’t get it, this explanation would tell them what they need to do to achieve their desired outcome,” said Zhou.
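The sketch below illustrates the idea on a toy loan model: search for the smallest change to one input feature that flips a rejection into an approval. The scoring rule, feature names, and thresholds are invented for illustration and are not drawn from the article or any real lender.

```python
# A minimal sketch of a counterfactual explanation for a toy loan model.
# The decision rule and numbers below are hypothetical.

def approve(applicant):
    """Toy decision rule: a weighted score must clear a threshold."""
    score = 0.5 * applicant["income"] / 10_000 + 2.0 * applicant["credit_score"] / 850
    return score >= 4.0

def counterfactual_income(applicant, step=1_000, max_income=500_000):
    """Smallest income increase (holding everything else fixed) that
    flips a rejection into an approval."""
    candidate = dict(applicant)
    while not approve(candidate) and candidate["income"] < max_income:
        candidate["income"] += step
    return candidate["income"] - applicant["income"]

applicant = {"income": 40_000, "credit_score": 600}
if not approve(applicant):
    delta = counterfactual_income(applicant)
    print(f"Rejected. Raising income by ${delta:,} would flip the decision.")
```

Real counterfactual methods search over many features at once and try to keep the suggested change small and realistic, but the output has the same shape: "change this, and the decision flips."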

Sample importance

Sample importance explanation requires access to the data used to train the model. If a researcher notices what they believe to be an error, they can run a sample importance explanation to see which training samples the decision relied on most, and whether the model was fed faulty data that led to the error in judgment.
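As a rough illustration, the sketch below approximates sample importance by leave-one-out retraining on a tiny synthetic dataset: drop each training sample in turn, refit, and measure how much the prediction on the query point moves. Real sample-importance methods (such as influence functions) scale far better; this brute-force proxy is only practical because the model and data here are invented and small.

```python
# A minimal sketch of sample importance via leave-one-out retraining.
# The dataset and model are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
y_train[0] = 1 - y_train[0]            # deliberately mislabel one sample

x_query = np.array([[0.1, 0.1]])       # the decision we want to explain

base = LogisticRegression().fit(X_train, y_train)
base_prob = base.predict_proba(x_query)[0, 1]

# Retrain without each training sample and see how much the prediction moves;
# the samples that move it most are the ones the decision depended on.
influence = []
for i in range(len(X_train)):
    mask = np.arange(len(X_train)) != i
    model_i = LogisticRegression().fit(X_train[mask], y_train[mask])
    influence.append(abs(model_i.predict_proba(x_query)[0, 1] - base_prob))

most_influential = int(np.argmax(influence))
print(f"Training sample {most_influential} shifted the prediction the most.")
```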

About David Curry

David is a technology writer with several years of experience covering all aspects of IoT, from technology to networks to security.
