
MIT Researchers Create Explanation Taxonomy for ML Models


For non-technical staff, the lack of understanding and the obfuscation by some developers is one of the leading causes of under-utilization of machine learning services in the real world.

Written By
David Curry
Jan 18, 2023

Machine learning models are growing in complexity year after year, to the point that even the researchers working on them sometimes have trouble understanding the decisions they make.

For non-technical staff, the lack of understanding and the obfuscation by some developers is one of the leading causes of under-utilization of machine learning services in the real world. In finance, healthcare, and logistics, businesses are attempting to build AI into their decision-making processes, but they are finding that decision makers often reject or doubt AI systems because they do not understand which factors the AI used to reach a particular observation or decision.

SEE ALSO: Recommender Systems: Why the Future is Real-Time Machine Learning

Take, for example, a physician who receives a prediction from an AI model about a patient’s likelihood of developing cardiac disease. The physician may want to understand which factors were most prominent in the AI’s inference in order to better tackle the issue.
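
To illustrate the kind of insight such a physician is after, the minimal Python sketch below (not taken from the MIT paper) surfaces which inputs most influence a trained classifier, using scikit-learn's permutation importance. The feature names and data are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Invented cardiac-style features and synthetic data, for illustration only.
rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "cholesterol", "smoker", "bmi"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic label loosely driven by the first three features.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} {score:.3f}")
```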

Researchers at MIT have been working on a solution to this issue by building a taxonomy that covers all of the different types of people who interact with an ML model. With the paper, titled “The Need for Interpretable Features: Motivation and Taxonomy”, the researchers aim to push forward the idea that explainability should be built into a model from the start, not retroactively added once the model is made available for general use.

“We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself,” said Alexandra Zytek, a PhD student at MIT and lead author of the paper.

The taxonomy covers how best to explain and interpret different features, but also how to transform hard-to-understand features into formats that are easier for non-technical users to grasp. With this, end users should be more inclined to trust the decisions an ML model makes, while also being able to more accurately describe why it came to a particular decision.

Several ML processes introduce obfuscation and added complexity that are difficult to convert into easy-to-understand formats. For example, most ML models cannot process categorical data unless it is converted into numerical codes, but without converting those codes back into digestible values, nobody outside of machine learning experts can follow what is happening. The paper discusses some best practices for converting such features back into understandable formats.
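
As a rough sketch of the round trip involved (the library, column values, and encoder choice here are illustrative, not prescribed by the paper), scikit-learn's OrdinalEncoder can turn categories into the numeric codes a model expects and then map those codes back to readable labels for reporting:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

# A toy categorical column; real datasets would have many such columns.
occupations = np.array([["nurse"], ["teacher"], ["driver"], ["nurse"]])

# Encode categories as numeric codes for the model
# (alphabetical order: driver=0, nurse=1, teacher=2).
encoder = OrdinalEncoder()
encoded = encoder.fit_transform(occupations)
print(encoded.ravel())        # [1. 2. 0. 1.]

# After training or inspecting the model on the codes, inverse_transform
# recovers the human-readable categories for explanations and reports.
readable = encoder.inverse_transform(encoded)
print(readable.ravel())       # ['nurse' 'teacher' 'driver' 'nurse']
```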

“In a lot of domains, the tradeoff between interpretable features and model accuracy is actually very small. When we were working with child welfare screeners, for example, we retrained the model using only features that met our definitions for interpretability, and the performance decrease was almost negligible,” said Zytek. 

The next step for the MIT researchers is developing a system that lets developers handle feature-to-format transformations faster, which should improve the time-to-market for ML models.

David Curry

David is a technology writer with several years’ experience covering all aspects of IoT, from technology to networks to security.
