IBM Aims to Explain AI with New Toolkit

The toolkit uses machine learning algorithms to help organizations validate how their AI models are constructed

Written By
Michael Vizard
Aug 8, 2019

IBM today announced it is making available an open-source toolkit made up of algorithms that make it simpler to explain and interpret how artificial intelligence (AI) models really work.

While interest in AI is widespread, many of the organizations that have built an AI model to automate a process cannot precisely explain how that model works. That lack of "AI explainability" creates a host of issues, ranging from concerns that human bias has crept into a model to the difficulty of demonstrating to an auditor how a model is governed. In fact, a recent survey of business leaders published by the IBM Institute for Business Value found that 68% expect customers to demand more explainability into how AI models work within the next three years.

The IBM AI Explainability toolkit helps resolve those concerns by applying additional machine learning algorithms that validate how an AI model is constructed, using case-based reasoning, directly interpretable rules, post hoc local explanations, and post hoc global explanations, among other algorithmic approaches, says Saška Mojsilović, an IBM Fellow and co-director of IBM Science for Social Good.
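To make the idea of a post hoc local explanation concrete, here is a minimal sketch of the technique in general terms: given a trained model and a single input, estimate how sensitive the prediction is to each feature near that input. All names here are illustrative; this is not the toolkit's own API.

```python
# A minimal sketch of a post hoc local explanation: perturb each feature
# of one input and measure how much the model's prediction moves.
# Function and variable names are illustrative, not IBM's API.
import numpy as np

def local_explanation(model, x, eps=1e-4):
    """Estimate per-feature influence on model(x) by finite differences."""
    base = model(x)
    scores = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        scores[i] = (model(xp) - base) / eps  # local sensitivity of feature i
    return scores

# Toy stand-in for a trained model: a fixed linear scorer.
weights = np.array([0.5, -1.2, 0.3])
model = lambda x: float(weights @ x)

x = np.array([1.0, 2.0, 3.0])
print(local_explanation(model, x))  # for a linear model, recovers the weights
```

For a linear model the sensitivities are just the weights, which makes the behavior easy to verify; for a nonlinear model the same probe yields a locally valid explanation around the specific input.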

See also: Explaining “Black Box” Neural Networks

The initial release of the IBM AI Explainability toolkit includes eight algorithms created by IBM Research, along with metrics that serve as quantitative proxies for the quality of explanations. IBM is also encouraging other interested parties to contribute algorithms to the open-source toolkit.

The toolkit is also designed to interoperate with AI Fairness 360 and Adversarial Robustness 360, two other open-source toolkits IBM made available last year to make it simpler to create machine learning pipelines.

IBM is also including training tools and tutorials through which builders of AI models can see how the explainability algorithms created by IBM can be applied to, for example, a credit scoring model.

The fundamental issues organizations embracing any type of AI application need to be wary of are not just the accuracy of the algorithms employed, but also the quality of the data driving them. Organizations will need to verify that the data used to train an AI model has not been weighted in a way that favors one outcome over another, cautions Mojsilović.
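The kind of check Mojsilović describes can start very simply: before training, compare the favorable-outcome rate across groups in the training data to see whether the data itself already leans toward one result. The column names and data below are hypothetical.

```python
# A minimal pre-training data check: does the favorable-outcome rate
# differ sharply between groups in the training data?
# "group" and "approved" are hypothetical column names for illustration.
def outcome_rates(records, group_key, label_key):
    """Return the fraction of favorable labels (label == 1) per group."""
    totals, favorable = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        favorable[g] = favorable.get(g, 0) + (1 if r[label_key] == 1 else 0)
    return {g: favorable[g] / totals[g] for g in totals}

data = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]
rates = outcome_rates(data, "group", "approved")
print(rates)  # A: 2/3 vs. B: 1/3 — a gap worth investigating before training
```

A large gap does not prove bias on its own, but it flags data that may steer a model toward one outcome before any algorithm is even chosen.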

“There will always be malicious actors,” says Mojsilović.

Organizations will also need to validate the way AI models are constructed, because right now it is too easy to simply swap AI models in and out until an organization finds the one that drives the most desirable result for itself, rather than what may be best for a customer or business partner.

In the meantime, the inability to precisely explain how AI models work does not seem to be slowing down adoption. Organizations should still proceed with caution: lawsuits stemming from how AI models were applied are almost inevitable at this point, and regardless of how elegant the underlying algorithms might be, it is unlikely a judge or jury will be sympathetic to any defense based on technology that cannot be easily explained.
