IBM Aims to Explain AI with New Toolkit

The toolkit uses machine learning algorithms to help organizations validate how their AI models are constructed

IBM today announced it is making available an open-source toolkit of algorithms that make it simpler to explain and interpret how artificial intelligence (AI) models actually work.

While interest in AI is widespread, it turns out many of the organizations that have created an AI model to automate a process cannot precisely explain how that model works. That lack of “AI explainability” is creating a host of issues, ranging from concerns that human bias has crept into a model to the difficulty of demonstrating to an auditor how an AI model is governed. In fact, a recent survey of business leaders published by the IBM Institute for Business Value found 68% expect customers to demand more explainability of how AI models work within the next three years.

The IBM AI Explainability toolkit helps address those concerns by applying AI in the form of additional machine learning algorithms that validate how an AI model is constructed, using case-based reasoning, directly interpretable rules, and post hoc local and global explanations, among other approaches, says Saška Mojsilović, an IBM Fellow and co-director of IBM Science for Social Good.
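
To make the idea of a post hoc global explanation concrete, the sketch below fits a shallow decision tree to the predictions of an opaque model so that the tree's rules summarize the model's behavior. It is a minimal, generic illustration using scikit-learn on synthetic data; the model choices and feature names are assumptions of this example, and it does not use the IBM toolkit's own API.

```python
# Minimal sketch of a post hoc global explanation: a black-box model's
# predictions are approximated by a small, directly interpretable surrogate
# tree. Generic illustration only; not the IBM toolkit's API.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Train an opaque "black box" model on synthetic data.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit a shallow surrogate tree to the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate mimic the black box?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")

# The surrogate's rules serve as a human-readable global explanation.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```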

See also: Explaining “Black Box” Neural Networks

The initial release of the IBM AI Explainability toolkit includes eight algorithms created by IBM Research, along with metrics that serve as quantitative proxies for the quality of explanations. IBM is also encouraging other interested parties to contribute algorithms to the open-source toolkit.
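
As an illustration of what a quantitative proxy for explanation quality can look like, the sketch below implements a simple faithfulness-style score: features an explanation ranks as important should, when replaced by a baseline value, move the model's output the most. The function, its name, and its arguments are assumptions for illustration only; this is not one of the toolkit's eight algorithms or its metrics.

```python
# Illustrative "faithfulness"-style proxy for explanation quality: correlate
# each feature's attributed importance with the drop in the model's predicted
# probability when that feature is replaced by a baseline value.
import numpy as np

def faithfulness_proxy(predict_proba, x, attributions, baseline):
    """x, attributions, baseline: 1-D NumPy arrays of equal length.
    predict_proba: a model's probability function, e.g. sklearn's predict_proba."""
    base_score = predict_proba(x.reshape(1, -1))[0, 1]
    drops = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]            # "remove" feature i
        drops.append(base_score - predict_proba(x_pert.reshape(1, -1))[0, 1])
    # High positive correlation suggests the explanation tracks the model.
    return np.corrcoef(attributions, drops)[0, 1]
```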

The toolkit is also designed to interoperate with AI Fairness 360 and Adversarial Robustness 360, two other open-source toolboxes IBM made available last year to make it simpler to create machine learning pipelines.

IBM is also including training tools and tutorials that show builders of AI models how the explainability algorithms can be applied to, for example, a credit scoring model.
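
In the same spirit as a credit scoring walkthrough, the sketch below trains a directly interpretable logistic regression on toy applicant data and reads off a local explanation for a single applicant from its standardized feature values and coefficients. The feature names and data are hypothetical, and this is not IBM's tutorial code.

```python
# Generic local explanation for a toy credit scoring model. Feature names and
# data are hypothetical; not taken from IBM's credit scoring tutorial.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["income", "debt_ratio", "late_payments", "credit_history_years"]

# Toy training data: rows are applicants, label 1 = approved.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] - X[:, 2] + 0.5 * X[:, 3] > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Local explanation for one applicant: each feature's contribution to the
# log-odds is its standardized value times the model coefficient.
applicant = X[0]
contributions = scaler.transform(applicant.reshape(1, -1))[0] * model.coef_[0]
for name, contrib in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>22}: {contrib:+.3f}")
```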

Organizations that embrace any type of AI application need to be wary not only of the accuracy of the algorithms being employed but also of the quality of the data used to drive those algorithms. In particular, they need to make sure that data has not been weighted in a way that favors one outcome over another, cautions Mojsilović.
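
One basic sanity check on that point is to compare favorable-outcome rates across groups in the training data before a model ever sees it. The sketch below assumes a pandas DataFrame with hypothetical "group" and "outcome" columns.

```python
# Quick check for training data skewed toward one outcome for one group.
# Column names ("group", "outcome") are hypothetical placeholders.
import pandas as pd

def outcome_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return the share of favorable outcomes (outcome == 1) per group."""
    return df.groupby(group_col)[outcome_col].mean()

df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1,   1,   0,   0,   0,   1,   0],
})
rates = outcome_rate_by_group(df, "group", "outcome")
print(rates)                       # group A: 0.67, group B: 0.25
print(rates.max() - rates.min())   # a large gap warrants a closer look
```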

“There will always be malicious actors,” says Mojsilović.

Organizations will also need to validate how their AI models are constructed, because right now it is too easy to swap models in and out until an organization finds the one that produces the most desirable result for itself rather than for a customer or business partner.

In the meantime, the inability to precisely explain how AI models work does not seem to be slowing adoption. Organizations should nevertheless proceed with caution. Lawsuits stemming from how AI models were applied are all but inevitable at this point, and no matter how elegant the underlying algorithms may be, it is unlikely a judge or jury will be sympathetic to a defense based on technology that cannot be easily explained.
