Most use cases for AI in companies are developed by data scientists and then deployed and used essentially as a black box.
There is no questioning the booming success of artificial intelligence (AI) and its increasing adoption by businesses in all fields. AI is central to a growing number of critical applications that make use of continuous intelligence (CI). However, the AI and CI adoption juggernaut could be impeded as transparency and regulatory issues emerge and grow.
These issues were the subject of my introductory comments to a day-long conference track on AI implementation for senior strategists. The conference track was part of last month’s AI Summit and the session included presentations by the Expedia Group, Deloitte Consulting, New York University, Lockheed Martin, New York Presbyterian Hospital, Wells Fargo, USA Today Network, ExxonMobil, Novartis, and the IBM Watson AI XPRIZE – XPRIZE Foundation.
See also: Act Now to Prevent Regulatory Derailment of the AI Boom
The speakers talked about the benefits and challenges their organizations realized with their AI implementations. Throughout the day, most of my conversations with conference attendees focused on transparency and regulatory issues. And rightly so.
Two-fold Issues with Transparency
Today, most use cases for AI in businesses are developed by data scientists and then deployed and used essentially as a black box. Lacking insights into what data was used to train models, what assumptions were made when developing solutions, which algorithms were used, and how choices were justified opens a can of worms down the line.
One issue that can go unnoticed and impact results is model or data drift. If a model was trained on one set of data and that data changes over time, the inferences made by the AI solution may no longer be correct. A straightforward example would be an AI-based clothing recommendation engine for an online retail site, built using historical customer preferences and real-time data from clickstreams, social media influencers, and other sources.
If the model was developed using, say, data from the winter months, its recommendations would not apply in the summertime. Without knowing what went into the solution, a retailer might be puzzled by the decline in purchases in warmer weather.
A second issue arises when the data changes over longer horizons. An AI-based credit authorization solution developed using economic indicators from 2018 would produce different results if 2020 data were used. Many AI deployments do not address this changing data issue: they are developed, deployed, and then left untouched.
Organizations are increasingly aware of the model drift issue. But, again, transparency is needed to know when results might be affected.
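The drift checks described above can be automated. As a minimal sketch (not any particular vendor's implementation), the snippet below uses a two-sample Kolmogorov-Smirnov test to flag when a live feature's distribution has shifted away from the distribution the model was trained on; the function name `detect_drift` and the winter/summer data are illustrative assumptions standing in for real clickstream features.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_feature, live_feature, alpha=0.05):
    """Flag drift when the live distribution differs significantly
    from the training distribution (two-sample KS test)."""
    _statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

# Hypothetical seasonal signal: winter training data vs. shifted summer data
rng = np.random.default_rng(42)
winter = rng.normal(loc=5.0, scale=2.0, size=1000)
summer = rng.normal(loc=12.0, scale=2.0, size=1000)

print(detect_drift(winter, winter[:500]))  # same distribution: no drift
print(detect_drift(winter, summer))        # shifted distribution: drift
```

Running a check like this on each model input at deployment time, and logging which training data the model saw, is one way to turn the "developed, deployed, and untouched" pattern into something auditable.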
Finally, there is perhaps the biggest transparency issue: bias. Not fully knowing what went into an AI solution can lead to skewed or invalid results. For example, an AI-based medical diagnosis algorithm developed using, say, North American patient datasets might not make an accurate assessment for a patient from Sudan.
Recently, AI bias has gotten widespread (bad) media coverage. One example got extensive coverage due to the people involved. Last year, Apple launched a credit card. CNN reported that “tech entrepreneur David Heinemeier Hansson wrote that Apple Card offered him twenty times the credit limit as his wife, although they have shared assets and she has a higher credit score.” His opinion about the card’s bias and discrimination was seconded by Apple co-founder Steve Wozniak, who said he and his wife had a similar experience with the card.
Goldman Sachs administers the Apple Card. According to Bloomberg Businessweek: “There’s been no evidence that the bank, which decides who gets an Apple Card and how much they can borrow, intentionally discriminated against women. But that may be the point, according to critics. The complex models that guide its lending decisions may inadvertently produce results that disadvantage certain groups.” Again, the bias issue has its roots in transparency.
Are Regulations on the Horizon?
If transparency problems lead to complaints of bias or mistaken predictions, government entities will likely get involved and could introduce regulations.
Already, the New York Department of Financial Services is looking into allegations of gender discrimination against users of the Apple Card. And some U.S. Senators are urging healthcare organizations to combat racial bias in AI algorithms.
If businesses do not adopt their own best practices for addressing AI transparency and bias, regulators may soon make those choices for them.