Finding Trust and Transparency in AI During Doubtful Times


To trust AI, businesses need reliable tools and applications that are frequently regulated and evaluated.

In a world experiencing a climate of uncertainty and distrust, it’s important that businesses and decision-makers prove their value through practices grounded in honesty and strong ethics. With digital transformation a top priority, artificial intelligence is at the forefront of many business conversations, making it more important than ever that companies prioritize AI transparency today.

See also: Is Your AI Model Still Accurate After the Coronavirus Pandemic?

What is AI transparency?

Rather than a technical definition, a more colloquial way to describe transparency in AI is to think about the mechanisms within AI that earn our trust. Explaining data sources, data provenance, model algorithms, and model outcomes is motivated by principles of fairness, equality, inclusivity, and privacy, all with the ultimate goal of earning the trust of human users. Transparent and ethical AI is the foundation upon which trust in businesses and their technology is built. About one-third of executives in a recent Deloitte survey named ethical risks as one of the top three concerns related to AI – transparency is key to addressing this growing unease.


How should businesses use transparent AI?

AI is not intended to replace humans in many cases; rather, it is a valuable and powerful tool with the potential to greatly benefit society in all aspects of our lives. However, as with any new tool, it needs to be used responsibly and ethically.

The challenge with AI is that it is still so new and sophisticated that it can be presented as a “black box” that cannot be scrutinized or challenged. This lack of transparency results in distrust at best and, at worst, in AI that has been implemented in an unfair, biased, or disrespectful manner.

Enterprises need to know how AI makes decisions so they can audit for responsible implementation, both for themselves and for the users who are most affected by those outcomes. Businesses should continually analyze their own AI tools and ask the following questions:

  • Are the models biased? Frequently measure the model’s fairness and test for learned bias and discrimination in the classifiers (a minimal sketch of such a check follows this list)
  • Are the datasets appropriate? Make sure datasets are unbiased, representative, and relevant so that predictions are fairer
  • Do the decisions make sense? Continue to question decisions made by AI, scrutinize the underlying models, and make changes as necessary
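
To make the first check concrete, here is a minimal sketch of a recurring fairness measurement in Python. It assumes a binary classifier and a single protected attribute; the group labels, predictions, and tolerance threshold are illustrative placeholders, not part of any particular product or standard.

    # Minimal fairness check: compare positive-prediction rates across groups.
    # Assumes binary predictions and one protected attribute; values are illustrative.
    import numpy as np

    def demographic_parity_gap(y_pred, groups):
        """Return the gap in positive-prediction rates between groups, plus per-group rates."""
        rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical model outputs for a scored batch of users.
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
    groups = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

    gap, rates = demographic_parity_gap(y_pred, groups)
    print("Positive rates by group:", rates)
    if gap > 0.1:  # illustrative tolerance; set per use case and policy
        print(f"WARNING: demographic parity gap of {gap:.2f} exceeds tolerance")

Run routinely, for example on each retraining or scoring batch, a check like this turns the question “Are the models biased?” into a measurable, auditable signal rather than a one-time review.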

Increased transparency provides a natural mechanism for humans to be included in the decision-making process. The subject of an AI decision can always ask a human for an explanation, because there can always be exceptional circumstances that an AI system is unaware of and that a human can override with further context. This “human-in-the-loop” becomes important for critical, life-impacting decisions that might otherwise go unchecked if completely automated through AI.
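
As a rough illustration of that idea, the sketch below shows a confidence-based escalation gate in Python. The Decision structure, threshold, and review queue are assumptions made for illustration; a real deployment would integrate with its own case-management and audit tooling.

    # Minimal human-in-the-loop gate: automate only high-confidence decisions,
    # escalate the rest to a human reviewer along with the model's explanation.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        label: str
        confidence: float
        explanation: str

    def route_decision(decision: Decision, review_queue: list, threshold: float = 0.9) -> str:
        if decision.confidence >= threshold:
            return decision.label          # applied automatically, but still logged for audit
        review_queue.append(decision)      # a human reviews with full context and may override
        return "pending_human_review"

    queue = []
    d = Decision(label="deny_credit", confidence=0.72,
                 explanation="low income-to-debt ratio; short credit history")
    print(route_decision(d, queue))  # -> pending_human_review
    print(len(queue))                # -> 1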

Why should businesses be more transparent?

Transparent AI places businesses at a significant competitive advantage. A recent survey of 1,580 executives in 510 organizations found that 62% said they would place higher trust in a company whose AI interactions they perceived as ethical, and 51% considered it important to ensure that AI systems are ethical and transparent.

The vendor of a transparent AI solution is able to sell more easily to a customer base that may be skeptical about AI technologies, or to a regulated industry that has a legal responsibility to explain or audit all decisions. If businesses can disclose the decisions behind their AI solutions, they can reach a wider market that is actively looking for more trustworthy technology.

An organization deploying transparent AI will find greater user adoption and acceptance, as that trust is more easily built up over time. These systems also provide more agile feedback, so that corrections and improvements can be made to the AI system continuously.

When is it important?

Security AI products need to be able to explain why the system believes a threat has been detected, especially when the detection is related to suspicious human behaviors, so that any resulting investigation can be properly prioritized, explained, and audited. Although transparency is not vital to every aspect of the business, its value should not be ignored, and it should be a goal by default. At a minimum, AI transparency allows the implementer to ensure that they have deployed a responsible AI system and to improve the system over time based on the introspection it affords.

Summary

AI is more effective and useful when businesses prioritize the innovation and intelligence behind the models and technology. To trust AI, businesses need reliable tools and applications that are frequently evaluated and regulated. By integrating transparent AI, businesses gain a stronger competitive advantage in the market through greater trust earned from customers and the public alike.


About Stephan Jou

Stephan Jou is CTO of Interset at Micro Focus, a leading-edge cybersecurity and In-Q-Tel portfolio company that uses machine learning and behavioral analytics. Jou holds an M.Sc. in Computational Neuroscience and Biomedical Engineering and a dual B.Sc. in Computer Science and Human Physiology, both from the University of Toronto. He has held advisory positions with NSERC Strategic Networks, has helped set goals for NSERC Strategic Research Grant research topics in analytics and security for Canada, and was an invited participant in the 2018 G7 Multistakeholder Conference on Artificial Intelligence.
