Explainable AI (XAI): The Key to Building Trust and Preparing for a New Era of Automation

XAI provides full insight into what decisions a system is making and why, which, in turn, identifies what data can be trusted and what data should be cast aside.

As the adoption of artificial intelligence (AI) becomes increasingly prevalent across industries, the urgency of explainable AI (XAI) is becoming clear. XAI — systems that can clearly and transparently explain their decision-making processes to human users — will be vital moving forward as regulatory and consumer pressures put a premium on transparency.

XAI is already being incorporated out of necessity in industries such as financial services and healthcare, but organizations looking to enter the next generation of AI successfully must ensure they have clear visibility into how their complex models arrive at their decisions.

Public and regulatory pressure will necessitate XAI

Believe it or not, for the first four decades after the coining of the phrase “Artificial Intelligence,” its most successful and widely adopted practical applications produced results that were, for the most part, explainable. But with the leap in the sophistication of mathematical models, specialized computing power, and deep learning that began in the early 21st century, the focus shifted decisively to highly sophisticated and efficient models that could, for the first time, apply AI to new kinds of real-world problems, but that in doing so sacrificed explainability for performance.

In 2023 and beyond, this single-minded strategy will prove unacceptable to both regulators and consumers. The recent California Privacy Rights Act (CPRA) gives consumers progressive opt-out rights when it comes to “automated decision-making technology,” and the EU’s GDPR arguably goes even further, stating that “the existence of automated decision making should carry meaningful information about the logic involved.” This is certain to spawn copycat legal provisions throughout the U.S. and other jurisdictions and — similar to GDPR’s impact on data privacy in Europe — will place new pressure on businesses to be transparent about their approach to AI.

Beyond the consideration of individual rights, governments and other regulatory bodies are taking a serious stance on how they can enforce the requirement to explain AI-based decision making within systemically important industries and markets. As early as 2017, the Financial Stability Board identified that “lack of interpretability and auditability of AI and ML methods could become a macro-level risk.” Governments around the globe perceive unconstrained and unexplainable automated decision making to be a major threat to society’s most fundamental market-based dynamics and even its social fabric.

While the benefits of AI investment undoubtedly keep growing, and the pressure on businesses to use it to gain and sustain competitive advantage is acute, the imperative for organizations to understand and make explainable their existing and future automated decision-making processes is even more urgent. Regulatory requirements to protect privacy and market stability, along with citizens around the world becoming more aware of and starting to exercise their rights to fairness and transparency, mean that the organizations that fare best in this once-in-a-generation shift will be the ones that strike the best balance between making better decisions and making those decisions explainable.

So, what should data leaders be thinking about when they come to consider the best way of incorporating XAI into their organizations?

See also: NIST: AI Bias Goes Way Beyond Data

Bad AI is not usually a data quality problem. It could be the other way around.

At first glance, the central issues with AI — bias, privacy, trust, etc. — may appear to stem from poor data quality. If you’re feeding garbage into models, garbage is assuredly going to come out, right?

While it’s true that poor data quality leads to subpar results, depending entirely on perfect data for successful AI is an exercise in futility. XAI is the critical piece of this puzzle, as it provides professionals full insight into what decisions a system is making and why, which, in turn, identifies what data can be trusted and what data should be cast aside.

This can be achieved through metadata that maps out — in an interpretable way — steps the AI took with each piece of data to arrive at a conclusion. From there, humans can make the final legitimacy call and make informed action recommendations. 
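To make that concrete, below is a minimal sketch (not the author's tooling) of what such explanation metadata might look like. It assumes scikit-learn and a simple logistic regression, whose predictions decompose into additive per-feature contributions; dedicated libraries such as SHAP and LIME extend the same idea to more complex models, and the explain() helper and dataset here are purely illustrative.

```python
# A minimal sketch of attaching explanation metadata to predictions.
# Assumes scikit-learn; the dataset and the explain() helper are illustrative.
# For a logistic regression, each prediction decomposes into additive
# per-feature contributions (coefficient x standardized value) that a human
# reviewer can inspect before deciding whether to trust the decision.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)).fit(X, y)
scaler = pipe.named_steps["standardscaler"]
clf = pipe.named_steps["logisticregression"]

def explain(i, top_n=3):
    """Return the prediction for record i plus its strongest feature contributions."""
    x_scaled = scaler.transform(X.iloc[[i]])[0]
    contributions = clf.coef_[0] * x_scaled  # additive log-odds contributions
    ranked = sorted(zip(X.columns, contributions), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "record": i,
        "prediction": int(pipe.predict(X.iloc[[i]])[0]),
        "top_drivers": [(name, round(float(c), 3)) for name, c in ranked[:top_n]],
    }

print(explain(0))  # explanation metadata a reviewer can check before acting
```

The point is not the specific model but the habit: every automated conclusion ships with a human-readable account of which inputs drove it.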

Regarding racial or gender bias specifically, professionals must apply the same problem-solving approach: treat it as a data quality issue to be managed through XAI. The individuals within a dataset will never perfectly represent the population, and it is malpractice to assume that the AI will magically work through these blemishes. Instead of wasting resources on perfecting the data, leverage XAI to work backward from the final product and weed out processes that stem from bias, as the sketch below illustrates. No machine can replace human discretion on this front, and it’s important that technology enables proper intervention.
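As one hedged illustration of working backward from model output, the sketch below compares outcomes across groups using hypothetical column names and the common four-fifths rule of thumb; whether a flagged disparity actually reflects bias, and how to remediate it, remains a human judgment.

```python
# A hedged sketch with hypothetical data and column names: audit scored
# output by comparing outcomes across groups, working backward from the
# model's decisions rather than trying to perfect the input data first.
import pandas as pd

# Example scored output: one row per decision; the group attribute is
# retained purely for auditing, not used for scoring.
scored = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

rates = scored.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()  # four-fifths rule of thumb: flag if < 0.8

print(rates)
print(f"disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Flag for human review: outcomes differ sharply across groups.")
```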

XAI can serve as a solution to racial or gender bias, but professionals must be careful to view bias as one component of a larger problem.

Imperfect data is inevitable, so it’s vital that XAI is adopted to ensure model output is reviewed with a human eye and conscience. To date, the biggest issue with AI has been uncertainty and fear of low-quality input. XAI removes that fear and equips professionals with the tools to make confident, machine-assisted decisions.

With IT budgets tighter than ever, avoid spending time and money on a quest to perfect data quality at the expense of equipping teams to work with imperfect data, with XAI as the tool for doing so. Invest in tooling and education to empower your data users to get the best from their data.

See also: FICO Warns Financial Services Dodging Responsible AI Initiatives

Involve and empower people across an organization for full XAI success

Similar to the data literacy movement, successful XAI ultimately requires human involvement throughout every phase of implementation, from problem definition and development to ongoing data governance.

With data literacy, organizations discovered that data management practices must be accessible to every skillset, technical or not. The same applies to this AI reckoning: if the goal is to create a product that appears ethical and unbiased to the general public, people who can see from that perspective must be involved in operations. Without that strategy, or a deep understanding of why XAI is a key influence on how future generations accept AI, businesses will fall victim to heightened scrutiny in the coming years.

Policies are also important and need to be created at the same level as (either alongside or as part of) data privacy, security, and compliance rules for the organization. But unlike other compliance processes, even in the largest and most sophisticated organizations, it will be difficult, at least for now, to monitor and police the implementation of AI across the business. So, these policies should be clear in their requirements (the “what”), but even more importantly, they must capture the spirit and intent behind them (the “why”) so that if an individual is faced with a decision about how to do the right thing, they feel equipped and empowered to do so.

Focus on understanding where XAI matters most and what it means in each case

There is general consensus on what explainability means at the highest level: being able to describe the logic or reasoning behind a decision. But exactly what explainability means for a specific decision, and how explainable that decision needs to be, will depend on both the type of decision and the type of AI being used. It’s important that data leaders don’t waste time and energy chasing universal definitions that, whilst technically correct, are not practically helpful.

Once you have successfully applied XAI in different situations and developed working models of what it means, it’s time to go back to your policies and capture anything that’s generally applicable across your organization. You must still recognize that you’re going to rely heavily on people doing the right thing, so the ethical spirit of the policy should remain at its core.

XAI builds transparency into these models by design. When businesses are inevitably questioned about issues within their AI systems, whether about bias or data scarcity, answers will be readily available because engineers can work backwards from the recommendation.
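One way to keep those answers readily available, sketched here with hypothetical names rather than as a prescribed design, is to persist each recommendation together with its explanation so a questioned decision can be looked up later instead of reverse engineered.

```python
# A sketch with hypothetical record IDs and fields: store what was decided,
# when, and why, so questions about a past decision are answered by lookup.
import json
import time

def record_decision(store, record_id, prediction, drivers):
    """Save an auditable decision record: the outcome, its timestamp, and its drivers."""
    store[record_id] = {
        "prediction": prediction,
        "top_drivers": drivers,  # e.g., output of an explain() step like the one above
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

audit_log = {}
record_decision(audit_log, "loan-1042", "declined",
                [("debt_to_income", 0.91), ("months_employed", -0.37)])

# Later, when the decision is challenged, the reasoning is one lookup away.
print(json.dumps(audit_log["loan-1042"], indent=2))
```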

About Gareth Shercliff

Gareth Shercliff is Senior Director, Architecture, Strategy and Delivery Innovation at Talend.
