
Not Sure What Your AI Is Doing? Google Will Explain


Google Explainable AI is a new set of tools and resources to help developers better understand model behavior and detect and resolve bias, drift, and other gaps in their models.

Written By David Curry
Dec 4, 2019

While many organizations are already seeing positive returns from their AI deployments, some are struggling to understand what the AI is doing and why.

Few businesses have strong AI expertise in-house, and most AI systems do not come with a simple interface that explains their decisions. The result is confused business leaders who cannot make sense of the AI's behavior.


Google wants to help bridge that gap with the launch of Explainable AI. The new set of tools and resources, available to its cloud platform customers, enables developers to better understand model behavior and “detect and resolve bias, drift, and other gaps” in their models.

In cases where an AI has gone off the rails or misinterpreted its data, Explainable AI will provide an analysis of what led to the decision. From that, developers should be able to patch the model and better understand the cause and effect.
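To make the idea concrete, here is a minimal sketch of the kind of feature-attribution analysis such tooling performs. This example does not use Google's Explainable AI API; it uses scikit-learn's permutation importance on a synthetic dataset to show which inputs a model actually relied on when making predictions.

```python
# Illustrative sketch (not Google's Explainable AI API): measuring
# per-feature attributions with scikit-learn's permutation importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: only the first 3 of 5 features carry signal.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy;
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```

An attribution report like this can reveal, for example, that a model is leaning on a feature a developer believed was irrelevant, which is exactly the kind of hidden behavior the article describes.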

“While this capability allows AI models to reach incredible accuracy, inspecting the structure or weights of a model often tells you little about a model’s behavior,” said Tracy Frey, Google’s director of AI strategy, in a blog post. “This means that for some decision-makers, particularly those in industries where confidence is critical, the benefits of AI can be out of reach without interpretability.”

Google does note that while Explainable AI will analyze the model, it cannot judge the quality of the input data. As explained in a Gartner webinar, one of the key factors in AI deployment success is good data quality and governance.

“AI Explanations reflect the patterns the model found in the data, but they don’t reveal any fundamental relationships in your data sample, population, or application. We’re striving to make the most straightforward, useful explanation methods available to our customers while being transparent about its limitations,” said Frey.

David Curry

David is a technology writer with several years’ experience covering all aspects of IoT, from technology to networks to security.

