AI Fairness Depends on Our Ability to Remove and Detect Bias


To overcome AI bias challenges, businesses should use technology to address the problem head-on and at scale.

While AI has helped organizations automate mundane tasks, it’s also vulnerable to the inherent biases of human nature. Algorithms learn how to behave mainly based on the kind of data they are fed – after all, the rules of AI are set up by humans. If the data has underlying bias characteristics, the AI model will learn to act on them. Biases are, in fact, more prevalent than we think. Gartner predicted that by 2022, 85% of AI projects would deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. Despite our best intentions, these biases can creep in and build up over time – unless data scientists are vigilant about keeping their models as fair as possible.

See also: FICO’s Scott Zoldi Talks Data Scientist Cowboys and Responsible AI

Data can be valuable, and problematic

Data has the potential to significantly transform the way organizations work, from enhancing decision-making to improving brand loyalty. But with that power, data also has the potential to undermine a company’s ability to engage with customers accurately and fairly. Some types of data are more likely to be used – probably inadvertently – to discriminate based on race, gender, or religion. But seemingly “safe” data, such as someone’s zip code, can also be used by AI to make biased decisions, resulting in potential regulatory violations, discriminatory actions, and a loss of public trust.

Take bank loans, for example. If a bank typically doesn’t approve many loans for people in a minority neighborhood, its model could unintentionally learn to avoid marketing loans to other people who fit the characteristics of that same zip code, thus introducing racial bias into the AI model. In this instance, even with race taken out of the equation, the AI finds a way to discriminate without the bank realizing it is happening.
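
As a minimal, entirely hypothetical sketch of this proxy effect (synthetic data, illustrative feature names, and a simple logistic regression standing in for a real underwriting model), consider:

```python
# Synthetic sketch: a model trained WITHOUT race can still encode racial
# bias when zip code acts as a proxy. All data and numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Race correlates strongly (90%) with zip code group in this population.
race = rng.integers(0, 2, n)
zip_group = np.where(rng.random(n) < 0.9, race, 1 - race)

# Historical approvals were lower in zip group 1 (past human bias).
income = rng.normal(50, 10, n)
approved = income + rng.normal(0, 5, n) - 8 * zip_group > 45

# Train on income and zip code only -- race is never a feature.
X = np.column_stack([income, zip_group])
pred = LogisticRegression().fit(X, approved).predict(X)

for g in (0, 1):
    rate = pred[race == g].mean()
    print(f"Predicted approval rate, race group {g}: {rate:.2%}")
```

Even though race never appears as a feature, the predicted approval rates for the two groups diverge, because zip code carries the racial signal.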

To prevent this kind of discrimination, businesses need to carefully scrutinize the data ingested by AI. If they don’t, irresponsibly used AI can lead to damaging practices, such as delivering fewer advertisements, limiting loans, or offering fewer product discounts to underserved populations that really need them. These practices aren’t just ethically wrong; they also become serious liabilities for organizations that are not diligent about preventing bias in the first place.

Real-world effects of AI bias

Several recent high-profile incidents have highlighted the risks of unintentional bias and its power to damage a brand’s ability to interact fairly with its customers. With today’s unpredictable economy and a heightened – and necessary – focus on social responsibility, businesses cannot risk losing the trust of their customers. Lost revenue, eroded trust among customers, employees, and other stakeholders, regulatory fines, damaged brand reputation, and legal ramifications are among the significant costs associated with discrimination.

The American criminal justice system relies on dozens of algorithms to determine a defendant’s propensity to become a repeat offender. In one case, ProPublica analyzed Northpointe’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool. The study revealed that African American defendants were significantly more likely to be incorrectly categorized as having a higher risk of reoffending, while white defendants were inaccurately judged as having a lower risk. The COMPAS algorithm, used to process justice system data, was identified as inherently biased against African Americans. This is just one example of bias being more prevalent than people realize. Organizations must make a concerted effort to monitor their AI to ensure that it’s fair for everyone.

How companies can identify and prevent bias

Marketing to specific groups with similar characteristics is not necessarily unfair in all cases. For example, sending offers to parents of school-aged children with promotions on school supplies is perfectly acceptable if the company is offering something of value to that group. Similarly, targeting senior citizens with Medicare plans or offers for a new retirement community is fair, as long as the company is promoting something relevant and useful. This is not predatory marketing; it’s simply intelligent marketing.

But this kind of segmented marketing can quickly become a slippery slope, so the responsibility falls on organizations to incorporate bias detection technology into every AI model they use. This is particularly true in regulated industries where the ramifications of non-compliance can be severe, such as financial services and insurance. To be successful, companies can’t rely on quarterly or monthly bias detection – they must continuously monitor their self-learning AI models in real time to proactively flag and eliminate intentional or unintentional discriminatory behavior.
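
What continuous monitoring looks like in code is, of course, implementation-specific. The following hedged sketch tracks approval rates per group over a rolling window of decisions and raises an alert when the parity ratio falls below the four-fifths (0.8) threshold often cited in U.S. disparate-impact guidance; the class name, window size, and threshold are illustrative assumptions, not a reference to any particular product:

```python
# Hypothetical sketch of always-on bias monitoring over a decision stream.
from collections import deque

class BiasMonitor:
    """Rolling check of demographic parity across groups."""

    def __init__(self, window_size=1000, threshold=0.8):
        self.window = deque(maxlen=window_size)   # (group, decision) pairs
        self.threshold = threshold                # four-fifths rule cutoff

    def record(self, group, approved):
        """Record one decision; return an alert string or None."""
        self.window.append((group, bool(approved)))
        return self.check()

    def check(self):
        # Approval rate per group over the current window.
        rates = {}
        for g in {grp for grp, _ in self.window}:
            decisions = [ok for grp, ok in self.window if grp == g]
            rates[g] = sum(decisions) / len(decisions)
        if len(rates) < 2 or max(rates.values()) == 0:
            return None
        ratio = min(rates.values()) / max(rates.values())
        if ratio < self.threshold:
            return f"ALERT: parity ratio {ratio:.2f} below {self.threshold}"
        return None

# Illustrative usage: feed each live decision to the monitor.
monitor = BiasMonitor(window_size=500)
alert = monitor.record("group_a", approved=True)
```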

To prevent undetected and unintentional biases, companies should start with “clean data sources” for building models. They must be aware of class or behavioral identifiers that can become biased in certain situations, such as education level, credit score, occupation, employment status, language of origin, marital status, number of followers, and more. Organizations cannot always readily identify these issues without the automated help of software built specifically to watch for them.
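
One plausible way to automate that watchfulness – offered here only as a sketch, with illustrative column names and mutual information as just one of several possible association measures – is to flag features that are unexpectedly predictive of a protected attribute:

```python
# Hypothetical proxy-feature scan; assumes numerically encoded columns.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

def flag_proxy_features(df: pd.DataFrame, protected_col: str, cutoff=0.05):
    """Return features whose mutual information with the protected
    attribute exceeds `cutoff` -- candidates for closer human review."""
    features = df.drop(columns=[protected_col])
    scores = mutual_info_classif(features, df[protected_col], random_state=0)
    return {col: score for col, score in zip(features.columns, scores)
            if score > cutoff}

# Illustrative usage with a hypothetical applications DataFrame:
# flag_proxy_features(applications_df, protected_col="race")
```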

To flag potential biases before damage is done, businesses should focus on evaluating AI training data and simulating real-world scenarios before deploying AI. This becomes even more critical as machine learning applications proliferate. For example, powerful but opaque algorithms – those that struggle to explain their own outputs – can easily conceal built-in biases. Businesses often miss bias when they focus their efforts only at the predictive model level, because that approach overlooks bias arising from the interplay between the multiple models and business rules that make up the company’s customer strategy. Instead, businesses should test for bias in the final decisions being made about a customer, not just the underlying propensities.
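
A small synthetic sketch can make that difference concrete. In the hypothetical pipeline below, the model score is group-neutral by construction, yet a downstream business rule reintroduces a disparity that a model-level audit would miss (all names and numbers are illustrative):

```python
# Synthetic sketch: audit the FINAL decision, not just the model score.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)

# Model scores are group-neutral by construction...
score = rng.normal(0.6, 0.1, n)

# ...but a downstream business rule keys on a group-correlated attribute.
income = rng.normal(50, 10, n) - 6 * group
final_decision = (score > 0.5) & (income > 45)

for label, outcome in [("model score > 0.5", score > 0.5),
                       ("final decision", final_decision)]:
    r0, r1 = outcome[group == 0].mean(), outcome[group == 1].mean()
    print(f"{label}: group 0 = {r0:.2%}, group 1 = {r1:.2%}")
# Score-level rates match across groups; final-decision rates do not.
```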

While organizations need to protect their brand and treat their customers and prospects with respect, bias detection cannot be done manually at scale. Humans have a hard time seeing bias, even when it’s right in front of their eyes. To overcome bias challenges, businesses should use technology to address the problem head-on and at scale. With an always-on strategy for preventing bias in AI, companies will be well on their way to proactively protecting the relationships on which their brand identity is built.


About Dr. Rob Walker

Dr. Rob Walker is Vice President of Decision Management & Analytics at Pegasystems. He has more than 20 years of experience in predictive analytics and related disciplines. Prior to Pega, he managed strategic direction and development for Chordiant, a leading decisioning technology company acquired by Pega in 2010. In 2000, Rob co-founded KiQ Ltd., a specialist provider of decision management and predictive analytics software that was acquired by Chordiant in 2004. Rob also spent eight years with Capgemini, where he was responsible for the creation, development, and evangelization of technological innovations in information capitalization, business intelligence, and predictive analytics. He holds a Ph.D. in Artificial Intelligence from the Free University in the Netherlands.
