AI Bias: FTC Cautions Businesses and Offers Guidance


While AI bias is a relatively new concern, the FTC has decades of experience enforcing bias laws that are relevant to today’s challenges.

A Federal Trade Commission (FTC) blog post published this week issued a stern warning to businesses about the need to address AI bias. The post noted that while AI promises “to revolutionize our approach to medicine, finance, business operations, media, and more,” research has found that it can “produce troubling outcomes – including discrimination by race or other legally protected classes.”

The blog cited an example of COVID-19 prediction models designed to help health systems deal with the virus through efficient allocation of ICU beds, ventilators, and other resources. A study published in the Journal of the American Medical Informatics Association drew attention to potential bias in decision-making that could creep in if precautions were not taken. The authors of that study noted:

“The COVID-19 pandemic is presenting a disproportionate impact on minorities in terms of infection rate, hospitalizations, and mortality. Many believe AI is a solution to guide clinical decision-making for this novel disease, resulting in the rapid dissemination of underdeveloped and potentially biased models, which may exacerbate the disparities gap.”

See also: FICO’s Scott Zoldi Talks Data Scientist Cowboys and Responsible AI

FTC guidance on AI bias

The FTC noted that while AI bias is a relatively new concern, the agency has decades of experience enforcing bias laws that are relevant to today’s challenges. Three laws in particular are important to developers and users of AI:

  • Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.
  • Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
  • Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

Drawing on its past research into the topic and on information gathered at a 2018 FTC hearing on AI algorithms and predictive analytics, the agency has issued guidance for businesses to follow. Many companies are already trying to address the issue through responsible or ethical AI efforts. Those in need of additional direction to avoid AI bias might use the FTC’s previously issued suggestions as a starting point. Those suggestions include:

  • Be transparent
  • Explain your decisions to the customer
  • Ensure that your decisions are fair
  • Ensure that your data and models are robust and empirically sound
  • Hold yourself accountable for compliance, ethics, fairness, and non-discrimination

The most recent guidance issued this week builds on those suggestions. Specifically, the agency offers important lessons on using AI truthfully, fairly, and equitably. Among the lessons:

Start with the right foundation. If a data set is missing information from particular populations, using that data to build an AI model may yield unfair or inequitable results to legally protected groups. From the start, think about ways to improve your data set, design your model to account for data gaps, and limit where or how you use the model.
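To make that concrete, here is a minimal sketch of one way to audit a training set for representation gaps before building a model. It assumes a pandas DataFrame with a hypothetical demographic column and reference population shares (for example, from census or domain data); the column name, shares, and tolerance are illustrative assumptions, not an FTC-prescribed method.

```python
import pandas as pd

def check_representation(df: pd.DataFrame, column: str,
                         reference_shares: dict,
                         tolerance: float = 0.05) -> dict:
    """Flag groups whose share of the data deviates from a
    reference population share by more than `tolerance`."""
    # Observed share of each group in the data set.
    observed = df[column].value_counts(normalize=True)
    gaps = {}
    for group, expected in reference_shares.items():
        actual = float(observed.get(group, 0.0))
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "observed": actual}
    return gaps

# Illustrative usage with made-up numbers: only group "C" deviates
# from its reference share by more than 5 points, so it is flagged.
df = pd.DataFrame({"group": ["A"] * 80 + ["B"] * 15 + ["C"] * 5})
print(check_representation(df, "group",
                           {"A": 0.78, "B": 0.17, "C": 0.15}))
```

A check like this is only a first pass; which reference shares are appropriate, and how much deviation matters, depends on the domain and on what the model will be used for.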

Watch out for discriminatory outcomes. How can you reduce the risk of your company becoming the example of a business whose well-intentioned algorithm perpetuates racial inequity? It’s essential to test your algorithm – both before you use it and periodically after that – to make sure that it doesn’t discriminate on the basis of race, gender, or another protected class.
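As one hedged illustration of such a pre-deployment (and recurring) test, the sketch below computes per-group selection rates for binary decisions and applies the four-fifths rule of thumb from US employment-discrimination practice. The rule, the group labels, and the data are illustrative assumptions, not an FTC-mandated test, and passing it does not by itself establish that a model is fair.

```python
import pandas as pd

def selection_rates(preds: pd.Series, groups: pd.Series) -> pd.Series:
    """Positive-outcome rate (e.g., approval rate) per protected group."""
    return preds.groupby(groups).mean()

def passes_four_fifths(preds: pd.Series, groups: pd.Series,
                       threshold: float = 0.8) -> bool:
    """True if the lowest group's selection rate is at least
    `threshold` times the highest group's rate (the 80% rule)."""
    rates = selection_rates(preds, groups)
    return bool(rates.min() / rates.max() >= threshold)

# Illustrative check on made-up approval decisions for two groups.
preds = pd.Series([1, 1, 0, 1, 0, 0, 1, 0])
groups = pd.Series(["X", "X", "X", "X", "Y", "Y", "Y", "Y"])
print(selection_rates(preds, groups))                 # X: 0.75, Y: 0.25
print("Passes 80% rule:", passes_four_fifths(preds, groups))  # False
```

Running the same check periodically after deployment, as the FTC suggests, helps catch drift: a model that passed before launch can start producing skewed outcomes as the population it scores changes.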

Embrace transparency and independence. As your company develops and uses AI, think about ways to embrace transparency and independence – for example, by using transparency frameworks and independent standards, conducting and publishing the results of independent audits, and opening your data or source code to outside inspection.

Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence. In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver.

Tell the truth about how you use data. In its guidance on AI last year, the FTC advised businesses to be careful about how they get the data that powers their model.

Do more good than harm. To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good. If your model causes more harm than good – that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.

Hold yourself accountable – or be ready for the FTC to do it for you. Companies need to hold themselves accountable for their algorithms’ performance. The FTC’s recommendations for transparency and independence can help businesses do just that. But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you.


About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
