Is Your AI Ethical?

Businesses should do their part to ensure products are designed judiciously to reflect core company values and provide audit trails of how their AI models learn.

As reliance on artificial intelligence (AI) and machine learning grows, it is becoming clear that AI can carry built-in bias, whether intentional or not.

In late 2019, Apple and Goldman Sachs faced allegations that the Apple Card used an algorithm that discriminated against women in credit-scoring evaluations, after Apple co-founder Steve Wozniak and entrepreneur David Heinemeier Hansson received credit limits 10 to 20 times higher than their wives'.

See also: Act Now to Prevent Regulatory Derailment of the AI Boom

A recent study also found that AI-based automated speech recognition (ASR) systems from Amazon, Apple, Google, IBM, and Microsoft exhibited substantial racial disparities, with an average word error rate (WER) of 0.35 for African American speakers, compared with 0.19 for Caucasian speakers. The study highlighted the need to invest resources in ensuring that ASR systems and speech researchers are broadly inclusive.
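For readers unfamiliar with the metric, WER is the word-level edit distance between a system's transcript and a reference transcript, normalized by the reference length, so a WER of 0.35 means roughly one word in three is wrong. A minimal sketch in Python (the function is illustrative, not taken from the study):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("turn the lights off", "turn a light off"))  # 0.5
```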

Companies are only beginning to understand issues with AI-based products and take action towards more ethical AI. A State of AI in the Enterprise survey from Deloitte found that 32% of executives ranked ethical issues as a top-three risk of AI, but most don’t yet have specific plans in place to address the risk.

The Road to Ethical AI

In the corporate world, AI is being used for everything from the development of new products and platforms to driving marketing initiatives and major business decisions. 

In a world where data is the new oil, it is increasingly tempting to throw all the data you can lay your hands on at a problem to solve it quickly, cheaply, and accurately for your planned use case. It is vital, however, that the data you collect, clean, and use to train such models is vetted for inclusiveness, correctness, and ethics.

Any major machine learning project that interacts with people should include an assessment of its fairness. Indeed, many standards groups are attempting to build consensus around such ethics, such as the IEEE's P7000 series on the Ethics of Autonomous and Intelligent Systems.
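As a concrete illustration of what such a fairness assessment can look like in practice, the sketch below compares a model's error rate across demographic groups on held-out predictions. The column names, sample data, and alert threshold are all assumptions made for illustration, not part of any standard:

```python
import pandas as pd

def error_rate_by_group(df: pd.DataFrame, group_col: str,
                        label_col: str, pred_col: str) -> pd.Series:
    """Per-group error rate; a large gap between groups flags a potential fairness problem."""
    errors = df[label_col] != df[pred_col]
    return errors.groupby(df[group_col]).mean()

# Hypothetical evaluation frame: one row per prediction on held-out data.
results = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 0],
    "pred":  [1, 0, 0, 1, 1],
})
rates = error_rate_by_group(results, "group", "label", "pred")
print(rates)                      # group A: 0.0, group B: ~0.67
print(rates.max() - rates.min())  # disparity gap to monitor, e.g., alert if > 0.1
```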

While progress is being made within standards bodies, organizations can also take steps of their own to ensure products are designed judiciously to reflect core company values, provide audit trails of how their AI models learn, and allow remediation if or when a model discriminates or causes harm. If we cannot do so at the outset of AI design and ensure inclusive data for machine learning, we risk losing the benefits of AI altogether. Organizations using and creating AI-based products are recognizing that responsible innovation requires stronger internal governance.
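One lightweight way to provide such an audit trail is to record, for every training run, exactly which data and configuration produced the model. The sketch below is one possible approach, assuming a single dataset file and a JSON-lines log; the record fields are illustrative, not an established schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_training_run(dataset_path: str, config: dict, metrics: dict,
                     log_path: str = "training_audit_log.jsonl") -> None:
    """Append an audit record describing one model training run."""
    with open(dataset_path, "rb") as f:
        dataset_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": dataset_hash,  # proves exactly which data the model saw
        "config": config,                # hyperparameters, feature list, etc.
        "metrics": metrics,              # including per-group fairness metrics
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```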

Getting Started

One solution is building internal ethics committees. Within an organization, such committees can help guide the design of AI-based products and services while ensuring privacy, security, and fairness, ultimately building trust among consumers and partners.

But building ethics committees is easier said than done. Google, for example, reportedly formed an AI ethics committee in 2019 that was to meet quarterly; it lasted only a week.

Luckily, a new report from the Ethics Institute at Northeastern University and Accenture entitled Building Data & AI Ethics Committees offers expert guidance.

The report states that when getting started with an ethics committee, it’s critical to put together the right team of people to represent organizational stakeholders. From there, they must think through and agree on key functions, values, principles, and processes. Key committee considerations include:

  • What are the basic values the committee is meant to protect?
  • What are the guiding principles in support of the values?
  • What are the types of expertise needed?
  • What are the standards by which committees make judgments?
  • How can the committee avoid bias and conflicts of interest?
  • When should the committee be consulted?
  • What authority does the committee have?

While forming and managing a committee may be difficult at first, having one in place can help prevent AI issues down the road caused by biased product development, and better inform engagements and business decisions that will ultimately build trust and confidence with customers.

About Andrew Bolster

Andrew Bolster is ML Team Lead at WhiteHat Security. He also sits on the IEEE working group for the Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems.
