AI’s Empathy Problem



Should AI-driven systems measure and mimic human empathy? There are two sides to this question: AI is being employed to deliver human-like responses, and it is also being used to capture and analyze human responses in order to gauge people's states of mind.

The frontline of many of today's enterprise AI systems is customer service — chatbots and online response systems that attempt to deliver friendly, informed service on demand. Some systems come close, but AI may not yet be ready to deliver the kind of empathy a live human agent can provide, such as going the extra mile to help a customer.


AI’s empathy issues were explored in a Pegasystems survey of 6,000 consumers from across the globe. Trust that corporations have customers’ best interests at heart is lukewarm, and trust in AI is even lower. Only 40% of respondents agree that AI has the potential to improve customer service and interactions. More than two-thirds (68%) trust a human more than AI to decide on bank loan approvals, and 69% say they would be more inclined to tell the truth to a person than to an AI system.

Nearly four in ten respondents (38%) don’t believe AI could ever understand their preferences as well as a human being, and only 30% say they are comfortable with a business using AI to interact with them.

Looking at how things flow in the other direction, there is also concern about how accurately AI can capture and analyze human responses. AI analysis may identify certain emotional traits, without providing a more holistic perspective on the person exhibiting those traits. “Emotional AI is especially prone to bias,” according to Mark Purdy, John Zealley and Omaro Maseli, writing in Harvard Business Review. For instance, “one study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others,” the authors, all with Accenture, warn. Plus, “consider the ramifications in the workplace, where an algorithm consistently identifying an individual as exhibiting negative emotions might affect career progression.”

Consumers in the Pegasystems survey also expressed concern about bias in AI-driven interactions: a majority (54%) believe AI could show bias in the way it makes decisions, reflecting the biases of the people who created it. And 54% of respondents were skeptical about a machine’s ability to learn and adapt.

Purdy and his Accenture team offer recommendations on how AI can be better tuned to handle human emotions:

Improve the ability to create products that adapt to consumer emotions. “With emotional AI, any product or service — whether in the car or elsewhere — can become an adaptive experience.” However, this could induce bias, the Accenture team says. “A biased adaptive in-cabin environment could mean that some passengers are misunderstood. Elderly people, for example, might be more likely to be wrongly identified as having driver fatigue — the older the age of the face, the less likely it is that expressions are accurately decoded.”

Improve tools to measure customer satisfaction. Needed are algorithms that can not only identify “compassion fatigue” in customer service agents but can also “guide agents on how to respond to callers via an app.” However, “a biased algorithm,” the authors continue, “perhaps skewed by an accent or a deeper voice, might result in some customers being treated better than others — pushing those bearing the brunt of bad treatment away from the brand.”

Transform the learning experience. Emotional insights could, for example, “allow teachers to design lessons that spur maximum engagement, putting key information at engagement peaks and switching content at troughs. It also offers insights into the students themselves, helping to identify who needs more attention.” An algorithm, however, might completely miss or misinterpret learning styles, they state, resulting in mistaken assumptions that could affect learning outcomes all the way into the workplace, “meaning that even in work training programs, only a fraction of employees can enjoy full professional development.”

Human emotions and reactions are complex, and likely too much for AI to handle beyond shallow engagement or analysis. This is the next frontier for AI.


About Joe McKendrick

Joe McKendrick is RTInsights Industry Editor and an industry analyst focusing on artificial intelligence, digital, cloud, and Big Data topics. His work also appears in Forbes and Harvard Business Review. Over the last three years, he served as co-chair for the AI Summit in New York, as well as on the organizing committee for IEEE's International Conferences on Edge Computing. Follow him on Twitter @joemckendrick.
