
Emotion AI Researchers Question Accuracy, Call For Regulation


Emotion recognition is being used for education, policing, and recruitment, yet many experts consider the technology unreliable and warn that it may lead to discrimination.

By David Curry
Feb 21, 2020

Researchers have called on the U.S. government to regulate the emotion recognition industry, which has come under fire in recent months over accusations of exaggerated claims and societal harm.

Emotion recognition, a branch of affective computing, is currently being used by companies for education, policing, and recruitment, yet many experts consider the technology unreliable and warn that it may lead to discrimination.


A study published in July last year found that it is not possible, with current tools, to reliably judge emotion by looking at a person’s face. Yet that is precisely the sales pitch for most commercial products sold as emotion recognition, which are currently being deployed in the U.S., South Korea, and China for high-stakes decisions.

Speaking to MIT Technology Review, University of Augsburg computing expert Elisabeth André said: “No serious researcher would claim that you can analyze action units in the face and then you actually know what people are thinking.”

But due to the lack of regulation, several companies, including HireVue, Affectiva, and NVISO, are promoting their products as being capable of just that. According to researchers, to identify an emotion, the technology needs to be used in conjunction with other sensors, such as heart-rate monitors and posture tracking.

Research institute AI Now recently called for a blanket ban on all emotion recognition technologies for high-stakes decisions. Nuria Oliver, chief data scientist at DataPop Alliance, has a more measured plan: “We have clearly defined processes to certify that certain products that we consume—be it food that we eat, be it medications that we take—they are safe for us to take them, and they actually do whatever they claim that they do. We don’t have the same processes for technology.”

Oliver argues that regulations should be put in place to let legitimate companies operate while keeping bad actors out. The problem with this is that law enforcement and other government departments are already working with these companies, and they clearly do not fully understand the technology behind it.

Meredith Whittaker, a co-director at AI Now, said that emotion recognition research should still go ahead, but the commercial side should desist. “We are particularly calling out the unregulated, unvalidated, scientifically unfounded deployment of commercial affect recognition technologies. Commercialization is hurting people right now, potentially, because it’s making claims that are determining people’s access to resources,” she said.

While emotion recognition is not receiving as much attention from the 2020 Democratic presidential candidates as facial recognition, it is starting to see some pushback. In Illinois, it is now illegal to use AI analysis in a job interview, and the FTC has been tasked with investigating HireVue. In Europe, the EU Commission is considering a five-year ban on facial recognition in public spaces; an emotion recognition ban may follow.

David Curry is a technology writer with several years’ experience covering all aspects of IoT, from technology to networks to security.
