Emotion AI Researchers Question Accuracy, Call For Regulation


Emotion recognition is being used for education, policing, and recruitment, yet the technology is considered by many experts to be unreliable and may lead to discrimination.

Researchers have called on the U.S. government to regulate the emotion recognition industry, which has taken a few hits in recent months over accusations of exaggeration and societal harm.  

Emotion recognition, a branch of affective computing, is currently being used by companies in education, policing, and recruitment, even though many experts consider the technology unreliable and warn that it may lead to discrimination.


A study published in July last year found that, with current tools, it is not possible to reliably judge emotion by looking at a person’s face. Yet that capability is exactly the sales pitch for most commercial emotion recognition products, which are currently being deployed in the U.S., South Korea, and China to inform high-stakes decisions.

Speaking to MIT Technology Review, University of Augsburg computing expert Elisabeth André said: “No serious researcher would claim that you can analyze action units in the face and then you actually know what people are thinking.”

But in the absence of regulation, several companies, including HireVue, Affectiva, and NVISO, promote their products as being capable of exactly that. According to researchers, identifying an emotion requires combining facial analysis with other signals, such as heart rate and posture.
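The researchers’ point about combining signals can be illustrated with a toy “late fusion” sketch in Python. Everything below is hypothetical: the modality names, scores, and weights are invented for illustration and do not reflect any vendor’s actual system.

```python
# Toy illustration of multimodal "late fusion" for affect estimation.
# All names, scores, and weights here are hypothetical.

# Per-modality confidence scores for a single label, e.g. "stressed".
# In a real system each score would come from a separate trained model.
modality_scores = {
    "facial_action_units": 0.81,  # the face alone looks confident
    "heart_rate": 0.35,           # physiology disagrees
    "posture": 0.42,
}

# Fixed fusion weights; no single modality decides the outcome alone.
weights = {
    "facial_action_units": 0.4,
    "heart_rate": 0.3,
    "posture": 0.3,
}

fused = sum(weights[m] * modality_scores[m] for m in modality_scores)
print(f"fused score: {fused:.2f}")  # well below the face-only 0.81
```

The point of the sketch is structural, not numerical: when corroborating signals are required, a confident reading from facial action units alone is no longer enough to drive a decision.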

Research institute AI Now recently called for a blanket ban on emotion recognition technologies in high-stakes decisions. Nuria Oliver, chief data scientist at Data-Pop Alliance, has a more measured plan: “We have clearly defined processes to certify that certain products that we consume—be it food that we eat, be it medications that we take—they are safe for us to take them, and they actually do whatever they claim that they do. We don’t have the same processes for technology.”

Oliver argues that regulations should be put in place to let legitimate companies operate while keeping bad actors out. The problem is that law enforcement agencies and other government departments are already working with these companies, and clearly do not fully understand the technology behind their products.

Meredith Whittaker, a co-director at AI Now, said that emotion recognition research should still go ahead, but the commercial side should desist. “We are particularly calling out the unregulated, unvalidated, scientifically unfounded deployment of commercial affect recognition technologies. Commercialization is hurting people right now, potentially, because it’s making claims that are determining people’s access to resources,” she said.

While emotion recognition is not drawing as much attention from the 2020 Democratic candidates as facial recognition, it is starting to see some pushback. Illinois now regulates the use of AI analysis in job interviews, and the FTC has been asked to investigate HireVue. In Europe, the EU Commission is considering a five-year ban on facial recognition in public spaces; an emotion recognition ban may follow.

David Curry

About David Curry

David is a technology writer with several years’ experience covering all aspects of IoT, from technology to networks to security.
