
AI Now: Emotional AI Should Be Banned


AI Now Institute recommends regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities. Until then, AI companies should stop deploying it.

Written by David Curry
Dec 18, 2019

Research institute AI Now has called for emotional AI, which uses affect recognition to identify emotions from people's facial expressions, to be regulated and banned from making important decisions.

In its 2019 annual AI report, the institute said affect recognition is clearly flawed and should not be used to make decisions in recruitment, education, customer service, or criminal justice. As The Verge noted in a July story: "the belief that facial expressions reliably correspond to emotions is unfounded."

SEE ALSO: Act Now to Prevent Regulatory Derailment of the AI Boom

“Regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities. Until then, AI companies should stop deploying it,” said AI Now in the report.

Emotional AI is already a large market, worth $12 billion, and according to Market Reports World that value could rise to $90 billion by 2024. Key players are already deploying the technology in the workplace. IBM, for example, recently shipped the second version of its AI robot, with built-in emotional intelligence, to the International Space Station.

As more industries look for ways to integrate artificial intelligence, there is a worry that too much control and power will be handed to these programs without a proper assessment of their accuracy and the value they bring.

In the case of affect recognition, it may be difficult and time-consuming for human reviewers to provide feedback and act as a counterbalance, especially in underfunded public services. That could mean someone is denied a job, or sent to prison, because an algorithm identified the wrong emotion.

David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
