
AI Now: Emotional AI Should Be Banned


AI Now Institute recommends regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities. Until then, AI companies should stop deploying it.

Written by David Curry
Dec 18, 2019

The AI Now Institute has called for emotional AI, which uses affect recognition to identify emotions from people's facial expressions, to be regulated and barred from making important decisions.

In its 2019 annual report, the institute said affect recognition is clearly flawed and should not be used to make decisions in recruitment, education, customer service, or criminal justice. As The Verge noted in a July story: “the belief that facial expressions reliably correspond to emotions is unfounded.”

SEE ALSO: Act Now to Prevent Regulatory Derailment of the AI Boom

“Regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities. Until then, AI companies should stop deploying it,” said AI Now in the report.

Emotional AI is already a large market, worth an estimated $12 billion. According to Market Reports World, that value could rise to $90 billion by 2024. Key players are already deploying the technology in the workplace; IBM, for example, recently shipped the second version of its AI robot, with built-in emotional intelligence, to the International Space Station.

As more industries look for ways to integrate artificial intelligence, there is a worry that too much control and power will be handed to these programs without a proper assessment of their accuracy and the value they bring.

In the case of affect recognition, it may be difficult and time-consuming for human evaluators to provide feedback and act as a counterbalance, especially in underfunded public services. The result could be someone losing a job, or going to prison, because an algorithm identified the wrong emotion.

David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.

