The European Union has published its draft legislation on AI regulation, which is set to be the key framework for AI providers and distributors that operate in the EU.
In the legislation, the EU sorts AI systems into three risk categories: unacceptable-risk, high-risk, and limited or minimal risk. For the most part, AI systems in the limited or minimal risk category will be able to operate as they did previously, with the EU legislation specifically tackling AI systems that could put EU citizens’ security or privacy at risk.
SEE ALSO: Self-Regulation and Policymaking Guidance Regarding the Use of AI and ML
“Artificial Intelligence is a fantastic opportunity for Europe and citizens deserve technologies they can trust,” said President of the European Commission, Ursula Gertrud von der Leyen. “Today we present new rules for trustworthy AI. They set high standards based on the different levels of risk.”
AI systems of minimal or low risk include chatbots, spam filters, video and computer games, and inventory management systems, along with most other non-personal AI systems that are already deployed in the world.
High-risk AI systems include most artificial intelligence that is deployed with real-world effects, such as consumer credit scoring, recruitment, and safety-critical infrastructure. While these are not banned, the EU legislation aims to impose more stringent requirements and oversight on these systems, along with costlier fines for those that fail to properly secure data.
The EU intends to review the high-risk list annually, either to add new AI systems to it or to downgrade systems that were high-risk but have since become normalized in society or no longer carry the same level of risk.
Unacceptable-risk AI systems will not be permitted in the EU once this AI regulation passes. These include AI systems that use subliminal, manipulative, or exploitative techniques, which is not so much a category as a general ban on forms of AI such as targeted political advertising or AI that interprets emotion through facial expressions.
Remote biometric identification systems will also be banned under the AI regulation, specifically when used by law enforcement to identify a person. Social scoring, which is currently in use in China, is another system on the ban list.
For organizations that operate or distribute AI systems inside the EU, or that conduct business inside the economic bloc, this legislation is the first clear sign of what is coming. The EU will take 12 to 24 months to agree on the finer details, but the legislation is unlikely to change much from its first reading.
That gives organizations in the high-risk category a short window to re-tune and hire to ensure that their AI programs are workable in the EU. Additional human oversight, transparency, and risk management will be needed for AI systems to pass inspection. Penalties for non-compliance are currently set at €30 million or six percent of global revenue, so for larger organizations that cannot retool their AI systems, the costs may force an exit from the EU.