Center for Data Innovation Outlines 10 Principles for AI Regulation



The Center for Data Innovation, a non-profit think tank, has published ten principles for AI regulation, aiming to give policymakers ways to regulate the technology while continuing to spur innovation.

The publication of these principles comes as policymakers across the world have proposed regulations to ensure that AI production and deployment are focused on safety, privacy, and responsible pursuits. The White House has published a blueprint for an AI Bill of Rights, Congress has its own bill in development, and the European Union has published the first draft of its AI Act.

SEE ALSO: Open Banking: Has Technology Outpaced Regulations?

“AI has the potential to create many significant economic and social benefits,” said Daniel Castro, director of the Center for Data Innovation. “However, concerns about the technology have prompted policymakers to propose a variety of laws and regulations to create responsible AI. Unfortunately, many proposals would likely harm AI innovation because few have considered what responsible regulation of AI entails.”

The ten principles proposed by the Center for Data Innovation are as follows: 

Avoid Pro-Human Biases 

According to the center, AI should be held to the same standards as humans: anything that is legal for a human to do should be legal for an AI to do.

Regulate Performance, Not Process 

Regulators should address concerns about AI safety and bias by examining outcomes, rather than banning a process wholesale.

Regulate Sectors, Not Technologies 

In a similar vein, regulators should define necessary regulations sector by sector, instead of banning a technology outright from use by any business.

Avoid AI Myopia 

Some problems attributed to AI stem from broader issues on the internet and in society at large. Defining such a problem purely in AI terms fails to address its root cause.

Define AI Precisely 

With AI legislation likely to regulate, and possibly ban, certain technologies, legislators need to be precise in their language to avoid inadvertently capturing other software and systems.

Enforce Existing Rules

Laws covering worker safety, product liability, and privacy are already in place to protect people. Holding AI accountable under those laws, instead of writing new ones, could reduce the complexity of regulation.

Ensure Benefits Outweigh Costs 

The cost of setting up regulations and ensuring that businesses comply is always going to be high, especially in a burgeoning industry such as AI. Legislators need to ensure that the benefits of their regulations outweigh the costs, both to government and to businesses.

Optimize Regulations

Policymakers should routinely revisit and optimize regulations to ensure that they remain effective at preventing problematic AI development without damaging the rest of the industry.

Treat Firms Equally

Applying different rules to firms based on their size or domicile creates an uneven playing field and puts consumers at risk.

Seek Expertise

Policymakers should draw on the industry's wealth of technical expertise to help draft regulations and laws that work for the industry and protect consumers.

About David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
