Google CEO: We Need Global Regulation On AI


Google and Alphabet CEO Sundar Pichai has called for global regulation on artificial intelligence, admitting the company cannot do it alone.

Google and Alphabet CEO Sundar Pichai called for global regulation on artificial intelligence in a Financial Times column that appeared this week. He said governance cannot come from Google or market forces, and called on countries to come together to develop a set of standards.

Google has already made some efforts to regulate AI, publishing a set of principles in 2018 as guidance for ethical development. It also developed tools to test AI decisions and conducted assessments of new products to ensure they meet the guidelines. These tools were open-sourced last year. Pichai said every company working with AI should have a similar set of guidelines.

SEE ALSO: Not Sure What Your AI Is Doing? Google Will Explain

However, guidelines and tools are not enough to stop businesses from launching ethically questionable AI, or to stop bad actors from building tools for mass surveillance or other human rights violations. To avoid this, Pichai believes government regulation is required.

“Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it,” said Pichai. He said Europe’s GDPR would be a strong foundation for regulatory frameworks.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits,” he added.

It is not the first time a technology giant has called on governments to regulate. Facebook CEO Mark Zuckerberg called for more government regulation and stricter controls in May 2019, saying the company had too much power.

While it may seem to some like a cop-out by tech firms that don't care enough to self-regulate, it is also an admission that things are becoming too difficult to handle alone. Whether the issue is misinformation or AI, the big tech companies are starting to realize it may take more than an algorithm or a set of principles to ensure safety and privacy on the web.


About David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
