New products and services based on AI and ML should be deployed carefully and cautiously. Otherwise, regulators might step in.
Artificial intelligence is at the forefront of product development at Google, for both new and existing offerings, but the search giant is being careful not to push AI out into the world too fast, spending considerable time thinking about what could go wrong.
“These technologies come with an extraordinary range of risks and challenges,” said James Manyika, Google’s senior vice president of technology and society, at Fortune‘s Brainstorm A.I. conference in San Francisco. He spoke of the potential impact of AI on human labor, as well as the risks of the technology being misused or not working as expected.
Google may have been caught off guard by the excitement surrounding the launch of OpenAI’s two AI apps, DALL-E and ChatGPT, both of which have generated considerable buzz. However, executives have assured investors and the wider public that while OpenAI can launch such apps with little reputational risk, Google is not in a similar position.
That said, Google is finding value in some of its less buzzy but equally meaningful AI tools.
“We’re starting to be able to show that AI can actually help us make extraordinary breakthroughs in these foundational fields that are going to be incredibly useful,” said Manyika, referencing improvements to search, alongside new products which would not be possible without AI, such as Alphabet’s self-driving vehicle unit, Waymo.
As one of the leaders of Google’s strategy for deploying technology and assessing its impact, Manyika is unsurprisingly erring on the side of caution with AI and ML.
“I think you’ll find that many of us embrace regulation because we’re going to have to be thoughtful about when is it appropriate to use these technologies, how do we put them out into the world,” said Manyika.
While that may be true, Google and other software giants have opposed legislation that would curtail their ability to launch products or would subject the algorithms underlying their services to outside assessment. Too often, “embracing regulation” seems to mean passing blame to some other entity without accepting the scrutiny that needs to be in place to avoid such damage.