Will Escalating Computational Costs Cause Another AI Winter?

Without a major breakthrough in computational power, through something like quantum computing, improvements will come less frequently and at a higher cost.

For over a decade, AI has been in a hype cycle, triggered by the extensive deployment of machine learning technologies. But, in the past few years, AI experts and executives at major technology companies have warned of a looming AI winter, as we reach the computational limits of our current hardware.

As research laboratory OpenAI detailed in 2018, the amount of compute used to train large AI models is doubling every 3.4 months. That means without a major breakthrough in computational power, through something like quantum computing, improvements will come less frequently and at a higher cost.
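
To make that growth rate concrete, here is a quick back-of-the-envelope calculation (the multipliers below are derived from the 3.4-month figure alone, not quoted from the OpenAI analysis):

```python
# Back-of-the-envelope growth implied by a 3.4-month doubling time.
# Illustrative arithmetic only; the multipliers are derived from the
# doubling time, not taken from the OpenAI analysis itself.

DOUBLING_TIME_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Total multiplier on training compute after `months` of growth."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

for years in (1, 2, 5):
    print(f"after {years} year(s): ~{growth_factor(12 * years):,.0f}x more compute")

# Roughly 12x after one year, 133x after two, and about 205,000x after five.
```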

And compute demands are only expected to increase, to the point where it will be impractical to train ever-larger AI models.

“One of the biggest challenges is to develop methods that are much more efficient in terms of the data and compute power required to learn to solve a problem well,” said Facebook AI research scientist Edward Grefenstette. “In the past decade, we’ve seen impressive advances made by increasing the scale of data and computation available, but that’s not appropriate or scalable for every problem.”

Hardware improvements, like AI-specific processors and supercomputers built specifically for AI, could provide another lift. However, AI’s appetite for compute has shown no ceiling; if anything, within a few years even this AI-specific hardware may not be able to do the job adequately.

Academics are already being priced out of AI research, which has led to calls for a federally funded National Research Cloud to help the U.S. better compete with China. Startups, meanwhile, routinely seek grants from or partnerships with the major cloud providers (Amazon, Microsoft, Google) to train their AI models.

In China, the government has pressured cloud operators and AI foundries to provide open access to academics, as part of the country’s plan to be the leader in AI development by 2030.

Software Enhancements May Save AI

Some see the solution as more a software problem than a hardware one. A team of researchers at TU Graz, in Austria, developed an algorithm that mimics how the brain sends electrical impulses between neurons. This, according to the team, could massively reduce the energy costs of training AI.
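
The approach described is inspired by how biological neurons communicate through discrete electrical spikes. The sketch below shows a generic leaky integrate-and-fire neuron as an illustration of that event-driven idea, where a unit only "fires" (and only triggers downstream computation) when a threshold is crossed. It is not the TU Graz team's actual algorithm, and all parameters and names here are arbitrary.

```python
import random

# A minimal leaky integrate-and-fire (LIF) neuron: its membrane potential
# leaks toward zero, accumulates incoming current, and emits a spike only
# when a threshold is crossed. Downstream work happens only on spikes, so
# computation (and energy) scales with activity rather than with every
# timestep for every unit. Parameters are arbitrary, for illustration only.

def simulate_lif(input_currents, decay=0.9, threshold=1.0):
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = decay * potential + current   # leak, then integrate
        if potential >= threshold:                # fire ...
            spikes.append(1)
            potential = 0.0                       # ... and reset
        else:
            spikes.append(0)
    return spikes

random.seed(0)
inputs = [random.uniform(0.0, 0.4) for _ in range(20)]
print(simulate_lif(inputs))  # a sparse train of 0s and 1s
```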

Another academic team, from the University of Southern California, developed a method called Sample Factory, which splits the computational work into separate parts that run in parallel.

This reduces the computational requirements, though the method is designed specifically for deep reinforcement learning algorithms, allowing academics without supercomputers to train an AI model.
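
For a sense of what splitting the computational work into separate parts can look like, here is a schematic producer/consumer sketch: several rollout workers generate experience in parallel processes while a single learner consumes it, so neither side sits idle waiting on the other. This is a simplified illustration of the general asynchronous pattern, not Sample Factory's code or API; all names are made up.

```python
import multiprocessing as mp
import random

# Schematic split between experience collection and learning, in the spirit
# of asynchronous deep RL systems: several rollout workers push trajectories
# into a shared queue while one learner consumes them. Simplified
# illustration only; not Sample Factory's implementation or API.

def rollout_worker(worker_id, queue, steps=5):
    for step in range(steps):
        # Stand-in for acting in an environment and collecting a trajectory.
        trajectory = [random.random() for _ in range(8)]
        queue.put((worker_id, step, trajectory))
    queue.put((worker_id, None, None))  # signal that this worker is done

def learner(queue, num_workers):
    finished = 0
    while finished < num_workers:
        worker_id, step, trajectory = queue.get()
        if step is None:
            finished += 1
            continue
        # Stand-in for a gradient update on the received batch.
        print(f"learner: update from worker {worker_id}, step {step}")

if __name__ == "__main__":
    num_workers = 4
    queue = mp.Queue()
    workers = [mp.Process(target=rollout_worker, args=(i, queue))
               for i in range(num_workers)]
    for w in workers:
        w.start()
    learner(queue, num_workers)
    for w in workers:
        w.join()
```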

“If we want to scale to more complex behavior, we need to do better with less data, and we need to generalize more,” said Grefenstette. Creating new methods of training and resource management may provide developers with that scale.

Even if we manage to lower these requirements, some believe we are reaching a plateau in what can be achieved with the subsets of AI currently in demand, such as machine learning and deep learning.

Once we reach that plateau, the playing field may level out, but we will still be far from artificial general intelligence, which is what companies like DeepMind and OpenAI are aiming to accomplish.

About David Curry

David is a technology writer with several years’ experience covering all aspects of IoT, from technology to networks to security.
