Google has launched the latest version of its TPU AI chip, taking a dig at Nvidia's A100 in the official research paper published alongside it.
Google has announced the latest generation of its artificial intelligence chips, the fourth-generation Tensor Processing Unit (TPU v4), which it claims is faster than Nvidia's A100 Tensor Core GPU.
According to Google, TPU v4 is 1.2-1.7 times faster than the Nvidia A100 and uses 1.3-1.9 times less power, although Nvidia has disputed some of these figures. Nvidia CEO Jensen Huang said in a blog post that the company's latest H100 chips deliver four times the performance of the A100, hinting that they are more powerful than Google's TPU v4.
In the technical paper released alongside the announcement, Google notes that TPU v4 was built specifically for the supercomputer that trained the 540-billion-parameter PaLM large language model (LLM). PaLM is the model underpinning Bard, Google's chatbot and ChatGPT competitor.
This back and forth between Nvidia and Google has heated up over the past few years, although Nvidia remains firmly in the driver's seat when it comes to industry adoption. That said, Google is a potent challenger and has some big-name LLM developers on board, including Midjourney.
“We’re proud to work with Google Cloud to deliver a seamless experience for our creative community powered by Google’s globally scalable infrastructure,” said David Holz, founder and CEO of Midjourney. “From training the fourth version of our algorithm on the latest v4 TPUs with JAX, to running inference on GPUs, we have been impressed by the speed at which TPU v4 allows our users to bring their vibrant ideas to life.”
Generative systems will most likely be the main talking point in tech this year, with a new type of system coming online each month. ChatGPT remains the focal point for much of this activity, but we are also seeing AI-generated music featuring the likenesses of Drake and Kanye West, AI-generated videos reimagining Harry Potter and Lord of the Rings in the style of Balenciaga, and AI-generated ebooks appearing on Amazon.
To power all of this, GPU and TPU designers are working overtime to improve the performance of these chips while lowering the amount of power each one consumes. Expect this industry to heat up over the next 12 months, as Amazon and Microsoft also look to build their own custom chips for AI and ML-related projects.