Intel’s Loihi Speeds Up AI Chips Race


From better self-driving cars to robots that learn from humans, the value of AI chips like Intel’s Loihi is becoming clear and the competition is getting fierce.

For years, most tasks under the artificial intelligence (AI) umbrella were performed on traditional CPUs, much like those found in smartphones or laptops, albeit far more powerful. In time, the industry shifted to graphics processing units (GPUs), first popularized by gamers pushing their hardware to its limits. Now there is demand for entirely different chip architectures, designed from the ground up to handle AI and cognitive computing workloads.

Intel is now entering that race with a self-learning neuromorphic chip it calls Loihi, which “mimics how the brain functions by learning to operate based on various modes of feedback from the environment.” In a blog post, Intel’s Michael Mayberry says the chip uses “spikes and plastic synapses that can be modulated based on timing,” which could help it “self-organize and make decisions based on patterns and associations.”
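
Intel has not published a programming interface for Loihi, but the mechanism Mayberry describes, synapses that strengthen or weaken depending on the relative timing of spikes (commonly known as spike-timing-dependent plasticity, or STDP), can be sketched in a few lines of ordinary Python. The constants and spike times below are illustrative assumptions, not Loihi parameters.

```python
import math

# Illustrative spike-timing-dependent plasticity (STDP) rule.
# All constants are assumptions for this sketch, not Loihi specifics.
A_PLUS = 0.01    # potentiation scale when the input spike precedes the output spike
A_MINUS = 0.012  # depression scale when the input spike arrives late
TAU = 20.0       # decay time constant, in milliseconds

def stdp_update(weight, pre_spike_time, post_spike_time):
    """Return an updated synaptic weight based on relative spike timing."""
    dt = post_spike_time - pre_spike_time
    if dt > 0:
        # The input neuron fired just before the output neuron: strengthen the synapse.
        weight += A_PLUS * math.exp(-dt / TAU)
    else:
        # The input neuron fired too late to have caused the spike: weaken the synapse.
        weight -= A_MINUS * math.exp(dt / TAU)
    return max(0.0, min(1.0, weight))  # keep the weight bounded

# A synapse whose input reliably precedes the output spike grows stronger over time.
w = 0.5
for pre_ms, post_ms in [(0.0, 5.0), (100.0, 103.0), (200.0, 208.0)]:
    w = stdp_update(w, pre_ms, post_ms)
print(round(w, 3))  # a little above the starting weight of 0.5
```

This is the sense in which such a chip can “self-organize”: connections that consistently predict activity are reinforced, without a separate, offline training pass.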

Intel’s Loihi joins an already-crowded AI chip market

Loihi will face substantial competition, even in its early days. Nvidia is the clear leader for now, helped along by the happy coincidence that its GPUs, originally built for gamers, turned out to be excellent at deep learning. Since that shift, Nvidia has designed and marketed GPUs specifically for the scientists and researchers who run these kinds of workloads.

[ Related: How Open-Source GPUs Could Speed Deep Learning, Big Data Analytics ]

The rest of the competition is harder to pin down, in part because the competing chips are proprietary technologies developed by some of the tech world’s giants. Google, for example, has its Tensor Processing Unit (TPU), which delivers “an order of magnitude higher performance per watt than all commercially available GPUs and FPGA,” according to Google CEO Sundar Pichai. Intel’s new chip won’t compete head-on with the TPU, but since Google builds its own silicon, Intel can likely count it out as a potential customer.

Startups are also eager to enter the market with a reliable product. Based in Los Altos, Calif., Cerebras Systems has already raised $112 million in venture capital at a valuation of $860 million for a product it still hasn’t discussed publicly. Even so, CEO Andrew Feldman has made the company’s thinking clear: “I don’t think the GPU is very good for machine learning. … The GPU represents 25 years of optimization for a different problem.”

Intel has been down this road before: it tried to create its own general-purpose computing on graphics processing units (GPGPU) chip, called Larrabee, but canceled the effort in May 2010 and has been quiet on the AI chip front ever since. Clearly, it was holding its cards close to its chest all this time.

All of these companies are focused on packing as many “cores,” or processing units, onto a single chip as possible. More cores means more parallel computing power: the chip can run many calculations at once rather than churning through them one at a time, however fast each individual calculation may be.
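
As a rough illustration of why core count matters, here is a minimal sketch in ordinary Python (nothing chip-specific) that runs the same workload once serially and once split across however many cores the machine has, using the standard multiprocessing module.

```python
from multiprocessing import Pool
import os
import time

def work(chunk):
    # Stand-in for a compute-heavy kernel, e.g. one slice of a larger calculation.
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    n_cores = os.cpu_count() or 4
    data = range(2_000_000)
    # Split the work into one interleaved slice per core.
    chunks = [range(i, 2_000_000, n_cores) for i in range(n_cores)]

    start = time.perf_counter()
    serial = work(data)                      # one calculation stream at a time
    print("serial:  ", time.perf_counter() - start)

    start = time.perf_counter()
    with Pool(n_cores) as pool:              # one worker process per core
        parallel = sum(pool.map(work, chunks))
    print("parallel:", time.perf_counter() - start)

    assert serial == parallel                # same answer, computed many pieces at a time
```

On a multi-core machine the parallel pass typically finishes faster, but only when each chunk of work is large enough to outweigh the cost of starting and coordinating the workers, a reminder that parallelism pays off only when the workload can actually be divided.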

How will Intel’s Loihi compete in the AI chip market?

To avoid Larrabee’s fate, Intel has put its substantial technological weight behind Loihi, and it shows: the chip is fabricated on Intel’s 14 nm process technology and packs 128 neuromorphic cores with a total of 130,000 neurons and 130 million synapses.

[ Related: Why PostgreSQL should power GPU databases and analytics platforms ]

Those cores are linked by an asynchronous “core mesh” in which each neuron can communicate with thousands of other neurons, supporting sparse, hierarchical, and recurrent neural network topologies. Each core also has its own learning engine, which can be configured for supervised, unsupervised, and reinforcement learning, among other approaches.

Perhaps most importantly, Loihi is energy-efficient, according to Intel. Mayberry claims it is “up to 1,000 times more energy-efficient than general purpose computing required for typical training systems,” which means that future Intel customers could process much more data for the same energy cost.

Intel has high hopes for the chip’s roadmap. In the first half of 2018, the company will share the Loihi test chip with a handful of universities and research institutions focused on AI to get things off the ground. Further out, Intel hopes to deliver “exascale” performance from chips inspired by how the brain works at a fundamental level.

For now, Intel believes Loihi and other self-learning chips will help businesses make significant strides in machine learning applications that must be precise enough to detect anomalies yet general enough to work for everyone.

[ Related: Data Science and Deep Learning Leaders Create GPU Open Analytics Initiative ]

Mayberry offers the example of a device that reads a person’s heartbeat during a variety of activities and learns what that particular person’s “normal” heartbeat looks like. The same philosophy could be extended to cybersecurity, helping experts protect their infrastructure by telling normal network traffic apart from anomalous traffic.
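
Loihi’s software tools are not yet public, but the underlying pattern Mayberry describes, learn what “normal” looks like and flag anything that deviates sharply from it, is easy to sketch with conventional code. The rolling window and threshold below are illustrative assumptions, not anything Loihi-specific.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Flag samples that deviate sharply from the recent baseline.

    A stand-in for the "learn normal, flag abnormal" idea; the window size
    and z-score threshold are illustrative choices, not Loihi parameters.
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append((i, samples[i]))
    return anomalies

# Resting heart-rate readings with one sudden spike.
readings = [62, 63, 61, 64, 62, 63, 62, 61, 63, 62] * 4 + [110] + [62] * 5
print(detect_anomalies(readings, window=10))  # [(40, 110)]
```

The same skeleton applies to network traffic: swap heart-rate readings for, say, requests per second, and the flagged points become candidate attacks for a security team to investigate. The pitch for a chip like Loihi is that this kind of continuous, per-person or per-network learning could run on the device itself at very low power.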

From better self-driving cars to robots that learn from their human companions, the value of AI chips like Intel’s Loihi and Google’s TPU is evident. The only question now is what companies will be able to accomplish with them, and just how long that migration will take.


About Joel Hans

Joel Hans is a copywriter and technical content creator for open source, B2B, and SaaS companies at Commit Copy, bringing experience in infrastructure monitoring, time-series databases, blockchain, streaming analytics, and more. Find him on Twitter @joelhans.
