Nervana is designed as a ‘purpose-built architecture for deep learning,’ freeing the chip from the limitations of traditional computing hardware.
While details about the launch are still forthcoming, Intel recently announced its intention to compete — again — in the artificial intelligence (AI) chip market with its new Nervana Neural Network Processor (NNP). Nervana will use a specialized architecture that makes it less than ideal for everyday computing tasks, but spectacular at those involving high volumes of data and incredibly complex questions.
Intel gained the chip technology through its $408 million acquisition of California technology startup Nervana, along with the company’s deep learning framework and platform. Nervana was founded by Naveen Rao back in 2014 after a stint developing neural networks at Qualcomm. Facebook also collaborated on the technical design.
Intel itself is already familiar with this market — just weeks ago the company announced its Loihi neuromorphic chip, which tries to mimic how the human brain analyzes difficult problems. There’s also competition from Google’s Tensor Processing Unit and Nvidia’s graphics processing unit (GPU) components, not to mention a handful of Silicon Valley startups, all looking for the best way to stock their specialized chips in data centers around the world.
When Nervana was acquired, CEO Rao claimed that the newest hardware would be able to transfer data at 2.4 terabytes per second, with latency five to 10 times lower than that of traditional chips.
What’s a neural network, though?
In short, a neural network mimics the brain’s capacity to process information in a parallel, interconnected fashion. A network takes input from the world, learns how to make the right decision, and can then be used in the “wild” to automate all types of decisions.
Neural networks can be run on traditional computing hardware, but its serial architecture — long chains of instructions rather than the myriad parallel connections in our brains — isn’t ideal. Hence all the new specialty chips.
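To make the "takes input, learns the right decision" loop concrete, here is a toy sketch (illustrative only, not Intel's or Nervana's code): a single artificial neuron learns the logical AND function from labeled examples using the classic perceptron update rule.

```python
# A single neuron learning logical AND — a minimal example of the
# train-then-deploy loop described above (perceptron learning rule).

def step(x):
    """Activation: fire (1) if the weighted sum is positive, else 0."""
    return 1 if x > 0 else 0

# Training data: (inputs, desired output) for logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # repeated passes over the training data
    for (x1, x2), target in samples:
        pred = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - pred
        # Nudge the weights toward the correct answer.
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

def predict(x1, x2):
    """Use the trained neuron 'in the wild'."""
    return step(weights[0] * x1 + weights[1] * x2 + bias)
```

A real deep network stacks millions of such units in parallel layers, which is exactly the workload these specialty chips are built to accelerate.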
A unique approach to hardware
Rao says that Nervana is a “purpose-built architecture for deep learning” that frees it from the limitations found when trying to solve these problems with traditional computing chips. Among the interesting hardware changes with Nervana is that it doesn’t use a cache like most chips. Without one, the chip can devote more of its resources to raw computation, which translates to faster training.
On top of that, Nervana uses “massive bi-directional data transfer,” which means that multiple chips can act as one large virtual chip to tackle the biggest problems, such as self-driving cars or supplementing health care decisions.
Intel also developed “a new numeric format” it has dubbed Flexpoint. According to the company, “Flexpoint allows scalar computations to be implemented as fixed-point multiplications and additions while allowing for large dynamic range using a shared exponent.” In simpler terms: the chip is specialized to process the type of math required by neural networks, which improves parallelism and decreases the amount of power required for each computation.
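The shared-exponent idea can be sketched in a few lines. This is a rough illustration of the general technique, not Intel's actual Flexpoint specification: a whole block of values shares one exponent, so each value is stored as a small integer mantissa, and arithmetic within the block reduces to cheap integer multiply-adds.

```python
# Illustrative block floating-point with a shared exponent (the general
# idea behind formats like Flexpoint; details here are assumptions).
import math

def encode(values, mantissa_bits=16):
    """Pick one exponent for the block so the largest value fits
    in the integer mantissa range; store each value as an integer."""
    max_abs = max(abs(v) for v in values)
    exp = math.ceil(math.log2(max_abs)) - (mantissa_bits - 1) if max_abs else 0
    mantissas = [round(v / 2**exp) for v in values]
    return mantissas, exp

def decode(mantissas, exp):
    return [m * 2**exp for m in mantissas]

def fixed_dot(ma, ea, mb, eb):
    """Dot product of two encoded blocks: pure integer multiply-
    accumulates, with the shared exponents applied once at the end."""
    acc = sum(x * y for x, y in zip(ma, mb))  # integer MACs only
    return acc * 2**(ea + eb)
```

Because the expensive inner loop touches only integers, and the exponent is handled once per block rather than once per value, the hardware for each multiply-add can be smaller and lower-power than a full floating-point unit.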
Nervana’s goals: solving our toughest problems
Along with other products like Loihi, Myriad X for machine vision, and Xeon Scalable processors, Nervana aims to help companies “develop new classes of AI applications.” Rao says, “We’ll look back in 10 years and see this time as the inflection point of when compute architectures became neural.”
Intel itself has pointed out a few potential applications for its specialty Nervana chips. In healthcare, the company wants to help doctors make diagnoses earlier, and with more accuracy, which will help better treat those with cancer or Parkinson’s disease. Unsurprisingly, this technology will also be leveraged to bring self-driving cars to our roads faster.
Predicting weather via AI is difficult at best, but Intel seems confident that Nervana will help in that regard. It imagines a neural network that ingests the immense amount of data collected about a hurricane’s movement, wind speeds, water temperatures, and more, and then tries to predict which regions of the world will be most susceptible to these disasters in the years to come. With that information in hand, people and governments can better prepare for the worst.
According to Intel, Nervana’s innovations bring the company much closer to its 2016 promise of a 100-fold increase in deep learning performance by 2020, and the company plans to improve the hardware every year until then. We’ll have to wait to see how the use cases change and broaden in scope, but one thing is clear: we probably don’t need a neural network to tell us that, even in a few short years, AI will be a completely different beast.