Intel Lays Out Vision for Future of AI Applications

Chipmaker Intel is making artificial intelligence its focus for the future, infusing AI into all applications via its expanding portfolio of processors.

Intel today laid out an ambitious effort to infuse artificial intelligence (AI) into every application by leveraging an expanding portfolio of processors intended to be deployed across an extended enterprise.

Announced at an AI DevCon event, the core of that effort consists of traditional Xeon-class processors; field-programmable gate arrays (FPGAs); Intel Movidius platforms and compute sticks for processing deep learning and computer vision algorithms; and extensions to the Intel Nervana Neural Network Processors (NNPs). Those extensions include a smaller floating-point format that should make it simpler to share calculations more rapidly across processors, says Naveen Rao, corporate vice president and general manager of the Artificial Intelligence Products Group at Intel.
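
The article does not name the format, but Intel has publicly discussed reduced-precision formats such as bfloat16 for this purpose: keep float32's sign bit and 8-bit exponent, but only 7 mantissa bits, so values retain their range while taking half the memory and bandwidth. As a rough illustration of that idea (not Intel's implementation), the NumPy sketch below simulates the precision loss by truncating float32 values to their top 16 bits; real hardware would typically round rather than truncate, and the function name is hypothetical.

```python
import numpy as np

def truncate_to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Zero the low 16 bits of each float32 value, mimicking a bfloat16 cast.

    bfloat16 keeps float32's sign and 8-bit exponent but only 7 mantissa
    bits, so values keep their dynamic range while losing precision.
    Simple truncation is used here; production hardware would round.
    """
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

weights = np.array([0.1234567, 3.1415926, -1.0e-5], dtype=np.float32)
print(truncate_to_bfloat16(weights))  # same magnitudes, coarser mantissas
```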

That approach will enable developers to overcome the memory limitations inherent in graphics processing units (GPUs), which today are the primary vehicle for developing AI applications, says Rao.

“It’s not one-size-fits-all for AI silicon,” he adds.

In fact, Rao notes that a combination of processor types will soon make it possible to provide enough compute horsepower to run simulations of events that can be employed to train AI applications. Today that process requires massive amounts of manual labor on the part of developers when relying solely on GPUs, in part because of the limited amount of memory available.

See also: AI needs big data, and big data needs AI

IT vendors and enterprise IT organizations lending their support to the Intel initiative include Google, Amazon Web Services (AWS), Microsoft, Novartis and C3 IoT, a provider of a framework for building AI and Internet of Things (IoT) applications. C3 IoT today announced it is collaborating with Intel on the development of an IoT appliance based on an instance of Intel AI software deployed on Microsoft Azure Stack, an implementation of the Microsoft Azure cloud software that runs in an on-premises environment. The goal is to make AI algorithms accessible not only in the cloud but also close to where applications will be deployed, in a local data center.

To facilitate the development of AI applications, Intel also announced it is incorporating support for deep learning frameworks such as TensorFlow, MXNet, PaddlePaddle and CNTK, along with the ONNX model exchange format, into nGraph, a framework-neutral deep neural network (DNN) model compiler that Intel has developed. That compiler will make it simpler to build AI applications that can be deployed consistently all the way from the network edge to the cloud. Intel is also providing a Natural Language Processing Library for JavaScript under an open-source license, intended to enable researchers to build NLP algorithms that run optimally on Intel processors.
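
As a rough sketch of the workflow a framework-neutral compiler like nGraph targets, the PyTorch snippet below exports a toy model to the ONNX exchange format; a compiler such as nGraph can then consume the resulting file regardless of which framework produced it. The model architecture and filename are illustrative only, and the nGraph import step itself is omitted since the article does not describe that API.

```python
import torch
import torch.nn as nn

# A toy classifier; the architecture is illustrative only.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Export to ONNX, the framework-neutral exchange format that a compiler
# such as nGraph can consume; "model.onnx" is an arbitrary filename.
dummy_input = torch.randn(1, 784)
torch.onnx.export(model, dummy_input, "model.onnx")
```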

Going forward, it is now apparent that Intel sees various classes of algorithms being embedded within multiple classes of processors. That approach should make it simpler for developers to build AI applications using tools that function at much higher levels of abstraction. Less clear is the degree to which using those algorithms might lock developers into specific families of Intel processors.

In the meantime, IT organizations should be developing long- and short-term strategies when it comes to infusing analytics into applications. There is clearly a short-term arms race to leverage deep learning and machine learning algorithms to gain a competitive edge by deploying next-generation applications capable of automating processes in real time. But Intel is also making it clear it will soon democratize access to those algorithms across its entire processor lineup, which in time should make AI applications all but ubiquitous shortly after the turn of the decade.
