
Hardware Acceleration Drives Continuous Intelligence


AI hardware accelerators have massively parallel architectures that economically deliver the needed compute performance for Continuous Intelligence applications.

Many continuous intelligence (CI) applications make use of increasingly compute-intensive artificial intelligence (AI) algorithms and machine learning (ML) models. Factor in the volume and speed of streaming data in many applications, and the need to derive actionable information to make decisions in milliseconds to seconds, and you realize no traditional compute platform can handle the workload. What’s the solution? Continuous intelligence applications need hardware acceleration.


The issue for CI applications is that traditional compute systems based on standard CPUs will not suffice, for a couple of reasons. They cannot process the data, train the models, or run the applications against new data efficiently. In most cases, scaling legacy systems to the required processing level is too costly. And even if the investment is made, the time it takes to train and run ML applications is impractical for the needs of the business.

The heart of the problem is that CPUs are designed for serial processing, while CI applications that use AI or ML must execute in parallel across many cores. Hardware accelerators address this mismatch. GPUs and custom-designed processors such as field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) offer thousands of cores, and their massively parallel architectures economically deliver the needed compute performance.
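To make the serial-versus-parallel gap concrete, here is a minimal sketch that times the same matrix multiplication on a CPU and, when one is present, a CUDA GPU. The use of PyTorch here is our illustrative assumption, not something the article specifies; any array library with GPU support would show the same effect.

    import time
    import torch  # assumed available; chosen purely for illustration

    def time_matmul(device, n=4096):
        # Multiply two n x n matrices on the given device; return elapsed seconds.
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        if device == "cuda":
            torch.cuda.synchronize()  # wait for the copies before starting the clock
        start = time.perf_counter()
        _ = a @ b
        if device == "cuda":
            torch.cuda.synchronize()  # GPU kernels run asynchronously; wait for them
        return time.perf_counter() - start

    print(f"CPU: {time_matmul('cpu'):.3f}s")
    if torch.cuda.is_available():
        # The GPU spreads the multiply across thousands of cores in parallel.
        print(f"GPU: {time_matmul('cuda'):.3f}s")

On typical hardware, the GPU run often completes one to two orders of magnitude faster, which is precisely the parallelism advantage described above.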

As a result, the use of hardware accelerators is on the rise, driven by the explosive adoption of AI and ML in mainstream business processes. IHS Markit predicts the AI applications market will more than triple, growing to $128.9 billion by 2025, up from about $42.8 billion in 2019. AI processor revenues will similarly increase to $68.5 billion by 2025.

An aside worth noting: accelerators alone are not enough. Systems that scale to meet the demands of CI applications also need high-speed interconnects, larger memory capacities, and fast storage.

Familiar Names and New Faces

Companies providing AI acceleration hardware include NVIDIA, the leading GPU provider, as well as CPU leaders Intel and AMD. Chipsets from these vendors are being used in AI applications deployed on-premises and in AI instances offered by the major cloud providers.

Other established vendors have developed AI acceleration technologies that they incorporate (or plan to incorporate) into their own offerings. For example, IBM’s TrueNorth neuromorphic chip architecture is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers. And Arm offers AI acceleration based on its Project Trillium machine learning platform.

The market also includes non-traditional chip manufacturers that have developed their own hardware. Some of the offerings are designed for specific AI use cases or applications. Examples include:

  • The Tensor Processing Unit (TPU), an ASIC designed for Google’s TensorFlow programming framework. TPUs are often used for ML and deep learning applications. Google developed the chip for many of its own applications, then made the capabilities available to businesses via offerings such as Cloud TPU and, more recently, Edge TPU. (A minimal usage sketch follows this list.)
  • AWS Inferentia, a high-performance ML inference chip custom-designed by AWS. Businesses can access the chip’s processing power through Amazon EC2 Inf1 instances, which support ML inference applications and can include up to 16 Inferentia chips each. (A compilation sketch also follows below.)
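As a rough illustration of how a Cloud TPU is reached from TensorFlow, here is a minimal sketch using TensorFlow 2’s TPUStrategy. The empty tpu="" argument (which locates the TPU attached to the VM) and the toy Keras model are assumptions for illustration, not details from the article.

    import tensorflow as tf

    # Connect to the Cloud TPU attached to this VM; a named TPU could be
    # passed instead of "". These steps assume TensorFlow 2.x.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)

    # TPUStrategy replicates the model across the TPU cores and splits each batch.
    strategy = tf.distribute.TPUStrategy(resolver)
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(...) now executes on the TPU cores.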
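Similarly, here is a hedged sketch of preparing a model for Inferentia using the AWS Neuron SDK’s PyTorch plug-in (the torch-neuron package). The choice of ResNet-50 and the output file name are illustrative assumptions.

    import torch
    import torch.neuron  # from the torch-neuron package in the AWS Neuron SDK
    from torchvision import models

    # Compile a pretrained ResNet-50 (an illustrative choice) for the
    # Inferentia NeuronCores; compilation can run on an ordinary machine.
    model = models.resnet50(pretrained=True).eval()
    example = torch.zeros(1, 3, 224, 224)
    model_neuron = torch.neuron.trace(model, example_inputs=[example])
    model_neuron.save("resnet50_neuron.pt")  # hypothetical file name

    # On an Inf1 instance, the compiled artifact loads like any TorchScript model:
    # model = torch.jit.load("resnet50_neuron.pt")
    # output = model(example)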

Other startups and non-traditional semiconductor manufacturers in the market include Blaize (formerly known as ThinCI), Cambricon Technologies, Cerebras, Graphcore, GreenWaves, Groq, Gyrfalcon Technology, Habana Labs (purchased late last year by Intel), Hailo, Horizon Robotics, Kalray, NovuMind, Syntiant, Wave Computing, and others.

This is a sampling of the companies in the market, not an exhaustive list of the players in the space. By the time you read this article, some of the companies named here may have been acquired, as significant consolidation is expected.

The common thread that cuts across all these companies is that their offerings are specifically intended for use in AI applications. Their use is expected to grow rapidly, according to McKinsey & Company, which predicts AI-related semiconductors will grow about 18 percent annually over the next few years, five times the rate for semiconductors used in non-AI applications.


About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
