NVIDIA’s New Grace CPU Enables Giant AI


The Grace processor, named for computer technology pioneer Grace Hopper, will make working with some of the most processing-intensive and data-heavy applications easier and more efficient.

Apr 13, 2021

At the GTC 2021 keynote, NVIDIA founder and CEO Jensen Huang announced the company's first data center CPU, designed to deliver ten times the performance of the world's fastest servers. Known as "Grace," the Arm-based processor is built to handle the most processing-intensive, big-data workloads on the market.

The world’s fastest supercomputing

In addition to multiplying compute power, NVIDIA worked to reduce the power required to run Grace: it pairs energy-efficient Arm CPU cores with a low-power memory subsystem. The new processor lets adopters push the boundaries of artificial intelligence and data processing by building on Arm's data center architecture, giving the AI and HPC community another choice.

See also: NVIDIA Supercharges Hawk Supercomputer for AI Work

Grace is named for Grace Hopper, the mathematician and U.S. Navy rear admiral who was a pioneer in developing computer technology. The processor is NVIDIA's response to giant AI models, which are distinguished by billions of parameters and only growing larger. Grace features:

  • Fourth-generation NVIDIA NVLink interconnect technology: a 900 GB/s connection between Grace and coupled NVIDIA GPUs
  • LPDDR5x memory subsystem: ten times the energy efficiency and twice the bandwidth of DDR4 memory
  • Unified cache coherence with a single memory address space, simplifying programmability (see the sketch after this list)
  • Support from the NVIDIA HPC software development kit
  • The full suite of CUDA and CUDA-X libraries
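To make concrete what a single shared memory address space means for programmability, here is a minimal, generic CUDA managed-memory sketch. It is not Grace-specific code, and the kernel, array size, and launch configuration are illustrative assumptions; the point is that one allocation is touched by both CPU and GPU code without explicit host-to-device copies, which is the style of programming a cache-coherent CPU-GPU design is meant to encourage.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative kernel: scales every element of an array in place.
    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;

        // One allocation, one pointer, visible to both CPU and GPU;
        // no separate host and device buffers, no cudaMemcpy staging.
        cudaMallocManaged(&data, n * sizeof(float));

        for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes
        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU updates the same memory
        cudaDeviceSynchronize();
        printf("data[0] = %f\n", data[0]);               // CPU reads the result

        cudaFree(data);
        return 0;
    }

On today's systems the CUDA runtime handles managed-memory page movement behind the scenes; the promise of a hardware-coherent design such as Grace coupled with NVLink-connected GPUs is that this kind of sharing comes with less software overhead.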

Grace's first adopters

The Swiss National Supercomputing Center (CSCS) will be among the first to build a Grace-powered supercomputer to further scientific research for the Swiss community. The U.S. Department of Energy's Los Alamos National Laboratory also plans to build one of the first Grace-powered supercomputers.

The new processor will make working with some of the most processing-intensive and data-heavy applications (think natural language processing, recommender systems, and AI supercomputing) easier and more efficient. Although this is still a niche of computing, it enables researchers and scientists to tackle some of the universe's biggest questions. NVIDIA expects availability at the beginning of 2023.

Elizabeth Wallace

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do.
