NVIDIA Advances AI Ambitions Via Open Source RAPIDS Libraries


NVIDIA’s new effort should reduce the time it takes to build AI models for tomorrow’s applications.

NVIDIA has launched an open source project that gives data scientists libraries to accelerate analytics and machine learning algorithms by running them in parallel on its graphics processing units (GPUs).

Launched at the GTC Europe conference, NVIDIA’s RAPIDS suite of software promises both to reduce the time it takes to develop artificial intelligence (AI) models and to improve the overall performance of advanced analytics applications.
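RAPIDS exposes familiar Python data science interfaces backed by GPU execution, most notably the cuDF dataframe library and the cuML machine learning library. As a rough sketch of that style of workflow, not an example from NVIDIA’s announcement, the code below clusters a dataset entirely on the GPU; the file path and column names are illustrative placeholders:

```python
# Sketch of a GPU-accelerated workflow with RAPIDS cuDF and cuML.
# The CSV path and column names below are hypothetical placeholders.
import cudf
from cuml.cluster import KMeans

# cuDF mirrors the pandas API but keeps the data on the GPU.
df = cudf.read_csv("transactions.csv")
features = df[["amount", "frequency"]].astype("float32")

# cuML mirrors scikit-learn estimators, again running on the GPU.
model = KMeans(n_clusters=8, random_state=0)
model.fit(features)

# Attach cluster labels and summarize, still without leaving the GPU.
df["cluster"] = model.labels_
print(df.groupby("cluster").size())
```

The appeal of this design is that pandas and scikit-learn users can move work onto GPUs with minimal changes to their existing code.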

IBM and Oracle, along with Anaconda, BlazingDB, Databricks, Quansight, and scikit-learn, have thrown their collective weight behind RAPIDS, as has Wes McKinney, head of Ursa Labs and creator of both Apache Arrow, an open source platform for in-memory processing, and pandas, a Python data science library.

See also: VW, Bosch, NVIDIA announce autonomous car alliance

By way of example of the potential benefits of RAPIDS, NVIDIA claims that an initial RAPIDS benchmark, in which the XGBoost machine learning algorithm was trained on an NVIDIA DGX-2 system, showed a 50x performance gain over an x86 processor.
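NVIDIA does not spell out the benchmark configuration beyond the DGX-2 comparison, but XGBoost itself already supports GPU-accelerated training. A minimal sketch of that usage, with synthetic data standing in for NVIDIA’s benchmark workload, might look like this:

```python
# Minimal sketch of GPU-accelerated XGBoost training (not NVIDIA's benchmark).
import numpy as np
import xgboost as xgb

# Synthetic data standing in for a real training set.
X = np.random.rand(100_000, 50).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 1.0).astype(np.int32)

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "binary:logistic",
    "tree_method": "gpu_hist",   # histogram-based tree construction on the GPU
    "max_depth": 8,
}

# Train for a fixed number of boosting rounds on the GPU.
booster = xgb.train(params, dtrain, num_boost_round=100)
```

Switching between CPU and GPU training is largely a matter of changing the `tree_method` parameter, which is what makes apples-to-apples comparisons like NVIDIA’s straightforward to run.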

NVIDIA is also making the various components of the RAPIDS suite available as a set of Docker containers that can be deployed anywhere.

The RAPIDS optimizations are part of an ongoing NVIDIA effort to circumvent the limitations of Moore’s Law when it comes to processing massive amounts of data, says Jeff Tseng, head of product for AI infrastructure at NVIDIA.

“Moore’s Law has hit a brick wall,” says Tseng.

Moore’s Law posits that the number of transistors in a circuit doubles every two years, resulting in corresponding leaps in processing horsepower. But as process geometries have shrunk, Intel appears to be approaching the limits of circuit density per processor. Processing large amounts of data now requires additional processors, which drives up overall costs.

NVIDIA has been making a case for processing massive amounts of data in parallel more efficiently on GPUs using the CUDA framework it developed. In the case of AI, NVIDIA has largely made that argument as a way to reduce the amount of time required to train AI models. But with its most recent processor launch, NVIDIA is now also contending that it has a more efficient way to run the inference engines that drive AI models once they are deployed in a production environment.

At the GTC Europe conference, NVIDIA also announced that Volvo Cars has selected the NVIDIA DRIVE AGX Xavier system for its next generation of vehicles. Production of those AI-infused vehicles, which will include digital assistants to augment human drivers, is expected to begin in the early 2020s.
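As a concrete illustration of the CUDA-style data parallelism described above, the sketch below uses Numba’s CUDA bindings, a Python interface to CUDA that is a separate project from RAPIDS; the array size and the operation are purely illustrative:

```python
# Illustrative sketch of CUDA-style data parallelism from Python via Numba.
# Numba is a separate project; this is not RAPIDS code.
import numpy as np
from numba import cuda

@cuda.jit
def scale(values, factor, out):
    # Each GPU thread handles one element of the array.
    i = cuda.grid(1)
    if i < values.size:
        out[i] = values[i] * factor

values = np.arange(1_000_000, dtype=np.float32)
out = np.zeros_like(values)

# Launch enough thread blocks to cover every element.
threads_per_block = 256
blocks = (values.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](values, 2.0, out)
```

The point is the programming model: one simple operation is applied to millions of elements simultaneously, which is the property NVIDIA is leaning on for both training and inference workloads.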

Obviously, the cost of entry for employing GPUs to run advanced analytics and train AI models is high. Because of that, most organizations tend to favor invoking GPUs via a cloud service. But as organizations begin to rely more on GPUs to run the inference engines needed to deploy AI models in production environments, NVIDIA envisions GPUs being deployed all the way out to the network edge.

Of course, Intel is not sitting idly by while NVIDIA dominates AI. Intel is betting that an approach combining Xeon processors and field-programmable gate arrays (FPGAs) will eventually carry the AI day. In the meantime, however, the AI community is making it clear it can’t wait for Intel.
