Cisco Bets AI Will Be Deployed On-Premises


Cisco’s new rack server shows the firm thinks its clients’ AI development won’t necessarily be cloud-based.

Cisco is making a case for running a wide variety of artificial intelligence (AI) workloads on servers located in an on-premises environment. The firm has announced its UCS C480 ML M5 Rack Server, which is based on graphics processing units (GPUs) from NVIDIA and Xeon processors from Intel.

The UCS C480 ML M5 is a 4U server that can be configured with up to eight NVIDIA Tesla V100-32G GPUs alongside Intel processors. The GPUs are used to train deep learning algorithms, such as neural networks, in parallel to build sophisticated AI models, while the Intel processors are typically relied on to run machine learning algorithms.
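Cisco’s announcement doesn’t ship with code, but a minimal sketch can illustrate why eight GPUs in one chassis matter for training. The snippet below, which assumes a recent TensorFlow release and uses a placeholder model and synthetic data, relies on TensorFlow’s MirroredStrategy to replicate the model on every visible GPU and split each batch among them.

```python
# A minimal sketch of data-parallel training across multiple GPUs with
# TensorFlow's MirroredStrategy. The model and data are placeholders.
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and
# synchronizes gradients across the replicas after each step.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync group:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Any Keras model built inside the scope is mirrored per GPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Synthetic stand-in data; real workloads would stream from storage.
x = np.random.rand(1024, 784).astype("float32")
y = np.random.randint(0, 10, size=(1024,))

# The global batch of 256 is split evenly across the available GPUs.
model.fit(x, y, batch_size=256, epochs=1)
```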

To drive AI workloads onto the platform, Cisco has validated multiple instances of the open-source TensorFlow framework for building AI applications, which can be deployed on Kubernetes clusters and on big data platforms such as Apache Spark and Hadoop curated by Cloudera and Hortonworks.


To advance those efforts, Cisco has been contributing to Kubeflow, a machine learning toolkit that runs natively on Kubernetes.
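What a containerized training run on such a cluster might look like can be sketched with the official Kubernetes Python client. This is illustrative only, not Cisco’s validated tooling: the image name, entry point, and GPU count are all assumptions.

```python
# Hypothetical sketch: submitting a containerized TensorFlow training job
# to a Kubernetes cluster and claiming the node's GPUs. The image,
# command, and GPU count below are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # reads credentials from ~/.kube/config

container = client.V1Container(
    name="tf-train",
    image="tensorflow/tensorflow:latest-gpu",  # placeholder image
    command=["python", "/opt/train.py"],       # placeholder entry point
    resources=client.V1ResourceRequirements(
        # Request all eight V100s on a UCS C480 ML M5-class node.
        limits={"nvidia.com/gpu": "8"}
    ),
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="tf-train-job"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(containers=[container],
                                  restart_policy="Never")
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```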

Given the cost of buying servers loaded with GPUs, most development of applications that make use of deep learning algorithms has occurred in public clouds, which make GPUs accessible to a wide variety of customers. But because of privacy, compliance, performance, and security issues, some organizations will need to build, train, and deploy AI models in an on-premises IT environment, says Tim Stack, a product marketing manager within the Cisco Data Center and Virtualization Group.

“A lot of data will stay on-premises because of data gravity,” says Stack.

That shift is part of a general democratization of AI that is already underway, adds Stack.

Cisco, like most of its server rivals, is betting that the injection of AI into application workloads will drive demand for more advanced servers loaded with memory and storage. Stack notes that if data is the fuel for AI, the volume and velocity of the data being fed into AI models is about to increase substantially.

Less clear is to what degree the data scientists who build these applications, as opposed to traditional IT organizations, will fund the acquisition of those servers in the immediate future. What is clear is that data scientists and internal IT teams need to collaborate more closely, which in turn is driving a realization that existing legacy systems are not especially well optimized for AI workloads.

Of course, once those AI models are trained, they will increasingly be distributed to the edge of the network to drive analytics processing in near real time. The analytics generated at the edge will then be used to inform and train AI models being developed on-premises and in the cloud, creating a virtuous AI cycle.
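To make that cycle concrete, here is a hypothetical sketch of the edge half: a trained model scores incoming readings in near real time while the raw readings are buffered to flow back and inform the next round of training. The model path, input shape, and feedback queue are all assumptions.

```python
# Hypothetical edge-inference loop: score readings locally and buffer
# them for retraining upstream. Paths and shapes are illustrative.
import queue

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("/models/edge_model")  # assumed path
feedback = queue.Queue()  # stand-in for a pipe back to the training site

def handle(reading: np.ndarray) -> int:
    """Score one sensor reading and retain it for upstream retraining."""
    probs = model.predict(reading.reshape(1, -1), verbose=0)
    feedback.put(reading)  # raw data flows back to inform the next model
    return int(np.argmax(probs))
```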

It may take a while before AI models become pervasive across the enterprise. But it’s apparent that just about every enterprise application will soon be infused with AI to one degree or another. The only real issue will be determining which portions of the AI development and deployment process will run where, and when.
