HPE Applies DevOps to AI Models


A new HPE initiative aims to accelerate AI model building by reducing data scientist dependencies on internal IT teams.

Hewlett Packard Enterprise (HPE) today launched a formal HPE ML Ops initiative based on a platform it gained by acquiring BlueData last November.

The goal is to provide IT organizations with a DevOps framework specifically designed to accelerate the building and deployment of artificial intelligence (AI) models based on machine and deep learning algorithms, says Anant Chintamaneni, vice president and general manager for BlueData at HPE.

Many organizations have hired data scientists to build AI models, but they lack a structured approach for incorporating those models into a production environment.

“They don’t know how to operationalize it,” says Chintamaneni.

HPE ML Ops also enables workflows with code, model, and project repositories in a fashion that evokes the processes typically associated with continuous integration/continuous deployment (CI/CD) platforms.
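To make the parallel concrete, here is a minimal sketch of what a CI/CD-style pipeline for a model could look like, expressed as sequential, fail-fast stages with a quality gate standing in for a failing test. The stage names, the accuracy gate, and the scikit-learn toy model are our own illustrative choices, not HPE ML Ops APIs.

```python
# Illustrative only: a CI/CD-style pipeline for a model, expressed as
# sequential, fail-fast stages. Stage names and the quality gate are
# hypothetical; they do not correspond to HPE ML Ops APIs.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_GATE = 0.90  # quality gate, analogous to a failing CI test


def prepare_data():
    # Stage 1: data preparation on a toy dataset
    X, y = load_iris(return_X_y=True)
    return train_test_split(X, y, test_size=0.25, random_state=0)


def train(X_train, y_train):
    # Stage 2: model building
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)


def validate(model, X_test, y_test):
    # Stage 3: validation; reject the build if the model underperforms
    score = accuracy_score(y_test, model.predict(X_test))
    if score < ACCURACY_GATE:
        raise RuntimeError(f"model rejected: accuracy {score:.2f} below gate")
    return score


if __name__ == "__main__":
    X_train, X_test, y_train, y_test = prepare_data()
    model = train(X_train, y_train)
    print(f"validated, accuracy={validate(model, X_test, y_test):.2f}")
    # a real pipeline would now push the model to a model repository
```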

See also: Gartner: 77% Organizations Aim To Deploy AI, Staff Skill Holds Adoption Back

Acquiring BlueData gave HPE access to EPIC, a container-based platform that makes it possible for data scientists to spin up environments on their own. Those self-service sandbox environments come prepopulated with the machine learning tools and data science notebooks employed to build, update, and train AI models. HPE ML Ops addresses the entire machine learning lifecycle, from data preparation and model building to training, monitoring, and collaboration, in a way that reduces AI deployment times from weeks to days, says Chintamaneni.
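EPIC itself is proprietary, but the underlying container pattern can be illustrated with the open source Docker SDK for Python. The image and port choices below are our assumptions for demonstration purposes; this is not the BlueData EPIC API.

```python
# Illustration only: spinning up a self-service, container-based
# notebook sandbox with the open source Docker SDK for Python
# (pip install docker). This is not the BlueData EPIC API; it simply
# demonstrates the container pattern the article describes.
import docker

client = docker.from_env()
sandbox = client.containers.run(
    "jupyter/tensorflow-notebook",  # image prepopulated with ML tools and notebooks
    ports={"8888/tcp": 8888},       # expose the notebook server locally
    detach=True,
    name="data-science-sandbox",
)
print(f"sandbox started: {sandbox.short_id}, open http://localhost:8888")
```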

The HPE ML Ops solution supports a range of open source machine learning and deep learning frameworks, including Keras, MXNet, PyTorch, and TensorFlow, as well as commercial machine learning applications from HPE partners such as Dataiku and H2O.ai. The platform can be deployed on-premises or in a public cloud, and it can be integrated with a variety of authentication mechanisms to secure access.
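For instance, a data scientist working in one of those sandboxes might train a model with Keras, one of the supported frameworks, and hand the saved artifact off to a model repository. The toy model and synthetic data below are ours, not part of the HPE platform.

```python
# A toy Keras model of the kind a data scientist might train inside
# an ML Ops sandbox; the data here is random noise, for illustration.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")  # synthetic binary labels

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
model.save("model.keras")  # artifact handed off to a model repository
```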

Collectively, those capabilities shorten the time it takes to build AI models by reducing data scientists' dependence on internal IT teams, says Chintamaneni.

Many organizations underestimate how frequently AI models will need to be retrained and updated. Organizations frequently gain access to new data sources that need to be factored into their AI models. As more applications begin to consume data in real-time, the amount of data that needs to be evaluated will only increase.

Many of the assumptions that data scientists have made about any given process are also subject to change as evolving business conditions warrant. An AI model that delivered optimal results a few weeks ago may need to be replaced by a different AI model. The challenge organizations face today is that there is often no framework in place for continuously updating and retraining AI models, as the sketch below suggests one might look. Citing Gartner estimates, HPE noted today that by 2021 at least 50% of machine learning projects will not be fully deployed due to a lack of processes for operationalizing them.
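In its simplest form, such a framework is a monitoring loop that evaluates the production model against freshly labeled data and triggers retraining when accuracy degrades. The threshold and function names here are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of continuous model monitoring and retraining.
# evaluate() and retrain() stand in for pipeline stages like those
# sketched earlier; the threshold is an arbitrary illustrative choice.
RETRAIN_THRESHOLD = 0.85  # assumption: retrain if accuracy drops below this


def monitor_and_retrain(model, fresh_X, fresh_y, evaluate, retrain):
    """Evaluate the deployed model on newly labeled data; if its
    accuracy has drifted below the threshold, train a replacement."""
    score = evaluate(model, fresh_X, fresh_y)
    if score < RETRAIN_THRESHOLD:
        print(f"accuracy {score:.2f} below threshold; retraining")
        return retrain(fresh_X, fresh_y)
    return model  # current model still performs acceptably
```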

AI involves a lot more trial and error than many data scientists like to admit. Many of the AI models being built also need to be vetted for biases that can send data scientists back to the proverbial drawing board. There's really no such thing as a static AI model. The sooner organizations come to terms with that data management reality, the sooner the returns on their investments in AI will manifest themselves.
