
Red Hat Summit Keynote: Red Hat Doubles Down on AI


This week’s Red Hat Summit focused squarely on bringing open source and AI together with the introduction of critical solutions to help make the deployment and management of cloud and AI projects faster and easier.

Written By
Les Yeamans
May 8, 2024

Red Hat is focusing on the intersection of open source and AI. That was the message from Red Hat President and CEO Matt Hicks and others on the opening day of this week’s Red Hat Summit.

In his opening keynote, Hicks noted the importance of open source in driving AI innovation. “Red Hat believes that open source is the best model for AI and encourages contributions from a broad pool of contributors,” said Hicks.

To complement such work, Hicks announced the open-sourcing of InstructLab, a technology that allows anyone to contribute to and train large language models. He also announced the open-sourcing of the Granite family of language and code models in partnership with IBM Research.

During his keynote talk, Hicks brought in Ashesh Badani, Red Hat’s Senior Vice President and Chief Product Officer. Badani introduced Red Hat Enterprise Linux (RHEL) AI. He noted that RHEL AI is an easy starting point for building AI applications, with support for the Granite models and InstructLab. He emphasized that the solution gives organizations choice, openness, and control over their AI strategy.

A proof point for the choice and openness of the solution is the varied set of partners lined up to make use of RHEL AI. Partners highlighted during the keynote included Dell, Intel, and NVIDIA.

See also: Red Hat Summit Report: AI Takes Center Stage

A closer look at the solution

The major announcements in the Summit keynote are consistent with Red Hat’s open-source legacy.

Red Hat Enterprise Linux AI (RHEL AI) brings together the open source-licensed Granite large language model (LLM) family from IBM Research, InstructLab model alignment tools based on the LAB (Large-scale Alignment for chatBots) methodology, and a community-driven approach to model development through the InstructLab project.

The solution is packaged as an optimized, bootable RHEL image for deployments across hybrid cloud infrastructure. The solution is also included as part of OpenShift AI, Red Hat’s hybrid machine learning operations (MLOps) platform, for running models and InstructLab at scale across distributed cluster environments.

Simply put, RHEL AI is a foundation model platform that enables users to develop, test, and deploy generative AI (GenAI) models more seamlessly. To that point, Badani said: “RHEL AI and the InstructLab project, coupled with Red Hat OpenShift AI at scale, are designed to lower many of the barriers facing GenAI across the hybrid cloud, from limited data science skills to the sheer resources required, while fueling innovation both in enterprise deployments and in upstream communities.”
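To give a rough sense of what working with the open-sourced Granite models looks like in practice, here is a minimal, illustrative Python sketch that loads one of the models with the Hugging Face Transformers library. The model identifier is an assumption for illustration and is not taken from Red Hat’s announcement; RHEL AI itself packages the models and InstructLab tooling rather than requiring this kind of manual setup.

```python
# Illustrative sketch only: loading an open-sourced Granite model with
# Hugging Face Transformers. The model ID below is an assumption for
# illustration; check IBM's published model listings for actual names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-7b-base"  # hypothetical/illustrative model name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize what a hybrid cloud is in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```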


Bringing automation into the fold

Red Hat, like many other companies, has long known the importance of automation. Since its acquisition of Ansible in 2015, the company has routinely incorporated automation, specifically features of the Red Hat Ansible Automation Platform, into its offerings.

At the Summit, it announced the expansion of Red Hat Lightspeed across its platforms, “infusing enterprise-ready AI across the Red Hat hybrid cloud portfolio,” according to the company. (Red Hat Lightspeed with IBM watsonx Code Assistant was first introduced in the Red Hat Ansible Automation Platform as a solution to address hybrid cloud complexities and help overcome industry-wide skills gaps.)

With this week’s announcement, Red Hat OpenShift Lightspeed and Red Hat Enterprise Linux Lightspeed will offer natural language processing capabilities designed to make Red Hat’s enterprise-grade Linux and cloud-native application platforms easier to use for users of any skill level. They do this through integration with generative AI technology.

Red Hat described two use cases for the technology. One example is to use OpenShift Lightspeed with GenAI to help users with different skill sets deploy traditional and cloud-native applications on OpenShift clusters. Another example is to use OpenShift Lightspeed when a cluster is at capacity. The solution will suggest to the user that autoscaling should be enabled and, after assessing that the clusters are hosted on a public cloud, suggest a new instance of the appropriate size.


A final word

The wide-scale embrace of AI will only translate into benefits if organizations have the right underlying infrastructure to support it, as well as the ability to ramp up AI capacity quickly to meet changing market demands.

What is needed is a robust hybrid cloud infrastructure that combines the best of AI and the cloud. This week’s Red Hat Summit focused squarely on this topic and saw the introduction of critical solutions to help make the deployment and management of cloud and AI projects faster and easier.

Les Yeamans

Les Yeamans is founder and Executive Editor of RTInsights and CDInsights. He is a business entrepreneur with over 25 years of experience developing and managing successful companies in the enterprise software, financial services, strategic consulting and Internet publishing markets. Before founding RTInsights, Les founded and led ebizQ.net, an Internet portal company specializing in the application of critical enterprise technologies including BPM, event-driven architectures, and event processing. When ebizQ.net was acquired by TechTarget, Les became Associate Publisher, managing a group of websites. Previously, Les had founded a new enterprise software business called ezBridge which provided fault-tolerant, guaranteed delivery transaction messaging on 10 different hardware platforms. This product was licensed to IBM as the initial code base for IBM MQSeries (renamed WebSphere MQ and later renamed IBM MQ) which was co-developed and co-marketed with IBM. Les was also co-founder of the Message Oriented Middleware Association (MOMA). Les has worked extensively as an analyst and consultant for end users and vendors in this growing market. Prior to ezBridge, Les raised venture capital for development and marketing of PowerBase, the industry-leading database software package for the IBM PC. He started his career consulting at Accenture, providing end-user IT solutions. Les has an MBA from the University of Michigan and a Bachelor's degree from the State University of New York at Binghamton. He is based in New Rochelle, NY.
