Red Hat AI

Deliver AI value with the resources you have, the insights you own, and the freedom you need. 

Built on open source technology, Red Hat® AI is engineered to help you build and run AI solutions with confidence. 


Red Hat AI 3 is here

Our latest release makes AI inference more efficient and cost-effective with llm-d. Additional capabilities lay the foundation for scalable agentic AI workflows through a unified, flexible experience tailored to the collaborative demands of building production-ready AI solutions at scale.

3 things to know about Red Hat AI 3. Video duration: 3:18

Your models. Your data. Your choice.

For AI applications to operate at their best, they need fast, cost-effective inference. Red Hat AI provides a unified, flexible platform to achieve this, featuring llm-d—a framework for distributed inference at scale. 

Built on the success of vLLM, llm-d taps into the proven value of Kubernetes and offers consistent, efficient processing that delivers predictable performance.

As organizations shift to agentic AI, they need more than efficiency; they need an interoperable framework to connect models, data, and AI workflows across the hybrid cloud.

The introduction of a unified API layer based on Llama Stack provides an entry point for a wide range of AI capabilities. This includes integration with Model Context Protocol (MCP), making it easier to deliver and run agentic AI at scale in production environments.

2 hours | Tuesday, Oct. 14 at 10 am EST

Webinar

What’s new and what’s next with Red Hat AI

Your path to enterprise-ready AI

Join Red Hat AI leaders to hear the latest advancements in Red Hat AI.

The latest release prioritizes high-performance, predictable inference and accelerated AI agent development. Product updates help overcome AI challenges like cost, complexity, and control.

Red Hat AI includes:

Red Hat AI Inference Server optimizes model inference across the hybrid cloud for faster, cost-effective model deployments. 

Powered by vLLM, it includes access to validated and optimized third-party models on Hugging Face. It also includes LLM Compressor tools.

Red Hat Enterprise Linux® AI is a platform to consistently run large language models (LLMs) in individual server environments. 

With the included Red Hat AI Inference Server, you get fast, cost-effective hybrid cloud inference, using vLLM to maximize throughput and minimize latency.

Plus, with features like image mode, you can consistently implement solutions at scale. It also lets you apply the same security profiles across Linux, uniting your team in a single workflow.

Includes Red Hat AI Inference Server

Red Hat OpenShift® AI builds on the capabilities of Red Hat OpenShift to provide a platform for managing the lifecycle of generative and predictive AI models at scale. 

It provides a production-ready AI platform, enabling organizations to build, deploy and manage AI models and agents across hybrid cloud environments, including sovereign and private AI.

Includes Red Hat AI Inference Server

Includes Red Hat Enterprise Linux AI

Validated performance for real-world impact

Red Hat AI provides access to a set of ready-to-use, validated third-party models that run efficiently on vLLM across our platform.

Use Red Hat third-party validated models to test model performance, optimize inference, and get guidance for cutting through complexity to accelerate AI adoption.

 

Red Hat AI use cases

Generative AI

Produce new content, like text and software code. 

Red Hat AI lets you run the generative AI models of your choice, faster, with fewer resources, and lower inference costs. 

Predictive AI

Connect patterns and forecast future outcomes. 

With Red Hat AI, organizations can build, train, serve and monitor predictive models, all while maintaining consistency across the hybrid cloud.

Operationalized AI

Create systems that support the maintenance and deployment of AI at scale. 

With Red Hat AI, manage and monitor the lifecycle of AI-enabled applications while saving on resources and ensuring compliance with privacy regulations. 

Agentic AI

Build workflows that perform complex tasks with limited supervision. 

Red Hat AI provides a flexible approach and stable foundation for building, managing and deploying agentic AI workflows within existing applications.

More AI partners. More paths forward.

Experts and technologies are coming together so our customers can do more with AI. A variety of technology partners are working with Red Hat to certify their products' interoperability with our solutions.

Dell Technologies
Lenovo
Intel
NVIDIA
AMD

AI customer stories from Red Hat Summit and AnsibleFest 2025

Turkish Airlines

Turkish Airlines doubled deployment speed with organization-wide data access.

JCCM

JCCM improved the region's environmental impact assessment (EIA) processes using AI.

DenizBank

DenizBank sped up time to market from days to minutes.

Hitachi

Hitachi operationalized AI across its entire business with Red Hat OpenShift AI.

Solution Pattern

Red Hat AI applications with NVIDIA AI Enterprise

Create a RAG application

Red Hat OpenShift AI is a platform for building data science projects and serving AI-enabled applications. You can integrate all the tools you need to support retrieval-augmented generation (RAG), a method for getting AI answers from your own reference documents. When you connect OpenShift AI with NVIDIA AI Enterprise, you can experiment with large language models (LLMs) to find the optimal model for your application.
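The retrieval step can be sketched in a few lines. This is an illustrative, self-contained sketch only: score() uses naive word overlap, whereas a real RAG application would rank chunks by embedding similarity against the vector database, and all function names here are assumptions, not part of any Red Hat or NVIDIA API.

```python
def score(query, chunk):
    """Rough relevance: count words shared between the query and a chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def build_rag_prompt(query, chunks, top_k=2):
    """Pick the top_k most relevant chunks and prepend them as context."""
    ranked = sorted(chunks, key=lambda ch: score(query, ch), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The resulting prompt is what gets sent to the LLM, grounding its answer in your own reference documents.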

Build a pipeline for documents

To make use of RAG, you first need to ingest your documents into a vector database. In our example app, we embed a set of product documents in a Redis database. Since these documents change frequently, we can create a pipeline for this process that we’ll run periodically, so we always have the latest versions of the documents.
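The chunking step of such a pipeline might look like the following sketch. In the example app this logic would run inside the scheduled pipeline, with each chunk then embedded and written to the Redis vector index; the function name and parameters are illustrative assumptions.

```python
def chunk_text(text, max_words=120, overlap=20):
    """Split a document into overlapping word windows, ready for embedding.

    Overlap keeps sentences that straddle a boundary retrievable from
    either neighboring chunk.
    """
    words = text.split()
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        window = " ".join(words[start:start + max_words])
        if window:
            chunks.append(window)
        if start + max_words >= len(words):
            break
    return chunks
```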

Browse the LLM catalog

NVIDIA AI Enterprise gives you access to a catalog of different LLMs, so you can try different choices and select the model that delivers the best results. The models are hosted in the NVIDIA API catalog. Once you’ve set up an API token, you can deploy a model using the NVIDIA NIM model serving platform directly from OpenShift AI.
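Models in the NVIDIA API catalog are served behind an OpenAI-compatible chat endpoint, so a request can be built with a plain JSON payload. The URL and model id below are illustrative assumptions, not guaranteed for every catalog model.

```python
import json

# OpenAI-compatible endpoint for the NVIDIA API catalog (illustrative).
API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_chat_request(model, prompt, max_tokens=256):
    """Build an OpenAI-style chat completion payload for a catalog model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Send with any HTTP client, passing your API token as a Bearer header:
#   Authorization: Bearer $NVIDIA_API_KEY
body = json.dumps(build_chat_request("meta/llama-3.1-8b-instruct",
                                     "Summarize our product docs."))
```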

Choose the right model

As you test different LLMs, your users can rate each generated response. You can set up a Grafana monitoring dashboard to compare the ratings, as well as latency and response time for each model. Then you can use that data to choose the best LLM to use in production.
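The comparison boils down to aggregating a few numbers per model. This sketch computes the average user rating and p95 latency that such a Grafana panel would chart; the event record shape is an assumption for illustration.

```python
from statistics import mean, quantiles

def summarize(events):
    """Aggregate per-model average rating and p95 latency from rated responses."""
    by_model = {}
    for e in events:
        m = by_model.setdefault(e["model"], {"ratings": [], "latencies": []})
        m["ratings"].append(e["rating"])
        m["latencies"].append(e["latency_ms"])
    return {
        name: {
            "avg_rating": mean(v["ratings"]),
            # quantiles(n=20) yields 19 cut points; the last is the 95th percentile.
            "p95_latency_ms": quantiles(v["latencies"], n=20)[-1],
        }
        for name, v in by_model.items()
    }
```

Whichever model scores highest on ratings at acceptable latency is the candidate to promote to production.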


An architecture diagram shows an application built using Red Hat OpenShift AI and NVIDIA AI Enterprise. Components include OpenShift GitOps for connecting to GitHub and handling DevOps interactions, Grafana for monitoring, OpenShift AI for data science, Redis as a vector database, and Quay as an image registry. These components all flow to the app frontend and backend. These components are built on Red Hat OpenShift AI, with an integration with ai.nvidia.com.

Red Hat AI in the real world


Ortec Finance accelerates growth and time to market 

Ortec Finance, a global technology and solutions provider for risk and return management, is serving ML models on Microsoft Azure Red Hat OpenShift and is adopting Red Hat AI.


Phoenix Systems offers next-level cloud computing

Find out how Phoenix Systems is collaborating with Red Hat to offer customers greater choice, transparency, and AI innovation.​


DenizBank empowers its data scientists

DenizBank is developing AI models to help identify loans for customers and potential fraud. With Red Hat AI, its data scientists gained a new level of autonomy over their data.


Build on a reliable foundation

Enterprises around the world trust our broad portfolio of hybrid cloud infrastructure, application services, cloud-native application development, and automation solutions to deliver IT services on any infrastructure quickly and cost effectively.

Red Hat Enterprise Linux

Support application deployments—from on premises to the cloud to the edge—in a flexible operating environment.

Learn more 

Red Hat OpenShift

Quickly build and deploy applications at scale, while you modernize the ones you already have.

Learn more 

Red Hat Ansible Automation Platform

Create, manage, and dynamically scale automation across your entire enterprise.

Learn more 

Red Hat AI

Tune small models with enterprise-relevant data, and develop and deploy AI solutions across hybrid cloud environments.

Learn more 

Explore more AI resources

How to get started with AI in the enterprise

Get Red Hat Consulting for AI

Maximize AI innovation with open source models

Red Hat Consulting: AI Platform Foundation

Talk to a Red Hatter about Red Hat AI