Red Hat Inc. has announced new Red Hat OpenShift certifications and capabilities aimed at accelerating the delivery of intelligent applications across the hybrid cloud. These enhancements, which include Red Hat OpenShift’s certification with NVIDIA AI Enterprise 2.0 and the general availability of Red Hat OpenShift 4.10, are intended to help organizations confidently deploy, manage, and scale artificial intelligence (AI) workloads.
According to Gartner®, global AI software revenue is expected to reach $62.5 billion in 2022, a 21.3% increase from 2021. As enterprises integrate AI and machine learning (ML) capabilities into cloud-native applications to provide more insight and value to customers, they require a more agile, flexible, and scalable platform for rapidly developing and deploying ML models and intelligent applications into production. Red Hat OpenShift is designed to provide this foundation, and new updates make it easier for organizations to integrate AI workloads into the industry’s leading enterprise Kubernetes platform.
While AI is changing the way businesses operate, implementing an AI infrastructure can be complicated, time-consuming, and resource-intensive. Red Hat OpenShift is now certified and supported with NVIDIA AI Enterprise 2.0, an end-to-end, cloud-native suite of AI and data analytics software that runs on mainstream NVIDIA-Certified Systems and has been optimized for Red Hat OpenShift. With NVIDIA AI Enterprise on Red Hat OpenShift, data scientists and developers can train models faster, build them into applications, and deploy them at scale.
Customers can now deploy Red Hat OpenShift on NVIDIA-Certified Systems running NVIDIA AI Enterprise software, as well as on the previously supported NVIDIA DGX A100, a universal high-performance compute system for AI workloads. This enables organizations to consolidate the MLOps lifecycle, spanning data engineering, analytics, training, software development, and inference, into a unified, more easily deployable AI infrastructure. Furthermore, the integrated DevOps and GitOps capabilities of Red Hat OpenShift help MLOps teams accelerate the continuous delivery of AI-powered applications.
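In practice, scheduling an AI workload onto GPU-accelerated nodes in a Kubernetes platform such as OpenShift is typically done by requesting the `nvidia.com/gpu` extended resource that NVIDIA's device plugin exposes. The sketch below, which is illustrative and not taken from the announcement (the image name and pod name are placeholders), builds such a pod manifest in Python and prints it as JSON:

```python
import json

# Illustrative sketch: a Kubernetes Pod manifest requesting one NVIDIA GPU
# via the standard "nvidia.com/gpu" extended resource. The container image
# and names below are hypothetical placeholders, not values from the article.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-training-job"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "trainer",
                # Hypothetical image reference, for illustration only.
                "image": "registry.example.com/ml/trainer:latest",
                # The GPU request: the scheduler places this pod only on a
                # node advertising at least one nvidia.com/gpu resource.
                "resources": {"limits": {"nvidia.com/gpu": 1}},
            }
        ],
    },
}

print(json.dumps(pod, indent=2))
```

A manifest like this could be applied with `oc apply -f`; the same resource-request convention underlies higher-level training jobs and serving deployments.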
Red Hat OpenShift 4.10 extends the platform’s support for a wide range of cloud-native workloads across the open hybrid cloud, allowing organizations to run AI/ML workloads in more environments. The latest version of OpenShift adds support for new public clouds and hardware architectures, giving organizations the freedom to choose where to run their applications while making development as simple and consistent as possible. Among the new features and capabilities designed to speed up AI/ML workloads are:
Managing diverse, modern workloads often requires additional oversight and governance. Red Hat OpenShift 4.10 includes three new compliance operators to help users enforce regulatory standards: the operators check a cluster for compliance and remediate identified issues. Among the compliance profiles are:
Red Hat OpenShift 4.10 also brings sandboxed containers to general availability. Sandboxed containers add an extra layer of isolation for workloads with stringent application-level security requirements. OpenShift has also been improved for disconnected, or air-gapped, environments, simplifying the installation of disconnected OpenShift clusters and making it easier to maintain OpenShift image mirrors and keep them up to date as if the cluster were connected.
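Maintaining an image mirror for a disconnected cluster generally means copying release images from a public registry into a registry reachable inside the air gap, for example with the `oc image mirror` command. The sketch below, an assumption-laden illustration rather than the documented procedure, builds such a command line in Python; the release tag and registry hostnames are placeholders:

```python
# Illustrative sketch: constructing an "oc image mirror" invocation that
# copies one image into a local registry for a disconnected cluster.
# The source tag and mirror registry below are hypothetical placeholders.
def mirror_command(source_image: str, mirror_registry: str) -> list:
    """Return the oc CLI argv mirroring source_image into mirror_registry,
    preserving the repository path after the source registry host."""
    repo_path = source_image.split("/", 1)[1]  # drop the source registry host
    return ["oc", "image", "mirror", source_image,
            f"{mirror_registry}/{repo_path}"]

cmd = mirror_command(
    "quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64",
    "mirror.registry.local:5000",
)
print(" ".join(cmd))
```

Running the printed command requires a working `oc` client and credentials for both registries; the sketch only assembles the invocation so the mapping from source to mirror path is visible.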