Artificial Intelligence projects are now easier with Red Hat OpenShift 4.10

News Desk

Red Hat Inc. has announced new Red Hat OpenShift certifications and capabilities aimed at accelerating the delivery of intelligent applications across the hybrid cloud. These enhancements, which include Red Hat OpenShift’s certification with NVIDIA AI Enterprise 2.0 and the general availability of Red Hat OpenShift 4.10, are intended to help organizations confidently deploy, manage, and scale artificial intelligence (AI) workloads.

According to Gartner®, global artificial intelligence (AI) software revenue is expected to reach $62.5 billion in 2022, a 21.3% increase from 2021. As enterprises integrate AI and machine learning capabilities into cloud-native applications to provide more insight and value to customers, they require a more agile, flexible, and scalable platform for rapidly developing and deploying ML models and intelligent applications into production. Red Hat OpenShift is designed to provide this foundation, and new updates make it easier for organizations to integrate AI workloads into the industry’s leading enterprise Kubernetes platform.

Streamlining AI innovation

While AI is changing the way businesses operate, implementing an AI infrastructure can be complicated, time-consuming, and resource-intensive. Red Hat OpenShift is now certified and supported with NVIDIA AI Enterprise 2.0, an end-to-end, cloud-native suite of AI and data analytics software that runs on mainstream NVIDIA-Certified Systems and has been optimized for Red Hat OpenShift. With NVIDIA AI Enterprise on Red Hat OpenShift, data scientists and developers can train models faster, build them into applications, and deploy them at scale.

Customers can now deploy Red Hat OpenShift on NVIDIA-Certified Systems running NVIDIA AI Enterprise software, as well as on the previously supported NVIDIA DGX A100, a universal high-performance compute system for AI workloads. This lets organizations consolidate the MLOps lifecycle, spanning data engineering, analytics, training, software development, and inference, into a unified, more easily deployable AI infrastructure. Furthermore, Red Hat OpenShift’s integrated DevOps and GitOps capabilities help MLOps teams accelerate the continuous delivery of AI-powered applications.
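
As a concrete illustration of what a GPU-accelerated workload looks like on the platform, the sketch below shows a pod that requests one NVIDIA GPU through the standard Kubernetes extended-resource mechanism. It is a minimal example under the assumption that the NVIDIA GPU Operator is installed and advertising the nvidia.com/gpu resource; the pod name, namespace, and container image are placeholders rather than anything prescribed by the announcement.

```yaml
# Minimal sketch: a pod requesting one NVIDIA GPU for an inference workload.
# Assumes the NVIDIA GPU Operator exposes the nvidia.com/gpu extended resource.
apiVersion: v1
kind: Pod
metadata:
  name: triton-inference-example      # hypothetical name
  namespace: ai-workloads             # hypothetical namespace
spec:
  containers:
  - name: inference
    image: nvcr.io/nvidia/tritonserver:22.02-py3   # illustrative NVIDIA inference server image
    resources:
      limits:
        nvidia.com/gpu: 1             # schedule onto a node with a free GPU
```

Applied with oc apply -f, the pod is scheduled onto a GPU-equipped worker; the same request pattern is used by training and inference steps in an MLOps pipeline on OpenShift.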

A comprehensive platform to run AI/ML workloads

Red Hat OpenShift 4.10 extends the platform’s support for a wide range of cloud-native workloads across the open hybrid cloud, allowing organizations to run AI/ML workloads in more environments. The latest version of OpenShift adds support for new public clouds and hardware architectures, giving organizations the freedom to choose where to run their applications while making development as simple and consistent as possible. Among the new features and capabilities designed to speed up AI/ML workloads are:

  • Installer-provisioned infrastructure (IPI) support for Azure Stack Hub, as well as for Alibaba Cloud and IBM Cloud, the latter two available as a technology preview. Users can now use the IPI process for fully automated, integrated, one-click installation of OpenShift 4.
  • Running Red Hat OpenShift on Arm® processors. Arm support is available in two ways: full-stack automation (IPI) for Amazon Web Services (AWS) and user-provisioned infrastructure (UPI) for bare metal on pre-existing infrastructure. This gives users the same experience they’ve come to expect from Red Hat OpenShift on AWS, backed by the latest Arm-based instances; a sketch of an Arm-targeted install configuration follows this list.
  • Red Hat OpenShift availability on NVIDIA LaunchPad. NVIDIA LaunchPad provides free access to curated labs for enterprise IT and AI professionals to experience NVIDIA-accelerated systems and software. With Red Hat OpenShift now available on LaunchPad, enterprises can get hands-on lab experience configuring, optimizing and orchestrating resources for AI and data science workloads using NVIDIA AI Enterprise with Red Hat.  
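
For context on the IPI flow referenced above, an installer-provisioned installation is driven by a single install-config.yaml that openshift-install consumes. The abridged sketch below shows what an Arm-targeted AWS configuration might look like; the domain, cluster name, region, credentials, and the use of the architecture field are illustrative assumptions, not values taken from the announcement.

```yaml
# Abridged install-config.yaml sketch for an IPI install of OpenShift on AWS
# using Arm-based instances. All values are placeholders.
apiVersion: v1
baseDomain: example.com
metadata:
  name: arm-cluster
controlPlane:
  name: master
  architecture: arm64        # request Arm control-plane instances
  replicas: 3
compute:
- name: worker
  architecture: arm64        # request Arm worker instances
  replicas: 3
platform:
  aws:
    region: us-east-1
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'
```

Running openshift-install create cluster against a completed version of this file drives the fully automated IPI installation described above.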

Better oversight and compliance features for diverse, modern workloads 

Managing diverse, modern workloads often necessitates additional oversight and governance. Red Hat OpenShift 4.10 includes three new compliance profiles to help users support their regulatory standard enforcement programs. With the Compliance Operator, users can check their clusters against these profiles and remediate identified issues; a sketch of how a profile is bound to a scan appears after the list. The new compliance profiles are:

  • The Payment Card Industry Data Security Standard (PCI DSS), a set of security standards designed to help companies that accept, process, store, or transmit credit card information do so with greater confidence.
  • North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP), a set of requirements to address the security needs associated with operating North America’s bulk electric system.
  • FedRAMP Moderate impact level, the standard for cloud computing security for controlled unclassified information across federal government agencies.
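
To make the compliance workflow concrete, the Compliance Operator evaluates a cluster by binding a profile to a set of scan settings. The sketch below is a minimal, hedged example; the ocp4-pci-dss profile and the default ScanSetting are the identifiers commonly shipped with the operator, but exact names can vary by version.

```yaml
# Minimal sketch: ask the Compliance Operator to scan the cluster against
# the PCI DSS profile using its default scan settings.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: pci-dss-scan                  # hypothetical name
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-pci-dss                  # assumed profile name for the PCI DSS checks
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
```

Once the scan completes, the resulting ComplianceCheckResult objects report which checks passed or failed, and suggested remediations can be reviewed and applied from there.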

Red Hat OpenShift 4.10 also brings sandboxed containers to general availability, adding an extra layer of isolation for workloads with stringent application-level security requirements. OpenShift has also been improved for disconnected or air-gapped environments: installing disconnected OpenShift clusters is simpler, and OpenShift image mirrors are easier to maintain and keep as up to date as those of a connected cluster.
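
For illustration, opting a workload into sandboxed containers is done per pod by selecting a Kubernetes RuntimeClass. The minimal sketch below assumes the OpenShift sandboxed containers Operator is installed and provides the kata runtime class; the pod name and image are placeholders.

```yaml
# Minimal sketch: run a workload inside a sandboxed (Kata) container by
# selecting the kata RuntimeClass provided by OpenShift sandboxed containers.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example             # hypothetical name
spec:
  runtimeClassName: kata              # isolate this pod in a lightweight VM
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
    command: ["sleep", "infinity"]
```

The rest of the pod definition is unchanged, which is what keeps the extra isolation layer largely transparent to applications.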

