Hewlett Packard Enterprise (HPE) has unveiled a turnkey solution, developed in collaboration with NVIDIA, to accelerate the training and tuning of artificial intelligence (AI) models. Tailored for large enterprises, research institutions, and government entities, the supercomputing solution includes a software suite for training and tuning AI models with proprietary datasets, along with liquid-cooled supercomputers, accelerated compute, networking, storage, and services, giving organizations a comprehensive platform for quickly unlocking value from AI.
Justin Hotard, Executive Vice President and General Manager of HPC, AI & Labs at HPE, emphasized the need for purpose-built solutions to drive AI innovation effectively. The collaboration with NVIDIA has produced a turnkey, AI-native solution that delivers the dedicated, supercomputing-class performance and scalability required for efficient AI model training.
Key components of the supercomputing solution include software tools for developing AI applications, customizing pre-built models, and modifying code. Built on HPE Cray supercomputing technology and powered by NVIDIA GH200 Grace Hopper Superchips, the solution delivers the scale and performance required for demanding AI workloads, including training large language models (LLMs) and deep learning recommendation models (DLRMs).
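To make the "customize a pre-built model with proprietary data" workflow concrete, the sketch below shows a generic fine-tuning loop in PyTorch. It is an illustration only and is not drawn from HPE's or NVIDIA's software suite; the frozen backbone, task head, and synthetic dataset are placeholders standing in for a pre-trained foundation model and an organization's own data.

```python
# Minimal, generic fine-tuning sketch (illustrative only; not HPE/NVIDIA software).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for a pre-trained backbone (in practice, a downloaded foundation model).
backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False          # freeze the pre-trained weights

head = nn.Linear(256, 4)             # small task-specific head trained from scratch
model = nn.Sequential(backbone, head)

# Placeholder "proprietary" dataset: 512 feature vectors with 4-class labels.
X = torch.randn(512, 128)
y = torch.randint(0, 4, (512,))
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):               # short tuning run on the new data
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```

In practice, a workload like this is distributed across many accelerators; the pattern of freezing pre-trained weights and training only a small, task-specific component is what keeps such tuning runs tractable.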
Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, highlighted the transformative impact of generative AI across industries and scientific endeavors. The turnkey AI training and simulation solution, powered by NVIDIA GH200 Grace Hopper Superchips, is positioned to deliver the performance necessary for breakthroughs in generative AI initiatives.
The purpose-built supercomputing solution for generative AI comprises AI/ML acceleration software; scale delivered by the HPE Cray EX2500 system with NVIDIA GH200 Grace Hopper Superchips; a high-performance network built on HPE Slingshot Interconnect; and turnkey simplicity through HPE Complete Care Services. The solution also reflects HPE’s commitment to energy-efficient computing, addressing the surge in AI workloads anticipated by 2028 while minimizing environmental impact.
With a focus on sustainability, HPE’s liquid-cooling capabilities deliver a 20% improvement in performance per kilowatt over air-cooled solutions and a 15% reduction in power consumption. The integration of direct liquid cooling (DLC) in the supercomputing solution further enhances efficiency, positioning HPE to help organizations adopt powerful compute technology while curbing energy consumption.