Hewlett Packard Enterprise (HPE) has made a significant move into the AI cloud market with the launch of its HPE GreenLake for Large Language Models (LLMs). This expansion of the HPE GreenLake portfolio aims to provide enterprises of all sizes, from startups to Fortune 500 companies, with on-demand access to large language models in a multi-tenant supercomputing cloud service.
HPE GreenLake for LLMs allows enterprises to privately train, tune, and deploy large-scale AI using HPE's AI software and market-leading supercomputers. To deliver this service, HPE has partnered with Aleph Alpha, a German AI startup, to offer a field-proven, ready-to-use LLM that can power use cases requiring the processing and analysis of text and images.
This launch is just the beginning of HPE’s plans to release industry and domain-specific AI applications in the future. These applications will cover a range of sectors, including climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation.
Antonio Neri, the president and CEO of HPE, emphasized the transformative potential of AI and stated that HPE is democratizing AI by making it accessible to organizations of all sizes. HPE aims to enable innovation, market disruption, and breakthroughs through an on-demand cloud service that trains, tunes, and deploys models responsibly and at scale.
HPE is a global leader in supercomputing and has achieved unprecedented levels of performance and scale in AI, including breaking the exascale speed barrier with Frontier, the world’s fastest supercomputer.
Unlike general-purpose cloud offerings, HPE GreenLake for LLMs is built on an AI-native architecture specifically designed to handle large-scale AI training and simulation workloads at full computing capacity. It can support AI and high-performance computing (HPC) jobs on hundreds or thousands of CPUs or GPUs simultaneously, providing greater efficiency and speed in training AI models.
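To make the idea of one AI job spanning many processors concrete, here is a minimal, purely illustrative sketch of synchronous data-parallel training, the common pattern behind such workloads. All function names are hypothetical and this is not HPE's or Aleph Alpha's software; it is a toy model in plain Python: each worker computes a gradient on its own shard of the data, the workers' gradients are averaged (the "all-reduce" step), and every worker applies the same update.

```python
# Toy data-parallel training sketch. Hypothetical names; illustration only.

def shard(data, num_workers):
    """Split a dataset into one shard per worker (round-robin)."""
    return [data[i::num_workers] for i in range(num_workers)]

def local_gradient(w, data_shard):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    return sum(2 * x * (w * x - y) for x, y in data_shard) / len(data_shard)

def data_parallel_step(w, data, num_workers, lr=0.01):
    """One synchronous training step across all workers."""
    grads = [local_gradient(w, s) for s in shard(data, num_workers)]
    avg_grad = sum(grads) / len(grads)   # "all-reduce": average the gradients
    return w - lr * avg_grad             # every worker applies the same update

# Fit y = 3x from exact samples; splitting the work across 4 workers
# yields the same per-step update a single worker would compute.
data = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, data, num_workers=4)
print(round(w, 3))  # → 3.0
```

In real systems the shards live on separate GPUs and the gradient averaging happens over a high-speed interconnect, which is why a purpose-built architecture matters at the scale this service targets.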
HPE GreenLake for LLMs includes access to Luminous, a pre-trained large language model developed by Aleph Alpha. Luminous supports multiple languages and enables customers to leverage their own data, train and fine-tune customized models, and gain real-time insights based on proprietary knowledge.
Jonas Andrulis, the founder and CEO of Aleph Alpha, highlighted how efficiently and quickly Luminous was trained using HPE's supercomputers and AI software. Luminous serves as a digital assistant for businesses such as banks, hospitals, and law firms, speeding up decision-making and saving time and resources. Aleph Alpha is proud to be a launch partner for HPE GreenLake for Large Language Models and looks forward to extending its collaboration with HPE to bring Luminous to the cloud as a service.
HPE GreenLake for LLMs will be available on-demand, running on HPE Cray XD systems, the world's most powerful and sustainable supercomputers. This eliminates the need for customers to invest in and manage their own supercomputers, which can be costly and complex. The offering leverages the HPE Cray Programming Environment and HPE's AI/ML software suite to optimize HPC and AI applications, providing developers with a comprehensive set of tools for efficiently developing and deploying AI models.
HPE is committed to delivering sustainable computing solutions, and HPE GreenLake for LLMs will run in colocation facilities that prioritize renewable energy sources. The first region to support this service with nearly 100% renewable energy is North America, in collaboration with QScale.
With the launch of HPE GreenLake for Large Language Models, HPE aims to empower enterprises to integrate AI applications into their workflows, unlock business value, and drive research initiatives. By providing accessible and sustainable AI cloud services, HPE is poised to contribute to the ongoing AI revolution and support organizations in their pursuit of innovation and market success.