
HPE discovers new vision for an AI-native architecture


Hewlett Packard Enterprise (HPE) is expanding its artificial intelligence (AI) efforts with a series of new initiatives announced today at the HPE Discover Barcelona 2023 event.

The updates include an expanded partnership with Nvidia spanning both hardware and software to optimize AI for enterprise workloads. The HPE Machine Learning Development Environment (MLDE), first released in 2022, is being enhanced with new features to help enterprises consume, customize and create AI models, and is being extended as a managed service running on AWS and Google Cloud. Additionally, HPE is boosting its own cloud efforts for AI, with new AI-optimized instances in HPE GreenLake and increased file storage performance in support of AI workloads.

The new updates are all designed to support HPE’s vision for a full-stack AI-native architecture optimized from hardware to software. Modern enterprise AI workloads are extremely computationally intensive, require data as a first-class input, and need massive scale for processing. 

“Our view at HPE is that AI requires a fundamentally different architecture, because the workload is fundamentally different from the classic transaction processing and web services workloads that have become so dominant in computing over the last couple of decades,” said Evan Sparks, VP/GM of AI Solutions and Supercomputing Cloud at HPE, in a briefing with press and analysts.


How to enable enterprises to lean into generative AI workflows

With the new updates to HPE MLDE, a key goal is to make it easier for enterprises to run AI workloads as part of business operations.

Sparks said that the new features are designed to enable customers who want to “lean into” generative AI workflows. This includes new features for tasks like prompt engineering, retrieval augmented generation (RAG) and fine-tuning pre-trained models. 
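To illustrate what a RAG workflow involves, here is a minimal sketch. This is not HPE MLDE's API; the word-overlap scoring is a hypothetical stand-in for the vector search a production system would use. The idea is simply to retrieve relevant enterprise data and prepend it to the prompt before generation.

```python
# Toy retrieval-augmented generation (RAG) sketch.
# Assumption: word-overlap scoring stands in for real vector-embedding search.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the highest overlap score."""
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user's question with retrieved context before generation."""
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical enterprise documents.
docs = [
    "HPE GreenLake offers AI-optimized cloud instances.",
    "The warranty policy covers hardware defects for three years.",
]
print(build_prompt("Which cloud instances are AI-optimized?", docs))
```

The augmented prompt would then be passed to a generative model, which answers grounded in the retrieved context rather than its training data alone.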

The goal, according to Sparks, is to help enterprises close the gap between what they read about from top AI research labs and what they can deliver to their own user bases.

As part of the updates, the HPE Ezmeral Unified Analytics software suite is also getting a boost, with improved model training and optimization through deep integration with HPE MLDE.

“The goal is again to help accelerate time to value for organizations who are looking to deploy AI in their enterprises,” Sparks said.

Enterprise AI usage relies on data

For AI to be genuinely useful, enterprises need to be able to use their own data to train models and derive insights. That requires data storage that can deliver the speed and scale AI demands.

That’s where updates to the HPE GreenLake for File Storage service come into play.

“We’re announcing significantly greater performance, density and throughput for our customers to address some of these very, very challenging workloads,” Patrick Osborne, SVP/GM, HPE Storage said. “We’re announcing a 1.8x capacity expansion and then starting in Q2, we’re significantly growing that for customers that want to scale up to the realm of 250 petabytes of data.”

Osborne said that demand for hundreds of petabytes of storage capacity comes from organizations producing the largest of large language models, and noted that HPE receives requests for that kind of massive scale daily.

“We are evolving HPE GreenLake for File Storage to address our customer’s most challenging needs in the AI workloads space,” Osborne said.

Nvidia brings new power to HPE enterprise users

HPE is also growing its partnership with Nvidia, with new integrated hardware solutions.

HPE and Nvidia announced a collaboration in June of this year, with a series of optimized HPE hardware systems for AI inference using Nvidia GPUs. That collaboration is now being expanded to tackle a broader set of AI workloads, including training.

Neil MacDonald, EVP and GM of HPE Compute, commented that most organizations will not build their own foundational models. Instead, they will take a foundational model developed elsewhere and deploy it to transform their business processes. One of the challenges in doing so, he noted, is building and deploying the infrastructure that enables fine-tuning, experimentation and deployment.

The expanded partnership includes new HPE systems purpose-built for AI. One of them is the HPE ProLiant Compute DL380a, which integrates Nvidia L40S GPUs, Nvidia BlueField-3 data processing units (DPUs) and Nvidia Spectrum-X Ethernet technology. HPE MLDE and Ezmeral software will also be optimized for Nvidia GPUs. Additionally, HPE will work with Nvidia AI Enterprise and the NeMo framework software to support enterprise user needs.

“Ultimately, we feel like enterprises are either going to become AI powered, or they’re going to become obsolete,” MacDonald said.
