The spring season of tech conferences ended with a bang at HPE Discover in Las Vegas last week. With a keen focus on AI, cloud and liquid cooling (yes, liquid cooling is cool—pardon the pun), the company wowed customers, partners and analysts with the first-ever keynote delivered in the Las Vegas Sphere.
After watching virtually every technology company in the IT solutions space make claims around differentiated AI offerings, I was curious to see how HPE would respond. This piece includes my analysis of what the company revealed and how it stacks up competitively.
HPE Set The Keynote Bar High
Before getting into the technical details, it’s worth spending a minute on CEO Antonio Neri’s keynote at the Las Vegas Sphere. If I had to use one word to describe the presentation, it would be “Wow.” It’s easy to get a little keynote-fatigued after a few months of nonstop conferences. That’s not a slight to anybody or any company; it’s just a reality. But Neri’s presentation, combined with The Sphere’s jaw-dropping technical capabilities, kept me and everybody else engaged from beginning to end.
Working in a venue like The Sphere, it would be easy for a creative team to over-index on the wow factor. The Sphere immerses attendees in enormous wraparound LED screens, delivering what amounts to a sugar high; we could marvel at the experience and walk away without retaining any of the substance of what was discussed. By contrast, the HPE creative team did a fantastic job of using the venue to help tell the story instead of being the story. I predict that from now on, every vendor hosting an event at The Venetian Conference Center will find a way to incorporate The Sphere, budget permitting.
During his keynote, Neri focused on how AI is changing the world, and how he says HPE is bringing AI to the enterprise in an easy, consumable way. Further, he said, offering a turnkey AI experience is the result of the work HPE has done with Nvidia and ecosystem partners, whether it’s implemented in the cloud, in a colocation facility or in the enterprise datacenter.
AI Everywhere With Nvidia
Before we dig into more specifics on HPE’s announcements, we need to talk about the partner company that pervaded this show. Nvidia has become synonymous with AI. This is demonstrated in the company’s sky-high valuation, the demand for its GPUs and, during conference season, CEO Jensen Huang or one of his delegates sharing the main stage with seemingly every server, storage and software company on the planet.
While Nvidia’s silicon is compelling, its software portfolio has ascended to what seems like an unassailable leadership position in AI. The CUDA platform creates a moat that is both deep and wide for the company, and Nvidia is leveraging this advantage in the market quite effectively, both through its own libraries and by integrating deeply into popular frameworks such as PyTorch and TensorFlow.
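To make that moat concrete, consider how invisible CUDA is to a developer working in PyTorch. The following is a minimal sketch, assuming only a machine with PyTorch installed; it falls back to the CPU when no Nvidia GPU is present.

```python
# Minimal sketch: CUDA acceleration is transparent to PyTorch code.
import torch

# Use the Nvidia GPU if CUDA is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # dispatched to Nvidia's CUDA libraries on a GPU, to CPU kernels otherwise

print(f"ran on: {c.device}")
```

The developer never writes a line of CUDA here; the framework integration does the work, which is exactly why moving off Nvidia hardware is harder than a spec-sheet comparison suggests.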
Further, the company has built out its enterprise enablement through a suite of software tools, such as its NeMo and NIM offerings. While NeMo supports building and customizing models end to end for enterprise AI, NIM is specifically targeted at enabling GenAI applications by packaging models as inference microservices. Additionally, NIM offers inference microservices tailored for specific domains such as healthcare.
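For a sense of what an inference microservice looks like in practice, NIM containers for large language models expose an OpenAI-compatible REST API. The sketch below is illustrative only; the endpoint URL and model identifier are assumptions that depend on which NIM container is deployed and where.

```python
# Hedged sketch: calling a NIM LLM microservice via its OpenAI-compatible API.
import requests

# Hypothetical local deployment; the URL and model name vary by installation.
NIM_URL = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model identifier
    "messages": [{"role": "user", "content": "Summarize these support tickets."}],
    "max_tokens": 256,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```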
I write this not to promote Nvidia but to demonstrate its relevance in the AI market and explain why virtually every technology vendor not building silicon is lined up to partner with it.
Nvidia AI Computing By HPE
This brings me to what HPE unveiled at Discover during its analyst summit and Neri’s keynote. It is not an exaggeration to say the company has wedded itself to Nvidia for AI: it announced platforms, software, services and co-marketing activities, all aimed at driving enterprise adoption.
On the platform front, HPE announced a new class of its ProLiant server—the DL384—powered by Nvidia Grace Hopper processors and NVLink. (NVLink is an Nvidia high-speed interconnect that ties together its GPUs and CPUs for better performance.) Additionally, HPE announced a version of its most popular server—the ProLiant DL380a—with the Nvidia H200 GPU and NVLink.
Interestingly, the company did not announce anything new for its AMD-based platform, the DL385. I’m not entirely sure what to read into the absence of an AMD offering. While both Intel and AMD offer competitors to the H200 (the Gaudi and MI325X AI accelerators, respectively), I suspect that Nvidia views AMD as the more significant present threat. Or perhaps AMD didn’t want to provide non-recurring engineering budget for a platform that enables its competitor. Either way, AMD is currently left out of a solution stack that will likely be a hot seller for HPE.
While HPE’s announcements of the DL384 and DL380a were significant, what caught my attention was the company’s intent to integrate the entire Nvidia AI computing stack through its GreenLake cloud platform, because this packages Nvidia-based AI as a turnkey offering for enterprise organizations. It will be consumed as a service, not simply in financial terms but in its delivery as what is effectively a cloud service. And if a customer needs help, select systems integrators that partner with HPE will deliver curated support services.
What I found even more compelling was how the two companies have worked to deliver deeper levels of management through their partnership. Thanks to a co-engineering effort, HPE’s OpsRamp observability platform can dig deep into the entire HPE-Nvidia solution stack. This allows IT organizations to monitor their environments and uncover failures and irregularities, from silicon to platform to software and networking, via the GreenLake interface.
It’s smart for HPE to differentiate through its integration, and to make enterprise AI as turnkey as possible. Perhaps what holds the biggest promise, however, is the two companies’ joint go-to-market efforts. If executed well, this could be a real win for HPE, driving the entire sales motion from awareness and consideration through to deployment and support.
The key, however, is execution. Many tech partners aim to achieve aligned GTM, only to see such efforts either defunded or misfunded. In other cases, we see such efforts stumble in the field as competing priorities dictate different GTM efforts. It is important that these joint activities share a single goal and execution plan supported and funded by resources that won’t be reassigned based on budget constraints or other organizational priorities. Not that either of these companies has a tendency to do so, but given the level of economic uncertainty here in mid-2024, these joint activities would make easy targets for scaling back.
HPE Is A Virtualization Player
As part of its private cloud offerings, HPE also announced its own virtualization stack, built on the open-source KVM hypervisor. This stack includes its own management solution and is designed to run alongside containerized and bare-metal environments. It is intended to deliver enterprise customers a fully integrated HPE private cloud experience that spans from the edge to the datacenter to the cloud.
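To ground what “built on the open-source KVM hypervisor” means: KVM guests are typically driven through the open-source libvirt API, and it’s reasonable to assume a management layer such as HPE’s sits on top of something similar. Below is a minimal sketch, assuming the libvirt-python package and a local KVM host; nothing in it is HPE-specific.

```python
# Hedged sketch: enumerating KVM guests through libvirt, the common
# open-source management API that higher-level control planes build on.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local KVM/QEMU hypervisor
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{dom.name()}: {status}")
finally:
    conn.close()
```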
I understand what the company is doing. Delivering a cloud stack composed entirely of its own technology allows HPE to optimize for performance, security, resilience and cost. It makes sense. Additionally, the company is delivering this cloud stack at a time when some enterprise organizations are rethinking their virtualization strategies.
However, I am uncertain how partners such as Red Hat and Nutanix will view this move, and whether enterprise IT organizations will put HPE on equal footing with these partners, which now find themselves competing with HPE. In addition to being conservative in adopting new technology, IT organizations operate most efficiently through consistency. Because of this, I believe HPE will be challenged in introducing a new virtualization environment with its own control plane into an established market, especially in organizations where Red Hat OpenShift or Nutanix Cloud Platform already has an established footprint.
With that said, I do believe that current GreenLake customers, who have by definition bought into the abstraction of the technology stack as a solution, will likely be very interested in this offering.
Liquid Cooling Everywhere
Finally, I was quite surprised by how often liquid cooling was brought up as a key differentiator for HPE, mainly because I had not previously heard the company talk so publicly about this technology (which I’ve been following for years).
In its last earnings call, HPE discussed the importance of its more than 300 liquid cooling patents. At Discover, both HPE and Nvidia spoke to the importance of liquid cooling and HPE’s portfolio of IP in that area. HPE certainly believes it has a distinct advantage over Lenovo and Supermicro when it comes to cooling technology, which has become a critical element in enabling AI infrastructure.
I have not yet performed a deep dive on HPE’s liquid cooling technology, but one is forthcoming. What I have done is look at the market as a whole, where an entire ecosystem is vying for relevance (read my coverage here). Some of the more innovative technology providers, such as Jetcool, deliver genuinely unique solutions. The question is whether HPE and others with proprietary offerings will enable this ecosystem of providers. (Note: Dell relies exclusively on the liquid-cooling partner ecosystem.)
A Bold Bet On AI Clouds For Enterprises
HPE delivered a bold message at Discover 2024. It laid out an aggressive vision and plan for delivering the AI cloud to enterprises. Additionally, HPE focused on delivering a strategy and solution that differentiates it from its main competitors through deep software integration and coordinated GTM.
This is not to say that Dell, Lenovo, Supermicro and Cisco are sitting still—in fact, anything but that. AI has ushered in the most dynamic era of computing I’ve seen, and I can’t wait to see what comes next.