
HPE wins $1 billion deal with X for AI infrastructure

HPE’s liquid cooling technology for AI infrastructure may have helped it secure the 10-figure deal

HPE Senior Director Rupin Mohan this week confirmed on LinkedIn reports that HPE has won a $1 billion deal to supply Elon Musk’s X, formerly Twitter, with AI servers. Neither company has officially commented on the deal, which Bloomberg reported was closed last year, with Dell Technologies and Super Micro also vying for the business.

Bloomberg analyst Woo Jin Ho suggested HPE’s liquid cooling technology could have helped close the major deal. It is worth noting that X is already working with Dell Technologies and Super Micro on its Memphis-area AI data center.

Futuriom Senior Analyst Mary Jander wrote that the deal shines a light on HPE’s “fundamental strength in its server segment. The company’s acquisition of Juniper Networks could distract HPE from this primary focus.”

In early 2024, HPE confirmed it was acquiring Juniper Networks for $14 billion. At the time, the company called the acquisition an “important step” in its “portfolio shift towards higher-growth solutions and a higher-margin business.” It said the combined business would create a “new networking leader.”

To the margin point, Jander wrote that “a large order can eat into a company’s margin, and that’s not something HPE can easily afford … Since AI servers require expensive components from the likes of NVIDIA and AMD, their production can negatively affect gross margin.”

And back to liquid cooling: HPE has been touting this technology lately, with a November announcement that it was providing a “100% fanless direct liquid-cooled system” for the Department of Energy’s El Capitan supercomputer. The company also noted its direct liquid cooling technology is present in the world’s three fastest supercomputers, including El Capitan.

Last year HPE updated its AI infrastructure portfolio, including the HPE Cray Supercomputing EX solutions and two systems designed to support large language model (LLM) training, natural language processing (NLP) and multi-modal model training. The updated portfolio features the liquid cooling architecture “and spans every layer of HPE’s supercomputing solutions, including compute nodes, networking and storage, which are supplemented by a new software offering,” according to the company.

“Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation,” HPE’s Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions, said in a statement. “Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying and servicing fully-integrated systems.”

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.