
HPE and Nvidia combine compute and cloud to drive gen AI for Industry 4.0

Hewlett Packard Enterprise (HPE) and Nvidia have introduced a portfolio of joint AI solutions and integrations, together with a joint sales approach, to accelerate adoption of generative AI in Industry 4.0. They are presented under the banner Nvidia AI Computing by HPE, and include, as the headline act, bundled integration of the two firms’ respective AI-related technology offerings, in the form of Nvidia’s computing stack and HPE’s private cloud technology. The combined offer goes by the name HPE Private Cloud AI, available in the third quarter of 2024.

The new portfolio offers support for inference, fine-tuning, and retrieval-augmented generation (RAG) AI workloads that utilise proprietary data, the pair said, as well as for data privacy, security, and governance requirements. Various portfolio offerings and services will be available through a joint go-to-market strategy, covering sales and training, including with internal teams and external partners. System integrators Deloitte, HCLTech, Infosys, TCS, and Wipro have been named as partners from the start.

Antonio Neri, president and chief executive at HPE, said: “Generative AI holds immense potential for enterprise transformation, but the complexities of fragmented AI technology contain too many risks and barriers that hamper large-scale enterprise adoption and can jeopardise a company’s most valuable asset – its proprietary data. To unleash the immense potential of generative AI in the enterprise, HPE and Nvidia [have] co-developed a turnkey private cloud for AI that will enable enterprises to focus their resources on developing new AI use cases that can boost productivity and unlock new revenue streams.”

Jensen Huang, founder and chief executive at Nvidia, said: “Generative AI and accelerated computing are fueling a fundamental transformation as every industry races to join the industrial revolution. Never before have Nvidia and HPE integrated our technologies so deeply – combining the entire Nvidia AI computing stack along with HPE’s private cloud technology – to equip enterprise clients and AI professionals with the most advanced computing infrastructure and services to expand the frontier of AI.”

The Nvidia/HPE proposition uses the Nvidia AI Enterprise software platform, which “streamlines development and deployment of production-grade copilots and other gen AI applications”. It includes Nvidia’s new inference microservices product (NIM), which bundles inference engines, industry APIs, and LLM support into containers for simpler prototyping and deployment of AI models. HPE is offering its AI Essentials software into the bargain, which delivers a “ready-to-run set of curated AI and data foundation tools with a unified control plane”.
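For a sense of what the containerised NIM approach means in practice, the sketch below shows an application calling a locally deployed NIM container over its OpenAI-compatible REST interface. The host, port, model name, and prompt are illustrative assumptions for this example, not details from the announcement.

```python
# Minimal sketch: querying a NIM container's OpenAI-compatible chat endpoint.
# Host, port, model name, and prompt are assumptions for illustration only.
import requests

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # adjust to your deployment

payload = {
    "model": "meta/llama3-8b-instruct",  # example model; use whichever NIM you have running
    "messages": [
        {"role": "user", "content": "Summarise last quarter's maintenance logs."}
    ],
    "max_tokens": 256,
}

response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the container exposes a standard API, the same client code can be pointed at different models or environments as they are swapped in behind the private cloud.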

These tools help enterprises ensure their AI models are compliant, transparent, and explainable. The private AI cloud product integrates Nvidia’s Spectrum-X Ethernet networking platform, HPE’s GreenLake cloud and file storage, plus its ProLiant servers (with support for Nvidia’s L40S and H100 NVL Tensor Core GPUs, and its GH200 NVL2 ‘super-chip’ platform for high-performance computing applications). The new private AI cloud product is being offered as a self-service cloud via HPE’s GreenLake platform.

HPE is offering observability for the Nvidia computing stack, including inference microservices, AI software, GPUs, and AI clusters, via its OpsRamp business. The OpsRamp operations copilot uses the Nvidia platform to render insights on large data sets via a gen AI chatbot assistant. “IT administrators can gain insights to identify anomalies and monitor their AI infrastructure and workloads across hybrid, multi-cloud environments,” it stated.

HPE has also added support across its server portfolio for Nvidia’s latest GPUs and CPUs. The pair called the new Nvidia AI Computing by HPE proposition a “first-of-its-kind turnkey private-cloud AI solution”, and said it is the deepest integration of their computing and cloud portfolios.

ABOUT AUTHOR

James Blackman
James Blackman has been writing about the technology and telecoms sectors for over a decade. He has edited and contributed to a number of European news outlets and trade titles. He has also worked at telecoms company Huawei, leading media activity for its devices business in Western Europe. He is based in London.