
Verizon looks to leverage distributed infrastructure for AI inferencing

The new AI Connect suite from Verizon comes as McKinsey estimates 60% to 70% of AI workloads will be for inference by 2030 

It’s been a busy two weeks in the world of AI. OpenAI and partners announced a planned $500 billion investment in AI infrastructure, while Chinese AI startup DeepSeek claims to have spent just $6 million training an LLM with capabilities comparable to OpenAI’s latest. Ensuing tech sell-off aside, communications service providers (CSPs) have been looking to capitalize on the current AI investment supercycle. We’ve seen some early moves on GPU-as-a-service (GPUaaS), but Verizon, which has long touted the importance of edge computing, appears to have a fairly ambitious and comprehensive plan in AI Connect.

To restate the obvious, large centralized cloud computing facilities, like the ones Stargate is building, are used to train LLMs and, in some cases, to run inference (when ChatGPT answers your question, for instance). But there’s increasing interest in edge AI for inference, primarily because it reduces latency, can improve data privacy, and is far more cost efficient than moving data to and from those centralized clouds. You could define edge AI as your personal devices plus the infrastructure one hop away from them: a handset and the mobile network’s base stations, for example.

Verizon this week announced its AI Connect product suite, “designed to enable businesses to deploy…AI workloads at scale,” the company said in a press release. In its announcement, Verizon highlighted McKinsey estimates that by 2030, 60% to 70% of AI workloads will be “real-time inference…creating an urgent need for low-latency connectivity, compute and security at the edge beyond current demand.” Throughout its network, Verizon has the fiber, compute, space, power and cooling to support edge AI; Google Cloud and Meta are already using some of that capacity, the company said.

Verizon also gave a brief update on its AI-related “partnerships and collaborations,” including with NVIDIA, the aforementioned Google and Meta, and Vultr, a GPUaaS and cloud computing provider.

“We are seeing significant demand for reliable network infrastructure that can support existing AI workloads,” Verizon Business CEO Kyle Malady said in a statement. “As the technology evolves, our industry leadership, best-in-class edge-to-cloud connectivity, programmable network and assets will enable us to meet these needs and accelerate innovation.”

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.