Virgin Media O2 on the “open heart surgery” behind its push to build an end-to-end network automation platform
At Mobile World Congress 2025, executives from Pure Storage, Red Hat and Virgin Media O2 gathered to unpack key themes from the show. The conversation made clear that while operators' investments in network automation predate the rise of telco-grade artificial intelligence (AI) solutions, the hype around AI has raised the stakes—and, in turn, has shone a light on the cloud, data and infrastructure needed to use AI effectively for network automation.
Alex Boyd, who looks after telco core infrastructure and cloud for Virgin Media O2, reflected on his team’s experience with “engineering-heavy work” over the last three to five years. He likened it to “open heart surgery on our network…We replaced everything from one end to the other. That’s done, effectively. We’ve got those platforms in. Now we can actually leverage the capabilities that are in there.”
With that groundwork laid, Boyd said, VMO2 has created a dedicated team focused on network automation to guide the company from point solutions toward a full platform approach. “We’re just about to start from the ground up to build a fully-automated, end-to-end system,” he said. “If you want a lower cost to serve, you’ve got to change things.”
Andy Douglas, senior director for Pure Storage’s global telco vertical, framed AI as inseparable from future network strategy. “AI is key. It’s part of everyone’s strategy.” While compute and data storage are widely regarded as commodities, “you’re now placing that commodity really at the kind of center of what your future strategy is.”
In that context, Douglas emphasized the importance of a robust data management approach that spans network domains and supports increasingly complex and multi-vendor tech stacks. “This will be an area, I believe, of focus…for the industry,” he said.
Red Hat’s Mark Longwell, director of telco and edge alliances, highlighted how automation platforms like Ansible are enabling feedback loops that make the network capable of healing, rebooting, and optimizing itself—essential precursors to AI-native operations. “AI is a broad subject,” he said. “You have to slice it into something serviceable and manageable.”
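The feedback loop Longwell describes—detect a fault, remediate, verify—can be sketched in simplified Python. This is purely an illustration of the closed-loop pattern, not VMO2's or Red Hat's actual tooling; the metric names and thresholds are hypothetical placeholders.

```python
# Illustrative sketch of a closed-loop "self-healing" cycle:
# monitor service health, remediate anything unhealthy, report what was done.
# Metric names and the 5% error-rate threshold are hypothetical.

def is_healthy(metrics):
    """Flag a service as healthy if its error rate is at or below 5%."""
    return metrics["error_rate"] <= 0.05

def remediate(metrics):
    """Stand-in for a remediation action, e.g. a restart triggered by a playbook."""
    metrics["error_rate"] = 0.0  # assume the restart clears the fault
    metrics["restarts"] += 1
    return metrics

def healing_pass(services):
    """One pass of the feedback loop: heal unhealthy services, list actions taken."""
    actions = []
    for name, metrics in services.items():
        if not is_healthy(metrics):
            services[name] = remediate(metrics)
            actions.append(name)
    return actions

services = {
    "core-gw": {"error_rate": 0.12, "restarts": 0},
    "dns":     {"error_rate": 0.01, "restarts": 0},
}
healed = healing_pass(services)  # only "core-gw" exceeds the threshold
```

In a production setting, the remediation step would be delegated to an automation platform rather than inlined, and the verify step would re-poll real telemetry before closing the loop.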
“Data is the new oil,” Longwell said. But it has to be more than abundant—it must be actionable.
A central theme the panelists agreed on was that the best near-term applications for AI in network automation—load balancing, energy management, traffic steering—are likely to come from small language models (SLMs), not large language models (LLMs). More targeted and easier to operationalize, SLMs offer a pragmatic on-ramp to AI deployment.
“LLMs are something you buy,” said Boyd. “SLMs are something you can run internally and train on your expertise.” Longwell pointed to Red Hat’s InstructLab project as one way to make SLMs accessible and operational without requiring massive GPU farms.
Ultimately, AI success will depend less on the model itself and more on the foundation beneath it. With the right platforms in place, Boyd said, the time for action has arrived: “We see a lot of that network slicing, network-as-a-service, private 5G…are coming to the fore now. We’ve been talking about them probably for too long, and now we’re starting to put our money where our mouth is.”