In sum – what you need to know
AI for internal telco efficiencies – AI is being used so successfully in customer care and network maintenance that service agents are resolving most problems without raising tickets, and network engineers are saving up to 70 percent of their time.
New roles in the AI ecosystem – telcos are looking outwards to rent their networks and properties for AI transport and workloads; they also propose a new role as AI ‘middle-men’ brokers between AI service providers and SMEs.
AI issues with diversity and scale – with the AI buzz, telcos are looking to scale AI to more numerous and varied use cases; but problems with data organisation, oversight, and software interoperability persist.
Last chance to sign up for the RCR Wireless webinar tomorrow (Tuesday April 8) on AI in telecoms – about ‘supporting AI, using AI’, and getting to pervasive intelligence in telecoms networks. You can do that here. In the meantime and ahead of the session, here are three broad talking points about AI in telecoms – from interviews separately with ABI Research, Red Hat, Spirent Communications, TUPL, and Verizon Business, who will all be on hand during the session tomorrow to extend discussion of the same, plus more. Should be good; don’t miss it. Keep your eyes peeled as well for the attendant editorial report, out late April.
1 | HOW IT’S GOING – OPERATIONAL AUTOMATION
If you want to know where the telecoms industry is with the deployment and usage of different AI disciplines, then you could do worse (and hardly better, actually) than to ask Spain-based telecoms-AI provider TUPL, which has been working with T-Mobile in the US since about 2017 to bring some form of intuitive AI automation to customer care (with its AI Care product) and network maintenance (Network Advisor). “We have never looked back,” says Petri Hautakangas, chief executive at the firm. More recently, it has sold an AI service to Deutsche Telekom in Germany to optimise network energy usage across its international op-co footprint.
He says: “We do anything related to the customer experience – which is really about technical care, rather than standard BSS-style vendor offers around price plans and products. And we have grown the functionality of the product and our stake in the portfolio over the last seven or eight years. All our R&D is to make the system low-code/no-code so we can expand use cases and build use cases, based on a similar flow. Which is how we developed [our network product], which automates the repetitive tasks of engineers handling the radio, core, and transport networks – and our energy savings product, as well.”
Its flagship product, AI Care, initially ran on six data feeds (“low-hanging fruit categories”) within the network, he says; it now uses more than 40. “Nothing is static,” he says. “An ML system doesn’t just run by itself, without oversight and new inputs. Engineers or managers, even VPs, always have new data streams and new use cases – to enhance models, improve granularity, deliver new root causes and decisions. And we always say, ‘yep; makes sense – let’s do it’. So it gets better every month, every quarter, every year.” At T-Mobile US, TUPL is enabling ‘ticket avoidance’ – so customer queries do not need to be passed to specialist departments – in 95 percent of cases.
Which is good for efficiency – and also for placating disgruntled customers, improving promoter scores, and reducing churn. Its spin-off product, Network Advisor, reduces the time it takes for engineers to resolve issues by 30-40 percent, claims Hautakangas – “almost from the get-go, based on the first low-hanging fruit”. Time savings rapidly jump to 50-70 percent, he reckons, as the models are tuned to the network environment. These are major operational gains, then – hard to quantify definitively, as the tools tend to be used to augment reduced engineering teams. Its energy optimisation tool, which orchestrates proprietary systems from the network vendors, is newer.
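As a rough illustration of the ticket-avoidance pattern – and this is a hedged sketch only, not TUPL’s actual implementation; the feed names, thresholds, and root-cause labels are invented for the example – the decision at the heart of it looks something like this: read the available data feeds, identify a probable root cause, and either resolve automatically or escalate to a specialist team.

```python
# Illustrative sketch of a ticket-avoidance flow (not TUPL's implementation).
# Feed names, thresholds, and root-cause labels are assumptions for the example.
from dataclasses import dataclass

@dataclass
class CustomerQuery:
    subscriber_id: str
    feeds: dict  # e.g. {"rsrp_dbm": -112, "cell_congestion": 0.85, "device_fault": False}

def diagnose(query: CustomerQuery) -> tuple[str, bool]:
    """Return (probable root cause, resolved without raising a ticket)."""
    f = query.feeds
    if f.get("device_fault"):
        return "device configuration", True       # push corrected settings; no ticket
    if f.get("rsrp_dbm", 0) < -110:
        return "poor coverage at location", True  # inform the customer; no ticket
    if f.get("cell_congestion", 0) > 0.8:
        return "cell congestion", True            # known network issue; no ticket
    return "unknown", False                       # escalate: raise a ticket

cause, avoided = diagnose(CustomerQuery("sub-001", {"rsrp_dbm": -114, "cell_congestion": 0.4}))
print(cause, "| ticket avoided:", avoided)
```

The real system clearly uses trained models over dozens of feeds rather than hand-written rules; the point is the final decision – resolve automatically, or raise a ticket.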
2 | WHERE IT’S GOING – ECOSYSTEM MONETIZATION
And so AI, in some form at least, is working hard and working well in telecoms. If you want a view from the sharp-end, directly from an operator, then Verizon Business has a good line on it – as talked-up and written-down in a separate post last week, about its service automation, product personalisation, and new adventures in compute and network rentals for the broader AI market. So too does test company Spirent Communications, which responds to a question about whether the RCR webinar title (‘supporting AI, using AI’) is in fact the wrong way around – on the grounds AI is for internal usage first – by saying the industry is looking outwards, at last.
“I get you totally, and I would have agreed until Barcelona,” says Stephen Douglas, head of market strategy at the firm, reflecting on how the narrative around “AI-networks-for-AI” unfolded at MWC last month. “There are a couple of dimensions,” he says. “For some, it is about offering a sovereign or private AI capability using MEC infrastructure – to optimise traffic for AI workloads via a regional breakout point. On top, they are looking to partner with big hyperscaler or specialist service partners to offer GPU as-a-service as well. But a number of operators are also looking to be a broker or federator of AI for small and mid-sized enterprises – which don’t often have resources to understand AI.”
These strategies will be discussed in the upcoming report; the logic, reasons Douglas, is that smaller firms (SMEs), often customers already, want a familiar service provider to somehow abstract complexity from a modish new tech provision that is designed to bring automation and intelligence, and therefore simplicity. Douglas comments: “They are now saying: we will connect you and we will also host the right language model – whether that means the big foundation models or smaller domain models, developed with partners. And they will bundle it as a service with connectivity. Which is quite a unique role, and interesting because big enterprises are not the target.”
It takes the Verizon model, discussed here, even further – by making the same use of distinctive and distributed network assets, and offering a go-between service for a needy and enormous customer segment. “The beauty is that it utilises network capabilities to cope with AI traffic behaviours, and it utilises network assets to host AI workloads, and it also sets operators as the AI customer touchpoint. It is a valid play, and a really important one – because if they don’t make a move now, they will discover in five years they are just a dumb pipe again, with somebody else just running over the top,” he says.
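For flavour, here is a hedged sketch of what that broker role might look like in practice – deciding whether an SME request goes to a partner-hosted foundation model or to a smaller domain model on the operator’s own MEC sites, bundled with connectivity. The model names, routing rule, and hosting split are assumptions for illustration, not any operator’s actual design.

```python
# Illustrative sketch of a telco 'AI broker' routing SME requests (assumptions throughout).
from dataclasses import dataclass

@dataclass
class SMERequest:
    tenant: str
    prompt: str
    data_sovereign: bool  # must the data stay in-region, on the operator's MEC?
    domain: str           # e.g. "retail", "logistics", "generic"

def route(req: SMERequest) -> str:
    """Pick a hosting target for the request."""
    if req.data_sovereign or req.domain != "generic":
        # Smaller domain model, hosted at a regional MEC breakout point.
        return f"mec-regional/{req.domain}-model"
    # Otherwise hand off to a partner-hosted foundation model.
    return "partner-cloud/foundation-model"

for req in (
    SMERequest("bakery-gmbh", "Forecast weekend demand", data_sovereign=True, domain="retail"),
    SMERequest("courier-ltd", "Draft a customer email", data_sovereign=False, domain="generic"),
):
    print(req.tenant, "->", route(req))
```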
3 | WHAT TO LOOK FOR – DIVERSIFICATION & INTEROPERABILITY
Douglas notes how the telecoms industry’s new successes with AI have come through familiarity and focus. “It feels like business as usual now. Operators have gone through that first iteration, and narrowed down 10,000 things they could do into a blueprint of four or five they will do, which have a real tangible benefit.” To a degree, this explains the momentum TUPL is finding with T-Mobile in the US, and various others, unnamed. Douglas even uses the same terminology. “It’s the low hanging fruit,” he says; “things they know they can get immediate value from.” But for ABI Research, also on the webinar, progress will come as use cases multiply, and the industry’s reach extends.
This is the case, especially, as AI is given voice with generative AI, increasingly, and agency with agentic AI, provisionally. “There are so many use cases,” says Nelson Englert-Yang, industry analyst for strategic technologies at the analyst firm; he has a list of about a hundred of them, he says. But they are of-a-type – plucked by carriers as “low-hanging fruit” for OSS/BSS applications, he says – and need to be multiplied across functions. “I want to see greater diversification, and also willingness to try agentic AI as it develops. We haven’t seen much commercial activity around that yet, even though it has been developed,” he says.
The pace of development is clearly outpacing the rate of adoption. It is, effectively, brand-new technology – practically every quarter. “The challenge, as always, is with data – about where it comes from, how it’s cleaned and organised. Because telco data is messy – and useless if it’s messy.” There are teething issues anyway, then, which are magnified as AI is put to work on critical infrastructure. The core network, for example, is “mostly hands-off”, he says. As it stands, AI is “mostly concentrated” on OSS/BSS functions, plus “some higher level apps” and, interestingly, around infrastructure design and optimisation. “It is slow and gradual – because of the critical nature of telecoms.”
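To make the point about messy data concrete, here is a small hedged sketch of the kind of clean-up telco telemetry typically needs before a model can use it – de-duplication, timestamp normalisation, and dropping unusable rows. The field names, formats, and rules are assumptions for the example.

```python
# Illustrative sketch: cleaning a messy telco telemetry feed (field names and rules assumed).
from datetime import datetime, timezone

raw_records = [
    {"cell_id": "C1", "ts": "2025-04-07T10:00:00Z", "prb_util": "0.62"},
    {"cell_id": "C1", "ts": "2025-04-07T10:00:00Z", "prb_util": "0.62"},  # exact duplicate
    {"cell_id": "c2", "ts": "07/04/2025 10:05",     "prb_util": "71%"},   # mixed formats
    {"cell_id": None, "ts": "2025-04-07T10:10:00Z", "prb_util": ""},      # unusable row
]

def parse_ts(value: str) -> datetime:
    """Normalise mixed timestamp formats to timezone-aware UTC."""
    for fmt in ("%Y-%m-%dT%H:%M:%S%z", "%d/%m/%Y %H:%M"):
        try:
            dt = datetime.strptime(value.replace("Z", "+0000"), fmt)
            return dt.astimezone(timezone.utc) if dt.tzinfo else dt.replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"unparseable timestamp: {value}")

def parse_util(value: str) -> float:
    """Normalise utilisation given as a fraction ('0.62') or a percentage ('71%')."""
    util = float(value.strip().rstrip("%"))
    return util / 100 if util > 1 else util

cleaned, seen = [], set()
for rec in raw_records:
    if not rec["cell_id"] or not rec["prb_util"]:
        continue  # drop rows missing key fields
    row = (rec["cell_id"].upper(), parse_ts(rec["ts"]), parse_util(rec["prb_util"]))
    if row not in seen:  # de-duplicate
        seen.add(row)
        cleaned.append(row)

print(cleaned)
```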
If anyone knows about the complexity of delivering secure and reliable AI in critical networks, it is Red Hat, surely – on the grounds the whole principle of open-source software is to share and collaborate in the name of simplicity, scalability, and innovation. “The industry wants to scale AI, suddenly, and so the problems start,” says Fatih Nar, chief architect for application platform solutions at the firm. “That is where the conversation usually gets interesting – because things are easier as point solutions, and harder when they all have to talk together.” Red Hat has just published an excellent article on Medium, which gets into everything covered in this article; the webinar tomorrow will go further, as well.
Watch the RCR webinar on AI in telecoms – supporting AI, using AI, and getting to pervasive intelligence in telecoms – live and on-demand here.