
Capability vs. capacity: The duality of AI in telecom networks

A survey from Ciena published earlier this year revealed the dual nature of the potential impact of artificial intelligence on telecom networks.

On one hand, more than half of the telecom and IT engineers surveyed said they expect the use of AI to improve network operational efficiency by 40% or more—let’s call that the “AI for the network” aspect. But when asked about the needs of the other aspect, “the network for AI”, nearly all of the respondents—99%—said that they believed fiber network upgrades will be required in order to support more AI traffic.

“The survey highlights the optimistic long-term outlook of CSPs regarding AI’s ability to enhance the network as well as the need for strategic planning and investments in infrastructure and expertise to fully realize the benefits,” said Ciena CTO Jürgen Hatheier. In a recent interview with RCR Wireless News, two other experts from Ciena discussed the dueling aspects of AI in telecom and the network specifically.

AI for the network

“AI, and things like ML and data analytics, have been embedded in assurance for many, many years, for both everyday requirements like faster troubleshooting and fault isolation, and more recently, newer use cases around proactive fault identification, isolation and prevention,” reflected Kevin Wade, senior director of product marketing at Ciena’s Blue Planet division, which focuses on network automation and orchestration. He sees AI’s overall role within network operations as an extension of automation: another way to leverage data to optimize operational processes. As for where operators are interested in AI use cases, the majority at this moment are focused on assurance and on optimizing network planning. Operators have a lot of data, he pointed out, and the more links they build across that distributed data, the more insights they get and the better they can plan the evolution of their services, their networks and their business. This is a similar, but perhaps more sophisticated, evolution of traditional telecom applications of AI and ML.

But over the last couple of years, Wade noted, service providers have also become highly interested in generative AI in particular, and its implications for their businesses and their networks. “That’s a fundamentally different approach,” Wade said. “Yes, it’s all AI, but it’s not necessarily an evolution of ML.”

Gen AI has been fundamentally built around natural language and large language models (LLMs)—not as an extension of AI/ML that lives in the world of data largely generated by network equipment and software. So the gen AI use cases that Wade sees the industry working towards that directly impact the network itself are essentially an extension of, or perhaps an intersection of, coding, orchestration and intent-based networking. “The idea might be, let’s use natural language for an end-customer to express their intent of what they want for a service, a connection from this point to this point, for this amount of time, this amount of bandwidth, with this type of security privilege attached—if you can just say that or write that down in simple language, and it automagically happens,” Wade explained.
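The intent-based flow Wade describes amounts to turning a free-form request into a structured service order that an orchestrator can act on. A minimal sketch of that translation step, using naive keyword extraction in place of an LLM (the `ServiceIntent` fields and parsing rules here are hypothetical, not Blue Planet’s actual schema):

```python
import re
from dataclasses import dataclass

@dataclass
class ServiceIntent:
    src: str
    dst: str
    bandwidth_gbps: float
    duration_hours: int
    encrypted: bool

def parse_intent(text: str) -> ServiceIntent:
    """Naive regex extraction standing in for an LLM's structured-output
    step; a real system would validate the result against network
    inventory and policy before provisioning anything."""
    endpoints = re.search(r"from (\S+) to (\S+)", text)
    bandwidth = re.search(r"(\d+(?:\.\d+)?)\s*Gbps", text, re.IGNORECASE)
    duration = re.search(r"(\d+)\s*hours?", text, re.IGNORECASE)
    return ServiceIntent(
        src=endpoints.group(1),
        dst=endpoints.group(2),
        bandwidth_gbps=float(bandwidth.group(1)),
        duration_hours=int(duration.group(1)),
        encrypted="encrypt" in text.lower(),
    )

intent = parse_intent(
    "Set up a link from London to Paris at 10 Gbps for 48 hours, encrypted"
)
print(intent)
```

The hard part in practice is not the parsing but everything after it: checking feasibility, reserving capacity and activating the service, which is where the missing standards Wade mentions come in.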

That’s the goal of AI in telecom that service providers are looking toward, from Blue Planet’s view. “But it’s really still very much in the formative stage,” Wade adds. “It will take a couple of years, probably, to get there, because there are no standards for gluing this all together—those are also just being formulated.” The Ultra Accelerator Link (UALink) group, which is focused on standardizing interconnect interfaces for AI accelerators within data centers, was just established earlier this year—and is also seen as an effort to establish an alternative to Nvidia’s NVLink.

That desire to avoid proprietary technology is also carrying over to the models for gen AI, Wade said, and it may lead to some network operators treading lightly on gen AI until more interoperable or standardized frameworks for applying AI emerge. “Some service providers, right now, are always concerned with lock-in … around software vendors in particular,” he said. “They don’t necessarily want to be locked into one LLM either. So … there’ll be a mix of some waiting until some standards, guardrails, are in place for interoperability and so on. But others have initiated, and some of the larger European operators in particular initiated their own, telco-specific LLM activities.” (SK Telecom, Deutsche Telekom, e&, Singtel and SoftBank, after making a commitment at MWC Barcelona 2024, announced a joint venture in June of this year to jointly develop and launch a multi-lingual LLM specifically for telcos, with an initial focus that includes the use of gen AI in digital assistants for customer service.)

The network for AI

“AI infrastructure challenges lie in cost-effectively scaling storage, compute, and network infrastructure, while also addressing massive increases in energy consumption and long-term sustainability,” wrote Brian Lavallée, senior director of market and competitive intelligence at Ciena, in a recent blog post. He pointed out that “Traditional cloud infrastructure success is driven by being cost-effective, flexible, and scalable, which are also essential attributes for AI infrastructure. However, a new and more extensive range of network performance requirements are needed for AI.”

That includes both within the data center and outside it. Lavallée cited numbers from Omdia on expected traffic growth for AI, with monthly “AI-enriched” network traffic expected to see a 120% compound annual growth rate through 2030. He also touched upon the needs of generative AI to move massive amounts of data within a data center, over links operating at 400G, 800G and 1.6 Tb/s or more. Ciena has had two recent trials of 1.6 Tb/s capabilities, one with Telstra and Ericsson and another with global fiber backbone provider Arelion.
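It is worth pausing on what a 120% compound annual growth rate actually implies: traffic multiplies by 2.2 every year, so compounding dominates quickly. A quick sanity check (assuming a 2024 baseline, which Omdia's figure may or may not use):

```python
def projected_multiple(cagr: float, years: int) -> float:
    """Growth multiple implied by a compound annual growth rate,
    e.g. cagr=1.20 means 120% growth per year (a 2.2x annual multiple)."""
    return (1 + cagr) ** years

# 120% CAGR compounded over the 6 years from a 2024 baseline to 2030
multiple = projected_multiple(1.20, 6)
print(f"{multiple:.0f}x")  # roughly 113x the baseline monthly traffic
```

Even shifting the baseline year by one changes the end figure by more than 2x, which is why such projections carry wide error bars; the direction, though, is unambiguous.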

“We know inside the data center, traffic is exploding already today,” Lavallée said. “It’s going to spill out very quickly into campus networks.” He expects to see data centers begin to be built in that loose campus style, with multiple buildings within 10 kilometers of one another—leading to the virtualization of data centers. “You’re going to have multiple data centers acting as one larger, virtual data center. … for a whole bunch of reasons,” Lavallée added—primarily, power. “There’s not enough electricity in existing data centers to park all the AI hardware, which is 10 times more capacity per rack,” he continued. “So you may have 10 times less space consumed, but you’ve used 100% of the electricity coming into that building.”
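Lavallée’s rack arithmetic is easy to make concrete: a building’s power feed, not its floor space, caps how many AI racks it can host. A sketch with purely hypothetical numbers (the 10 kW and 100 kW per-rack figures are illustrative, not from Ciena):

```python
def racks_supported(building_power_kw: float, power_per_rack_kw: float) -> int:
    """How many racks a facility's power feed can supply."""
    return int(building_power_kw // power_per_rack_kw)

BUILDING_KW = 10_000  # hypothetical 10 MW facility

conventional_racks = racks_supported(BUILDING_KW, 10)   # ~10 kW/rack
ai_racks = racks_supported(BUILDING_KW, 100)            # ~100 kW/rack

print(conventional_racks, ai_racks)
```

With the same feed, the AI build-out supports a tenth of the racks: the floor sits mostly empty while the power budget is exhausted, which is exactly the pressure pushing operators toward multi-building virtual data centers.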

Lavallée points out in his blog post that gen AI models are “notoriously power-hungry in their LLM training phase” and consume “immense amounts of electricity.” Power usage in data-center hot spots such as Ashburn, in northern Virginia, is expected to double over the next 15 years, driven mostly by data center growth. The GPU usage intensity wanes once a gen AI model is sufficiently trained and “pruned”, however, which opens an opportunity for algorithms to be moved to more distributed edge locations closer to end-users.

While the power needs and intensive high-speed links within the data centers are already becoming apparent, it is less certain what the needs of the rest of the network will be. As Lavallée told RCR Wireless News, while there are some estimates on the overall traffic impact that AI may have, that still leaves some significant uncertainty about exactly where in the network, and by how much, data transmission capacity will need to be bolstered. In metro rings? In long-haul links? Across submarine cables? That breakdown isn’t known yet. And Lavallée makes the point in his post that “AI will only scale successfully if data can move securely, sustainably, and cost-effectively” from core data centers to edge data centers.

He also thinks that the network performance demands of AI in telecom may ultimately mean that it is very well-suited to being supported by 5G—which, after all, is supposed to be a highly distributed, cloud-native transmission network that can support high data rates and low latency.

“I think 5G was a network upgrade in search of a use case. AI is the use case,” said Lavallée. “If we can marry the two together, I think some of the promise and opportunity of 5G can be enabled with artificial intelligence.”

ABOUT AUTHOR

Kelly Hill
Kelly reports on network test and measurement, as well as the use of big data and analytics. She first covered the wireless industry for RCR Wireless News in 2005, focusing on carriers and mobile virtual network operators, then took a few years’ hiatus and returned to RCR Wireless News to write about heterogeneous networks and network infrastructure. Kelly is an Ohio native with a master’s degree in journalism from the University of California, Berkeley, where she focused on science writing and multimedia. She has written for the San Francisco Chronicle, The Oregonian and The Canton Repository. Follow her on Twitter: @khillrcr