
Edge sustainability in the era of AI and disaggregated RAN

As operators pursue net-zero goals, distributing compute out of data centers could enable new use cases, new business models and new sustainability benefits

Mobile network architectures have been changing as operators look to gain deployment flexibility and vendor choice with radio access network (RAN) disaggregation. Simultaneously, to deliver on low-latency 5G use cases that require near-real-time data processing, centralized data center compute is being distributed throughout the network to the RAN edge and even to customer premises. With the addition of artificial intelligence (AI) for both internal- and customer-facing benefits, operators now have an additional vector of complexity. But even amid all of this technological complexity, there’s one more important factor: how to do all of the above in a way that meets increasing demand for network capacity while also reducing carbon emissions.

Speaking with RCR Wireless News during Mobile World Congress in Barcelona, Dell Technologies Edge Portfolio Messaging Director Bill Pfeifer said the vast majority of net new data is, and will continue to be, created at the edge. Cameras, autonomous vehicles, digital twins and other “instrumentation that’s reading or emulating the real world” will drive data’s center of gravity to the edge. “This creates a bit of a disconnect,” he said, “because the data is out in the world and our data centers and clouds are somewhere else.”

This, Pfeifer said, raises the question: “What does it cost to move that data…[and] how does that impact responsible computing?” To lay the research foundation for answering this question, Dell Technologies, in partnership with Intel, worked with GSMA Intelligence (GSMAi) to leverage data from a survey of 100 operators to understand “the impact of changing enterprise traffic flows across a range of industries,” according to the resultant report, “The next generation of operator sustainability: Greener edge and Open RAN.”

GSMAi developed a model based on three primary considerations: growth in data traffic on fixed and mobile networks; how much of that data is processed in the cloud versus at the edge, with the edge including on-premises sites and locations “between premises and central cloud”; and the associated power consumption of the various scenarios. A big takeaway is that by 2030, according to the research, operators expect 70% of all “enterprise traffic processing” to occur either at the on-prem edge or in a centralized cloud.

Accounting for the power needed to run massive data centers, “There is a potential energy saving impact by retaining more data [at] the edge versus the cloud,” the report authors wrote. By the numbers, keeping 20% of processing at the edge, instead of sending it to a data center, carries a potential 15% reduction in energy use; if 40% stays at the edge, the energy savings could be more than 30%.
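For a rough sense of how that relationship scales, the toy Python sketch below back-fits a simple linear savings factor to the report’s cited figures. The function name, coefficient and linear assumption are illustrative only and do not reflect GSMAi’s actual methodology.

```python
# Illustrative sketch only: a toy edge-vs-cloud energy model, NOT GSMAi's actual
# methodology. The 0.75 savings coefficient is back-fitted to the report's cited
# figures (20% edge share -> ~15% energy saving, 40% -> ~30%).

def estimated_energy_saving(edge_share: float, saving_per_unit: float = 0.75) -> float:
    """Rough energy saving vs. an all-cloud baseline, given the share of
    enterprise traffic processed at the edge (0.0-1.0)."""
    if not 0.0 <= edge_share <= 1.0:
        raise ValueError("edge_share must be between 0 and 1")
    return edge_share * saving_per_unit

for share in (0.2, 0.4):
    print(f"{share:.0%} processed at the edge -> ~{estimated_energy_saving(share):.0%} energy saving")
```

Under that simplified linear assumption, the output reproduces the report’s anchor points (20% at the edge yields roughly a 15% saving, 40% roughly 30%); real-world results would depend on backhaul technology, workload mix and data center efficiency.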

Pfeifer broke it down: “If you create data out at the edge and you move it to your centralized data center and cloud to process it, [then] send it back, just transiting the data takes about ⅓ of the power. That’s a lot…We’re not saying all compute should be moved to the edge. But we are saying, based on what your situation is…we can start to make more intelligent decisions that will give you longer-term, sustainable design, architecture.” 

In terms of relevant architectural considerations, the presence of fiber backhaul to the cloud, or physical proximity to it, could bolster the argument for less edge processing and more cloud-based compute. However, many enterprises with legacy cellular connectivity, or copper or microwave backhaul, would likely benefit from a site-by-site evaluation to inform IT modernization, connectivity and cloud investments.

The role of Open RAN here is commoditization of network hardware (from purpose-built appliances to telco-grade server infrastructure running specialized software) and decomposition of an integrated radio system into a centralized unit (CU), distributed unit (DU) and radio unit (RU). The CU and DU can, in effect, serve as edge computing nodes wherein network functions are one of the workloads; edge inferencing for AI is emerging as the other primary use case.

With distributed AI capabilities, operators can do things like dynamically and automatically adjust radio resource provisioning to correspond to traffic demand. So instead of always provisioning the network for peak capacity, capacity can scale up or down to meet actual demand, thereby reducing energy consumption because network elements are essentially turned off or put to sleep when not needed. In an Open RAN or virtualized RAN, the network telemetry that informs sleep mode, as well as the functionality of turning elements on or off, resides in the RAN Intelligent Controller (RIC).
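As a rough illustration of the kind of decision logic such a RIC-hosted application might apply, the simplified sketch below puts lightly loaded capacity cells to sleep based on a traffic telemetry snapshot. The thresholds, cell names and data structures are hypothetical; a production xApp or rApp would act through O-RAN interfaces (such as E2 and A1) rather than plain Python dictionaries.

```python
# Hypothetical sketch of RIC-style energy-saving logic, not a real xApp/rApp.
# Thresholds, layer labels and field names are illustrative assumptions.

SLEEP_THRESHOLD_MBPS = 5.0  # capacity cells below this load are sleep candidates

def plan_cell_states(telemetry):
    """Return a target state ('active' or 'sleep') per cell from a telemetry snapshot."""
    plan = {}
    for cell_id, stats in telemetry.items():
        if stats["layer"] == "coverage":
            plan[cell_id] = "active"  # coverage cells stay on to preserve service
        elif stats["throughput_mbps"] < SLEEP_THRESHOLD_MBPS:
            plan[cell_id] = "sleep"   # lightly loaded capacity cell: power down
        else:
            plan[cell_id] = "active"  # busy capacity cell: keep serving traffic
    return plan

# Example telemetry snapshot (hypothetical values)
snapshot = {
    "cell-001": {"layer": "coverage", "throughput_mbps": 42.0},
    "cell-002": {"layer": "capacity", "throughput_mbps": 1.3},
    "cell-003": {"layer": "capacity", "throughput_mbps": 88.5},
}
print(plan_cell_states(snapshot))  # {'cell-001': 'active', 'cell-002': 'sleep', 'cell-003': 'active'}
```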

“Digitization is a multi-year process, with network virtualization intertwined,” GSMAi concluded. “The modeling suggests interesting findings on power savings from working at the edge, helped by savings from backhaul and compute volumes. However, these are only projections. The proof will come from reporting on actual deployments.” 
