
Can edge workload placement reduce capex and opex? Potentially.

Recent research shows that moving more workloads to the edge can improve overall utilization and energy efficiency

Optimizing networks now means optimizing not just service performance and RF performance, but also the placement of compute workloads: in a data center or at the edge? That choice has implications not only for overall network or application performance, but for costs, particularly cloud compute costs and energy use.

Recent analysis from Arctos Labs, Wind River and the Research Institutes of Sweden (RISE) involved the development of a model that takes into consideration the aggregated cost of all workloads on both cloud and edge data center nodes, as well as the energy consumed by transporting data. (Usually, data center power usage is calculated more simply through a PUE, or power usage effectiveness, factor; as NIST explains, this is the ratio of the total amount of power used by a data center to the power delivered to computing equipment.)
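That NIST definition translates directly into a simple ratio. A minimal sketch in Python, with illustrative numbers rather than figures from the study:

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power usage effectiveness: total data center power divided by the
    power delivered to computing equipment (per the NIST definition cited
    above). A value of 1.0 would mean zero overhead for cooling, power
    distribution, lighting and so on."""
    return total_facility_power_kw / it_equipment_power_kw

# Example: a facility drawing 1,500 kW overall while its IT gear draws
# 1,000 kW has a PUE of 1.5 (illustrative numbers, not from the study).
print(pue(1500.0, 1000.0))  # 1.5
```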

The research sought to answer this question: Where is the most resource-efficient location to place a specific workload at a particular time in a dynamic edge-to-cloud continuum with a fleet of edge locations?

That analysis found that an edge compute node has an “operational sweet spot”: its efficiency is best when the compute load is between 60% and 80% utilization. While the exact range depends on various factors, including server type and configuration as well as ambient temperature, Arctos Labs explained in a blog post that the sweet spot also exists because “cooling and computation efficiency are not proportional to the computational work carried out.”
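The underlying curve isn’t published in the article, but the idea that efficiency peaks mid-range can be illustrated with a hypothetical power model in which idle power is amortized as load rises while cooling power grows superlinearly. The numbers and the cubic cooling term below are assumptions for illustration only, not the study’s model:

```python
def watts_per_unit_work(u: float,
                        idle_w: float = 100.0,
                        dynamic_w: float = 200.0,
                        cooling_coeff: float = 146.0) -> float:
    """Hypothetical server model: the idle-power floor is amortized over
    more work as utilization u rises, while cooling power grows
    superlinearly (~u**3 here), so the energy cost per unit of useful
    work is minimized in the middle of the utilization range."""
    power_w = idle_w + dynamic_w * u + cooling_coeff * u ** 3
    return power_w / u

for u in (0.3, 0.5, 0.7, 0.9):
    print(f"{u:.0%}: {watts_per_unit_work(u):.0f} W per unit of work")
# 30%: 546 W, 50%: 437 W, 70%: 414 W, 90%: 429 W -- minimum near 70%
```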

The research compared two placement strategies: in the first, workloads were placed at the edge only if they needed to be there, with everything else going to the central cloud; in the second, “flexible workloads” were also moved to the edge to bring those nodes up to the operational sweet spot (a minimal sketch of such a policy appears below).
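The article doesn’t spell out the study’s placement algorithm, but the second strategy can be approximated as a simple greedy policy: top up under-utilized edge nodes with flexible workloads until each reaches the low end of the sweet spot, then send the remainder to the central cloud. Everything in this sketch (class names, thresholds, numbers) is an illustrative assumption:

```python
from dataclasses import dataclass

SWEET_SPOT_LOW, SWEET_SPOT_HIGH = 0.60, 0.80  # band reported by the study

@dataclass
class EdgeNode:
    capacity: float     # total compute capacity (arbitrary units)
    pinned_load: float  # workloads that must run at this edge site

def place_flexible(nodes: list[EdgeNode], flexible_load: float) -> float:
    """Greedy sketch of the second strategy: move flexible workloads to
    under-utilized edge nodes until each reaches the low end of the sweet
    spot, then send whatever is left to the central cloud. Returns the
    residual load placed centrally. (Illustrative policy, not the study's
    actual model.)"""
    for node in nodes:
        target = SWEET_SPOT_LOW * node.capacity
        headroom = max(0.0, target - node.pinned_load)
        moved = min(headroom, flexible_load)
        node.pinned_load += moved
        flexible_load -= moved
        if flexible_load <= 0:
            break
    return flexible_load  # remainder goes to the central cloud

nodes = [EdgeNode(capacity=24, pinned_load=8) for _ in range(3)]
print(place_flexible(nodes, flexible_load=30))  # 10.8 left for the cloud
```

Note that the study’s model also weighs the energy consumed by transporting data between sites, which this sketch omits.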

Placing additional flexible workloads at the edge resulted in 4-6% energy savings, spread across both the edge and central cloud nodes due to better utilization, plus an average 50% reduction in cloud data center hardware requirements, because fewer compute resources were needed in the central cloud. In a simulated example of 100 edge nodes with 24 servers per node, the central cloud capacity requirement dropped from 840 to 360 servers, according to Arctos Labs.
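Working through the simulated example: a drop from 840 to 360 central servers is roughly a 57% reduction, consistent with (slightly above) the 50% average reported. The quick check below simply redoes that arithmetic:

```python
edge_nodes, servers_per_node = 100, 24  # simulated fleet from the article
before, after = 840, 360                # central cloud servers, per Arctos Labs

print(f"edge fleet size: {edge_nodes * servers_per_node} servers")  # 2400
print(f"central cloud reduction: {(before - after) / before:.0%}")  # 57%
```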

As operators around the world focus on increasing their energy efficiency, this suggests that distributing workloads to the network edge can yield benefits not just in application or network-function performance and latency, but also in reduced opex and capex and better overall utilization efficiency.

“On a … macro consumption level, the aspect of reducing the future expansion of data network capacity, from the expected data tsunami, by utilizing edge computing will have a much larger impact on the optimal placement favouring placement at the edge,” Arctos Labs concluded.


ABOUT AUTHOR

Kelly Hill
Kelly reports on network test and measurement, as well as the use of big data and analytics. She first covered the wireless industry for RCR Wireless News in 2005, focusing on carriers and mobile virtual network operators, then took a few years’ hiatus and returned to RCR Wireless News to write about heterogeneous networks and network infrastructure. Kelly is an Ohio native with a master’s degree in journalism from the University of California, Berkeley, where she focused on science writing and multimedia. She has written for the San Francisco Chronicle, The Oregonian and The Canton Repository. Follow her on Twitter: @khillrcr