
Five challenges for multi-access edge computing

Multi-access edge computing presents some unique challenges for deployment and operation

Multi-access edge computing is part of a major re-architecting of the network that involves placing compute and storage resources closer to the consumer or enterprise end user. It is expected to be a major enabler for 5G capabilities including ultra-low latency and ultra-reliable communications. 

As ETSI describes it, MEC is “an evolution of cloud computing [that] brings application hosting from centralized data centers down to the network edge, closer to consumers and the data generated by applications.” MEC aims to improve content delivery and application user experience by cutting out the often-long and imperfect network path between the end user’s device and the location where the data they are accessing is hosted, in order to lower latency, increase reliability and improve overall network efficiency. MEC is an undertaking that has been in the works for the past several years and will continue in phases as part of 5G preparation and deployment.
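To get a rough sense of why proximity matters, consider a back-of-envelope sketch of propagation delay alone. The distances, and the assumption that light travels through fiber at roughly 200,000 km/s, are illustrative; real paths add switching, queuing and server processing time on top.

```python
# Back-of-envelope round-trip propagation delay: edge site vs. distant data center.
# Assumes ~200,000 km/s propagation speed in fiber (~2/3 the speed of light) and
# ignores switching, queuing and processing delays, which add further latency.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed as kilometers per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over a fiber path."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Hypothetical distances: a metro edge site vs. a regional cloud data center.
for label, km in [("edge site (~20 km)", 20), ("regional data center (~1,000 km)", 1000)]:
    print(f"{label}: ~{round_trip_ms(km):.2f} ms round trip")
```

Even before any processing happens, the longer path costs roughly 10 milliseconds of round trip in this example, which is already at or above the latency budgets often cited for demanding 5G use cases.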

Here are some of the challenges that industry observers have laid out for MEC:

Real estate. Where will all of this edge compute go, and how much of it will be needed? For campuses, manufacturing facilities or other enterprise sites, compute may be hosted on-site. But consumer mobile applications, smart cities and autonomous vehicles, for instance, will need broader geographic availability. That’s why some companies are betting on tower companies to provide MEC, since they already have significant footprints for physical co-location of compute resources. The limited number of locations needed for enterprise deployments makes those on-premise deployments easier in terms of scale, location and ROI.

The real estate issue gets more complicated the closer to the user that compute needs to go. Outfitting a few dozen regional data centers or a couple hundred central offices is one thing; deploying computing hardware at thousands, or tens of thousands, of individual tower sites significantly pushes up the cost and complexity of deployment.

Power. Data centers typically are built out with dual power feeds to provide resiliency in case one feed goes down. Getting sufficient dual power feeds to match up with existing real estate options for MEC installations may or may not be feasible, depending on the size of the edge compute installation and its location.

Physical environment. Data centers are purpose-built for the needs of computing and storage machines, designed to provide sufficient space, cooling, a hardened environment and security. Pushing compute resources to central offices or customer premises may mean either retrofitting those environments or operating in less-than-ideal spaces. Micro-data centers with small footprints, particularly those that might be placed at the cell tower site itself, will have to operate in much more challenging physical conditions.

Fiber and network co-location. Edge computing facilities will need access to fiber networks and, most likely, to peering points with other networks. So backhaul is an important part of the equation, although the use of MEC can drastically reduce the amount of backhaul needed at a site. The point, after all, is to support 5G latency targets by doing necessary processing locally. Getting the right capacity and the necessary peering relationships at a given edge compute location could end up being much more complicated than at the relatively small number of large data center hubs.
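To see how local processing cuts backhaul, here is a hedged sketch using a video-analytics workload as an example: cameras produce raw streams at the edge, but only events and metadata need to travel upstream. The camera count and bit rates below are assumptions chosen for illustration, not figures from the article.

```python
# Illustrative backhaul comparison for a hypothetical video-analytics edge site.
# All counts and bit rates are assumptions made for the sake of the example.

NUM_CAMERAS = 50
RAW_STREAM_MBPS = 8.0    # assumed bit rate of one raw camera stream
METADATA_MBPS = 0.05     # assumed bit rate of events/metadata after local analysis

without_mec = NUM_CAMERAS * RAW_STREAM_MBPS  # every raw stream hauled to a central cloud
with_mec = NUM_CAMERAS * METADATA_MBPS       # only analysis results leave the edge site

print(f"Backhaul without local processing: {without_mec:.0f} Mbps")
print(f"Backhaul with edge processing:     {with_mec:.1f} Mbps")
print(f"Reduction: {(1 - with_mec / without_mec):.0%}")
```

Under these assumptions the site still needs fiber for the residual traffic and for peering, but the required backhaul capacity drops by roughly 99%.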

Operational challenges. The smaller edge data centers get, the more hands-off they will need to be. Remote management, remote monitoring and security will be important to bring down the cost of deployment. Arun Shenoy, data center colocation and InCommand services leader for data center company Serverfarm, said that because the data center industry is relatively new, it’s “probably more human-intensive than it needs to be.”

“It’s a young industry that hasn’t really matured in terms of operational best practices and knowledge,” he said. “So being able to operate data centers still needs some degree of touch by humans, and that makes these edge environments — which are potentially quite small but many in number — potentially very expensive to operate.” Successful MEC deployments, he said, will hinge on three operational pieces: how edge data centers will handle asset management; change management (putting things in and taking them out); and capacity (of space, power, etc.).
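As a rough sketch of what “hands-off” operation implies, the snippet below shows one way an unattended micro data center might report a basic capacity metric back to a central monitoring service. The reporting endpoint, site identifier and metric are hypothetical and are not drawn from Shenoy’s comments or any specific product.

```python
# Minimal sketch of unattended edge-node health reporting.
# The endpoint URL, site identifier and chosen metric are hypothetical.

import json
import shutil
import urllib.request

REPORT_URL = "https://noc.example.com/edge-health"  # hypothetical central monitoring endpoint

def collect_health() -> dict:
    """Gather a basic capacity metric (disk usage) from the local node."""
    disk = shutil.disk_usage("/")
    return {
        "site_id": "tower-0421",  # hypothetical asset identifier
        "disk_used_pct": round(100 * disk.used / disk.total, 1),
    }

def report(health: dict) -> None:
    """Push the health snapshot to the central monitoring service as JSON."""
    body = json.dumps(health).encode("utf-8")
    req = urllib.request.Request(
        REPORT_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    report(collect_health())
```

Run periodically from each site, something along these lines feeds the asset, change and capacity views Shenoy describes without requiring a technician on location.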

Looking for more information on multi-access edge computing? Download the free editorial special report from RCR Wireless News, and check out our MEC webinar. 


ABOUT AUTHOR

Kelly Hill
Kelly reports on network test and measurement, as well as the use of big data and analytics. She first covered the wireless industry for RCR Wireless News in 2005, focusing on carriers and mobile virtual network operators, then took a few years’ hiatus and returned to RCR Wireless News to write about heterogeneous networks and network infrastructure. Kelly is an Ohio native with a master’s degree in journalism from the University of California, Berkeley, where she focused on science writing and multimedia. She has written for the San Francisco Chronicle, The Oregonian and The Canton Repository. Follow her on Twitter: @khillrcr