For ultra-low-latency services, as enabled by incoming 5G technologies, multi-access edge computing (MEC) is a must. It reduces latency, by degrees, as compute power is brought closer to the user in the radio access network.
But the business case remains challenging. MEC is expensive. Who will pay for it? Which of the emerging MEC-enabled use cases, whether gaming or cars, might be considered a banker? What are the ‘killer apps’ that users will bolt on to their subscriptions, in droves?
That was the big question on MEC for the ‘ask-the-experts’ panel at URLLC 2018 in London earlier this month, posed by the session chair, Mansoor Hanif, chief technology officer at UK regulator Ofcom and advisory board member at UK5G.
“The issue is always the business case. Why put so much out there – more kit, at more expense? It seems like a paradox. What are the trade-offs in terms of cost and performance?”
The London session brought together a band of industry commentators prepared to speak candidly about the various enabling fields for new 5G technologies, and for ultra-reliable, low-latency communications (URLLC) in particular.
Flanked by experts in network slicing, software-defined networking, and latency sync, Milan Lalovic, principal researcher for 5G mobile core research at BT, was the elected speaker on MEC.
The position of the operator community, he suggested, should be to foster innovation among the developer community around an open edge cloud ecosystem – essentially, to make it open and wait it out.
The message was that the business case will surely come, as technologies are mashed up by nimbler innovators into brand new applications. The kind of developer model pioneered by AWS Greengrass, Amazon’s edge computing platform, shows the way, he said.
“It’s so important to promote the capabilities of the edge to application developers. It is a major step forward for the network operators to come closer to the developer community, and to encourage it to create ecosystem applications,” said Lalovic.
The operator community is caught between two stools, he suggested. “In an ideal world, for a network operator, it would be great to have some kind of standard we could follow. But we are in a position [in the middle],” he explained.
“On one hand, we have a well established data centre ecosystem, with services such as Amazon AWS running centralised applications over-the-top, with their own markets, and their own APIs and SDKs and so on.
“On the other, we are waiting for a 3GPP service-based architecture, and how it will work with early adopters.”
Lalovic praised the “tremendous work” by the European Telecommunications Standards Institute (ETSI) through its MEC specification group during the past four years, including its offer to operators of a set of APIs that can be made available to developers.
“Because we need good applications and business cases, and for that we need to be able to offer uniform APIs,” he said. Meanwhile, parallel projects, like the Open Edge Computing initiative, have produced complementary APIs. “The good news is they haven’t tried to reinvent the wheel.”
Mansoor checked the messaging. “So you’re saying developers will drive the business case for how many MECs go where?”
“Hopefully, with good applications, [vertical industries] will understand the capabilities of the edge,” said Lalovic.
What about the position of MEC nodes? “Is there an optimal distance to support URLLC?” asked Mansoor.
Earlier in the day at URLLC 2018, Softbank took the stage to present research into radio latency from 5G tests along roadways in Japan, suggesting edge nodes could be placed at a distance of 100 kilometres for latencies of one millisecond.
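As a back-of-the-envelope check on that figure, the sketch below estimates fibre propagation delay alone, assuming roughly a 1.5 refractive index (about 5 microseconds per kilometre of fibre) and ignoring radio, processing and queuing delays; the parameters are illustrative assumptions, not Softbank’s published test values.

```python
# Rough fibre propagation delay estimate (illustrative assumptions only;
# real end-to-end latency also includes radio, processing and queuing delays).

SPEED_OF_LIGHT_KM_S = 300_000          # vacuum, km/s
FIBRE_REFRACTIVE_INDEX = 1.5           # typical assumption for optical fibre

def fibre_delay_ms(distance_km: float, round_trip: bool = True) -> float:
    """Propagation delay over a fibre run of the given length."""
    speed_km_s = SPEED_OF_LIGHT_KM_S / FIBRE_REFRACTIVE_INDEX  # ~200,000 km/s
    one_way_ms = distance_km / speed_km_s * 1000
    return one_way_ms * (2 if round_trip else 1)

if __name__ == "__main__":
    for d in (10, 50, 100):
        print(f"{d:>4} km: one-way {fibre_delay_ms(d, False):.2f} ms, "
              f"round trip {fibre_delay_ms(d):.2f} ms")
    # 100 km works out to ~0.5 ms one-way, ~1 ms round trip on propagation
    # alone, consistent with the order of magnitude quoted above.
```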
Mansoor referenced the Softbank findings. “That’s a nice round number, but is it the distance, or does it depend on the use case?” It depends on the use case every time, said Lalovic; the optimal distance can only be gauged case by case.
“Our objective is to be able to deploy new network elements, like edge cloud, in any location in the network, which serves the purpose of the customer. If the customer wants something really low latency, then it will mean the edge cloud is at the base station,” he explained.
“Where reliability is more important, and latency less so, then the network economics will drive that edge cloud into the middle of the network, or further back.”
The point for operators is to have the ability to deploy MEC nodes flexibly, to move applications dynamically, and to worry about capacity and security at the edge to guarantee services for industrial sectors, rather than having to make the business case every time.
In London, the question of MEC distances from the antenna came up again. “Is it 1m, 10m, 100m, 1km, 100km? Where is it going to end up closest to the antenna?”
Other panellists had their say. “The way I would put it is the distance from the remote radio head on a radio unit,” said Anthony Magee, principal engineer at ADVA Optical Networking.
“If you try to get your transmission time interval (TTI) down for low latency air interfaces, you are probably going to be at a maximum of 5km back from the radio head to get to the first part of the radio, the distribution unit.
“That is our expectation. So I think there will be some hardware there. Personally, I would say there will be some users that will want low latency at that location. So there will be some instances of compute there.”
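To put the 5km figure in context, the sketch below compares one-way fibre propagation delay over a few fronthaul distances with a short 5G NR slot; the 5 µs/km figure and the 0.125 ms slot (120 kHz subcarrier spacing) are illustrative assumptions for this comparison, not numbers quoted by the panel.

```python
# Illustrative fronthaul delay check (assumed values, not the panel's figures).
# One-way propagation in fibre is roughly 5 microseconds per kilometre.

US_PER_KM_FIBRE = 5.0    # assumed one-way propagation delay in fibre
SLOT_US = 125.0          # illustrative 5G NR slot, 120 kHz subcarrier spacing

for distance_km in (1, 5, 10, 20):
    delay_us = distance_km * US_PER_KM_FIBRE
    share = delay_us / SLOT_US * 100
    print(f"{distance_km:>2} km fronthaul: ~{delay_us:.0f} µs one-way "
          f"(~{share:.0f}% of a {SLOT_US:.0f} µs slot)")

# At 5 km the fibre alone costs ~25 µs each way, a fifth of a short slot,
# before any processing delay - which is why some compute is expected to sit
# that close to the radio head for the tightest latency targets.
```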