
How will edge computing impact service assurance?

What role will edge computing play in telecom networks that are quickly becoming more virtualized and software-centric, and what challenges does multi-access edge computing present for service assurance?

Network and service assurance needs are evolving rapidly as more applications — both network functions that support telecom network operations and enterprise applications — become virtualized. SNS Research estimates that service provider SDN and NFV investments will grow at a compound annual rate of about 45% through 2020. Such deployments need assurance for the underlying infrastructure as well as service assurance for the applications they handle. Meanwhile, networks are also becoming more distributed, with edge computing resources expected to help support the deployment of 5G and low-latency applications. Just this week, edge computing start-up MobiledgeX, founded by Deutsche Telekom, announced that its first public edge cloud network is live, aggregating existing network operator resources — from parent DT — to host application cloud containers.

RCR Wireless News asked a number of companies across the service assurance space to weigh in on the question: What impact do you see multi-access edge computing having on the visibility and observability of network functions and applications? Responses have been lightly edited.

Sandeep Raina, product marketing director – service assurance, Infovista:  

“MEC enables networks to support real-time and low-latency applications while reducing network congestion. However, MEC use cases will mostly be around 5G/IoT, and the resulting significant increase in data will need to be managed by service assurance systems, which must scale by an order of magnitude and produce automated analytics rather than just periodically collated and aggregated data.”
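
Raina’s distinction between periodic collation and automated analytics can be illustrated with a short Python sketch: a rolling window that scores each sample as it arrives. The window size and the three-sigma rule are assumptions for the example, not Infovista’s method.

```python
# A minimal sketch, assuming latency samples arrive one at a time.
# Window size and the three-sigma rule are illustrative choices.
import collections
import statistics

class RollingAnomalyDetector:
    """Flags outliers against a rolling window instead of a periodic batch."""

    def __init__(self, window=60):
        self.samples = collections.deque(maxlen=window)

    def observe(self, value):
        """Return True if the new sample deviates sharply from recent history."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a minimally useful baseline
            mean = statistics.mean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9  # guard a zero-variance window
            anomalous = abs(value - mean) > 3 * stdev
        self.samples.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for latency_ms in [9.0, 8.7, 9.2, 8.9, 9.1, 8.8, 9.3, 9.0, 8.6, 9.2, 45.0]:
    if detector.observe(latency_ms):
        print(f"anomaly: {latency_ms} ms")  # fires immediately, not at the next roll-up
```

Each arriving measurement is judged against recent history right away, so an alert can fire within seconds rather than at the next collection interval.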

Heather Broughton, senior director of service provider marketing, NETSCOUT Systems: 

“As service providers use multi-access edge computing and want service assurance that ties it back into the rest of the network, they need some type of device — like our monitoring devices — that they can roll in. They’re lightweight. They don’t cost a lot. … We currently have software-only probes that can be rolled out to the edge in small containers without taking a lot of CPU, and you can still get your visibility when it’s out at the edge. And we have a virtual manager interface that’s … part of the orchestration.”

Nicolas Ribault, senior product manager, visibility, Ixia Solutions Group, Keysight Technologies:

“Visibility into what happens at the edge, with packets and flows, is critical to stay in control. A lot of transactions stay at the edge and never go to data centers. Workloads also get dynamically distributed among the edge, the data center and the cloud. With load shifting that way, it becomes challenging to know where to collect packets, how to calibrate network requirements and how to qualify problems when they happen (whether they are in the network or the application). Edge computing will also have less reliable links to central data centers in many remote and industrial use cases, which presents another challenge for visibility tools.”
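
The unreliable-links problem Ribault raises is often handled with store-and-forward buffering at the edge site. Below is a minimal, hypothetical Python sketch of that pattern; the class and callbacks are illustrative, not Keysight’s implementation.

```python
# A minimal sketch, assuming metrics can be buffered locally while the
# uplink to the central data center is down. Names are illustrative.
import collections

class EdgeTelemetryBuffer:
    """Bounded local buffer that survives uplink outages (oldest records drop first)."""

    def __init__(self, max_records=10_000):
        self.queue = collections.deque(maxlen=max_records)

    def record(self, metric):
        self.queue.append(metric)  # always cheap and local

    def flush(self, send, link_up):
        """Drain buffered metrics to the core whenever the link is available."""
        while self.queue and link_up():
            metric = self.queue.popleft()
            try:
                send(metric)
            except ConnectionError:
                self.queue.appendleft(metric)  # keep it; retry on the next flush
                break
```

The bounded deque is a deliberate choice here: when an outage outlasts the buffer, the oldest telemetry is dropped first, so the freshest view of the edge site survives.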

Patrick McCabe, senior marketing manager, Nuage Networks:

“Multi-access edge computing (MEC) moves the computing of traffic and services from a centralized cloud to the edge of the network, closer to the customer. The reasons for this are reducing latency and bringing real-time performance to high-bandwidth applications. This is consistent with what we are seeing in the growth of the uCPE market, driven by the growth of virtualized functions (vCPEs) such as firewalls, WAN optimizers, etc. At times, certain branches in an enterprise network will elect to process application flows with local network functions, and they are increasingly doing so by collapsing dedicated appliances into virtualized network functions hosted on uCPEs.

“The growth of MEC and vCPEs further underscores the previous response in terms of where computing may take place in the network – it really can be anywhere and everywhere, and service assurance techniques must be granular and agile enough to follow applications regardless of where compute resources are located. This is now a task that can no longer be manually configured or analyzed, so a software-defined framework is needed to measure, react and report on the performance of the new breed of applications in this new network paradigm.”
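
As a rough illustration of the measure, react and report loop McCabe describes, the Python sketch below wires those three steps together. The probe, the 20-millisecond objective and the remediation hook are assumptions for the example, not Nuage’s framework.

```python
# A minimal sketch of a software-defined measure -> react -> report loop.
# The SLO value and the probe are stand-ins, not a vendor implementation.
import random
import time

LATENCY_SLO_MS = 20.0  # assumed objective for an edge-hosted application

def measure_latency_ms():
    # Stand-in for a real active test toward the application endpoint.
    return random.uniform(5.0, 40.0)

def react(latency_ms):
    # Remediation hook: a real framework might steer traffic or move a
    # workload via the orchestrator; here we only decide whether to act.
    return latency_ms > LATENCY_SLO_MS

def report(latency_ms, violated):
    # Report step: in practice this would feed an assurance dashboard.
    print(f"latency={latency_ms:.1f} ms slo_violated={violated}")

for _ in range(5):  # one iteration per assurance cycle
    sample = measure_latency_ms()
    report(sample, react(sample))
    time.sleep(1.0)
```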

Paul Gowans, wireless strategy director at VIAVI Solutions: 

“MEC has a significant effect on visibility. With MEC, you can deliver virtualized network elements and application servers in the RAN.

“With cloud-native computing near the network edge (in wireless, this is the RAN) and with the demands of ultra-reliable low-latency applications, there is a need to support application servers in the RAN. The benefit of this, apart from being part of a private virtualized IT cloud, is to bring resources closer to the subscriber when latency is key. You might say, ‘The RAN is the new core.’ The challenge is that if you are monitoring in the core, you will have no visibility of this data. Having virtual agents in the RAN that have a small footprint and are scalable and open allows the operator to leverage the value at the network edge.”

Azhar Sayeed, global telecommunications chief architect with Red Hat:

“Edge computing implies additional deployment of compute infrastructure closer to the subscriber and farther away from the core data center. It also implies that network functions and applications are distributed across the infrastructure. Both of these aspects increase the scale and complexity of the deployment model for network functions and applications, and they have a compounding effect on system management and assurance.

“Monitoring the health of applications and network functions and observing their performance requires a different paradigm. Traditional techniques of polling devices from centralized locations are a recipe for disaster, simply due to the scale and distribution of functions. New models become necessary in which, instead of transporting raw data, you compute local KPIs and take local action.”
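
Sayeed’s last point, computing KPIs locally rather than shipping raw data, might look something like the sketch below. The percentile threshold and the summary format are assumptions for illustration.

```python
# A minimal sketch: reduce raw latency samples to one compact KPI record
# at the edge, and decide locally whether to raise an alert.
import statistics

def summarize(samples_ms, p95_threshold_ms=25.0):
    """Turn raw samples into a small summary that is cheap to transport."""
    ordered = sorted(samples_ms)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank percentile
    return {
        "count": len(ordered),
        "mean_ms": round(statistics.mean(ordered), 1),
        "p95_ms": p95,
        "alert": p95 > p95_threshold_ms,  # local decision, local action
    }

raw = [8.1, 9.4, 7.7, 30.2, 8.8, 9.1, 28.5, 8.2, 7.9, 9.0]
print(summarize(raw))  # one summary leaves the site instead of ten raw samples
```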


Looking for more insights on network and service assurance in virtualized and hybrid networks? Check out the upcoming RCR Wireless News webinar featuring representatives from Vodafone and NETSCOUT, and download our free editorial report.

ABOUT AUTHOR

Kelly Hill
Kelly reports on network test and measurement, as well as the use of big data and analytics. She first covered the wireless industry for RCR Wireless News in 2005, focusing on carriers and mobile virtual network operators, then took a few years’ hiatus and returned to RCR Wireless News to write about heterogeneous networks and network infrastructure. Kelly is an Ohio native with a master’s degree in journalism from the University of California, Berkeley, where she focused on science writing and multimedia. She has written for the San Francisco Chronicle, The Oregonian and The Canton Repository. Follow her on Twitter: @khillrcr