Performance assured virtual networks: You can get there from here (Reader Forum)

The virtualized network train has left the station, and operators and vendors are laying track as they go toward their final destination – commercial-scale, self-optimizing, virtualized networks.

These networks might sound like science fiction, but in fact they’re already here. And the benefits will amaze even diehard, hardware-loving sceptics. As with all major breakthroughs, the path to achieving them is deceptively simple – elegant, even – once laid out in the open.

Trailblazers who have chosen to follow this path have proven that virtualized networks are more than simply viable; they’re exceptional. From squeezing out nearly 100% more throughput, to identifying and resolving nagging, disruptive outages that previously eluded operations teams for days, analytics and machine learning overlays turbocharge existing networks while exposing the invisible inner workings of virtualized infrastructure.

South Korean operator SK Telecom was first, in 2015, and has since been joined by its domestic peers and by Japanese service providers including Softbank and NTT DoCoMo. Each operator has created fully automated networks that relieve humans of critical control duties and employ big data, machine learning and analytics in their place.

Far from being taken over by robots, however, these networks are masterpieces of human ingenuity. Their devops teams have each crafted their own state-of-the-art management and orchestration (MANO) and software-defined networking (SDN) control platforms that consult network analytics in real time.

And that’s the clincher – home-made. Currently, there is no off-the-shelf solution, or even combination of solutions — from any vendor, open source community or standards body — that can do all it takes. Any operator waiting for those stars to align and an off-the-shelf solution to come to market will likely face extinction within the next few years. The reasons are simple: (1) no two networks are built or managed the same way, so no off-the-shelf solution will ever accomplish these goals, and (2) once subscribers start to compare the performance and user experience between networks with and without “self-awareness,” their churn will speak for itself.

Any DIY project is hard work. But the hardest work of all, the blueprints, is already done. The steps required to bring a network to enlightenment are: (1) establish a real-time data lake, (2) fill it continuously with quality data, (3) apply analytics to classify data and find correlations, (4) train machine learning to distinguish cause and effect using real-world events and actions, and, after sufficient training, (5) hand over the keys and let the machines do the work.
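For readers who think in code, the loop those five steps describe can be sketched in miniature. Everything below is invented for illustration – the names MetricSample, DataLake and train_threshold, and a one-line threshold “model” standing in for real machine learning; a production system would run on streaming infrastructure and a proper ML framework rather than in-memory lists.

```python
# Toy rendering of the five-step blueprint; all names and numbers are illustrative.
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable

@dataclass
class MetricSample:                      # step 2: the quality data filling the lake
    path_id: str
    latency_ms: float
    loss_pct: float

@dataclass
class DataLake:                          # step 1: a real-time data lake (in-memory here)
    samples: list[MetricSample] = field(default_factory=list)

    def ingest(self, sample: MetricSample) -> None:
        self.samples.append(sample)

def classify(lake: DataLake) -> dict[str, float]:
    """Step 3: simple analytics, here just average latency per path."""
    per_path: dict[str, list[float]] = {}
    for s in lake.samples:
        per_path.setdefault(s.path_id, []).append(s.latency_ms)
    return {path: mean(vals) for path, vals in per_path.items()}

def train_threshold(history: dict[str, float]) -> Callable[[MetricSample], bool]:
    """Step 4: 'training' reduced to learning a latency threshold from history."""
    baseline = mean(history.values())
    return lambda s: s.latency_ms > 2 * baseline     # flags likely trouble spots

def closed_loop(lake: DataLake, is_anomalous: Callable[[MetricSample], bool]) -> list[str]:
    """Step 5: hand over the keys; act on anomalies without human input."""
    return [f"reroute {s.path_id}" for s in lake.samples if is_anomalous(s)]

lake = DataLake()
for sample in (MetricSample("core-1", 4.8, 0.01),
               MetricSample("core-2", 5.1, 0.02),
               MetricSample("edge-7", 49.6, 0.40)):
    lake.ingest(sample)

model = train_threshold(classify(lake))
print(closed_loop(lake, model))          # -> ['reroute edge-7']
```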

Across many attempts to create automated networks, it’s clear that starting with high-quality data is the single biggest factor determining success. It’s also the easiest to overlook, since operators already have tons of data. It’s just often not the “right data.”

“Garbage In, Garbage Out” is machine learning’s downfall. It’s hard to learn when you’re getting mixed signals.

So why would operators take data for granted when building out analytics? It’s easy to do when the shining examples are cloud providers like Amazon, Twitter, Google and Facebook. Much of their code is open source, and their methods are well documented. But mobile networks have a different goal at play, and different goals need different data. Instead of tracking users’ friends, or which color socks they’ll buy next, networks depend on cold, hard numbers. Precise numbers: millisecond latencies, and the microscopic packet loss that can drop calls in a heartbeat.

Operators have this data, but not at the precision, granularity, and ubiquity of location that machine learning needs to quickly converge on relationships, cause and effect, and the fastest path to optimizing performance, preventing outages, or predicting which customers are fed up with a sub-par experience. And finding out you don’t have the right data, after working on intelligent systems for years, is a major setback.
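What the “right data” looks like can be made concrete with a small gate in front of the training set. The field names and thresholds below are assumptions chosen for illustration, not any vendor’s specification; the point is only that records lacking precision, sub-second granularity or a location tag should never reach the model.

```python
# Hypothetical sanity gate for telemetry before it enters the training set.
REQUIRED_FIELDS = {"timestamp", "site_id", "latency_ms", "loss_pct"}
MAX_SAMPLE_INTERVAL_S = 1.0       # assume sub-second granularity is required
MAX_LATENCY_RESOLUTION_MS = 0.1   # reject coarse, millisecond-rounded readings

def is_training_grade(record: dict, prev_timestamp: float | None) -> bool:
    """Keep only records precise and frequent enough to teach a model anything."""
    if not REQUIRED_FIELDS.issubset(record):
        return False              # incomplete records send mixed signals
    if prev_timestamp is not None and record["timestamp"] - prev_timestamp > MAX_SAMPLE_INTERVAL_S:
        return False              # too sparse to correlate cause and effect
    return record.get("resolution_ms", 1.0) <= MAX_LATENCY_RESOLUTION_MS

# A sub-second, per-site synthetic measurement passes; a once-a-minute,
# millisecond-rounded counter export does not.
print(is_training_grade({"timestamp": 10.25, "site_id": "cell-042",
                         "latency_ms": 7.31, "loss_pct": 0.0,
                         "resolution_ms": 0.05}, prev_timestamp=10.0))   # True
print(is_training_grade({"timestamp": 70.0, "site_id": "cell-042",
                         "latency_ms": 7.0, "loss_pct": 0.0},
                        prev_timestamp=10.0))                            # False
```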

To overcome this obstacle, the first step is creating standardized, uniform, accurate visibility. The technology is up to the challenge. Virtualized, active monitoring provides an order of magnitude more precision, and measures hundreds of times more frequently, than legacy solutions. It’s effective because it “sees” across dynamic paths and functions, from physical or virtual vantage points, and is easily orchestrated on cheap x86 compute. It’s also proven at scale, as in Accedian’s deployment at Reliance Jio, which covers over 70 million paying subscribers across India and delivers over 20 billion metrics daily.
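To make active monitoring concrete, the sketch below sends timestamped synthetic probes toward a reflector and derives latency and loss from the replies. It illustrates the general technique only and is not Accedian’s implementation: the reflector is just a local thread, and the address, port and probe count are arbitrary.

```python
# Minimal active-monitoring probe: timestamped UDP test packets, echoed back,
# yield per-path latency and loss figures. Purely illustrative.
import socket, struct, threading, time

REFLECTOR_ADDR = ("127.0.0.1", 5005)     # stand-in for a remote measurement point
PROBE_COUNT = 100

def reflector() -> None:
    """Echo every probe straight back (think of a tiny reflector VNF on x86)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(REFLECTOR_ADDR)
        sock.settimeout(2.0)
        try:
            while True:
                data, addr = sock.recvfrom(64)
                sock.sendto(data, addr)
        except socket.timeout:
            return                        # idle long enough; shut down

def probe() -> None:
    threading.Thread(target=reflector, daemon=True).start()
    time.sleep(0.1)                       # give the reflector time to bind
    latencies_ms, lost = [], 0
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(0.5)
        for seq in range(PROBE_COUNT):
            sent = time.perf_counter()
            sock.sendto(struct.pack("!Id", seq, sent), REFLECTOR_ADDR)
            try:
                sock.recvfrom(64)
                latencies_ms.append((time.perf_counter() - sent) * 1000)
            except socket.timeout:
                lost += 1                 # microscopic loss adds up fast
    if latencies_ms:
        print(f"avg latency: {sum(latencies_ms) / len(latencies_ms):.3f} ms, "
              f"loss: {100 * lost / PROBE_COUNT:.1f}%")

if __name__ == "__main__":
    probe()
```

Streaming per-path figures like these into the data lake described earlier is what gives the analytics and machine-learning layers something precise enough to learn from.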

Service providers and operators that embrace a leadership role in automated networks are taking on the responsibility of predicting the future, and of shepherding vendors, standards committees and their own staff, who must become devops experts along the way.

Without clear standards or complete open source solutions, vendors too need to take on a bigger role. They must seek out and build relationships with innovative operators, and be willing to go beyond simply cooperating with them. Instead, they should challenge them, innovate beyond their needs, and take risks of their own.

A major challenge? Perhaps – or perhaps not. But a journey that’s possible, proven, and certain to deliver significant rewards.
