Innovations in software and software disaggregation are driving telecom change on two levels, according to Per Kangru, technologist and business development expert in the CTO’s office at Viavi Solutions. Carriers can use software to do new things that drive automation and efficiency within their own networks; at the same time, innovations beyond the network level require new flexibility to support. It’s a matter of both adapting to a new environment and making sure you can add value in that environment.
When it comes to the toolsets that are the necessary backbone of software environments, enterprises and cloud players have often turned to open-source options, Kangru says. Telecom players seeking to leverage the cloud may also want to do so, but they face a “scale difference in complexity”, he says, which means those tools only get them so far.
His go-to example is the disaggregation of the familiar and basic Policy and Charging Rules Function (PCRF), which determines the type of service a customer receives and how the customer is charged for it. In legacy networks, a PCRF would have some ingress and egress: 3GPP interfaces that would be monitored, Kangru explains. In the cloud, however, a single PCRF might be made up of 20 individual microservices, each potentially providing data.
“In order to really understand the performance of that PCRF and what may limit its scale or functions, you have to look at all of those microservices, how they compose themselves together, to form that [cloud-native network function, or CNF],” Kangru says. You still have the 3GPP interfaces to monitor, he continues, but if you just take your methodology from the legacy world of using a virtualized packet monitor or probe that is only monitoring those interfaces, “you’re actually now going from seeing almost everything that there is to see in the legacy world, into seeing maybe 5% of the information in the cloud world.”
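Kangru’s 5% figure can be illustrated with a minimal sketch. Everything here is hypothetical — the service names, the 20-way split, and the assumption that only one service terminates the monitored 3GPP interfaces are illustrative, not from any real deployment:

```python
from dataclasses import dataclass

@dataclass
class Microservice:
    name: str
    emits_telemetry: bool    # exposes metrics/traces/logs worth collecting
    on_3gpp_interface: bool  # visible to a probe on the external 3GPP interfaces

def interface_only_coverage(services):
    """Fraction of telemetry-emitting services a legacy interface probe can see."""
    emitting = [s for s in services if s.emits_telemetry]
    if not emitting:
        return 0.0
    visible = [s for s in emitting if s.on_3gpp_interface]
    return len(visible) / len(emitting)

# A hypothetical PCRF split into 20 microservices; only one terminates the
# monitored 3GPP interfaces, while the rest talk over the internal service mesh.
pcrf = [Microservice(f"pcrf-svc-{i}", True, i == 0) for i in range(20)]
print(f"interface-only visibility: {interface_only_coverage(pcrf):.0%}")  # prints 5%
```

The point of the toy model is structural: the legacy probe’s coverage shrinks in proportion to how finely the CNF is decomposed, which is why interface-only monitoring that once saw nearly everything now sees a sliver.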
Accomplishing holistic, end-to-end visibility in this realm is a major challenge that operators often try to meet by stitching together point solutions, he explains. “You need to both have the traditional visibility … coupled with the new visibility of the decomposed microservices world, plus obviously the visibility of your cloud infrastructure,” Kangru continues.
Multi-cloud or hybrid operations present yet another potential hurdle. The approach may be the right choice for supply-chain security or for reining in costs, as well as for making a network more failover-resistant and potentially portable between clouds: all good and valuable reasons to pursue a multi-cloud or hybrid strategy for telecom networks, Kangru acknowledges. But it is also a path on which relying only on open-source tools will make things “relatively hard to manage,” he says. “You need to have that end-to-end visibility … and it grows significantly more complex than … in the legacy world.”
But the level of control is also in flux, which is not an easy thing for telecom operators to navigate but is integral to the very nature of the cloud. Just look at service-level agreements, Kangru says. Asking a hyperscaler for an SLA on its compute, or for an apples-to-apples SLA on a cloud-native function in its cloud measured the same way telcos have measured telecom NEMs and on-prem equipment, comes with what he calls a “painful revelation”: either the hyperscaler simply won’t provide it, or it comes with such onerous and static conditions on configuration and mapping to specific hardware (CPU pinning) that it throttles the benefits of moving to the cloud in the first place. Hyperscalers are selling availability of resources, which does not necessarily mean exposing the actual hardware availability. “If you are truly in the cloud world, you don’t care. You care that you get the right output, but if you measure that in the traditional SLA, you are most probably going to shoot yourself both in the foot and in the head at the same time.”
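The shift Kangru describes, from guarantees on resources toward measurement of outputs, can be sketched as an outcome-based SLO check. This is a hypothetical illustration: the p99 latency target, the function name, and the sample values are assumptions for the sketch, not figures from the article.

```python
import math

def outcome_slo_met(latencies_ms, p99_target_ms=50.0):
    """Judge the service by its output: does the 99th-percentile latency meet
    the target, regardless of which hardware the cloud scheduler placed the
    workload on? (Contrast with a legacy SLA pinned to specific CPUs/hosts.)"""
    if not latencies_ms:
        return False
    ordered = sorted(latencies_ms)
    idx = math.ceil(0.99 * len(ordered)) - 1  # nearest-rank p99
    return ordered[idx] <= p99_target_ms

# A healthy run: fast decisions with one slow outlier still passes,
# because the outlier sits beyond the 99th percentile.
samples = [12.0] * 98 + [40.0, 230.0]
print(outcome_slo_met(samples))  # True: p99 is 40 ms
```

Measuring this way leaves the hyperscaler free to schedule and scale the workload as it sees fit — which, per Kangru, is the point of being in the cloud at all.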
Kangru compares this transition to the jump from SDH/PDH/SONET/ATM to Ethernet and TCP/IP networks, where it took a number of years for people to trust a shared Ethernet medium, and to trust TCP/IP to get information from point A to point B without a dedicated, defined resource.
“It really then boils down to, are you ready to take the leap and let go of those legacy SLAs, which procurement departments often spend years and years negotiating … and people’s salaries or bonuses are based on that, are you ready to let go of them and go into measuring it in a new way, where you truly look at, how would my new 5G SA core deliver a good enough experience?” That should be the focus, he says: The outcome. The output. “If you’re not ready as an operator to change how you’re measuring yourself [and] measuring your vendors, then it’s very easy to get fundamentally lost in the cloud and spend an awful lot of money on something that you could do much more efficiently—or measuring your own organization in ways where it’s not measuring something toward a more successful outcome, but potentially to a worse outcome,” he warns.
Looking for more insights on how network testing is evolving as networks change? Watch the on-demand RCR Wireless News webinar on this topic, featuring Viavi Solutions and Infovista and download the companion report here.