
Telco cloud model strategies — what’s going where?

Telefónica’s Multi-Cloud Tools Manager: ‘There is no one single type of a cloud that serves everything equally’

Panelists at the recent Telco Cloud and Edge Forum discussed the future of telco cloud strategies as operators work to balance the need for innovation and the ability to scale against the very real risks of vendor lock-in and of giving up some amount of network management control.

Public vs. Private Cloud — What’s going where?

When it comes to the question of what to run on the public vs. private cloud, Independent Telco Technologist and ex-Red Hat Chief Technologist Timo Jokiaho told event attendees that there are “extremes” in telco cloud strategies. “Some service providers, they just don’t touch public clouds,” he continued. “The other extreme, of course, is that [the] operator wants to run pretty much everything related to [the] telco network on the public cloud.” Then there are those that want a healthy mix of the two — known as the hybrid approach.

Jokiaho’s recommendation to telcos is to “just leave” the user plane-related functionalities and entities — such as Open RAN and vRAN workloads — on a private, on-prem cloud platform, and then “carefully” offload control plane functions like the Access and Mobility Management Function (AMF) and Session Management Function (SMF) onto the public cloud.
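To make that split concrete, here is a minimal sketch of such a placement policy, assuming a simple classification of 5G functions by plane; the function names and target labels are illustrative, not drawn from the panel or from any real orchestrator.

```python
# Hypothetical placement policy along the lines Jokiaho describes:
# latency-sensitive user plane functions stay on-prem, while control
# plane functions may be offloaded to public cloud. Illustrative only.

USER_PLANE = {"UPF", "vDU", "vCU-UP"}         # packet-forwarding / RAN functions
CONTROL_PLANE = {"AMF", "SMF", "PCF", "NRF"}  # 5G core signalling functions

def place(network_function: str) -> str:
    """Return the target cloud for a given network function."""
    if network_function in USER_PLANE:
        # The user plane carries live traffic, so it stays on the
        # private, on-prem platform where latency and data paths
        # remain under the operator's control.
        return "private-onprem"
    if network_function in CONTROL_PLANE:
        # Control plane functions are signalling-only and tolerate
        # more latency, so they can be "carefully" offloaded.
        return "public-cloud"
    # Anything unclassified defaults to private until reviewed.
    return "private-onprem"

if __name__ == "__main__":
    for nf in ("UPF", "AMF", "SMF", "vDU"):
        print(f"{nf} -> {place(nf)}")
```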

Telenor Director of Cloud Strategy & Architecture Pål Grønsund offered his telco’s perspective on the matter: “We … have a public cloud first strategy,” he shared, adding that the majority of IT functions, in particular, are going to be on public clouds. He added, though, that functions on the network side are still running in private clouds. “Most of the network functions we have today are more on virtual machines, but now transitioning into cloud native,” he explained.

Francisco-Javier Ramón, Multi-Cloud Tools Manager in Telefónica’s GCTIO unit and Chair of ETSI OSM, shared that his company’s cloud strategy emphasizes developing the capabilities to deploy workloads on “different type[s] of infrastructure,” whether public or private.

“There is no one single type of a cloud that serves everything equally,” he said, adding that the characteristics of the workloads and applications should dictate where each is run. “I think that this is the key in order to get the most from the different environments, and that requires adopting a way that is cloud-agnostic for deploying, for monitoring, for managing the infrastructure setups.”

Ramón was clear about the importance of one other cloud consideration: “We need always to develop the ability to manage the workloads in a manner that is cloud-agnostic,” he said, explaining that doing so helps operators retain control of their supply chain. “You can’t trust your supply chain 100% in one provider anyway. It would be unsafe for a critical infrastructure like this, so we need always to develop the ability to manage the workloads in a manner that is cloud-agnostic to some degree, because we are also acquiring the other part of the supply chain, which is [our] own software that is running on top … In the end, it’s a matter of having proper processes [and] proper modeling of those workloads so they can be managed in a cloud-agnostic manner. This is not about public or private clouds; it’s … having … a healthy relation[ship] with your ecosystem,” he stated.
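Ramón’s point about “proper modeling” of workloads can be pictured as a provider-neutral workload descriptor plus a thin adapter per cloud. The sketch below is a hypothetical illustration of that pattern, not Telefónica’s actual tooling; all class and method names are invented for the example.

```python
# Minimal sketch of cloud-agnostic workload management: one
# provider-neutral descriptor, one thin adapter per cloud.
# All names here are illustrative, not a real vendor API.

from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Workload:
    """Provider-neutral model of a network function deployment."""
    name: str
    image: str       # container image, built and owned by the operator
    replicas: int
    cpu: str         # e.g. "4"
    memory: str      # e.g. "8Gi"

class CloudAdapter(ABC):
    """Each cloud gets an adapter; the workload model never changes."""
    @abstractmethod
    def deploy(self, wl: Workload) -> None: ...

class PrivateCloudAdapter(CloudAdapter):
    def deploy(self, wl: Workload) -> None:
        print(f"[private] deploying {wl.name} x{wl.replicas}")

class PublicCloudAdapter(CloudAdapter):
    def deploy(self, wl: Workload) -> None:
        print(f"[public] deploying {wl.name} x{wl.replicas}")

# The same descriptor targets either environment unchanged, which is
# what keeps the supply chain swappable rather than locked to one cloud.
smf = Workload("smf", "registry.example/smf:1.2", replicas=3, cpu="4", memory="8Gi")
for adapter in (PrivateCloudAdapter(), PublicCloudAdapter()):
    adapter.deploy(smf)
```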

Vendor lock-in and the role of hyperscalers

You can’t discuss the future of the public cloud without talking about those providing it, which, of course, are the hyperscalers. The big three are Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP), and according to Jokiaho, they have developed their cloud stacks and platforms to “be suitable for pretty much any workload.”

He continued: “We have seen that there are cases where hyperscaler stacks are deployed on premises at telco data centers, even at the edge or [for] Open RAN, which proves that they do have — maybe not all, but some of them — technical functionality and technical features which can host any workload. It’s only a matter of where to deploy that cloud platform, on premises or in the public data center. I think hyperscalers are very well on the way to be able to host any workloads, either on premises, in a telco data center or public data centers.”

However, telcos remain concerned over the risk of vendor or hyperscaler lock-in as it creates additional complexity in the architecture and management of network functions. “When we look at the different cloud technology stacks from hyperscalers and other cloud platform vendors, from application point of view, when you need to onboard application and execute the application, they are not compatible between each other,” explained Jokiaho. “That’s probably one of the pain points to select the right hybrid model.”

Telenor, then, is careful to remain “mindful” of vendor lock-in and the risks associated with it, said Grønsund. “How can we move around? Who is controlling what? For instance, around observability, are you using some proprietary observability tooling, or is it more open source out of the cloud providers? These things need to be managed and who is controlling that management and orchestration layer?” he continued.

Grønsund added that there must be orchestration capabilities that support multi-cloud in order to give the telco enough flexibility to stay in control. “That can also ease the migration of workloads across [clouds] when needed,” he said. “We need to be in control of that orchestration on top of the cloud and orchestrating those network functions, multi-cloud wise.” This multi-cloud piece is of particular importance to Telenor, as it enables the use of cloud service offerings from multiple cloud providers in multiple locations, including its hyperscaler and other cloud provider partners across the public and private cloud, depending on the unique needs and requirements of its customers and application providers.
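The orchestration layer Grønsund describes can be sketched as a thin component that the operator itself owns: it records where each workload runs and can re-home it between providers. The snippet below is a hypothetical illustration under that assumption, not Telenor’s stack; the adapter and provider names are invented.

```python
# Hypothetical operator-controlled multi-cloud orchestrator: the telco,
# not any single hyperscaler, owns the placement record and the cutover.
# All names are illustrative.

class StubAdapter:
    """Stand-in for a per-cloud adapter (as in the earlier sketch)."""
    def __init__(self, name: str):
        self.name = name

    def deploy(self, workload: str) -> None:
        print(f"[{self.name}] deploying {workload}")

    def retire(self, workload: str) -> None:
        print(f"[{self.name}] retiring {workload}")

class MultiCloudOrchestrator:
    def __init__(self, adapters: dict):
        self.adapters = adapters      # provider name -> adapter
        self.placements: dict = {}    # workload -> current provider

    def deploy(self, workload: str, provider: str) -> None:
        self.adapters[provider].deploy(workload)
        self.placements[workload] = provider

    def migrate(self, workload: str, target: str) -> None:
        # Bring up the new instance first, then retire the old copy,
        # so the operator controls the cutover end to end.
        source = self.placements[workload]
        self.adapters[target].deploy(workload)
        self.adapters[source].retire(workload)
        self.placements[workload] = target

orch = MultiCloudOrchestrator({
    "onprem": StubAdapter("onprem"),
    "hyperscaler-a": StubAdapter("hyperscaler-a"),
})
orch.deploy("smf", "hyperscaler-a")
orch.migrate("smf", "onprem")  # e.g., pulling a function back in-house
```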

The three panelists agreed that the issue of lock-in has yet to be fully addressed. Jokiaho, specifically, said that while there have been a few industry initiatives aimed at improving cloud platform compatibility, none of them have proven particularly successful. “We have quite a few going on as we speak, like [the] Cloud Native Computing Foundation and [the] Linux Foundation Sylva project, which are addressing this point as we speak, but how successful will they be? Nobody really knows,” he said, adding, however, that even though it remains a “tough question,” he firmly believes it’s a “solvable” problem.

ABOUT AUTHOR

Catherine Sbeglia Nin
Catherine is the Managing Editor for RCR Wireless News and Enterprise IoT Insights, where she covers topics such as Wi-Fi, network infrastructure and edge computing. She also hosts Arden Media's podcast Well, technically... After studying English and Film & Media Studies at The University of Rochester, she moved to Madison, WI. Having already lived on both coasts, she thought she’d give the middle a try. So far, she likes it very much.