
AI-native is about AI (obviously), but it’s also about change management

Brownfield operators rejoice!—cloud-native is not a prerequisite for AI-native

We’ve covered in these pages before an idea put forth in great detail by McKinsey and Company, and by others, that operators have to be cloud-native before they can be AI-native. If you subscribe to that line of thinking, you’ll quickly realize that more than two but fewer than five country-scale operators are cloud-native today; and, from that, that there’s little to no hope for everyone else to leverage AI in pursuit of this future state of AI-native. Fortunately, according to Per Kangru, technologist in the Office of the CTO at VIAVI Solutions, that’s not the case. 

He provided a clear-eyed assessment during the recent Telco AI Forum (available on demand here), saying, “If you start your AI journey not being cloud-native…then you will have a lot of technology debt to take care of later on.” Fortunately, taking care of technology debt stays at or near the top of operators’ to-do lists, so that’s nothing new. But, “If you look at it from the perspective of do we require the underlying network that we’re trying to operate, do you require that one to be cloud-native? And the answer is, from my perspective…absolutely not.” 

Kangru continued: “Most of the operators have a significant brownfield. That brownfield needs to be managed.” And AIOps and attendant design patterns can help. It might not be easy to apply AI to 20-year-old networking technologies but, “We’re going to do as well as we can.” 

Data maturity and localized language models

In discussion at the forum, and in previous discussions, Kangru has stressed the idea of thinking holistically about AI: assembling data, training models, and delivering applications that can be decomposed and recomposed in service of multiple use cases. In essence: avoid redundancy, make the highest and best use of the assets you have, and deliver results cheaper and faster. He gave the example of the industry-wide emphasis on AI for RAN energy saving, which requires forecasting expected traffic at a cell site or cluster of cell sites. That same forecasting could also be used, for instance, for predictive anomaly detection. 

“When you start looking at it,” Kangru said, “if I’m doing it only for energy savings, I may end up rendering a pretty significant bill for doing that forecasting for every element all of the time and…I’m only able to recover it from the energy savings use case. But if I’m then able to say, ‘I’m going to do the forecasting and, based on this forecasting, I can run a number of different use cases in parallel using that data.’…When you’re building it in that way, you’re able in a pretty good way to figure out what are the most valuable components and what are the most valuable assets you have in your AI landscape…That’s where you really start to see the value of reusable assets and make sure they support whatever ecosystem you’re building up…That means as well that your return on investment doesn’t have to be all of the assets for a single use case. You can actually have multiple use cases driving that.” 
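The reuse pattern Kangru describes can be illustrated with a minimal sketch. The function names, the naive forecasting method, and the thresholds below are all illustrative assumptions, not anything from VIAVI; the point is only the structure: one shared traffic forecast is computed once and then consumed by two separate use cases (energy saving and anomaly detection), so its cost is amortized across both.

```python
# Hypothetical sketch of the "reusable AI asset" pattern: one shared
# per-cell traffic forecast feeds multiple use cases in parallel.
from statistics import mean, stdev

def forecast_traffic(history, horizon=4):
    """Naive forecast: repeat the recent average for each future interval.
    (Placeholder for a real time-series model.)"""
    avg = mean(history[-8:])
    return [avg] * horizon

def energy_saving_plan(forecast, sleep_threshold=10.0):
    """Use case 1: mark low-traffic intervals as cell-sleep candidates."""
    return [load < sleep_threshold for load in forecast]

def anomaly_flags(observed, forecast, history):
    """Use case 2: flag observations that deviate far from the same forecast."""
    tolerance = 3 * stdev(history)
    return [abs(o - f) > tolerance for o, f in zip(observed, forecast)]

history = [12.0, 14.0, 11.0, 13.0, 12.5, 13.5, 12.0, 13.0]
fc = forecast_traffic(history)                # computed once...
plan = energy_saving_plan(fc)                 # ...consumed by use case 1
flags = anomaly_flags([12.8, 30.0, 12.4, 12.9], fc, history)  # ...and use case 2
```

In this toy run, traffic stays above the sleep threshold (no sleep candidates), while the spike to 30.0 is flagged as an anomaly; the forecasting cost is incurred once but its return on investment is split across both outputs, which is exactly the economics Kangru describes.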

Upstream of the AI application serving an operator’s particular use case is all of that precious data. This raises the question of the extent to which operators have the appropriate data platforms in place to feed data into models, then use those models to do something that delivers net-new value. “Data maturity is really different between different operators,” Kangru said. Companies that realized in the not-too-distant past that they would someday soon be able to use that data have a “significant head start” in model training, he said. The ideal situation, he said, is data that’s so well structured and managed, with strong considerations around access control, privacy and security, that operators could begin exposing relevant data assets to vendors and other partners. He described a comprehensive digital twin of not just the network but also the supply chain and other processes that feed into that production network. But, again, that’s very much an ongoing exercise in data maturity.

With the data structured the right way, the next step is model development. Kangru spoke to the dueling complexities of taking a multi-billion-parameter general model like ChatGPT, then adding proprietary data and fine-tuning (read: shrinking it) to make it functional for a particular domain or company, versus building from the ground up, as we’re seeing with the AI RAN Alliance or the joint venture between Deutsche Telekom, e&, Singtel, SK Telecom and SoftBank. “The problem,” Kangru said, is “the more specific you want it to be used for, the more specific you want it to be trained for.” 

He drew an analogy: some RAN experts know everything there is to know about Ericsson or Nokia or Samsung, but that company-specific knowledge doesn’t port from one to the other. By extension, an LLM trained on the best available material from one vendor may yield awful results when applied to a different vendor’s equipment. Centrally-trained models that use public data can give decent outputs, but when it comes to your network and your settings, it’s important to have the model targeted to your desired outcomes, he said. “There’s many things around it where localized understanding is essential. You need to have it localized for your vendor permutations, your design decisions you took when you built it out, and then from that as well your configuration settings, your service matchings, and so on across it.” 

Doing AI isn’t as simple as buying AI

The clock on Kangru’s session ran out before he could go deeper on what it actually takes to make all of this wonderful technology work within the constraints of operator organizations, but he did make an important closing point. “It’s a multi-step journey. AI is great but you have to know what you want to do with it…It’s a fantastic journey [but]…it’s a journey broader than just buy a product and you get fully-fledged AI solutions…It’s extremely important to realize that and extremely important to realize how it turns into a change management journey of the organization.” 


ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.