
For telco AI LLMs, is bigger necessarily better?

The pros and cons of building an LLM from scratch or tuning an existing LLM for telco AI use cases

Whether for automated quality assurance on a manufacturing line, providing a conversational user interface for retail shoppers, or in support of a variety of telecoms industry use cases, the effectiveness of artificial intelligence (AI) largely hinges on the underlying large language model (LLM). As operators and vendors stand up telco AI solutions, there are differing approaches to the LLM piece—should the industry use its own data and domain expertise to build fully customized LLMs, or should it take existing LLMs (like OpenAI’s GPT or Meta’s Llama) and tune them for telecoms? 

At a high level, the pros of building an LLM are full control and customization, data privacy, competitive differentiation and optimized performance. The cons are cost, a typically long development time, complexity, risk and scalability challenges. The case for tuning an existing LLM includes faster time-to-market, cost effectiveness, the ability to leverage proven technology, access to pre-built capabilities, and ease of scalability. The counterarguments center on limited ability to customize, data privacy concerns, dependency on third-party vendors, and sub-optimal performance. 

The Global Telco AI Alliance’s approach

Firmly in the build-your-own-LLM camp are Deutsche Telekom, e& Group, Singtel, SoftBank and SK Telecom, which collectively launched the Global Telco AI Alliance at MWC Barcelona in February 2024. During the launch event, the operators also announced plans to establish a joint venture, through which the companies plan to develop LLMs specifically tailored to the needs of telecommunications companies. The LLMs will be designed to help telcos improve their customer interactions via digital assistants and chatbots.

The partners also noted that among the main goals of the JV is to develop multilingual LLMs optimized for languages including Korean, English, German, Arabic and Japanese, with additional languages to be agreed among the founding members. Compared to general-purpose LLMs, telco-specific LLMs are better at understanding user intent and context. The Global Telco AI Alliance at the time also announced plans to focus on deploying innovative AI applications tailored to the needs of its members in their respective markets, enabling them to reach a global customer base of approximately 1.3 billion across 50 countries.

In an interview with RCR Wireless News earlier this year, e& Group Chief Strategy Officer Harrison Lung explained, “We feel like this JV really has tremendous upside opportunity because now we’re able to use this proprietary data to really enhance the current, I’ll call it more generic, large language models and really turbocharge it to make it super relevant and tailored for our customers.” 

Rakuten exec on building (and using) a telco AI LLM

In a wide-ranging conversation on telco AI, Rakuten Symphony Managing Director and President of OSS Rahul Atri laid out the group’s approach to AI: “We always believe in building platforms,” along with driving adoption and fostering a culture of innovation. He described a three-legged stool of AI questions. First, do we have the data? Second, what’s the cost? Third, did this create new efficiency that would otherwise not be achievable? 

Rakuten Group maintains a unified data lake; “we knew data would be the new oil,” Atri said. On cost, “People do see that coming but I don’t think many people are even talking about cost—cost of training a model, cost of cloud resources, cost of investment in even figuring out the use case.” In terms of efficiency, a big first question is: “Do you even need AI? Could it be a data insight problem? Can it be solved by a typical workflow engine?” 

Talking through the use of LLMs, Atri enumerated distinct phases in a typical user journey starting with the use of a chatbot and refining natural language processing. The next phase introduces an LLM that can access proprietary data and become domain and business specific in an effort to combine data with context. He made the point that one operator may have an AI solution accessed by customer care agents, RF engineers or senior executives, all of whom would want different information from the tool based on the varied contexts of their positions. 
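The role-specific behavior Atri describes can be sketched in code: the same assistant adapts its system prompt to the user's role so a care agent, an RF engineer or an executive each gets appropriately framed answers. The role names and instructions below are hypothetical illustrations, not Rakuten's implementation.

```python
# Illustrative sketch of role-aware prompting for a telco AI assistant.
# The roles and instruction text are invented examples; a real system
# would draw these from the operator's own data and business context.

ROLE_INSTRUCTIONS = {
    "care_agent": "Answer in plain language a customer could hear verbatim.",
    "rf_engineer": "Include relevant radio KPIs such as RSRP, SINR and PRB utilization.",
    "executive": "Summarize business impact in two sentences; avoid jargon.",
}

def system_prompt(role: str) -> str:
    """Return a role-specific system prompt, falling back to a generic one."""
    instruction = ROLE_INSTRUCTIONS.get(role, "Answer concisely.")
    return f"You are a telecom network assistant. {instruction}"

print(system_prompt("rf_engineer"))
```

The same underlying model and data lake serve every user; only the injected context changes per role.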

To the build vs. tune debate in both telecoms and, more broadly, amongst enterprise AI users, he said, “You can build telecom LLMs as much as you want to. Do you want to? I don’t think so.” He used the analogy of building a cloud-native telecom network as compared to building a cloud that supports a telecom network. It is worth noting that Rakuten Group, with its shared data lake, uses multiple LLMs. 

AWS and IBM on aligning data platforms with practical use cases

“The key things to remember here are two things,” Ishwar Parulkar, CTO of Telecom and Edge Cloud at AWS, explained in an interview with RCR Wireless News. “Firstly, one model doesn’t fit all…Secondly, bigger is not always better. There is a tendency to think the more the number of parameters…it’s going to be better for your job. But that’s not really true.” Smaller models, dialed in with tuning—which can include prompt engineering, the use of retrieval-augmented generation (RAG) techniques and entering manual instructions—can give better results, he said. 
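The RAG technique Parulkar mentions can be sketched in a few lines: retrieve the domain documents most relevant to a query, then fold them into the prompt so a smaller general-purpose model answers with telco-specific grounding. The toy word-overlap retriever and sample documents below are illustrative assumptions; production systems typically use vector embeddings and a real document store.

```python
# Minimal RAG sketch: retrieve relevant context, then augment the prompt.
# The scoring (word overlap) and documents are toy examples, not a real
# telco corpus or any vendor's retriever.

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query, documents):
    """Assemble the augmented prompt; the LLM call itself is out of scope."""
    context = "\n".join(retrieve(query, documents))
    return f"Use the context below to answer.\nContext:\n{context}\nQuestion: {query}"

docs = [
    "Cell site 42 reports high PRB utilization during evening hours.",
    "Roaming charges apply outside the home network.",
    "Retail store hours are 9 to 5.",
]
print(build_prompt("Why is PRB utilization high at cell site 42?", docs))
```

Because the domain knowledge lives in the retrieved documents rather than the model weights, this approach lets a smaller off-the-shelf model stay current with operator data without retraining.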

Parulkar laid out a three-step process for operators to follow, and added the need to consider price/performance, model explainability, language support and quality of that support as well. “Once you have the foundational model in place, you need to pick the right data sets, figure out the level of tuning you need to really serve your use case. It’s a three-step approach: learning the use case well…getting the right foundational model, and then the right set of data to tune it…That is what is really forming the bulk of the use cases which can be productized today. However, we do see an opportunity for building domain-specific foundation models. That’ll come a little bit later.” 

For IBM, AI and multi-cloud are key strategic priorities; for operators, this is about moving from manual processes to automated processes. IBM General Manager of Global Industries Stephen Rose delineated four broad categories of use cases: customer care, IT and network automation, digital labor and cybersecurity. 

In terms of consumer-grade AI versus enterprise-grade AI, specifically telco AI, he said the big issues are around where the data comes from, the security of it, understanding any biases and the general trustworthiness of the system. “If you actually look to enterprise-grade AI,” he said, “it starts foundationally with you know where the data is coming from, and therefore you can trust it and you can be more specific and unique in the way that you apply the AI because you know exactly where the data comes from. I think for [communications service providers] going forward, and for the industry as a whole, I think the main opportunity is two things.” 

He continued: “One is finding ways to be willing to share privileged data. So, we talk about a lot of the data was hidden behind firewalls or it was within an organizational constraint let’s say. But now we’re actually seeing as openness as a general concept is becoming sort of pervasive across the industry, the data fabric that you can actually build that underpins AI is becoming more accessible in ways that we’ve never seen before. So I think there’s not only an opportunity within organizational silos within a particular organization, but even within a particular ecosystem. So, I think there’s huge opportunity for us in both domains, but I think if we work to less proprietary but privileged data and then the openness within the privileged data, then you get to do really interesting things with AI.” Bottomline, Rose said, the question becomes “where does it practically become implementable?” 

For operators looking to infuse telco AI capabilities throughout their businesses, the decision between building a custom LLM and tuning an existing one largely depends on specific needs, resources and strategic goals. If an organization requires a highly specialized model with complete control and has the resources to support it, building a custom LLM might be the better option. However, for organizations seeking a quicker, more cost-effective solution with a strong foundation, fine-tuning an existing model is likely the more practical approach.


ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.