
An innovative call to synergize network AI with ICT (Analyst Angle)

Communication Service Providers (CSPs) are overhauling networks for the Artificial Intelligence (AI) future: investing heavily in AI solutions to address current challenges, collaborating across organizations (e.g., Artificial Intelligence Operations (AIOps)), and reforming within them (e.g., centralizing AI teams). As comprehensive as these accommodations for AI have been, CSPs have not yet consolidated network architectures around AI. Addressing this oversight, a new vision for network automation has emerged, aimed at improving synergy between intelligence and Information and Communications Technology (ICT). This is a call to infuse an End-to-End (E2E) network and service management layer with the latest intelligence capabilities: supporting diverse AI models, while consolidating AI in the middle layer of the telco stack to allow continuous feedback between ICT and intelligence.

1. A future for AI 

1.1 Challenging the industry

As of now, generative AI promises network transformation as a general solution, yet it remains confined to specific applications. Models are fine-tuned to target applications such as the following:

  • Network Planning: Recommend routing, controlled digital-twin experimentation
  • Network Analytics: Recommend configurations, detect incidents
  • Network Operations: Suggest or document code, generate synthetic data
  • Resource Efficiency: Enhanced compression through encoding/decoding

Fine-tuning models to specific uses is necessary and will continue, but the need for model diversity is hardly a reason for confining AI resources to single applications. Given AI's near-universal applicability across network domains, why are resources for AI still fragmented across network layers? This fragmentation prevents the pooling of AI capabilities and resources that would allow cost efficiency and synergy with ICT.

Moreover, generative AI increases data traffic across the network, and traffic will further compound as AI expands into new content types, new modes of interaction (e.g., Virtual Reality (VR)), and new ways of connecting (native-full connection). To benefit from these developments, CSPs need intelligent, adaptive ICT architectures informed by AI use. So, why is ICT not informed by AI intelligence and practices? AI should stand as a constant intermediary in decisions about ICT upgrading.

These questions were brought to the fore at Mobile World Congress (MWC) Barcelona 2024, challenging the telecoms industry and testing its grip on AI.

1.2 An innovative strategy

Already, a new vision of AI’s place in the network is emerging to address these challenges.

In the first place, it involves augmenting the network and service management layer with the latest AI technology, creating an AI-powered and cost-efficient network core from which intelligence may be dispersed from end to end. It will allow diverse AI applications and models (both traditional and generative) across the entire network, but resources will be pooled for greater visibility, operational efficiency, and interaction with ICT. An intelligent network core will be focused on its most fundamental task of AI model development (i.e., training, tuning, inferencing). It will then be applied toward efficiently managing and scheduling data within the network. It will be further applied in servicing users as they interact with AI in ever more sophisticated ways.

To support this process, ICT must adapt to the emerging demands of AI, which are increasingly distributed across the network. That means bolstering cloud infrastructure used for model training, optimizing the network for transferring large model parameters to enable local processing, and establishing a scalable edge to support model inferencing. In these ways, ICT will be ready to support the intelligence layer, even amid the anticipated waves of user-generated data.
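The bandwidth implications of "transferring large model parameters" can be made concrete with a back-of-the-envelope sketch. The model size (7 billion parameters in 16-bit precision) and link speeds below are illustrative assumptions, not figures from this analysis:

```python
# Back-of-the-envelope: time to push a model checkpoint to an edge site.
# The 7B-parameter model and link speeds are illustrative assumptions.

def transfer_time_seconds(num_params: int, bytes_per_param: int, link_gbps: float) -> float:
    """Seconds to move a model checkpoint over a link of the given rate."""
    total_bytes = num_params * bytes_per_param
    link_bytes_per_sec = link_gbps * 1e9 / 8  # Gbps -> bytes per second
    return total_bytes / link_bytes_per_sec

params = 7_000_000_000   # e.g., a 7B-parameter model
fp16_bytes = 2           # 16-bit weights
print(f"checkpoint size: {params * fp16_bytes / 1e9:.0f} GB")  # 14 GB

for gbps in (1, 10, 100):
    t = transfer_time_seconds(params, fp16_bytes, gbps)
    print(f"{gbps:>3} Gbps link: {t:.1f} s")
```

Even this modest model takes nearly two minutes to distribute over a 1 Gbps link, which is why repeated distribution of weights to many edge sites demands the scalable transport and edge capacity described above.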

This architecture will also need to comply with industry standards. For the required agility and scalability, CSPs will foremost need E2E cloud-native architecture, including automated container orchestration and microservices.

2. The promise of AI reform

2.1 Current market

Generative AI is still in its nascent stages for network management, and it will remain so until CSPs can effectively mitigate the risks of placing a probabilistic model in the network. Continuous third-party innovation will be needed for long-term development. There is progress here, especially as solution providers learn to channel generative AI through more deterministic and rule-based environments. CSPs are also playing a role by identifying lower-risk use cases, such as generating synthetic data or pairing with traditional models for detecting network anomalies. CSPs exploring higher-risk use cases are careful to retain human oversight. Despite these advances in the current market, customer care and the Operating Support System (OSS)/Business Support System (BSS) represent the bulk of generative AI use cases.
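To illustrate what "pairing with traditional models" means in practice, a minimal sketch of a classical statistical anomaly detector over a network KPI stream is shown below. The z-score approach and the latency samples are illustrative, not a specific vendor method:

```python
# Minimal sketch of a traditional statistical anomaly detector for a
# network KPI stream (e.g., per-minute latency samples). The z-score
# method and sample data are illustrative assumptions.

from statistics import mean, stdev

def zscore_anomalies(samples, threshold=3.0):
    """Return indices of samples more than `threshold` std devs from the mean."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []  # constant signal: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

latency_ms = [12, 11, 13, 12, 14, 12, 95, 13, 11, 12]  # one spike
print(zscore_anomalies(latency_ms, threshold=2.0))  # [6]
```

A deterministic detector like this can flag candidate incidents, with a generative model layered on top only to summarize or explain them, keeping the probabilistic component out of the decision path.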

2.2 Strategy evaluation

At the current stage of the market, it might be expected that network applications of generative AI will remain focused on specific network areas. For one, there simply are not enough network generative AI use cases for consolidation to drastically increase efficiency right now. Second, narrow applications are a basic way of limiting risk.

Yet, signs exist that this current market stage is moving onward: network challenges are already being overcome; and user-generated data are already driving new demands in ICT. CSPs will not want to be left behind.

Network AI advances are tracking with outcomes in network automation. In the present period of Level 2 and Level 3 autonomy (2022 to 2028), CSPs are challenged to build a holistic AI strategy and operations: integrate generative and traditional models, build guardrails, overcome ICT challenges for model training/tuning, and establish norms for AI explainability and content trust. To advance to Level 4 and beyond (2028 onward), CSPs are challenged to distribute, scale, and automate hybrid AI solutions through 5G cloud-native infrastructure, saturating operations with generative AI content. Converting the network and services layer into a general intelligence layer, powered by the latest AI technology and enabling continuous ICT feedback, will support both short- and long-term objectives: holistic operations, network AI proliferation, and intelligent ICT scaling. This makes the vision a timely one.

The relative advantage over current approaches to AI is clear if we consider these trends:

  • Anomaly detection and natural language-based service orchestration are popular test applications for generative AI in the network right now. Anomaly detection is a foundational skill that could inform service lifecycle management; for example, by identifying service quality anomalies. If these two use cases are threaded through the same intelligence layer, this may support more efficient model training and more cohesive network orchestration than if they were separated by network domain. As more network use cases emerge for generative AI, they can be merged into the same intelligence fabric, further enhancing efficiency. Notably, both of these use cases may be distributed across the network.
  • ICT is already more scalable, elastic, and responsive to network demands in the present era of cloud transformation. Under these conditions, network capacity will adapt to the rise in network traffic due to user-generated AI data. However, scaling and augmenting the network is not the same as creating intelligent ICT. Smart and adaptive ICT requires more than cloud. Consolidating the intelligence layer allows CSPs to gain insight into AI-based network demands, required resources, and data assets, facilitating smarter responses across the network. Following the current trends in cloud transformation is necessary, but more is needed to optimize networks for (and with) generative AI.
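The first trend above can be sketched in code. The class and method names below are hypothetical, meant only to show the design idea of two formerly siloed use cases resolving models from one pooled registry rather than each network domain hosting its own copy:

```python
# Illustrative sketch of "threading" two use cases through one pooled
# intelligence layer. All names here are hypothetical, not a real API.

class IntelligenceLayer:
    """A shared registry of AI models for the network and services layer."""

    def __init__(self):
        self._models = {}

    def register(self, name, model):
        """Add a model (any callable) to the shared pool."""
        self._models[name] = model

    def invoke(self, name, payload):
        # A single entry point gives E2E visibility into every AI call,
        # which is what enables pooled resources and ICT feedback.
        return self._models[name](payload)

layer = IntelligenceLayer()
layer.register("anomaly_detector", lambda kpis: max(kpis) > 90)
layer.register("service_orchestrator", lambda intent: f"plan for: {intent}")

# Two formerly siloed use cases now share one layer:
print(layer.invoke("anomaly_detector", [12, 11, 95]))       # True
print(layer.invoke("service_orchestrator", "restore QoS"))
```

Because both use cases flow through one `invoke` path, usage, demand, and data assets are visible in one place, which is the consolidation the vision depends on.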

We expect this vision to yield solutions that outperform standard approaches by improving network efficiency and, potentially, service quality. ABI Research finds that the most time-consuming stages of generative AI network implementation are 1) enhancing network transparency, building ontologies and policies, and formalizing this for model tuning; and 2) integrating generative AI into the broader network process and establishing guardrails. The proposed relation between AI and ICT targets both areas, retaining data within an intelligence loop for easier visibility and processing, and centralizing intelligence within the network and services layer.

2.3 Final thoughts

Consumers are key stakeholders in this vision—they are, after all, the main users of intelligent AI services; and they are the ones interacting with AI and generating the data that are passed back through the network. Given the primacy of consumer AI services here, it remains to be seen how this vision of AI and ICT may be applied to alternative end users. Specifically, what are the expected trends in enterprise AI use, and are they enough to support both an active intelligence layer and smart, adaptive ICT? This is worth considering, especially for CSPs with enterprise AI solutions. Still, the dramatic trends in AI use among consumers and the resulting increase in network traffic will likely convince most CSPs of the need for AI reform.

Taking advantage of these consumer AI trends requires investments in ICT that exceed the standard investment course for cloud transformation. In addition to cloud capabilities, CSPs must ensure that their network architecture supports the formation, distribution, and optimization of AI through the cloud. Under the proposed approach, this means consolidating the latest AI resources within the centralized networks and services layer so that new generative AI applications are easily brought into the fold as they are introduced, dynamically reallocating existing AI resources as needed. It also means having ICT that can handle the resource-intensive tasks of training AI models at the network core and inferencing at the edge.
