New generative artificial intelligence (gen AI) use cases seem to emerge each day across geographies and verticals. Everyone is on the hunt for more, and it’s easy to see why. A recent Deloitte GenAI Pulse survey found that 79% of respondents, all business and technology leaders, expect gen AI to transform their businesses within three years.
But while the expectation of AI-led transformation is as strong in networking as it is in any other industry, the killer AI app in networking might not be quite so “generative.”
Networking organizations need to cast a wider net and look at all forms of AI, not just the AI flavor of the moment.
Sure, vendors and network operators are looking into gen AI for their initial forays into AI. Early AI-related announcements commonly leverage the natural language query capabilities of large language models (LLMs) as applied to customer care use cases such as call center support.
Meanwhile, some Communication Service Providers (CSPs) say they have long been using AI in their own operations, including customer care.
While these use cases can be incredibly useful, they’re unlikely to be the “killer app” that CSPs are looking for. It’s less common to see announcements where AI is used to detect and diagnose anomalous network events or optimize network infrastructure for performance or even power consumption: non-generative AI, in other words.
This is where the killer AI app is going to make a huge impact for network operators: as the driver of an Operations Support System (OSS).
And let’s be honest: legacy OSS is what’s holding CSPs back.
The shifting sands of OSS
A recent global study found that 60% of CSPs believe the use of AI will improve network operational efficiency by 40% or more. Today, operators analyze historical network trends and performance data to keep networks running well. But the current way of doing so is overly complicated, far too slow and reliant on manual processes and analysis; in other words, it is prone to human error.
OSS is still highly customized and tightly coupled with network elements. Fragmented and siloed systems require complex, costly and time-consuming integration, which makes it difficult to move OSS applications to the cloud and implement end-to-end automation.
It’s a case of complexity begetting more complexity: legacy OSS is now often a patchwork of hardware, software and manual processes that obscures visibility into the network.
Meanwhile, network architectures are changing to meet demanding scalability, performance and sustainability requirements. IP and optical layers are now converging, adding even more operational complexity.
This can’t continue if tomorrow’s networks are to adapt quickly, self-heal, optimize traffic in real time and become more efficient. Operations teams need better insights to drive optimized decisions and workflows across their evolving, multi-layer, multi-vendor infrastructure, and they need those insights in real time to maintain uptime and to stay ahead of, or at least keep pace with, the competition.
They also need to monetize their infrastructure investments, engage better with customers and ensure a high-quality experience.
Thus, an AI-driven OSS makes sense: the sheer speed of analysis is something a human simply cannot match. It can provide concise, real-time insights to optimize network performance and deliver services rapidly. It can identify faults and redirect traffic automatically, or shift capacity from a low-use region to one that is demanding more bandwidth. And it can quickly analyze historical trends and use them to inform a decision, one it can make without human intervention.
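To make that closed loop concrete, here is a minimal, rule-based sketch in Python. The region names, utilization thresholds and reroute() call are illustrative assumptions, not any vendor’s controller API; a real OSS would drive this with learned models and far richer telemetry.

```python
# Minimal sketch of a closed-loop decision: watch per-region link utilization and
# shift load off a congested region onto one with spare capacity, no human in the loop.
# Region names, thresholds and reroute() are illustrative assumptions only.

HIGH_WATERMARK = 0.85  # treat a region above this utilization as congested
LOW_WATERMARK = 0.40   # treat a region below this utilization as having spare capacity

utilization = {"metro-east": 0.91, "metro-west": 0.32, "core-north": 0.58}

def reroute(congested: str, relief: str) -> None:
    # Placeholder for a call into a path-computation engine or network controller
    print(f"Redirecting a share of traffic away from {congested} via {relief}")

for region, load in utilization.items():
    if load < HIGH_WATERMARK:
        continue
    # Pick the least-loaded region as the relief path, if it has real headroom
    relief = min(utilization, key=utilization.get)
    if relief != region and utilization[relief] < LOW_WATERMARK:
        reroute(region, relief)
```

Simplistic as it is, the shape of the loop is the point: observe, decide, act, with no one waiting on a human to read a dashboard.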
AI in a network management environment is not new; in fact, some use cases date as far back as 2018. One key prepackaged use case leverages machine learning to proactively analyze optical network telemetry, identify anomalies and prevent failures; another uses machine learning to analyze traffic flow patterns and determine cross-domain links.
For instance, a leading service provider in North America is leveraging an AI-driven solution for proactive service assurance, which enables the company to enhance the reliability of its optical and Ethernet networks. The solution predicts potential Loss-of-Service (LoS) events within a seven-day window, allowing issues to be resolved preemptively before they escalate into outages, and it automatically generates tickets for high-probability predictions, streamlining the remediation process.
The benefit for this service provider is the ability to predict and prevent service issues before they impact its customers, and to prioritize and dispatch the most critical issues with confidence. This will lead to an improvement in long-haul network uptime, a reduction in outages reported by customers and a decrease in the need for on-site maintenance, all contributing to better operational efficiency and an improved customer experience.
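The step that turns high-probability predictions into tickets is easy to picture in code. The sketch below assumes a prediction model already exists upstream; the threshold, circuit IDs and create_ticket() placeholder are hypothetical, standing in for whatever ticketing or ITSM integration the operator actually uses.

```python
# Minimal sketch: turning 7-day Loss-of-Service (LoS) predictions into tickets.
# The probability model, threshold and create_ticket() API are assumptions for illustration.
from dataclasses import dataclass

TICKET_THRESHOLD = 0.8  # only high-probability predictions auto-generate tickets

@dataclass
class LosPrediction:
    circuit_id: str
    probability: float  # predicted chance of an LoS event within the next 7 days

def create_ticket(circuit_id: str, probability: float) -> None:
    # Placeholder for an ITSM/OSS ticketing integration
    print(f"Ticket opened for {circuit_id}: LoS risk {probability:.0%} within 7 days")

def triage(predictions: list[LosPrediction]) -> None:
    # Dispatch the riskiest circuits first so field teams work in priority order
    for p in sorted(predictions, key=lambda x: x.probability, reverse=True):
        if p.probability >= TICKET_THRESHOLD:
            create_ticket(p.circuit_id, p.probability)

triage([LosPrediction("OTU4-1287", 0.92), LosPrediction("ETH-4411", 0.35)])
```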
But the next step is packaging AI technologies and use cases like these together into one killer application.
The OSS of tomorrow: AI designed into a suite of use cases
To get there, those in charge of developing the AI-driven OSS need an open approach to AI, leveraging the right AI for the right use case. And when it comes to generating revenue from AI, CSPs see multiple avenues to achieve it.
According to the study, 40% of respondents believe revenue will come from opening their networks to third-party integrations, 37% from security and privacy services, another 37% from new product offerings, 35% from the creation of tailored subscription packages and 34% from differentiation on quality of service for connectivity.
Simply put, there is no single AI solution that can address all of those potential offerings, and certainly no single vendor that can create all of the required AI applications for an OSS. OSS providers need to look beyond the potential for a silver bullet solution and understand that the killer AI-driven OSS is going to require best-in-class applications from multiple vendors.
It needs the right AI technology for each use case, including traditional unsupervised, supervised and reinforcement learning, as well as gen AI where it makes sense, such as for coding or customer inquiries.
From there, these AI use cases need to be woven together into a single OSS by providing SDKs that allow customers and partners to onboard homegrown or third-party AI capabilities and algorithms.
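What such an SDK contract might look like is sketched below. The class and method names are hypothetical, not an actual vendor SDK; the point is a small, stable interface the OSS can call regardless of who built the algorithm behind it.

```python
# Minimal sketch of an SDK-style contract for onboarding AI capabilities into one OSS.
# Class, method and capability names are hypothetical, not an actual vendor SDK.
from abc import ABC, abstractmethod
from typing import Any

class AICapability(ABC):
    """What a homegrown or third-party algorithm implements to plug into the OSS."""
    name: str

    @abstractmethod
    def analyze(self, telemetry: dict[str, Any]) -> dict[str, Any]:
        """Return insights (anomalies, forecasts, recommendations) for a telemetry batch."""

class CapabilityRegistry:
    """The OSS keeps one registry and routes each use case to the right capability."""
    def __init__(self) -> None:
        self._capabilities: dict[str, AICapability] = {}

    def register(self, capability: AICapability) -> None:
        self._capabilities[capability.name] = capability

    def run(self, name: str, telemetry: dict[str, Any]) -> dict[str, Any]:
        return self._capabilities[name].analyze(telemetry)

class OpticalAnomalyDetector(AICapability):
    """Example onboarded capability: flag circuits whose bit error rate looks too high."""
    name = "optical-anomaly"

    def analyze(self, telemetry: dict[str, Any]) -> dict[str, Any]:
        return {"anomalies": [circuit for circuit, ber in telemetry.items() if ber > 1e-5]}

registry = CapabilityRegistry()
registry.register(OpticalAnomalyDetector())
print(registry.run("optical-anomaly", {"OTU4-1287": 3e-5, "OTU4-0042": 1e-9}))
```

With a contract like this, a CSP can swap one vendor’s anomaly detector for another, or drop in its own model, without touching the rest of the OSS stack.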
The key benefit of this approach is that CSPs don’t have to modernize their OSS stack all at once. The end goal is a single source of truth, but they can get there at whatever pace they are comfortable with, picking only the AI applications that best fit.
Those in the OSS game hunting for the killer gen AI application risk going down the wrong path if they take a myopic approach. Instead, an open and programmable approach to AI is the only way to develop and implement the killer AI app every other CSP is racing to unearth.