
Three types of benchmarking for the O-RAN RIC

Mobile network operators are eager to explore the options that the RAN Intelligent Controller (RIC) can provide, such as energy savings while maintaining end-user experience.

But there are still many questions that have to be answered about how the RIC will make decisions, said Owen O’Donnell, marketing manager for TeraVM and the wireless business unit at VIAVI Solutions, in a session at the Test and Measurement Forum virtual event.

What are some of those questions? First, he said, there is the basic question of how operators will know for sure that decisions made by the RIC, and the resulting changes, will benefit the RAN rather than degrade its performance. Second is the question of how to make sure that artificial intelligence and machine-learning models that have been trained in one network environment will also perform well in different network environments, and how to trust such models, which are meant to be autonomous (and therefore subject to less human control). Operators also want to know how to port and combine components and solutions from different O-RAN vendors (xApps, rApps and RICs) and feel confident that network performance and operation will be improved rather than negatively impacted.

Benchmarking the RIC is the strategy for answering those questions, O’Donnell said, and three types of benchmarking are currently being discussed. Those are:

App benchmarking. He gave three examples of app benchmarking. In the first, an operator benchmarks two different app vendors side by side to establish which one works better on the operator’s RIC. “The operator can run each app on its RIC, controlling an emulated version of its RAN and traffic mix, and then observe the output and decide which works best for them,” he explained. In the second, the operator wants to see whether an app works the same under different network usage scenarios, such as rural versus urban. “They need to know how the app behaves with different traffic loads under different coverage scenarios, and then document this and be ready to present it to potential customers,” he said. A third scenario is app developers checking how their app runs, and what outcomes it produces, under different RAN parameters. All of that benchmarking needs to take place in the lab, with emulated users, traffic and RAN scenarios used to exercise the apps without affecting real subscribers, O’Donnell noted.
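To make that comparison concrete, here is a minimal Python sketch of the side-by-side pattern O’Donnell describes: run each candidate app against the same emulated scenario several times, then compare the averaged KPIs. Everything here (the harness function, the vendor app names and the KPI model) is a hypothetical stand-in; a real benchmark would drive an actual RIC and RAN emulator rather than a seeded random model.

```python
import random
from dataclasses import dataclass
from statistics import mean

@dataclass
class Kpis:
    """Key performance indicators collected from one emulated run."""
    throughput_mbps: float
    energy_kwh: float

def run_app_on_emulated_ran(app_name: str, scenario: str, seed: int) -> Kpis:
    """Hypothetical stand-in for a real test harness: run one app against
    an emulated RAN scenario and return the resulting KPIs. Here the
    'RAN' is just a seeded random model so the example is runnable."""
    rng = random.Random(f"{app_name}-{scenario}-{seed}")
    return Kpis(
        throughput_mbps=rng.uniform(80, 120),
        energy_kwh=rng.uniform(0.8, 1.2),
    )

def benchmark(apps: list[str], scenario: str, runs: int = 20) -> None:
    """Run each candidate app over the same emulated scenario and
    report mean KPIs side by side."""
    for app in apps:
        results = [run_app_on_emulated_ran(app, scenario, s) for s in range(runs)]
        print(f"{app:>12} | scenario={scenario:>12} | "
              f"throughput={mean(r.throughput_mbps for r in results):6.1f} Mbps | "
              f"energy={mean(r.energy_kwh for r in results):5.2f} kWh")

if __name__ == "__main__":
    # Compare two hypothetical vendor apps under urban and rural traffic mixes.
    for scenario in ("urban-dense", "rural-sparse"):
        benchmark(["vendor_a_es", "vendor_b_es"], scenario)
```

Covering several scenarios in the same loop also addresses O’Donnell’s second example: the same app can be checked for consistent behavior across, say, urban and rural coverage profiles before any claim is documented for customers.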

Conflict mitigation, also known as “collision management.” When a RIC has to handle conflicting priorities, what will it do?

“A lot of xApps and rApps will be developed in isolation, using RAN feeds to predict and effect changes to improve some aspect of the RAN,” O’Donnell explained. “But when it comes to a real network running tens of apps, making changes, who’s to say one change in one direction won’t be undone by another app reversing the change?” He gave the example of an app that seeks to maximize coverage by boosting antenna power, versus an energy-saving app. “These two apps are in conflict, and if running together at the same time in the same region, could either cancel each other out or cause a flip-flop of changes going up and down continuously,” he said. Operators should have (and test) a policy that determines which app takes precedence, but that might depend on time of day, location or specific events, O’Donnell said. A trickier conflict would be a situation in which apps’ individual decisions compound one another in a way that results in negative impacts to subscribers: perhaps one app adjusting antenna tilt in a way that reduces coverage, while a separate app decreases antenna power. “Different parameters are being altered, but both will have the consequence of reducing the coverage, and this can have quite a big impact on the subscribers,” he said, adding, “these types of app conflict will become widespread as more apps get onboarded to the RIC.” Testing various scenarios in the lab will help identify those situations and mitigation strategies.
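The precedence policy O’Donnell describes can be sketched as a simple conflict guard: rank proposed parameter changes by app priority, then block any change whose network-level effect duplicates or directly opposes one already accepted. The parameter-to-effect mapping, the app names and the policy below are illustrative assumptions, not an O-RAN-specified mechanism.

```python
from dataclasses import dataclass

# Hypothetical mapping from a parameter change to its RAN-level effect;
# this is how the guard spots "different knobs, same consequence".
EFFECT_OF = {
    ("antenna_power", -1): "coverage_down",
    ("antenna_power", +1): "coverage_up",
    ("antenna_tilt", +1): "coverage_down",  # more downtilt shrinks the cell
}

OPPOSITE = {"coverage_down": "coverage_up", "coverage_up": "coverage_down"}

@dataclass
class Change:
    app: str
    parameter: str
    direction: int  # +1 = increase, -1 = decrease

def resolve(changes: list[Change], precedence: list[str]) -> list[Change]:
    """Apply a precedence policy: keep at most one change per RAN-level
    effect, and drop any change whose effect opposes one already
    accepted from a higher-precedence app."""
    ranked = sorted(changes, key=lambda c: precedence.index(c.app))
    accepted: list[Change] = []
    effects: set[str] = set()
    for c in ranked:
        effect = EFFECT_OF.get((c.parameter, c.direction))
        if effect in effects or OPPOSITE.get(effect) in effects:
            print(f"blocked {c.app}: {c.parameter} {c.direction:+d} ({effect})")
            continue
        accepted.append(c)
        if effect:
            effects.add(effect)
    return accepted

if __name__ == "__main__":
    proposed = [
        Change("coverage_app", "antenna_power", +1),  # boost coverage
        Change("energy_app", "antenna_power", -1),    # save energy
        Change("tilt_app", "antenna_tilt", +1),       # also shrinks coverage
    ]
    # Example time-of-day policy: energy saving wins overnight.
    kept = resolve(proposed, precedence=["energy_app", "coverage_app", "tilt_app"])
    print("applied:", kept)
```

Note that the guard also catches the compounding case from the quote above: the tilt change is blocked not because it touches the same parameter, but because a coverage-reducing effect has already been accepted.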

Security benchmarking is the third area of focus for RIC testing. O’Donnell said there is a “huge concern among operators [about] the background and intent of some new app developers who have no history in the telecoms industry, and wondering how well they can be trusted.

“An app sitting on the RIC is exposed to a huge amount of data from the RAN and has quite a bit of power to influence the running of the RIC, including causing malicious actions,” he continued. “There is concern about apps having a backdoor, with hostile software gaining access to sensitive network data, including subscriber data. So this area needs strong policing.” He went on to say that people expect to see the development of dedicated security apps that will use AI to detect rogue patterns or suspicious behavior in other apps that warrants a closer look and potential intervention. “But benchmarking apps from a security viewpoint is … a very high priority for operators, and one that they’re speaking to us very clearly about,” he said.
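As a rough illustration of what such a “rogue pattern” detector might check, the sketch below baselines each app’s hourly reads of sensitive data and flags statistical outliers. The chosen signal, the threshold and the app names are assumptions made for the sake of a runnable example; a production detector would watch many more behaviors (subscriptions to RAN data, configuration writes, data egress) across the RIC platform.

```python
from statistics import mean, pstdev

def flag_suspicious_apps(activity: dict[str, list[int]],
                         baseline_window: int = 24,
                         z_threshold: float = 3.0) -> list[str]:
    """Toy behavioral check: compare each app's latest hourly count of
    sensitive-data reads against its own historical baseline and flag
    apps whose latest value is a statistical outlier."""
    flagged = []
    for app, hourly_reads in activity.items():
        history = hourly_reads[:baseline_window]
        latest = hourly_reads[-1]
        mu = mean(history)
        sigma = pstdev(history) or 1.0  # avoid dividing by zero
        if (latest - mu) / sigma > z_threshold:
            flagged.append(app)
    return flagged

if __name__ == "__main__":
    # 24 hours of synthetic baseline reads plus one current hour per app.
    activity = {
        "traffic_steer_xapp": [50 + (h % 5) for h in range(24)] + [53],
        "unknown_vendor_xapp": [40 + (h % 3) for h in range(24)] + [900],
    }
    print("flagged:", flag_suspicious_apps(activity))
```

Benchmarking apps against scripted attack scenarios in the lab, along the same lines as the app and conflict testing above, is how such detection logic would itself be validated before deployment.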

View this session video on demand, as well as other content from the Test and Measurement Forum, here.

ABOUT AUTHOR

Kelly Hill
Kelly reports on network test and measurement, as well as the use of big data and analytics. She first covered the wireless industry for RCR Wireless News in 2005, focusing on carriers and mobile virtual network operators, then took a few years’ hiatus and returned to RCR Wireless News to write about heterogeneous networks and network infrastructure. Kelly is an Ohio native with a master’s degree in journalism from the University of California, Berkeley, where she focused on science writing and multimedia. She has written for the San Francisco Chronicle, The Oregonian and The Canton Repository. Follow her on Twitter: @khillrcr