Reality Check: A wireless network benchmarking breakdown
Editor’s Note: Welcome to our weekly Reality Check column, where C-level executives and advisory firms from across the mobile industry share unique insights and experiences.
Recently, the wireless industry has been up in arms over the results of various mobile network benchmarking reports. Why the uproar? Because the methods behind the benchmarks are hotly debated and carriers have a lot at stake – especially when the results don’t turn out in their favor.
At its most basic definition, wireless network benchmarking refers to the process of evaluating operator network performance and comparing the quality of service against competitors. Benchmarking programs enable operators to substantiate market competitiveness, support claims for advertising and marketing campaigns, and optimize network performance.
This is why industry stakeholders vehemently defend the results of benchmarking tests and strongly believe that their deployed method of testing is best. The question is: when it comes down to it, is one approach better than the other?
Wireless network benchmarking basics
Above all, benchmarking programs must run at regular, systematic intervals. Data collection must be repeated against a firm set of key performance indicators – controlling for time and location – and must be statistically valid and technology agnostic.
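To make “statistically valid” concrete, the sketch below, in Python, computes the 95% confidence margin of error for a measured call setup success rate using the normal approximation to the binomial. The figures are hypothetical, chosen only to show the scale of sampling error:

    import math

    def margin_of_error(successes, samples, z=1.96):
        # 95% confidence margin of error for a measured success rate,
        # via the normal approximation to the binomial distribution.
        p = successes / samples
        return z * math.sqrt(p * (1 - p) / samples)

    # Example: 4,850 successful call setups out of 5,000 attempts.
    rate = 4850 / 5000
    print(f"Call setup success: {rate:.1%} +/- {margin_of_error(4850, 5000):.2%}")

At this sample size the margin of error is roughly half a percentage point; halving it requires about four times as many test calls, which is one reason credible programs accumulate large, repeated samples.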
Regardless of the testing approach, testing accuracy must be ensured in order to support network performance claims. A robust benchmarking program must be both valid and credible, which means creating a neutral testing environment for all wireless networks and applying testing standards that can withstand the legal scrutiny of any claims made.
Network testing methods
Today, operators – through independent benchmarking companies – focus on two types of testing to measure and rank network performance. The first involves rigorous, controlled testing using sophisticated test equipment that is connected to mobile devices and housed in vehicles for drive testing, or carried in backpacks for in-venue pedestrian testing. The second, crowd-sourced testing, uses mobile applications installed by consumers, whose own devices serve as mediums to gather the relevant data. Following is an explanation of each:
–Rigorous, controlled testing through drive tests: Drive tests check the coverage of mobile networks so that operators can decide how to improve voice and data coverage in a given geographical area. Using predetermined test routes, drive testing is conducted from a vehicle, with a test engineer operating advanced on-board equipment to collect network performance data. Key performance indicators collected via drive tests include: call setup successes and failures; call drops; call quality; handovers (transferring a connected cellular call or data session from one cell site to another without disconnecting); network access (the percentage of attempts that successfully initiated calls or connected to the wireless network); retainability (the percentage of successfully initiated calls or data sessions that are terminated normally by the customer rather than ended by an operational issue); and throughput (the average rate of megabytes transferred during data sessions). A minimal sketch of how several of these KPIs are computed appears after this list. The main goal is to collect test data and analyze the results, which mobile operators then use to assess the coverage, capacity and quality of service of their networks, as well as to make truthful marketing claims. For example, data is often collected to evaluate performance before and after major infrastructure improvements, such as a network upgrade from 3G to 4G.
–Rigorous, controlled testing through venue tests: Venue tests are deployed by mobile network operators to measure coverage in densely populated, high-traffic pedestrian areas – such as shopping centers and malls, mass transit, arenas and tall buildings. Like drive testing, venue testing follows predetermined routes, but these are walked rather than driven, with the benchmarking equipment carried in custom-designed backpacks. The primary objective is the same: collect key performance indicators and analyze the results, which mobile operators then use to assess voice quality, coverage and data throughput in specific venues, such as stadiums and other high-traffic buildings.
–Consumer-driven, crowd-sourced testing: As the name implies, crowd-sourced testing lets consumers install an open source software application on their devices to record mobile speeds, helping paint a broad picture of network performance. Crowd-sourced testing records and aggregates the speeds experienced by app-loaded devices already in the field, including upload and download speed, latency and other factors that contribute to device performance. These apps usually run in the background of a user’s device, automatically conducting basic speed and signal strength tests and collecting high-level data related to a user’s location, device type and operating system (a minimal aggregation sketch appears after this list). Today’s crowd-sourced results can be a helpful supplement to the more detailed performance assessments derived from controlled, comprehensive drive and venue testing; however, crowd-sourcing should not be viewed as a viable alternative, since it is limited in test capabilities, uncontrolled and more prone to random, spurious data.
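As referenced in the drive-test item above, the following is a minimal sketch, in Python, of how KPIs such as network access, retainability and the dropped-call rate might be computed from a log of test calls. The record format is hypothetical, invented purely for illustration – real test equipment produces far richer data:

    # Hypothetical drive-test log: one record per attempted call.
    calls = [
        {"connected": True,  "ended_normally": True},
        {"connected": True,  "ended_normally": False},  # dropped mid-call
        {"connected": False, "ended_normally": False},  # setup failure
        {"connected": True,  "ended_normally": True},
    ]

    attempts = len(calls)
    connected = [c for c in calls if c["connected"]]

    access = len(connected) / attempts                 # network access (setup success)
    retainability = sum(c["ended_normally"] for c in connected) / len(connected)
    drop_rate = 1 - retainability                      # sessions lost to operational issues

    print(f"Access: {access:.0%}  Retainability: {retainability:.0%}  Drops: {drop_rate:.0%}")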
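On the crowd-sourced side, aggregation is the key step. Below is a sketch of that step, again with an invented record format (carrier name, download speed in Mbps, latency in milliseconds). Medians are used deliberately because they resist the random, spurious outliers that crowd data is prone to:

    from statistics import median

    # Hypothetical crowd-sourced samples: (carrier, download Mbps, latency ms).
    samples = [
        ("Carrier A", 42.1, 38), ("Carrier A", 8.3, 95), ("Carrier A", 55.4, 29),
        ("Carrier B", 25.7, 51), ("Carrier B", 31.0, 47),
    ]

    by_carrier = {}
    for carrier, mbps, latency in samples:
        by_carrier.setdefault(carrier, []).append((mbps, latency))

    for carrier, rows in sorted(by_carrier.items()):
        downloads = [mbps for mbps, _ in rows]
        latencies = [ms for _, ms in rows]
        print(f"{carrier}: median {median(downloads):.1f} Mbps, {median(latencies):.0f} ms latency")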
Benchmarking mobile network performance is an exact science, not an art. To be meaningful, it must be rigorous and repeatable, taking into account every factor that can affect performance and results. This is achieved through controlled drive and venue testing using equipment and tools specifically designed to collect and evaluate network performance data. Wireless operators that are the most proactive and deploy the strictest testing methods will inevitably have the happiest, most satisfied customers – and the customers least likely to switch carriers.
Dr. Paul Carter is president and CEO of Global Wireless Solutions. With more than 22 years of experience in the cellular network industry, Dr. Carter founded Global Wireless Solutions to provide operators with access to in-depth, accurate network benchmarking, analysis and testing. Under his leadership, Global Wireless Solutions has grown from a handful of employees to a firm working with some of the most established domestic and international network operators in the business, including AT&T and Sprint. Prior to GWS, Dr. Carter directed business development and CDMA engineering efforts for LCC, the world’s largest independent wireless engineering company.