With its vast geography and unique competitive landscape, America is arguably one of the most intriguing battlegrounds for 5G rollouts globally. So as we enter the first inning of 5G in the U.S., it’s not shocking that it has become the most fiercely competitive 5G market in the world. It’s also not surprising that much of this competition is playing out in marketing and promotional hype ahead of actual 5G technology deployments. As operators rush to get their first 5G networks up and running, they are eager to showcase any and all improvements in their wireless performance in the lead-up to 5G.
Network performance data and coverage maps are longstanding tools for logging these improvements, but consumers are often left to read the fine print to figure out the source of the data, how it has been interpreted, and its real relevance. Looking ahead, with 5G’s higher speeds and lower latencies, the coverage maps and data used to tout network performance will take on new meaning. Consumers are looking to 5G to power next-generation applications, and their expectations of network performance, particularly as they are likely to pay a premium, will only increase.
Recently, the industry has started to move beyond the legacy method of drive-testing to newer approaches that measure actual network experience rather than trying to simulate it. These newer methods don’t just measure the network; they factor in everything that shapes the actual experience of it, including the choice of handset and the variety of locations where the network is used. They measure mobile experience at scale, wherever consumers go with their device in hand: at work, at home, indoors and out, in urban and rural areas, and all the locations in between. Evolving the focus from measuring the network itself to measuring the actual experience of the network is a seismic shift in thinking, and it puts the focus on what matters most to the consumer.
However, given the magnitude of the 5G opportunity ahead of us, and in an era when many Americans have already lost trust in the facts they read in the media, it’s also important to make sure people are aware that not all new data is created equal. As the head of a mobile analytics company, I’m painfully aware that much of the data being circulated in this first inning of 5G in the U.S. lacks authenticity, independence, and transparency. When carriers make claims and clarifications then have to be issued that the “speed test results aren’t as fast as they seem,” consumers are left confused and trust erodes. Access to connectivity is too important for this, and consumers need independent information they can trust.
To help separate fact from fiction, I’ve compiled a list of seven questions I believe operators, media, analysts, and consumers should ask when they see new data offered on network improvements with the onset of 5G.
1. Does the data analysis reflect a typical user experience or the best-case scenario?
Tests that measure the experience a typical user receives, rather than what a network is capable of under specific controlled conditions, give a more real-world view of the average consumer experience day in and day out. Controlled conditions can include using optimized test servers, testing only part of the connection rather than the full end-to-end path, and restricting results to newer, top-end devices.
2. Is the data based on third-party influence or independent analysis?
Operator influence happens when an operator pays a company to conduct tests and publish results. It’s always worth questioning if any data was created on a pay-for-play basis.
3. How does the mix of user-initiated and automated tests impact the analysis?
User-initiated tests are those a user chooses to run; automated tests run without user intervention or prompting. Automated testing is endorsed by official bodies such as the FCC as the best-practice methodology for measuring what a user would typically experience in everyday conditions. By comparison, consumers often only run their own speed tests when they are having a very good or very poor experience, or when they are prompted by changes they see on their device (e.g. a 5GE icon in the status bar). In these cases, the data is skewed by consumer whims even though the underlying network has not changed at all.
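This sampling bias can be illustrated with a small simulation. The sketch below uses invented numbers, not real network data: it assumes, purely for illustration, that users only bother to run a manual test when speeds feel unusually slow or unusually fast, and shows how that alone shifts the measured average even though the underlying network never changes.

```python
import random

random.seed(42)

# Hypothetical network: 10,000 sessions with speeds (Mbps) drawn
# uniformly between 10 and 100, so the true average is about 55 Mbps.
sessions = [random.uniform(10, 100) for _ in range(10_000)]

# Automated testing samples every session without user intervention.
automated_mean = sum(sessions) / len(sessions)

# User-initiated testing: assume (invented thresholds) that users only
# run a test when the experience feels very poor or very good.
triggered = [s for s in sessions if s < 30 or s > 90]
user_initiated_mean = sum(triggered) / len(triggered)

print(f"automated mean:      {automated_mean:.1f} Mbps")
print(f"user-initiated mean: {user_initiated_mean:.1f} Mbps")
```

Because slow sessions trigger tests more often than fast ones under these assumed thresholds, the user-initiated average lands well below the automated one; flipping the thresholds would bias it the other way, with no change to the network itself.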
4. Does the data reflect the end-to-end consumer experience, from device to content?
When companies use dedicated testing servers, results reflect an optimized user experience. By contrast, optimal measurement tests the full end-to-end user experience, from the user’s device to the same locations (servers) consumers use every day. Testing right through to the CDN (content delivery network), in the same way a customer actually uses and experiences the network, is the most accurate representation of real-world experience.
5. Does the methodology accurately reflect a range of device manufacturers and smartphone models?
Consumers use devices across a range of manufacturers and models, and it’s fair to say that not everyone is using the latest smartphone. Accurate measurement of network speed and availability must account for a wide range of devices. Beware of results based on a single handset (as is common in drive-testing) or limited to a selection of high-end devices; this artificially inflates the measurements to reflect a subset of users who have a better experience than average.
6. Does the speed test evaluate true speed?
Some speed tests are based on downloading tiny files, which means they aren’t measuring the network speed but rather what’s known as the “ramp-up” time. If you really want to understand speed, you need a timed test that runs long enough to get past ramp-up and measure the sustained speed a user receives. Running the test for long enough is the only way to measure the actual speed a user will see when doing something speed-sensitive, such as downloading a large file or app.
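To make the ramp-up effect concrete, here is a minimal sketch using an invented transfer trace (the numbers are illustrative, not real measurements): a naive total-bits-over-total-time calculation is dragged down by the slow-start phase, while measuring only the steady-state window recovers the sustained rate.

```python
# Hypothetical transfer trace: (elapsed seconds, cumulative megabits received).
# Throughput ramps up over the first ~2 s, then holds steady at 100 Mbps.
trace = [(0.5, 5), (1.0, 15), (1.5, 35), (2.0, 65),
         (4.0, 265), (6.0, 465), (8.0, 665)]

def naive_speed(trace):
    """Total bits over total time: ramp-up drags the average down."""
    t, mbits = trace[-1]
    return mbits / t

def sustained_speed(trace, warmup=2.0):
    """Throughput measured only after the warm-up window."""
    steady = [(t, m) for t, m in trace if t >= warmup]
    (t0, m0), (t1, m1) = steady[0], steady[-1]
    return (m1 - m0) / (t1 - t0)

print(naive_speed(trace))      # 83.125 Mbps, dragged down by ramp-up
print(sustained_speed(trace))  # 100.0 Mbps, the steady-state rate
```

A test based on a tiny file would effectively stop inside the ramp-up window and report the lower figure, understating what the network sustains on a large download.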
7. How confident can we be in the conclusion of an analysis?
Scientists use confidence intervals to represent the level of precision in any measurement. These are like the margins of error typically disclosed in opinion polls, and they apply to every scientific measurement no matter the methodology. As no measurement is exact, confidence intervals show the range within which the true value is highly likely to lie. Beware of any analysis where the results are extremely close: if confidence intervals have not been disclosed, there is no way to determine whether the difference between the networks is actually meaningful or whether it’s just statistical noise and should be considered a tie. Confidence intervals are the standard scientific method of determining whether a result is meaningful, often referred to as testing for “statistical significance.”
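As a sketch of the overlap check described above, the following uses invented sample speeds for two hypothetical networks and a normal-approximation 95% confidence interval; when the intervals overlap, the apparent winner may be nothing more than statistical noise.

```python
import math

def mean_ci(samples, z=1.96):
    """95% confidence interval for the mean (normal approximation)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)  # margin of error
    return mean - half, mean + half

def overlap(a, b):
    """True if two intervals share any common range."""
    return a[0] <= b[1] and b[0] <= a[1]

# Invented speed samples (Mbps) for two hypothetical networks.
network_a = [41, 39, 44, 40, 38, 42, 45, 37, 43, 40]  # mean 40.9
network_b = [42, 44, 39, 41, 45, 38, 43, 40, 44, 41]  # mean 41.7

ci_a, ci_b = mean_ci(network_a), mean_ci(network_b)
if overlap(ci_a, ci_b):
    print("Intervals overlap: the 0.8 Mbps gap could be noise; call it a tie.")
```

Here network B’s average is nominally higher, but with only ten samples each the intervals overlap heavily, so declaring a winner from these numbers alone would be misleading.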
I hope these questions will help the industry look at statements on network experience with a more critical eye. I also hope a little more knowledge and transparency will help avoid confusion and distraction tactics, and help us all refocus on a vision for improving the true mobile experience consumers receive with 4G today and 5G in the not-so-distant future.