
Robocalls surged ahead of the 2024 election — Now telcos must meet the moment (Reader Forum)

As the 2024 US Presidential Election approached, political robocalls and robotexts surged. Disinformation campaigns, scams and AI-generated deepfakes were increasingly used in attempts to confuse and mislead the public, posing risks to voters and further eroding trust in the voice channel. According to the TNS (Transaction Network Services) Q3 2024 Robocall Investigation Report, political robocall traffic in the U.S. rose further in the first two weeks of October alone.

The playing field has changed

Because AI deepfakes can spread disinformation more easily and believably, the state of play for telecommunications companies and regulators alike has rapidly evolved beyond the typical identity and finance robocall scams of years past. Key elections reflect this new environment and the challenge it poses to every stakeholder in the telco ecosystem, including carriers, who must now help protect subscribers from a mounting deluge of robocall bad actors.

The rise in political robocalls becomes more concerning considering how sophisticated these calls have become, thanks to increasingly accessible advanced AI tools. AI-driven robocalls can now convincingly mimic the voices of political figures, making it harder for the public to distinguish between legitimate and fake calls.

The impact of AI-powered disinformation is no longer hypothetical – it’s here. During the New Hampshire primary, robocalls using deepfake audio of President Biden sparked new concerns about just how convincing these scams can be. Following this case, the FCC declared that AI-generated robocalls made without the called party’s consent are illegal.

Given Americans’ broader apprehension about AI, it may be unsurprising that 70% are concerned about AI deepfake robocalls, while 64% believe an AI deepfake political robocall could be convincing enough to affect the outcome of the 2024 US Presidential Election.

What carriers can do now

Combating unwanted robocalls – both as part of day-to-day mitigation and during high-risk events such as election campaigns – requires an approach that extends beyond STIR/SHAKEN, with a full and integrated effort from telcos, policymakers, regulators and technology leaders.
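
For readers less familiar with the protocol, STIR/SHAKEN works by having the originating carrier sign a PASSporT token (carried in the SIP Identity header) that states how strongly it vouches for the caller ID, which the terminating carrier can then inspect. The sketch below is only an illustration of reading that attestation claim, assuming the PyJWT library and a token already extracted from the Identity header; the describe_passport helper is hypothetical, and real verification would also validate the signature against the certificate referenced by the token’s x5u field.

    # Illustrative sketch: inspect the attestation level carried in a SHAKEN PASSporT (RFC 8588).
    # Assumes the PASSporT JWT has already been pulled from the SIP Identity header.
    # Signature verification against the x5u certificate chain is omitted for brevity.
    import jwt  # PyJWT

    ATTESTATION_MEANING = {
        "A": "Full attestation: the carrier knows the customer and their right to use the number",
        "B": "Partial attestation: the carrier knows the customer but not the number's provenance",
        "C": "Gateway attestation: the carrier only knows where it received the call from",
    }

    def describe_passport(passport_jwt: str) -> str:
        # Decode the PASSporT claims without verifying the signature (illustration only).
        claims = jwt.decode(passport_jwt, options={"verify_signature": False})
        attest = claims.get("attest", "C")
        calling_number = claims.get("orig", {}).get("tn", "unknown")
        meaning = ATTESTATION_MEANING.get(attest, "Unrecognized attestation")
        return f"Calling number {calling_number}: {meaning}"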

The surge in AI-driven robocall scams may seem overwhelming, but bad actors aren’t the only ones who can use advanced technology to their advantage. Telecom carriers can use solutions leveraging AI to detect and curtail bad actors, including:

  • AI-powered voice biometrics: This technology uses real-time AI to determine whether the voice on an incoming call is synthetic (a robocall) or legitimate, helping to identify and block robocalls more effectively.
  • Predictive AI-powered call analytics: By applying predictive algorithms across billions of calls, these tools can help carriers better understand robocall patterns and anticipate bad actors’ behaviors, enabling them to block malicious calls before they reach consumers (a simplified sketch follows this list).
  • AI SMS detection: AI models can detect and prevent machine-generated text messages from reaching their intended recipients, curbing the flood of automated scam texts that continue to inundate consumers today.
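
To make the call-analytics idea above concrete, here is a deliberately simplified, rule-based stand-in for the kind of pattern scoring such systems build on. The CallRecord fields, the score_callers helper and every threshold are illustrative assumptions rather than any carrier’s or vendor’s implementation; production platforms train machine-learning models across billions of call events instead of hand-tuned rules.

    # Toy sketch of call-pattern scoring for suspected robocall campaigns.
    # The CallRecord fields and all thresholds are illustrative assumptions only;
    # real carrier analytics use ML models trained on billions of call events.
    from dataclasses import dataclass
    from collections import defaultdict
    from statistics import mean

    @dataclass
    class CallRecord:
        caller: str        # originating number
        duration_s: float  # talk time in seconds
        answered: bool

    def score_callers(records: list[CallRecord]) -> dict[str, float]:
        """Return a 0-1 'robocall likelihood' score per originating number."""
        by_caller: dict[str, list[CallRecord]] = defaultdict(list)
        for record in records:
            by_caller[record.caller].append(record)

        scores: dict[str, float] = {}
        for caller, calls in by_caller.items():
            volume = len(calls)
            answer_rate = sum(c.answered for c in calls) / volume
            avg_duration = mean(c.duration_s for c in calls if c.answered) if answer_rate else 0.0

            score = 0.0
            if volume > 1000:       # unusually high outbound volume
                score += 0.4
            if answer_rate < 0.15:  # most recipients do not pick up
                score += 0.3
            if avg_duration < 10:   # answered calls end almost immediately
                score += 0.3
            scores[caller] = score
        return scores

    # Numbers scoring near 1.0 could be labeled or blocked before calls reach subscribers.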

Another key priority for stakeholders is to educate and communicate with consumers about the ongoing political robocall threat. Timely updates on common scam tactics and disinformation efforts help “pre-bunk” disinformation by encouraging subscribers to be skeptical of calls or texts that follow a known pattern. It’s clear that subscribers want to be informed – 77% of Americans agree that policymakers and regulators should educate the public on the risks of political AI deepfakes and how to recognize fake robocalls.

This season, key messages to share with subscribers should include:

  • Communicating how advanced voice cloning has become: Modern generative AI technology has made distinguishing between real and cloned voices challenging. These deepfake calls can sound exactly like the person they’re imitating, using common phrases and speech patterns.
  • Timely alerts on common scams and disinformation schemes: Scammers often chase new methods the public is not yet familiar with, so staying up to date helps subscribers remain vigilant. In 2024, TNS identified that the most common political robocall scams include fake voter registration, fundraising donation requests and misleading claims that people can vote over the phone. With the election looming, for example, voters in battleground states are being disproportionately targeted by political robocalls.
  • The importance of verifying information through official channels: Always confirm election-related information through official government websites or trusted sources. Scammers can lure victims to fraudulent URLs disguised as voter registration websites, where citizens are prompted to enter sensitive information.

The road ahead

If history is any guide, political robocalls and disinformation attempts will endure past Election Day. By leveraging advanced tools and strengthening existing protocols like STIR/SHAKEN, carriers can play a pivotal role in curbing disinformation and protecting their subscribers.

However, full effectiveness depends on addressing network interconnectivity challenges, especially for smaller carriers. Continued collaboration between operators, regulators and innovators is essential to safeguarding voice channels and restoring public trust as political robocall threats evolve. With election-related scams ramping up, the time to act is now to ensure that communication remains secure for Americans during this vulnerable period.
