Artificial intelligence is being explored to improve customer service with smarter chatbots, dispatch network technicians more efficiently and automate simple, highly manual network operations tasks. It’s also being co-opted by bad actors to fuel smarter robocall and text scams, according to Transaction Network Services.
TNS tracks robocall trends and provides software that authenticates calls and helps cut off scam and spam robocalls, including support for the Secure Telephone Identity Revisited/Signature-based Handling of Asserted information using toKENs protocols, more commonly known as the STIR/SHAKEN framework. STIR/SHAKEN requires voice providers to “sign” calls, attesting to their origination and making it easier to identify and flag illegal scams and unwanted calls for consumers.
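In concrete terms, that “signing” step means the originating provider builds a signed token (a PASSporT, defined in RFC 8225) and carries it in the call’s SIP Identity header, along with an attestation level (A, B or C) reflecting how much the provider knows about the customer and the calling number. The Python sketch below, using the open-source PyJWT and cryptography libraries, shows roughly what assembling such a token involves; the phone numbers, certificate URL and throwaway key are hypothetical placeholders rather than anything from TNS or a real carrier.

```python
# Minimal sketch of building a STIR/SHAKEN-style PASSporT (RFC 8225 JWT).
# All values below (numbers, cert URL, origid) are hypothetical placeholders.
import time
import uuid

import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# In practice the private key corresponds to an STI certificate issued to the
# originating service provider; here we just generate a throwaway key.
private_key = ec.generate_private_key(ec.SECP256R1())
pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

headers = {
    "alg": "ES256",
    "ppt": "shaken",
    "typ": "passport",
    "x5u": "https://cert.example-carrier.com/sti-cert.pem",  # placeholder cert URL
}
claims = {
    "attest": "A",                    # full attestation: provider knows the customer and the number
    "dest": {"tn": ["15551230001"]},  # called number (placeholder)
    "orig": {"tn": "15551230002"},    # calling number (placeholder)
    "origid": str(uuid.uuid4()),      # opaque origination identifier
    "iat": int(time.time()),
}

# The compact JWT is what rides in the SIP Identity header so downstream
# providers can verify the signature and see the attestation level.
passport = jwt.encode(claims, pem, algorithm="ES256", headers=headers)
print(passport)
```

Terminating providers fetch the certificate referenced by x5u, verify the signature and feed the attestation level into their labeling and blocking decisions, which is why an all-IP call path matters: legacy TDM segments cannot carry the SIP Identity header end to end.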
A major deadline for implementation of STIR/SHAKEN passed in June of this year. What does that mean in the market? “There’s a lot more signed, or attested, traffic that is going across, especially among the Tier 1 carriers, given the fact that they have IP networks,” said Mike Keegan, CEO of TNS. “We’re still seeing Tier 1 traffic to small-to-medium carriers not getting attested, because you’re going from an IP network to potentially a TDM network. There are peering issues, there are network issues from that perspective, network evolution issues.” TNS, he said, has been working both on assisting small-to-medium carriers with network upgrades to IP and on overall anti-robocall efforts.
TNS concluded in its most recent robocall report for the third quarter of 2023 that STIR/SHAKEN is “helping to segregate real and spoofed traffic when the call path is all IP,” and that overall, telecom spam decreased slightly in 2023 and “remains steady.”
But whenever industry shuts down one avenue for scammers, they move on to the next. That includes leveraging the hottest new technology: generative artificial intelligence. TNS said in its robocall report that it is seeing “refreshed spam attacks relating to student loan debt relief, AI voice cloning fraud, retail refund tricks, and an increase in political-related spam. As we enter 2024, tense with elections and AI-equipped bad actors, expect these types of scams to accelerate in concert with financial and charity spoofing.”
“The one area that I think we have to all be careful about is generative AI,” said Keegan. “What’s happening in the marketplace today is that bad actors are using AI to create new content, whether it be text or video or audio, and it’s based upon an analysis of content that’s already out there, the structure of that content, the patterns of that content—so if they get three seconds of your voice, they can ultimately create sentences that sound like you are saying it,” he continued. “They’re using AI to ultimately go back to some of the scams they’ve had before—think about an imposter-grandchild scam, they call a grandparent and say, ‘look, I got arrested, I need bail money’ or ‘I lost my wallet’ or ‘I was in a car accident’ or even ‘I’ve been kidnapped, I need to pay ransom.’ You’re starting to see all of those scams come about, and ultimately, how does the industry deal with that?”
Scammers don’t even have to go to the trouble of matching someone’s voice exactly (although there have been cases where they do) in order to provoke an emotional reaction that can push someone into giving away their personal information. Using an AI-generated voice can expand their capabilities by disguising a scam caller’s gender or accent, details that might otherwise tip off victims that the caller is not who, or where, they claim to be. In text form, AI can correct grammar, spelling and other language mistakes that might otherwise provide clues to the recipient that an SMS or email is from a spammer or scammer.
So how do you fight AI-generated scams? With more AI. “There are ways that we think that this can be dealt with. … You can use AI, like voice biometrics, to figure out whether something is a synthetic call and probably coming from a fraudster, or whether it’s actually a real call,” Keegan said. “We are in trials today with industry leaders like the carriers, and we’re also talking to government agencies … to be able to use AI to actually identify that voice cloning, or text, or whatever it is, that is generated that’s false.”
While there are not many such tools available currently, he said, AI biometrics will be “super important for us, moving forward, to detect synthetic voice.” However, it’s not a capability that is currently implemented in networks, he added, so people need other strategies in the meantime, like a specific safe word, phrase or question, agreed upon ahead of time, that scammers wouldn’t know. “We tell our carriers to get a message out that everyone should have a safe word,” Keegan said. “So if someone calls and you feel like it’s someone on your network and they’re asking you something, you can ask for the safe word. The AI-cloned, generated call won’t be able to tell you that.”
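TNS has not detailed how its trials work, but the general idea Keegan describes, using voice biometrics to separate synthetic from genuine speech, can be framed as a binary classification problem over acoustic features. The Python sketch below, built on the open-source librosa and scikit-learn libraries, is a deliberately simplified illustration of that framing; the audio file paths and labels are hypothetical, and a production detector would use far richer features and models than this.

```python
# Toy sketch of a synthetic-vs-genuine speech classifier, for illustration only.
# The clip paths and labels are hypothetical; this is not TNS's method.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression


def embed(path: str) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs, a crude acoustic fingerprint."""
    audio, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Hypothetical labeled corpus: 1 = synthetic (cloned) speech, 0 = genuine speech.
clips = [
    ("clips/cloned_001.wav", 1),
    ("clips/cloned_002.wav", 1),
    ("clips/real_001.wav", 0),
    ("clips/real_002.wav", 0),
]
X = np.stack([embed(path) for path, _ in clips])
y = np.array([label for _, label in clips])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new call recording; a high probability would flag it for review.
score = clf.predict_proba(embed("clips/incoming_call.wav").reshape(1, -1))[0, 1]
print(f"Probability the caller's voice is synthetic: {score:.2f}")
```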
Meanwhile, as the success of STIR/SHAKEN helps curb the growth of voice-based scam calls, text-based scams have been ramping up, with hundreds of millions sent every day: The Amazon package that is “unable to be delivered”; claims of having identified “fraudulent activity on your account”; a prize that you can claim in exchange for some basic information. “As we’re attacking the voice issue, bad actors move to text. And again, with generative AI, the ability to create text scams has been accelerated,” Keegan said. They’re also more sophisticated, he added. The Federal Communications Commission last year passed first-of-their-kind rules requiring carriers to block suspicious text messages. Again, Keegan said, Tier 1 carriers can take and have taken measures to combat scam texts, but he expects to see much the same migration pattern that was seen with voice: As the big carriers put effective tools in place, suspicious traffic moves to small-to-medium-sized networks, which have fewer resources to keep bad actors out.
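Carrier-grade text filtering relies on far more than message wording (sending patterns, number reputation and volume anomalies, among other signals), but the content-analysis piece of the problem can be sketched with a small classifier. The Python example below uses scikit-learn and a handful of invented messages modeled on the scam themes above; it is a toy illustration, not a description of how any carrier’s filter or the FCC-mandated blocking actually works.

```python
# Toy sketch of content-based SMS spam scoring, for illustration only.
# The training messages are invented examples, not real carrier data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    ("Your Amazon package is unable to be delivered, confirm details here", 1),
    ("We identified fraudulent activity on your account, verify immediately", 1),
    ("Congrats! Claim your prize by replying with your info", 1),
    ("Hey, are we still on for dinner tonight?", 0),
    ("Your appointment is confirmed for Tuesday at 3pm", 0),
    ("Thanks for calling earlier, talk soon", 0),
]
texts = [t for t, _ in messages]
labels = [y for _, y in messages]

# TF-IDF over word unigrams and bigrams feeding a Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)

incoming = "USPS: your package is on hold, pay the redelivery fee at this link"
spam_probability = model.predict_proba([incoming])[0][1]
print(f"Spam probability: {spam_probability:.2f}")
```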
As he considers the current trend landscape, Keegan says, “The most concerning thing is generative AI. No doubt about it. It is by far the most concerning thing, the thing that we are handling.” He declines to give further specifics on that, except to hint that TNS will soon have more to say on combating GenAI scams. “The industry is more focused than people realize on this, and understand that they are working hard to use AI biometrics … to really understand what’s a synthetic call and what’s a true call,” Keegan said. “You’re going to start seeing a lot of that built into the dialogue from carriers and from regulators.”