Article Details

Scrape Timestamp (UTC): 2025-05-02 15:09:02.862

Source: https://www.theregister.com/2025/05/02/gen_ai_spam/

Original Article Text


Generative AI makes fraud fluent – from phishing lures to fake lovers. Real-time video deepfakes? Not convincing yet.

RSAC Spam messages predate the web itself, and generative AI has given them a fluency upgrade, churning out slick, localized scams and letting crooks hit regions and dialects they used to ignore.

One of the red flags that traditionally identified spam, including phishing attempts, was poor spelling and syntax, but generative AI has changed that by taking humans out of the loop.

"I'm assuming at this point that probably half of the spam we get is being written by generative AIs; the quantity of spelling and grammar errors has fallen precipitously," Chester Wisniewski, global field CISO of British security biz Sophos, told The Register during this week's RSA Conference. "I've joked about this a few times, but if the grammar and spelling is perfect, it probably is a scam, because even humans make mistakes most of the time."

AI has also widened the geographical scope of spam and phishing. When humans were the primary crafters of such content, crooks stuck to common languages to target the largest audience with the least amount of work. But, Wisniewski explained, AI makes it much easier to craft emails in different languages.

He gave an example from his native Canada. Residents of the French-dominated province of Quebec can peg spam notes quickly because they're often written in traditional French rather than Québécois. But AI systems can easily generate convincing Québécois, making it easier to snare victims.

A similar trend is observed with Portuguese-language spam. Given that Brazil's population is about 20 times larger than Portugal's, scammers have historically favored Brazilian Portuguese in their campaigns. Now, with AI capable of producing content in European Portuguese, residents of Portugal are finding it increasingly difficult to discern phishing attempts crafted in their local linguistic style.
"From the criminal enterprise perspective, it's opened the world," Kevin Brown, chief operating officer at security consultancy NCC Group, told The Register. "What is all the phishing training that we've done over the years? The obvious things: the poor grammar, the urgency, the obvious. Overnight AI has said, 'You know what, I'm going to write something that is written in good language, with good punctuation, and it will be written in a local language.'"

The same is also true of romance scams, also known as pig butchering. AI chatbots have proven highly effective at seducing victims into thinking they are being wooed by an attractive partner, at least during the initial phases. Wisniewski said AI chatbots can easily handle the opening stages of these scams, registering interest and appearing empathetic. Then a human operator takes over and begins extracting funds from the mark by asking for financial help, or by encouraging them to invest in Ponzi schemes.

Trust none of what you hear

On the subject of deepfakes, Wisniewski said audio versions of AI avatars are already tricking victims at companies. For instance, scammers might call everybody on a support team with an AI-generated voice that duplicates somebody in the IT department, asking for a password until one victim succumbs. "You can do real-time audio deepfakes for pennies," he said.

But Wisniewski expressed skepticism about real-time video deepfakes, specifically referencing a widely reported case from February 2024 in which a Hong Kong employee was allegedly tricked into transferring $25 million to scammers during a video call featuring a deepfake of the company's CFO. He suggested it is much more likely that someone simply pressed the wrong button and blamed the latest trend rather than admit incompetence.
He noted that even the big AI companies, with billion-dollar budgets, have yet to crack the challenge of creating convincingly interactive real-time video avatars. The idea that criminals could build such a model themselves isn't realistic. But it's only a matter of time.

"If we follow the same trajectory of the audio deepfakes, we're about two years out from the criminals having it at an economical price, and three years out from your least favorite uncle doing them for a joke on Facebook," Wisniewski said.

Brown disagreed, however, saying that NCC Group's pentesters had had some success with video fakery. "We've been able to do some video deepfakes on specific use cases. But these are professionals that have been doing this for years," he said. "We are able to do that, but it will become industrialized in due course."

Both Brown and Wisniewski agreed there will be a pressing need for personal verification in communications that goes beyond the established systems.

Daily Brief Summary

CYBERCRIME // Generative AI Transforms Spam, Heightens Global Phishing Risks

Generative AI has significantly improved the quality and localization of phishing and scam messages, reducing spelling and grammatical errors that were typical identifiers of spam.

Scammers are now able to target non-English speaking regions more effectively by crafting messages in local dialects, like Québécois and European Portuguese, which previously helped residents identify spam.

The conversational capabilities of AI systems are enhancing the effectiveness of romance scams by managing initial interactions before human scammers take over for financial exploitation.

Real-time audio deepfakes are currently being used to impersonate individuals in sensitive positions, misleading employees into revealing confidential information.

Experts remain skeptical about real-time video deepfakes: truly convincing versions are not yet affordable or technically feasible without significant investment, though that is expected to change within a few years.

Future threats are anticipated to require strengthened personal verification processes to counter sophisticated AI-enabled scams and impersonations.