Article Details
Scrape Timestamp (UTC): 2025-05-15 18:28:09.047
Original Article Text
FBI: US officials targeted in voice deepfake attacks since April

The FBI warned that cybercriminals are using AI-generated audio deepfakes to target U.S. officials in voice phishing attacks that started in April. The warning came in a public service announcement issued on Thursday, which also provides mitigation measures to help the public spot and block attacks using audio deepfakes (also known as voice deepfakes).

"Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts. If you receive a message claiming to be from a senior US official, do not assume it is authentic," the FBI warned. "The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts."

The attackers can gain access to the accounts of U.S. officials by sending malicious links disguised as invitations to move the conversation to another messaging platform. By compromising those accounts, the threat actors can obtain other government officials' contact information. They can then use social engineering to impersonate the compromised U.S. officials, steal further sensitive information, and trick targeted contacts into transferring funds.

The PSA follows a March 2021 FBI Private Industry Notification (PIN) [PDF] warning that deepfakes (including AI-generated or manipulated audio, text, images, or video) would likely be widely employed in "cyber and foreign influence operations" as they became increasingly sophisticated. One year later, Europol cautioned that deepfakes could soon become a tool that cybercriminal groups routinely use in CEO fraud, non-consensual pornography creation, and evidence tampering. The U.S. 
Department of Health and Human Services (HHS) also warned in April 2024 that cybercriminals were targeting IT help desks in social engineering attacks that used AI voice cloning to deceive targets. Later that month, LastPass revealed that unknown attackers had used deepfake audio to impersonate Karim Toubba, the company's Chief Executive Officer, in a voice phishing attack targeting one of its employees.
Daily Brief Summary
The FBI issued a public service announcement alerting that AI-based voice deepfakes have been used in phishing attacks against U.S. officials since April 2025.
Perpetrators impersonate senior U.S. officials using AI-generated audio to establish rapport and subsequently gain access to personal and governmental accounts.
The agency highlighted the use of smishing (text-based) and vishing (voice-based) techniques that appear to originate from high-ranking officials to deceive targets.
Once access is obtained, attackers exploit the breached accounts to gather sensitive information from, and about, other government officials, and potentially to trick contacts into transferring funds.
The warning aligns with a historical pattern, referencing a 2021 FBI notification regarding the increasing sophistication and expected proliferation of deepfakes in cyber operations.
Concerns about deepfakes' role in cybersecurity have been escalating since 2021, with Europol and the U.S. Department of Health and Human Services both noting their potential for misuse in various frauds and social engineering schemes.
The recent misuse of deepfake technology in an attack on LastPass, involving a deepfake audio of the CEO, underscores the tangible threats posed by these technologies.
The announcement aims to raise awareness and encourage vigilance, providing mitigation strategies to identify and defend against such deceptive tactics.