Article Details
Scrape Timestamp (UTC): 2024-04-11 22:05:02.212
Original Article Text
LastPass: Hackers targeted employee in failed deepfake CEO call

LastPass revealed this week that threat actors targeted one of its employees in a voice phishing attack, using deepfake audio to impersonate Karim Toubba, the company's Chief Executive Officer.

While 25% of people have been on the receiving end of an AI voice impersonation scam or know someone who has, according to a recent global study, the LastPass employee didn't fall for it because the attacker used WhatsApp, a very uncommon business channel.

"In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp," LastPass intelligence analyst Mike Kosak said.

"As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally."

Kosak added that the attack failed and had no impact on LastPass. The company nevertheless chose to share details of the incident to warn other companies that AI-generated deepfakes are already being used in executive impersonation fraud campaigns.

The deepfake audio used in this attack was likely generated by models trained on publicly available recordings of LastPass' CEO, such as those available on YouTube.

Deepfake attacks on the rise

LastPass' warning follows a U.S. Department of Health and Human Services (HHS) alert issued last week regarding cybercriminals targeting IT help desks using social engineering tactics and AI voice cloning tools to deceive their targets.
Audio deepfakes also make it much harder to verify a caller's identity remotely, rendering attacks that impersonate executives and company employees very difficult to detect. While the HHS advice is specific to attacks targeting IT help desks of organizations in the health sector, it applies equally well to CEO impersonation fraud attempts.

In March 2021, the FBI also issued a Private Industry Notification (PIN) [PDF] cautioning that deepfakes, including AI-generated or manipulated audio, text, images, or video, were becoming increasingly sophisticated and would likely be widely employed by hackers in "cyber and foreign influence operations." Additionally, Europol warned in April 2022 that deepfakes may soon become a tool that cybercriminal groups routinely use in CEO fraud, evidence tampering, and non-consensual pornography creation.
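The employee's reasoning in this incident, an unapproved channel combined with a social engineering hallmark such as forced urgency, can be sketched as a simple triage rule. The following is a minimal illustrative sketch, not LastPass policy; the channel allowlist and urgency keywords are assumptions chosen for the example:

```python
# Hypothetical triage sketch: flag inbound "executive" requests that arrive
# over unapproved channels or show social-engineering hallmarks such as
# forced urgency. Channel names and keywords below are illustrative only.
APPROVED_CHANNELS = {"corporate_email", "slack", "desk_phone"}
URGENCY_MARKERS = {"urgent", "immediately", "right now", "asap"}

def triage_contact(channel: str, message: str) -> str:
    """Return 'report' if the contact shows impersonation hallmarks, else 'ok'."""
    hallmarks = []
    if channel.lower() not in APPROVED_CHANNELS:
        hallmarks.append(f"unapproved channel: {channel}")
    if any(marker in message.lower() for marker in URGENCY_MARKERS):
        hallmarks.append("forced urgency")
    return "report" if hallmarks else "ok"

print(triage_contact("WhatsApp", "Need this wire sent immediately"))   # report
print(triage_contact("corporate_email", "Please review the Q3 deck"))  # ok
```

Real deployments would route a "report" outcome to the internal security team, as the LastPass employee did, rather than relying on keyword matching alone.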
Daily Brief Summary
LastPass disclosed a failed cyberattack involving a deepfake audio impersonation of its CEO aimed at an employee via WhatsApp.
The employee recognized the unusual communication platform and the hallmarks of a social engineering scam, and therefore ignored the messages and reported the incident.
No company data was compromised or affected by the attempted security breach.
The deepfake audio was likely created from public recordings of the CEO available online, demonstrating the feasibility of such attacks with accessible data.
LastPass chose to publicize the incident to alert other organizations about the rising use of AI in cybersecurity threats, especially deepfake technologies used for executive impersonation.
Similar warnings about the exploitation of AI-generated deepfakes in cyberattacks have been issued by other entities, including the FBI and Europol, highlighting a broader industry concern.
The incident underlines the importance of employee training in recognizing and mitigating unconventional cyber threats.