Article Details
Scrape Timestamp (UTC): 2024-08-27 06:15:11.470
Source: https://thehackernews.com/2024/08/microsoft-fixes-ascii-smuggling-flaw.html
Original Article Text
Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could enable the theft of sensitive user information using a technique called ASCII smuggling. "ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface," security researcher Johann Rehberger said. "This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!"

The full attack chains together a number of methods into a reliable exploit. The net outcome is that sensitive data present in emails, including multi-factor authentication (MFA) codes, could be transmitted to an adversary-controlled server. Microsoft has since addressed the issues following responsible disclosure in January 2024.

The development comes as proof-of-concept (PoC) attacks have been demonstrated against Microsoft's Copilot system to manipulate responses, exfiltrate private data, and dodge security protections, once again highlighting the need to monitor risks in artificial intelligence (AI) tools. The methods, detailed by Zenity, allow malicious actors to perform retrieval-augmented generation (RAG) poisoning and indirect prompt injection, leading to remote code execution attacks that can fully control Microsoft Copilot and other AI apps. In a hypothetical attack scenario, an external hacker with code execution capabilities could trick Copilot into serving phishing pages to users. Perhaps one of the most novel attacks is the ability to turn the AI into a spear-phishing machine.
The red-teaming technique, dubbed LOLCopilot, allows an attacker with access to a victim's email account to send phishing messages that mimic the compromised user's style. Microsoft has also acknowledged that publicly exposed Copilot bots created using Microsoft Copilot Studio and lacking any authentication protections could be an avenue for threat actors to extract sensitive information, assuming they have prior knowledge of the Copilot's name or URL.

"Enterprises should evaluate their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents), and enable Data Loss Prevention and other security controls accordingly to control creation and publication of Copilots," Rehberger said.
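To make the technique concrete, here is a minimal sketch of how ASCII smuggling works in principle. It is an illustration, not Rehberger's actual proof of concept: it assumes the invisible characters come from the Unicode Tags block (U+E0000 to U+E007F), whose code points mirror printable ASCII but render as invisible in most user interfaces. The `smuggle`/`unsmuggle` names and the sample payload are hypothetical.

```python
# Illustration of ASCII smuggling (an assumption-laden sketch, not the
# researcher's exact exploit). Unicode "Tags" code points mirror ASCII
# at a fixed offset but are not rendered visibly in most UIs.

TAG_BASE = 0xE0000  # offset between ASCII and the Unicode Tags block

def smuggle(payload: str) -> str:
    """Map each ASCII character to its invisible Tags-block counterpart."""
    return "".join(chr(TAG_BASE + ord(c)) for c in payload)

def unsmuggle(hidden: str) -> str:
    """Recover the ASCII payload an attacker's server would decode."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in hidden
        if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F
    )

# A smuggled payload can ride inside innocuous-looking link text:
secret = "MFA code: 123456"              # hypothetical sensitive data
link_text = "Click here" + smuggle(secret)

print(len(link_text))        # longer than "Click here", yet looks identical
print(unsmuggle(link_text))  # → MFA code: 123456
```

The asymmetry is the point: the user sees only "Click here", while anything that inspects the raw characters (such as a URL the link points at) receives the full hidden payload.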
Daily Brief Summary
Microsoft recently patched a vulnerability in Microsoft 365 Copilot which could enable the theft of sensitive data using ASCII smuggling.
ASCII smuggling involves using special Unicode characters that mirror ASCII but are invisible to the user, allowing hidden data to be embedded within clickable hyperlinks for exfiltration.
This security flaw could potentially allow for the theft of email contents and multi-factor authentication codes by sending them to an adversary-controlled server.
A series of attack techniques were chained together to create a reliable exploit, which was responsibly disclosed to Microsoft in January 2024.
Additional potential threats include using AI for spear-phishing by mimicking user communication styles and exploiting publicly exposed Copilot bots.
Enterprises are advised to enable Data Loss Prevention and enhance security controls to mitigate risks associated with Microsoft Copilot and other AI applications.
These discoveries underscore the persistent security threats and the need for ongoing vigilance in monitoring and protecting AI-driven tools from cyber exploits.
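One defensive measure implied by the summary above can be sketched as output sanitization: stripping invisible Tags-block characters from model output before it reaches the user, so a smuggled payload cannot ride along inside rendered text. This is a hypothetical mitigation sketch, not a Microsoft-documented control; the function name and scope are assumptions.

```python
import re

# Hypothetical mitigation sketch: remove Unicode Tags-block characters
# (U+E0000-U+E007F) from LLM output before rendering it to users, so
# invisible smuggled payloads are dropped while visible text is preserved.
TAGS_RE = re.compile(r"[\U000E0000-\U000E007F]")

def sanitize_llm_output(text: str) -> str:
    """Strip invisible Tags-block characters from model output."""
    return TAGS_RE.sub("", text)
```

In practice such a filter would be one layer among several; controls like Data Loss Prevention policies and restricting who can publish Copilots, as the article recommends, address the broader exposure.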