Article Details

Original Article Text

Click to Toggle View

Google won’t fix new ASCII smuggling attack in Gemini. Google has decided not to fix a new ASCII smuggling attack in Gemini that could be used to trick the AI assistant into providing users with fake information, alter the model’s behavior, and silently poison its data. ASCII smuggling is an attack where special characters from the Tags Unicode block are used to introduce payloads that are invisible to users but can still be detected and processed by large-language models (LLMs). It’s similar to other attacks that researchers presented recently against Google Gemini, which all exploit a gap between what users see and what machines read, like performing CSS manipulation or exploiting GUI limitations. While LLMs’ susceptibility to ASCII smuggling attacks isn’t a new discovery, as several researchers have explored this possibility since the advent of generative AI tools, the risk level is now different [1, 2, 3, 4]. Before, chatbots could only be maliciously manipulated by such attacks if the user was tricked into pasting specially crafted prompts. With the rise of agentic AI tools like Gemini, which have widespread access to sensitive user data and can perform tasks autonomously, the threat is more significant. Viktor Markopoulos, a security researcher at FireTail cybersecurity company, has tested ASCII smuggling against several widely used AI tools and found that Gemini (Calendar invites or email), DeepSeek (prompts), and Grok (X posts), are vulnerable to the attack. Claude, ChatGPT, and Microsoft CoPilot proved secure against ASCII smuggling, implementing some form of input sanitization, FireTail found. Regarding Gemini, its integration with Google Workspace poses a high risk, as attackers could use ASCII smuggling to embed hidden text in Calendar invites or emails. Markopoulos found that it’s possible to hide instructions on the Calendar invite title, overwrite organizer details (identity spoofing), and smuggle hidden meeting descriptions or links. Regarding the risk from emails, the researcher states that “for users with LLMs connected to their inboxes, a simple email with hidden commands can instruct the LLM to search the inbox for sensitive items or send contact details, turning a standard phishing attempt into an autonomous data extraction tool.” LLMs instructed to browse websites can also stumble upon hidden payloads in product descriptions and feed them with malicious URLs to convey to users. The researcher reported the findings to Google on September 18 but the tech giant dismissed the issue as not being a security bug and may only be exploited in the context of social engineering attacks. Even so, Markopoulos showed that the attack can trick Gemini into supplying false information to users. In one example, the researcher passed an invisible instruction that Gemini processed to present a potentially malicious site as the place to get a good quality phone with a discount. Other tech firms, though, have a different perspective on this type of problems. For example, Amazon published detailed security guidance on the topic of Unicode character smuggling. BleepingComputer has contacted Google for more clarification on the bug but we have yet to receive a response. The Security Validation Event of the Year: The Picus BAS Summit Join the Breach and Attack Simulation Summit and experience the future of security validation. Hear from top experts and see how AI-powered BAS is transforming breach and attack simulation. Don't miss the event that will shape the future of your security strategy

Daily Brief Summary

VULNERABILITIES // Google Opts Out of Fixing ASCII Smuggling Flaw in Gemini

Google has decided not to address an ASCII smuggling vulnerability in its Gemini AI assistant, which can manipulate the AI into providing false information or altering its behavior.

ASCII smuggling uses special characters to introduce invisible payloads, exploiting the gap between user-visible content and machine-readable data in large-language models.

Security researcher Viktor Markopoulos demonstrated the attack's effectiveness on AI tools like Gemini, DeepSeek, and Grok, while others like ChatGPT and Microsoft CoPilot remain secure.

The vulnerability poses a significant risk due to Gemini's integration with Google Workspace, potentially allowing attackers to embed hidden instructions in Calendar invites or emails.

Google dismissed the issue as a non-security bug, suggesting it requires social engineering to exploit, but the potential for autonomous data extraction remains a concern.

Other tech companies, such as Amazon, have issued guidance on Unicode character smuggling, indicating varying industry perspectives on the threat.

The findings were reported to Google on September 18, yet the company has not provided further clarification or changes in its security approach.