Article Details

Scrape Timestamp (UTC): 2025-09-30 13:21:12.093

Source: https://thehackernews.com/2025/09/researchers-disclose-google-gemini-ai.html

Original Article Text


Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits. Cybersecurity researchers have disclosed three now-patched security vulnerabilities impacting Google's Gemini artificial intelligence (AI) assistant that, if successfully exploited, could have exposed users to major privacy risks and data theft.

"They made Gemini vulnerable to search-injection attacks on its Search Personalization Model; log-to-prompt injection attacks against Gemini Cloud Assist; and exfiltration of the user's saved information and location data via the Gemini Browsing Tool," Tenable security researcher Liv Matan said in a report shared with The Hacker News.

The vulnerabilities have been collectively codenamed the Gemini Trifecta by the cybersecurity company. They reside in three distinct components of the Gemini suite: the Search Personalization Model, Gemini Cloud Assist, and the Gemini Browsing Tool. Tenable said the browsing tool flaw could have been abused to embed the user's private data inside a request to a malicious server controlled by the attacker, without the need for Gemini to render links or images.

"One impactful attack scenario would be an attacker who injects a prompt that instructs Gemini to query all public assets, or to query for IAM misconfigurations, and then creates a hyperlink that contains this sensitive data," Matan said of the Cloud Assist flaw. "This should be possible since Gemini has the permission to query assets through the Cloud Asset API."

Following responsible disclosure, Google has stopped rendering hyperlinks in all log summarization responses and added further hardening measures to safeguard against prompt injections.

"The Gemini Trifecta shows that AI itself can be turned into the attack vehicle, not just the target. As organizations adopt AI, they cannot overlook security," Matan said. "Protecting AI tools requires visibility into where they exist across the environment and strict enforcement of policies to maintain control."
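Google's mitigation of no longer rendering hyperlinks in log summarization responses can be thought of as a post-processing step on model output. The sketch below illustrates the idea in Python; the function name and regex patterns are assumptions for illustration, not Google's actual implementation.

```python
import re

# Illustrative sketch (not Google's actual code): strip Markdown-style and
# raw hyperlinks from LLM output before rendering, so injected instructions
# in logs cannot smuggle sensitive data out inside an attacker-controlled URL.
MD_LINK = re.compile(r"\[([^\]]*)\]\((?:https?|ftp)://[^)]+\)")
RAW_URL = re.compile(r"(?:https?|ftp)://\S+")

def sanitize_summary(text: str) -> str:
    """Replace Markdown links with their visible text and drop bare URLs."""
    text = MD_LINK.sub(r"\1", text)   # keep the link text, drop the URL target
    return RAW_URL.sub("[link removed]", text)
```

In this approach the summary stays readable, but any URL an injected prompt managed to construct never reaches the user's client as a clickable link.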
The development comes as agentic security platform CodeIntegrity detailed a new attack that abuses Notion's AI agent for data exfiltration by hiding prompt instructions in a PDF file using white text on a white background, instructing the model to collect confidential data and send it to the attackers. "An agent with broad workspace access can chain tasks across documents, databases, and external connectors in ways RBAC never anticipated," the company said. "This creates a vastly expanded threat surface where sensitive data or actions can be exfiltrated or misused through multi-step, automated workflows."
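A defensive counterpart to the hidden-prompt technique CodeIntegrity describes is to screen extracted document text for instruction-like patterns before handing it to an agent. The heuristic below is a simplified illustration under assumed patterns; it is not CodeIntegrity's or Notion's tooling, and a real deployment would need far broader coverage.

```python
import re

# Illustrative heuristic (assumed patterns, not a production filter): flag
# extracted document text that appears to address the AI agent rather than
# a human reader, before the agent ever processes it.
SUSPICIOUS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"\byou are (the|an?) (ai|assistant|agent)\b", re.I),
    re.compile(r"\b(send|exfiltrate|forward|post) .{0,40}\b(to|at) https?://", re.I),
]

def flag_injection(extracted_text: str) -> list[str]:
    """Return suspicious instruction-like fragments found in document text."""
    hits = []
    for pattern in SUSPICIOUS:
        hits.extend(m.group(0) for m in pattern.finditer(extracted_text))
    return hits
```

Pattern matching alone cannot catch every injection, which is why the report stresses strict policy enforcement and limiting what workspace data an agent can reach in the first place.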

Daily Brief Summary

VULNERABILITIES // Google Patches Three Vulnerabilities in Gemini AI Assistant

Cybersecurity researchers identified three vulnerabilities in Google's Gemini AI, potentially exposing users to privacy risks and data theft through prompt injection and cloud exploits.

The vulnerabilities, named the Gemini Trifecta, affected the Search Personalization Model, Cloud Assist, and Browsing Tool, enabling unauthorized data exfiltration.

Attack scenarios included using prompt injections to manipulate Gemini into querying sensitive data and embedding it into malicious requests.

Google responded by ceasing hyperlink rendering in log summarization and enhancing security measures to prevent prompt injection attacks.

The incident emphasizes the need for robust security measures as AI tools become integral to business operations, highlighting AI's dual role as both target and attack vector.

The case follows a broader trend of exploiting AI agents, as seen in a separate attack using Notion's AI for data exfiltration through hidden prompt instructions.

Organizations are urged to maintain visibility and enforce strict policies to secure AI environments against evolving threats.