Article Details

Scrape Timestamp (UTC): 2025-07-16 07:47:30.905

Source: https://thehackernews.com/2025/07/google-ai-big-sleep-stops-exploitation.html

Original Article Text

Click to Toggle View

Google AI "Big Sleep" Stops Exploitation of Critical SQLite Vulnerability Before Hackers Act. Google on Tuesday revealed that its large language model (LLM)-assisted vulnerability discovery framework discovered a security flaw in the SQLite open-source database engine before it could have been exploited in the wild. The vulnerability, tracked as CVE-2025-6965 (CVSS score: 7.2), is a memory corruption flaw affecting all versions prior to 3.50.2. It was discovered by Big Sleep, an artificial intelligence (AI) agent that was launched by Google last year as part of a collaboration between DeepMind and Google Project Zero. "An attacker who can inject arbitrary SQL statements into an application might be able to cause an integer overflow resulting in read off the end of an array," SQLite project maintainers said in an advisory. The tech giant described CVE-2025-6965 as a critical security issue that was "known only to threat actors and was at risk of being exploited." Google did not reveal who the threat actors were. "Through the combination of threat intelligence and Big Sleep, Google was able to actually predict that a vulnerability was imminently going to be used and we were able to cut it off beforehand," Kent Walker, President of Global Affairs at Google and Alphabet, said. "We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild." In October 2024, Big Sleep was behind the discovery of another flaw in SQLite, a stack buffer underflow vulnerability that could have been exploited to result in a crash or arbitrary code execution. Coinciding with the development, Google has also published a white paper to build secure AI agents such that they have well-defined human controllers, their capabilities are carefully limited to avoid potential rogue actions and sensitive data disclosure, and their actions are observable and transparent. "Traditional systems security approaches (such as restrictions on agent actions implemented through classical software) lack the contextual awareness needed for versatile agents and can overly restrict utility," Google's Santiago (Sal) Díaz, Christoph Kern, and Kara Olive said. "Conversely, purely reasoning-based security (relying solely on the AI model's judgment) is insufficient because current LLMs remain susceptible to manipulations like prompt injection and cannot yet offer sufficiently robust guarantees." To mitigate the key risks associated with agent security, the company said it has adopted a hybrid defense-in-depth approach that combines the strengths of both traditional, deterministic controls and dynamic, reasoning-based defenses. The idea is to create robust boundaries around the agent's operational environment so that the risk of harmful outcomes is significantly mitigated, specifically malicious actions carried out as a result of prompt injection. "This defense-in-depth approach relies on enforced boundaries around the AI agent's operational environment to prevent potential worst-case scenarios, acting as guardrails even if the agent's internal reasoning process becomes compromised or misaligned by sophisticated attacks or unexpected inputs," Google said. "This multi-layered approach recognizes that neither purely rule-based systems nor purely AI-based judgment are sufficient on their own."

Daily Brief Summary

MISCELLANEOUS // Google AI Detects Critical SQLite Vulnerability Before Exploitation

Google's AI, Big Sleep, preemptively identified a critical vulnerability in the SQLite database engine, preventing potential exploitation.

The detected issue was a memory corruption flaw, categorized under CVE-2025-6965, with a high-risk CVSS score of 7.2.

This AI-driven discovery was part of a collaboration between Google's DeepMind and Project Zero, highlighting the use of AI in cybersecurity.

The vulnerability could allow attackers to cause significant damage through SQL statement injections, such as integer overflow and unauthorized data access.

Google cited the incident as the first known instance where an AI directly prevented a cyber attack by predicting and addressing a software vulnerability before it was exploited.

Concurrently, Google released a white paper outlining the implementation of secure AI systems, emphasizing a balanced approach combining traditional security controls with AI reasoning capabilities.

The white paper stressed the importance of enforced operational boundaries for AI agents to prevent adverse outcomes from sophisticated attacks or unexpected inputs.

Google aims to refine AI security measures by incorporating multiple layers of defense to ensure robust protection against emerging cyber threats.