Article Details

Scrape Timestamp (UTC): 2025-10-07 15:21:05.100

Source: https://thehackernews.com/2025/10/googles-new-ai-doesnt-just-find.html

Original Article Text


Google's New AI Doesn't Just Find Vulnerabilities — It Rewrites Code to Patch Them

Google's DeepMind division on Monday announced an artificial intelligence (AI)-powered agent called CodeMender that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. The effort adds to the company's ongoing work on AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz.

DeepMind said the agent is designed to be both reactive and proactive: it fixes new vulnerabilities as soon as they are spotted, and it rewrites and hardens existing codebases with the aim of eliminating whole classes of vulnerabilities in the process.

"By automatically creating and applying high-quality security patches, CodeMender's AI-powered agent helps developers and maintainers focus on what they do best — building good software," DeepMind researchers Raluca Ada Popa and Four Flynn said. "Over the past six months that we've been building CodeMender, we have already upstreamed 72 security fixes to open source projects, including some as large as 4.5 million lines of code."

Under the hood, CodeMender leverages Google's Gemini Deep Think models to debug, flag, and fix security vulnerabilities by addressing the root cause of the problem, and it validates its patches to ensure they don't trigger any regressions. The agent, Google added, also makes use of a large language model (LLM)-based critique tool that highlights the differences between the original and modified code, verifies that the proposed changes do not introduce regressions, and self-corrects as required.

Google said it also intends to gradually reach out to interested maintainers of critical open-source projects with CodeMender-generated patches and solicit their feedback, so that the tool can be used to keep codebases secure.
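The patch-validation loop the article describes (generate a fix, run regression checks, have a critique step review the diff between original and modified code) can be illustrated with a minimal, purely hypothetical sketch. None of the names or logic below come from CodeMender itself; this simply shows the shape of such a pipeline, with `difflib` standing in for the LLM-based critique tool:

```python
import difflib

# Hypothetical "vulnerable" function: eval() executes arbitrary expressions.
ORIGINAL = '''\
def lookup(table, key):
    return eval(key, {}, {"table": table})
'''

# Candidate patch addressing the root cause: plain dict access, no code execution.
PATCHED = '''\
def lookup(table, key):
    return table[key]
'''


def run_regression_tests(source: str) -> bool:
    """Exec the candidate code and check that intended behavior is preserved."""
    namespace = {}
    exec(source, namespace)
    lookup = namespace["lookup"]
    try:
        # Valid input must still work after the fix.
        return lookup({"a": 1}, "a") == 1
    except Exception:
        return False


def critique_diff(original: str, patched: str) -> list[str]:
    """Surface exactly the lines a reviewer (human or LLM) should inspect."""
    return [
        line
        for line in difflib.unified_diff(
            original.splitlines(), patched.splitlines(), lineterm=""
        )
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]


if __name__ == "__main__":
    # Accept the patch only if regressions pass; then show the diff for review.
    if run_regression_tests(PATCHED):
        for line in critique_diff(ORIGINAL, PATCHED):
            print(line)
```

In a real agentic system, the critique step would feed back into the model so it can self-correct a rejected patch; here the diff is simply printed for a human reviewer.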
The development comes as the company said it's instituting an AI Vulnerability Reward Program (AI VRP) for reporting AI-related issues in its products, such as prompt injections, jailbreaks, and misalignment, with rewards as high as $30,000. In June 2025, Anthropic revealed that models from various developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals, and that LLMs "misbehaved less when it stated it was in testing and misbehaved more when it stated the situation was real."

That said, policy-violating content generation, guardrail bypasses, hallucinations, factual inaccuracies, system prompt extraction, and intellectual property issues do not fall under the ambit of the AI VRP.

Google, which previously set up a dedicated AI Red Team to tackle threats to AI systems as part of its Secure AI Framework (SAIF), has also introduced a second iteration of the framework focused on agentic security risks, such as data disclosure and unintended actions, and the controls needed to mitigate them.

The company further noted that it's committed to using AI to enhance security and safety, giving defenders an advantage against the growing threat from cybercriminals, scammers, and state-backed attackers.

Daily Brief Summary

VULNERABILITIES // Google's CodeMender AI Agent Automates Vulnerability Detection and Patching

Google's DeepMind introduced CodeMender, an AI agent that detects, patches, and rewrites vulnerable code, aiming to prevent future exploits and enhance software security.

CodeMender is designed to be both reactive and proactive, addressing new vulnerabilities and securing existing codebases to eliminate entire classes of vulnerabilities.

Over the past six months, CodeMender has upstreamed 72 security fixes to open-source projects, including some as large as 4.5 million lines of code.

Utilizing Google's Gemini Deep Think models, CodeMender identifies root causes of vulnerabilities and ensures changes do not introduce regressions.

The AI agent employs a large language model-based tool to critique code modifications, verifying changes and self-correcting as needed.

Google plans to engage maintainers of critical open-source projects for feedback on CodeMender-generated patches, enhancing the tool's effectiveness.

Google is launching an AI Vulnerability Reward Program for reporting AI-related issues in its products, with rewards of up to $30,000.

Google's Secure AI Framework continues to evolve, focusing on agentic security risks and using AI to counter threats from cybercriminals and state-backed actors.