Article Details
Scrape Timestamp (UTC): 2025-03-18 15:50:03.517
Source: https://thehackernews.com/2025/03/new-rules-file-backdoor-attack-lets.html
Original Article Text
New 'Rules File Backdoor' Attack Lets Hackers Inject Malicious Code via AI Code Editors

Cybersecurity researchers have disclosed details of a new supply chain attack vector dubbed Rules File Backdoor that affects artificial intelligence (AI)-powered code editors like GitHub Copilot and Cursor, causing them to inject malicious code.

"This technique enables hackers to silently compromise AI-generated code by injecting hidden malicious instructions into seemingly innocent configuration files used by Cursor and GitHub Copilot," Pillar Security researcher Ziv Karliner said in a technical report shared with The Hacker News. "By exploiting hidden Unicode characters and sophisticated evasion techniques in the model-facing instruction payload, threat actors can manipulate the AI to insert malicious code that bypasses typical code reviews."

The attack vector is notable because it allows malicious code to propagate silently across projects, posing a supply chain risk.

The crux of the attack hinges on the rules files used by AI agents to guide their behavior, helping users define best coding practices and project architecture. Specifically, it involves embedding carefully crafted prompts within seemingly benign rule files, causing the AI tool to generate code containing security vulnerabilities or backdoors. In other words, the poisoned rules nudge the AI into producing nefarious code.

This can be accomplished by using zero-width joiners, bidirectional text markers, and other invisible characters to conceal malicious instructions, and by exploiting the AI's ability to interpret natural language to generate vulnerable code via semantic patterns that trick the model into overriding ethical and safety constraints.

Following responsible disclosure in late February and March 2025, both Cursor and GitHub have stated that users are responsible for reviewing and accepting suggestions generated by the tools.

"'Rules File Backdoor' represents a significant risk by weaponizing the AI itself as an attack vector, effectively turning the developer's most trusted assistant into an unwitting accomplice, potentially affecting millions of end users through compromised software," Karliner said. "Once a poisoned rule file is incorporated into a project repository, it affects all future code-generation sessions by team members. Furthermore, the malicious instructions often survive project forking, creating a vector for supply chain attacks that can affect downstream dependencies and end users."
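The report attributes the concealment to zero-width joiners, bidirectional text markers, and other invisible characters. The sketch below is illustrative only: it uses the Unicode Tags block (U+E0000-U+E007F), one well-known way to encode text that does not render in most editors and diff views. It may or may not match the exact encoding observed by Pillar Security, and the file name and instruction text are hypothetical; the point is simply that a rule file can look benign on screen while carrying extra instructions for the model.

```python
# Illustrative sketch only: hide text by mapping printable ASCII onto the
# Unicode Tags block (U+E0000-U+E007F), whose code points render as nothing
# in most editors. File name and instruction text below are hypothetical,
# not taken from Pillar Security's report.

def to_invisible(text: str) -> str:
    """Map printable ASCII characters onto invisible Tags-block code points."""
    return "".join(chr(0xE0000 + ord(c)) for c in text if 0x20 <= ord(c) < 0x7F)

visible_rule = "Prefer descriptive variable names and small, focused functions."
hidden_instruction = to_invisible("<instruction an attacker wants the model to follow>")

# The poisoned rule displays identically to the benign one, but the hidden
# code points remain part of the string handed to the AI assistant.
poisoned_rule = visible_rule + hidden_instruction

with open(".cursorrules", "w", encoding="utf-8") as f:  # hypothetical rule file name
    f.write(poisoned_rule + "\n")

print(len(visible_rule), len(poisoned_rule))  # lengths differ; on-screen rendering does not
```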
Daily Brief Summary
Cybersecurity experts have unveiled a new type of supply chain attack called Rules File Backdoor that targets AI-powered code editors like GitHub Copilot and Cursor.
This attack manipulates AI code editors to inject harmful code by altering rule files, which guide the AI's coding suggestions.
Attackers use hidden Unicode characters and other evasion methods to embed malicious directives within these configuration files, deceiving the AI into generating compromised code (a simple check for such hidden characters is sketched after this summary).
This technique causes the AI to inadvertently create security vulnerabilities or backdoors within otherwise normal code, bypassing traditional code review processes.
The exploitation of these AI tools poses a significant supply chain threat, as the tampered code can spread across projects and persist through software forks, impacting downstream software dependencies.
Both Cursor and GitHub have responded by advising users to diligently review and verify all AI-generated code suggestions.
The impact of this vulnerability is potentially vast, reaching millions of end users by propagating compromised software through trusted development tools.
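Since both vendors place the burden of review on users, one plausible mitigation is a repository-level scan of rule files for invisible characters before they are trusted. The sketch below is a minimal example, assuming rule files named .cursorrules and .github/copilot-instructions.md (the paths, like the rest of the script, are assumptions rather than anything prescribed in the report); it flags zero-width, bidirectional, and Tags-block code points.

```python
# Minimal defensive sketch: flag invisible or bidirectional Unicode characters
# in AI rule files before they are fed to a coding assistant. Paths are
# assumptions for illustration, not taken from the report.

from pathlib import Path

SUSPICIOUS = {
    0x200B, 0x200C, 0x200D, 0x2060, 0xFEFF,  # zero-width space/joiners, word joiner, BOM
    0x202A, 0x202B, 0x202C, 0x202D, 0x202E,  # bidirectional embeddings and overrides
    0x2066, 0x2067, 0x2068, 0x2069,          # bidirectional isolates
}

def flag_hidden_chars(path: Path) -> list[tuple[int, str]]:
    """Return (offset, U+XXXX) pairs for suspicious code points in a file."""
    text = path.read_text(encoding="utf-8", errors="replace")
    hits = []
    for i, ch in enumerate(text):
        cp = ord(ch)
        if cp in SUSPICIOUS or 0xE0000 <= cp <= 0xE007F:  # Tags block
            hits.append((i, f"U+{cp:04X}"))
    return hits

if __name__ == "__main__":
    for name in (".cursorrules", ".github/copilot-instructions.md"):  # hypothetical rule file paths
        p = Path(name)
        if p.exists():
            for offset, codepoint in flag_hidden_chars(p):
                print(f"{p}: hidden character {codepoint} at offset {offset}")
```

A check like this could run as a pre-commit hook or CI step so that a poisoned rule file is caught before it influences any code-generation session.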