Article Details

Scrape Timestamp (UTC): 2025-08-05 13:53:26.744

Source: https://thehackernews.com/2025/08/cursor-ai-code-editor-vulnerability.html

Original Article Text

Click to Toggle View

Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval. Cybersecurity researchers have disclosed a high-severity security flaw in the artificial intelligence (AI)-powered code editor Cursor that could result in remote code execution. The vulnerability, tracked as CVE-2025-54136 (CVSS score: 7.2), has been codenamed MCPoison by Check Point Research, owing to the fact that it exploits a quirk in the way the software handles modifications to Model Context Protocol (MCP) server configurations. "A vulnerability in Cursor AI allows an attacker to achieve remote and persistent code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or editing the file locally on the target's machine," Cursor said in an advisory released last week. "Once a collaborator accepts a harmless MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) without triggering any warning or re-prompt." MCP is an open-standard developed by Anthropic that allows large language models (LLMs) to interact with external tools, data, and services in a standardized manner. It was introduced by the AI company in November 2024. CVE-2025-54136, per Check Point, has to do with how it's possible for an attacker to alter the behavior of an MCP configuration after a user has approved it within Cursor. Specifically, it unfolds as follows - The fundamental problem here is that once a configuration is approved, it's trusted by Cursor indefinitely for future runs, even if it has been changed. Successful exploitation of the vulnerability not only exposes organizations to supply chain risks, but also opens the door to data and intellectual property theft without their knowledge. Following responsible disclosure on July 16, 2025, the issue has been addressed by Cursor in version 1.3 released late July 2025 by requiring user approval every time an entry in the MCP configuration file is modified. "The flaw exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows," Check Point said. The development comes days after Aim Labs, Backslash Security, and HiddenLayer exposed multiple weaknesses in the AI tool that could have been abused to obtain remote code execution and bypass its denylist-based protections. They have also been patched in version 1.3. The findings also coincide with the growing adoption of AI in business workflows, including using LLMs for code generation, broadening the attack surface to various emerging risks like AI supply chain attacks, unsafe code, model poisoning, prompt injection, hallucinations, inappropriate responses, and data leakage - "As Large Language Models become deeply embedded in agent workflows, enterprise copilots, and developer tools, the risk posed by these jailbreaks escalates significantly," Pillar Security's Dor Sarig said. "Modern jailbreaks can propagate through contextual chains, infecting one AI component and leading to cascading logic failures across interconnected systems." "These attacks highlight that AI security requires a new paradigm, as they bypass traditional safeguards without relying on architectural flaws or CVEs. The vulnerability lies in the very language and reasoning the model is designed to emulate."

Daily Brief Summary

MALWARE // High-Severity RCE Vulnerability Found in AI-Powered Coding Tool

Researchers at Check Point have discovered a critical security flaw in Cursor, an AI-based code editor, which permits remote code execution through manipulation of MCP files.

The vulnerability, tagged as CVE-2025-54136 with a CVSS score of 7.2, is named MCPoison due to its method of attack involving Model Context Protocol (MCP) configuration modifications.

Attackers can exploit the flaw by altering a previously approved MCP configuration file in a GitHub repository or locally on a user's device, enabling the execution of malicious commands without detection.

The MCP, developed by Anthropic, facilitates standardized interaction between large language models and external resources, but this vulnerability exposes dangerous trust assumptions in its implementation.

The exploit could lead to significant risks such as data theft and intellectual property breaches, impacting the entire supply chain.

Cursor addressed the issue in their latest release, version 1.3, by mandating user approval for every change to the MCP configuration.

This incident underlines growing concerns about AI security as AI tools and large language models increasingly integrate into business and development processes.