August 4, 2025
5 min read
djohnson
A one-line prompt injection flaw in Cursor AI allowed attackers remote code execution by poisoning model data.
Cursor AI Coding Agent Vulnerability Allowed Remote Code Execution via One-Line Prompt Injection
Threat researchers at AimLabs disclosed a critical data-poisoning attack affecting Cursor, an AI-powered code editing software, that could grant attackers remote code execution privileges on user devices. The flaw was reported to Cursor on July 7, 2025, and patched the next day in version 1.3. However, all previous versions remain vulnerable to remote code execution triggered by a single externally-hosted prompt injection, according to AimLabs' blog post. Tracked as CVE-2025-54135, the vulnerability arises when Cursor interacts with a Model Contest Protocol (MCP) server. This server enables Cursor to access external tools such as Slack, GitHub, and other software development databases. Similar to the EchoLeak flaw discovered by AimLabs last month, Cursor’s AI agent can be hijacked through malicious prompts fetched from MCP servers. Using a single line of crafted prompting, an attacker can manipulate Cursor’s behavior silently and invisibly to the user. In a proof-of-concept, researchers injected a malicious prompt via Slack, which Cursor retrieved through a connected MCP server. This prompt altered Cursor’s configuration file to add a malicious server with a harmful start command. Crucially, Cursor executes these malicious commands immediately upon receiving them, without user approval. This vulnerability highlights the risks organizations face when integrating AI systems without fully understanding their exposure to external data manipulation. AI agents like Cursor, which operate with developer-level privileges, are susceptible to instructions from untrusted third parties. A single poisoned prompt can effectively "morph an AI agent into a local shell." AimLabs emphasized, "The tools expose the agent to external and untrusted data, which can affect the agent’s control-flow. This in turn, allows attackers to hijack the agent’s session and take advantage of the agent’s privileges to perform on behalf of the user." While Cursor’s vulnerability has been patched, the researchers warn that this class of flaw is intrinsic to the way large language models operate, as they ingest commands and directions via external prompting. They predict similar vulnerabilities will continue to surface across AI platforms. As AimLabs concluded, "Because model output steers the execution path of any AI agent, this vulnerability pattern is intrinsic and keeps resurfacing across multiple platforms."Source: Originally published at CyberScoop on August 1, 2025.