Cursor’s AI coding agent morphed ‘into local shell’ with one-line prompt attack

A one-line prompt injection flaw in Cursor AI allowed attackers to achieve remote code execution by poisoning the data fed to the model.

August 4, 2025
5 min read
djohnson

Cursor AI Coding Agent Vulnerability Allowed Remote Code Execution via One-Line Prompt Injection

Threat researchers at AimLabs disclosed a critical data-poisoning attack affecting Cursor, an AI-powered code editor, that could grant attackers remote code execution privileges on user devices. The flaw was reported to Cursor on July 7, 2025, and patched the next day in version 1.3. However, all previous versions remain vulnerable to remote code execution triggered by a single externally hosted prompt injection, according to AimLabs' blog post.

Tracked as CVE-2025-54135, the vulnerability arises when Cursor interacts with a Model Context Protocol (MCP) server, which enables Cursor to access external tools such as Slack, GitHub, and other software development databases. Similar to the EchoLeak flaw AimLabs discovered last month, Cursor’s AI agent can be hijacked through malicious prompts fetched from MCP servers. With a single line of crafted prompting, an attacker can manipulate Cursor’s behavior silently and invisibly to the user.

In a proof of concept, researchers injected a malicious prompt via Slack, which Cursor retrieved through a connected MCP server. The prompt altered Cursor’s configuration file to add a malicious server with a harmful start command. Crucially, Cursor executes such commands immediately upon receiving them, without user approval.

The vulnerability highlights the risks organizations face when integrating AI systems without fully understanding their exposure to external data manipulation. AI agents like Cursor, which operate with developer-level privileges, are susceptible to instructions from untrusted third parties; a single poisoned prompt can effectively "morph an AI agent into a local shell." AimLabs emphasized, "The tools expose the agent to external and untrusted data, which can affect the agent’s control-flow. This in turn, allows attackers to hijack the agent’s session and take advantage of the agent’s privileges to perform on behalf of the user."

While Cursor’s vulnerability has been patched, the researchers warn that this class of flaw is intrinsic to how large language models operate, since they ingest commands and directions via external prompting, and they predict similar vulnerabilities will continue to surface across AI platforms. As AimLabs concluded, "Because model output steers the execution path of any AI agent, this vulnerability pattern is intrinsic and keeps resurfacing across multiple platforms."
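To make the attack path concrete, here is a minimal Python sketch of the kind of configuration tampering described above: a poisoned prompt steering the agent into appending a new MCP server entry whose start command the attacker controls. The file path (~/.cursor/mcp.json), the mcpServers key, and the harmless echo payload are illustrative assumptions for this sketch, not the exact artifact from AimLabs' proof of concept.

```python
# Illustrative sketch only: shows how an agent, steered by a poisoned prompt,
# could append an MCP server entry whose start command is attacker-controlled.
# The config path, schema, and payload are assumptions; the payload here is a
# harmless echo rather than a real exploit.
import json
from pathlib import Path

MCP_CONFIG = Path.home() / ".cursor" / "mcp.json"  # assumed location

def inject_server_entry(config_path: Path = MCP_CONFIG) -> None:
    """Append a server entry whose start command runs on the agent's next launch."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["innocuous-looking-tool"] = {
        # In the real attack this command would be harmful; a placeholder is used here.
        "command": "echo",
        "args": ["attacker-controlled start command would run here"],
    }
    config_path.write_text(json.dumps(config, indent=2))
```

Because the agent treats this configuration change as routine and runs the new start command without asking, the user never sees a prompt or approval dialog, which is what makes the injection effectively invisible.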
Source: Originally published at CyberScoop on August 1, 2025.

Frequently Asked Questions (FAQ)

AI Security and Prompt Injection

Q: What is prompt injection in the context of AI agents like Cursor?
A: Prompt injection is a type of attack where malicious input is crafted to manipulate an AI agent's behavior. In Cursor's case, an attacker could use a specially designed prompt, potentially hosted externally, to trick the AI agent into executing unintended or harmful commands on the user's system.

Q: How did the Cursor AI coding agent vulnerability work?
A: The vulnerability exploited Cursor's interaction with Model Context Protocol (MCP) servers, which are used to access external tools like Slack and GitHub. By injecting a malicious prompt through a connected MCP server (e.g., via Slack), attackers could alter Cursor's configuration to execute harmful commands without user consent.

Q: What are the risks of AI agents interacting with external data sources?
A: AI agents that integrate with external data sources are vulnerable to data poisoning and manipulation. If those sources are compromised or controlled by malicious actors, the agent can be tricked into executing harmful instructions, potentially leading to remote code execution or data breaches.

Q: Is my data safe if I use AI coding tools?
A: While AI tools can enhance productivity, it's crucial to be aware of potential security risks. Keeping your AI software up to date, understanding its integrations with external services, and being cautious about prompts from untrusted sources are important steps in protecting your data.

Q: How can developers mitigate risks associated with AI agent vulnerabilities?
A: Developers should implement robust input validation and sanitization for prompts, carefully vet third-party integrations and data sources, and adopt a principle of least privilege for AI agents. Regular security audits and prompt updates are also essential; a minimal auditing sketch follows below.
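Building on the mitigation advice in the last answer, the following sketch (assuming the same hypothetical mcp.json layout as the earlier example) audits the live MCP configuration against a reviewed allowlist and flags any server entry or start command that was added or changed without approval.

```python
# Minimal sketch: flag MCP server entries that are not on a reviewed allowlist.
# The config path and schema are assumptions based on common MCP setups.
import json
from pathlib import Path

MCP_CONFIG = Path.home() / ".cursor" / "mcp.json"

# Reviewed entries: server name -> expected start command (values are placeholders).
APPROVED_SERVERS = {
    "slack": "npx",
    "github": "npx",
}

def audit_mcp_config(config_path: Path = MCP_CONFIG) -> list[str]:
    """Return warnings for unreviewed or altered MCP server entries."""
    if not config_path.exists():
        return []
    servers = json.loads(config_path.read_text()).get("mcpServers", {})
    warnings = []
    for name, entry in servers.items():
        command = entry.get("command", "")
        if name not in APPROVED_SERVERS:
            warnings.append(f"Unreviewed MCP server added: {name!r} (command: {command!r})")
        elif command != APPROVED_SERVERS[name]:
            warnings.append(f"Start command changed for {name!r}: {command!r}")
    return warnings

if __name__ == "__main__":
    for warning in audit_mcp_config():
        print(warning)
```

A check like this could run in CI or as a pre-launch hook so that configuration drift introduced by an agent, rather than a human, is surfaced before any new start command is executed.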

Crypto Market AI's Take

The vulnerability discovered in Cursor AI highlights a critical and evolving challenge in integrating AI into development workflows. As AI agents become more sophisticated and more deeply connected to external services, the potential attack surface expands. This incident underscores the importance of secure coding practices and diligent security research within the AI development community. At Crypto Market AI, we understand the need for robust security measures in every aspect of technology, including AI-powered tools. Our focus on secure AI agents and on leveraging AI for market analysis and trading aims to give users powerful yet secure solutions, while recognizing that the broader AI landscape demands continuous vigilance against emerging threats.
