Check Point Research has identified a serious vulnerability in the Cursor developer tool that lets attackers inject and modify code without the responsible development team ever being notified.
Cursor is one of the fastest-growing AI-powered coding tools among developers today. It combines local code editing with powerful large language models (LLMs) that integrate with the editor to help development teams write, debug, and explore code more efficiently.
The vulnerability was identified in Cursor's handling of the Model Context Protocol (MCP) and allows remote code execution (RCE). Once a user has approved an MCP configuration, attackers can silently modify it, causing malicious commands to run every time the project is opened, without alerting the responsible teams.
The risk is not merely theoretical. In shared coding environments, the flaw turns a trusted MCP configuration into a hidden point of compromise. For businesses that rely on AI tools like Cursor, the consequences could be severe, such as outsiders gaining persistent access to developers' machines, credentials, and codebases.
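Conceptually, the root cause can be pictured as an approval check that remembers which MCP server a user approved, but not what that server was approved to run. The short Python sketch below is a minimal illustration under that assumption; the McpLauncher class, its methods, and the "docs-helper" entry are invented for the example and do not represent Cursor's actual code or configuration format.

    # Illustrative sketch only: names are invented, not Cursor's real implementation.
    # It models an approval check that remembers WHICH MCP server was approved,
    # but not WHAT command it was approved to run.
    import subprocess

    class McpLauncher:
        def __init__(self):
            # Hypothetical approval cache keyed only by server name.
            self.approved_servers = set()

        def on_project_open(self, mcp_config):
            # mcp_config: parsed MCP configuration from the shared project,
            # e.g. {"docs-helper": {"command": ["echo", "indexing docs"]}}
            for name, entry in mcp_config.items():
                if name not in self.approved_servers:
                    print(f"Prompting user to approve '{name}': {entry['command']}")
                    self.approved_servers.add(name)
                # Flaw: once the name is approved, the command runs silently,
                # even if it was modified after the one-time approval.
                subprocess.run(entry["command"])

    launcher = McpLauncher()

    # Day 1: a harmless-looking entry in the shared repository is approved once.
    launcher.on_project_open({"docs-helper": {"command": ["echo", "indexing docs"]}})

    # Later: an attacker with write access to the repository swaps the command.
    # The next time the project is opened, it runs with no new prompt.
    launcher.on_project_open({"docs-helper": {"command": ["echo", "attacker-controlled payload"]}})

In this sketch, a fix would tie approval to the full contents of the configuration (for example, a hash of the command) rather than to the server name alone, so any later change would trigger a new prompt.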

Upon identifying the critical vulnerability, Check Point Research immediately notified Cursor's development team on July 16, 2025. Cursor released an update (version 1.3) on July 29. Although the release notes did not explicitly mention the vulnerability, Check Point Research's independent testing confirms that the problem has been effectively addressed.
Risks of AI-powered development tools
As AI-powered development environments become increasingly integrated into software development work, Check Point Research chose to evaluate the security of this class of tools, particularly in collaborative settings where code, configuration files, and AI-based plugins are routinely shared between teams and environments.
The discovery of the vulnerability in Cursor highlights a critical security challenge for AI-powered development tools. As businesses increasingly rely on integrated AI workflows, it is essential that those workflows are secure and robust.
Check Point Research urges developers, security teams, and organizations to remain vigilant, review their AI development environments, and work with vendors to address these emerging threats. Only through proactive security efforts can the power of AI be harnessed safely in software development.
For a more detailed technical description of the vulnerability in Cursor, see the Check Point Research report.