A serious security vulnerability has been identified in the AI-powered code editor Cursor, which could allow malicious code execution when developers open a specially crafted repository. The issue arises because an essential security feature is disabled by default, leaving users exposed to attackers who can run arbitrary code with the same privileges as the user. “Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: ‘folderOpen’ auto-execute the moment a developer browses a project,” Oasis Security explained. “A malicious .vscode/tasks.json turns a casual ‘open folder’ into silent code execution in the user’s context.”
Cursor, a fork of Visual Studio Code, includes Workspace Trust to let developers safely browse and edit code from any source. Without this protection enabled, attackers can host a project on platforms like GitHub with hidden autorun instructions, causing the editor to execute malicious tasks as soon as a folder is opened. “This has the potential to leak sensitive credentials, modify files, or serve as a vector for broader system compromise, placing Cursor users at significant risk from supply chain attacks,” noted Oasis Security researcher Erez Schwartz. Users are advised to enable Workspace Trust, review untrusted repositories carefully, and consider opening them in other editors.
The discovery comes amid rising concerns over prompt injection and jailbreak attacks affecting AI coding assistants, including Claude Code, Cline, K2 Think, and Windsurf. Checkmarax highlighted that Anthropic’s automated security reviews in Claude Code could inadvertently expose projects to risk, allowing attackers to bypass safety checks. “In this case, a carefully written comment can convince Claude that even plainly dangerous code is completely safe,” the company said, warning that test cases executed by AI could run malicious code against production environments.
Further vulnerabilities have been reported across AI-driven development tools, including WebSocket authentication bypasses, SQL injections, path traversal flaws, open redirects, XSS issues, and insufficient cross-origin protections. Imperva emphasized, “As AI-driven development accelerates, the most pressing threats are often not exotic AI attacks but failures in classical security controls.”
These incidents highlight the urgent need for developers and organizations to treat security as foundational in AI-enabled coding platforms, combining traditional safeguards with careful monitoring to mitigate risks in a rapidly evolving threat landscape.