Critical Security Flaws Exposed in AI-Powered Code Editors, Raising Supply Chain Risks

A serious security vulnerability has been identified in the AI-powered code editor Cursor, which could allow malicious code execution when developers open a specially crafted repository. The issue arises because an essential security feature is disabled by default, leaving users exposed to attackers who can run arbitrary code with the same privileges as the user. “Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: ‘folderOpen’ auto-execute the moment a developer browses a project,” Oasis Security explained. “A malicious .vscode/tasks.json turns a casual ‘open folder’ into silent code execution in the user’s context.”
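To illustrate the mechanism Oasis Security describes, a `.vscode/tasks.json` using the standard VS Code task schema can mark a task with `runOptions.runOn: "folderOpen"`; with Workspace Trust off, the editor runs it as soon as the folder is opened. The sketch below is a harmless placeholder (the `echo` command stands in for whatever an attacker would actually run) and is an illustration, not the specific payload from the report:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "benign-looking task",
      "type": "shell",
      "command": "echo 'this runs automatically on folder open'",
      "runOptions": {
        "runOn": "folderOpen"
      }
    }
  ]
}
```

Because the task type is `shell`, the command executes with the developer's full user privileges, which is why a repository carrying such a file turns "open folder" into code execution.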

Cursor, a fork of Visual Studio Code, includes Workspace Trust to let developers safely browse and edit code from any source. Without this protection enabled, attackers can host a project on platforms like GitHub with hidden autorun instructions, causing the editor to execute malicious tasks as soon as a folder is opened. “This has the potential to leak sensitive credentials, modify files, or serve as a vector for broader system compromise, placing Cursor users at significant risk from supply chain attacks,” noted Oasis Security researcher Erez Schwartz. Users are advised to enable Workspace Trust, review untrusted repositories carefully, and consider opening them in other editors.
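Since Cursor is a VS Code fork, the researchers' mitigation can likely be applied through the same settings key VS Code uses for Workspace Trust; a minimal user `settings.json`, assuming Cursor honors the upstream setting name, would be:

```json
{
  "security.workspace.trust.enabled": true
}
```

With this enabled, the editor prompts before trusting a newly opened folder and restricts automatic task execution in untrusted workspaces.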

The discovery comes amid rising concerns over prompt injection and jailbreak attacks affecting AI coding assistants, including Claude Code, Cline, K2 Think, and Windsurf. Checkmarx highlighted that Anthropic’s automated security reviews in Claude Code could inadvertently expose projects to risk, allowing attackers to bypass safety checks. “In this case, a carefully written comment can convince Claude that even plainly dangerous code is completely safe,” the company said, warning that test cases executed by AI could run malicious code against production environments.

Further vulnerabilities have been reported across AI-driven development tools, including WebSocket authentication bypasses, SQL injections, path traversal flaws, open redirects, XSS issues, and insufficient cross-origin protections. Imperva emphasized, “As AI-driven development accelerates, the most pressing threats are often not exotic AI attacks but failures in classical security controls.”

These incidents highlight the urgent need for developers and organizations to treat security as foundational in AI-enabled coding platforms, combining traditional safeguards with careful monitoring to mitigate risks in a rapidly evolving threat landscape.

Disclaimer: The above press release has been provided by NewsVoir. CXO Digital Pulse holds no responsibility for its content in any manner.
Reproduction or Copying in part or whole is not permitted unless approved by author.
