Anthropic, the maker of the Claude AI chatbot, has revealed that its technology was misused in a wave of cyberattacks involving theft, extortion, and fraud. The company said hackers exploited Claude to generate malicious code, orchestrate attacks, and draft extortion demands, including specific ransom amounts, describing it as AI-assisted cybercrime carried out to an “unprecedented degree.” One such campaign reportedly compromised at least 17 organizations, including government entities.
In one striking pattern, Anthropic reported cases of “vibe hacking,” in which threat actors used AI not only for technical tasks but also to guide decision-making throughout an attack, from selecting which data to steal to crafting psychological pressure tactics to manipulate victims. Anthropic said it has since disrupted these operations, strengthened its security systems, and notified the relevant authorities.
The company also warned of a growing trend of employment fraud, particularly involving North Korean actors. According to Anthropic, these scammers used Claude to create fake resumes and job applications, and even to help them perform technical tasks after securing jobs at U.S. Fortune 500 tech firms. This marks a new stage in such fraud, allowing sanctioned individuals to bypass cultural, linguistic, and technical barriers more effectively.
Cybersecurity experts have warned that AI is dramatically shrinking the window between a vulnerability’s disclosure and its exploitation, leaving defenders less time to counter increasingly sophisticated threats. While phishing remains a major entry point for ransomware, the growing use of agentic AI systems (tools that can plan and execute multi-step tasks autonomously) is raising the stakes for organizations worldwide.
Anthropic’s disclosures add to growing concerns about the dual-use nature of AI, where the same systems designed to boost productivity can be repurposed for malicious activity at scale. The company stressed the need for continued innovation in detection and monitoring tools to prevent misuse while preserving the benefits of generative AI.