Every groundbreaking innovation comes with its shadow. The printing press gave us literature and propaganda. The internet brought us connectivity and cybercrime. Now, artificial intelligence has given us ChatGPT – and its malicious cousin, WormGPT.
While millions admired ChatGPT’s ability to write poetry and solve homework problems, cybercriminals were busy developing their own AI assistant. WormGPT has become a prime example of what happens when artificial intelligence meets criminal enterprise, now promoted on Telegram channels and dark web forums as the “jailbreak-free, unfiltered” alternative to mainstream AI.
The Criminal’s AI Assistant
Unlike ChatGPT, which politely refuses to help you build a bomb or write malicious code, WormGPT is designed specifically to be helpful to hackers. For subscription fees ranging from $60 to $120 per month – roughly the cost of a decent gym membership – cybercriminals can access an AI that will:
- Write advanced malware, including keyloggers, ransomware, and Trojans
- Generate convincing phishing emails that could fool even security-conscious users
- Create exploits and reverse shells for system infiltration
- Analyze code in real-time to identify vulnerabilities
- Analyze websites to pinpoint security weaknesses and automate information extraction or homepage defacement
- Help bypass antivirus detection through advanced obfuscation techniques
Think of it as having a criminal mastermind available 24/7, except this one never sleeps, never asks for a cut of the profits, and accepts payment in Bitcoin.
Built for Mayhem
Technically, WormGPT is a fine-tuned version of GPT-J, an open-source 6-billion-parameter language model that lacks the ethical guardrails built into commercial AI systems. The developers trained it on custom datasets filled with malware code, social engineering templates, jailbreak prompts, and red teaming examples. It is essentially ChatGPT with an evil education – all the technical capability with none of the moral constraints.
The service is hosted anonymously on cheap VPS servers and GPU rental platforms, with payments processed through cryptocurrencies like Bitcoin or Monero. Like ChatGPT’s evolution from version 3 to 4o, WormGPT continues to evolve rapidly, with new versions offering enhanced capabilities and improved performance. It operates on a SaaS business model, except instead of helping companies manage customer relationships, it helps criminals manage their victims.
India in the Crosshairs
The impact isn’t theoretical. India, with its rapidly digitizing economy and growing online population, has become a prime testing ground for AI-powered cybercrime. The numbers tell a dark story: India faced 135,173 phishing attacks related to financial matters in the first half of 2024 alone, representing a staggering 175% increase compared to the same period the previous year.
The sophistication of these attacks is equally concerning. According to reports from early 2024, a leading Indian bank was hit by a highly complex, AI-supported phishing operation that used advanced natural language processing to create emails that perfectly mimicked internal communications between executives. The attackers analyzed social media content, LinkedIn profiles, and email patterns to craft messages so convincing that multiple bank executives surrendered their credentials, compromising customer databases and transaction records.
Even more alarming, recent data shows that 80% of phishing attacks in India now use AI-generated content, with criminals stealing the equivalent of $112 million in a single state between January and May 2025.
The attacks demonstrate knowledge of local languages, cultural references, and current events that suggest AI-powered research capabilities far beyond traditional cybercriminal operations. QR code phishing has experienced a sharp rise, with attackers using fake posters and WhatsApp messages to redirect victims to malicious UPI portals – particularly effective in India’s mobile-first payment landscape.
The Persuasion Engine
Security researchers who analyzed WormGPT found it to be “remarkably persuasive” in crafting business email compromise (BEC) attacks. The AI could generate emails that closely mimicked corporate communication styles, complete with appropriate jargon, formatting, and urgency cues. Even individuals with little cybersecurity knowledge could suddenly execute sophisticated scams that would previously have required years of social engineering experience.
It’s akin to giving someone who can barely write a text message the ability to create emails capable of deceiving a Fortune 500 company. The democratization of cybercrime has found its perfect tool.
The Pandora’s Box Problem
The FBI has raised alarms about the situation, warning that AI tools are enabling more targeted and automated attacks. However, the story of WormGPT highlights a deeper issue: the Pandora’s Box effect associated with malicious AI. Once the technology was proven effective, variations began to emerge rapidly.
When mainstream media attention ultimately compelled WormGPT’s original developer to shut down operations and disavow the project, the criminal ecosystem adapted swiftly. New operators took over, and the service continues to evolve. Variants like FraudGPT and others have emerged alongside WormGPT, forming an entire ecosystem of criminal AI tools with names reminiscent of rejected superhero movie titles. The genie was out of the bottle, and it had learned to replicate and improve itself.
The Cat-and-Mouse Game Evolves
The evolution of threats in cybersecurity necessitates a fundamental reassessment of defensive strategy. Traditional defenses, which were designed to counteract human-created threats, are struggling against AI-generated attacks that can adapt, learn, and iterate at unprecedented speeds. Static code analyzers and signature-based detection systems, once considered reliable, now find themselves facing adversaries capable of generating thousands of unique variants of the same attack.
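To see why exact-match signatures struggle here, consider a minimal sketch in Python. The hash set, function name, and “payload” strings below are placeholders invented purely for illustration, not real signatures or samples; the point is simply that a detector keyed on the hash of a known sample misses any variant that differs by even one byte.

```python
import hashlib

# Minimal sketch: signature-based detection modeled as an exact hash lookup.
# The "payload" strings and the signature set are illustrative placeholders,
# not real malware samples or real antivirus signatures.

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known sample, version 1").hexdigest(),
}

def matches_known_signature(payload: bytes) -> bool:
    """Return True only if the payload's hash exactly matches a stored signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# The catalogued sample is flagged...
print(matches_known_signature(b"known sample, version 1"))  # True

# ...but a variant differing by a single character slips through, which is
# what happens at scale when an attacker regenerates the same attack in
# thousands of superficially different forms.
print(matches_known_signature(b"known sample, version 2"))  # False
```

Real-world detection engines are far more sophisticated than this sketch, but the underlying weakness is the same: any defense keyed to what an attack looked like before is brittle against a tool that can cheaply produce endless variations.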
The cybersecurity landscape is transforming into an arms race between algorithms, with human defenders and attackers increasingly taking on the role of coaches rather than players.
The Uncomfortable Truth
The ongoing evolution of WormGPT reveals an uncomfortable truth about artificial intelligence: the same capabilities that make AI revolutionary also render it potentially dangerous. The technology does not differentiate between assisting a student in writing an essay and aiding a criminal in crafting malware. It lacks judgment and does not moralize; it simply optimizes for the task it is assigned.
This incident also demonstrates how swiftly criminal markets adapt to new technologies. Within months of ChatGPT’s public release, underground communities had developed, marketed, and monetized their own versions optimized for illegal activities. As legitimate AI models advance, these criminal variants continue to evolve in tandem, often incorporating new techniques and capabilities shortly after they emerge in mainstream AI research.
Looking Forward
As AI capabilities keep advancing, the WormGPT phenomenon is a critical reminder that innovation and regulation must progress together. The upcoming months will likely witness even more sophisticated criminal AI tools, but there will also be more advanced defensive technologies developed.
The pressing question is not whether criminals will continue to weaponize AI – that is already occurring. The real question is whether the defenders can stay ahead in a race where the stakes are continually rising and the pace is accelerating. In this game of digital chess, WormGPT was merely the opening move.