Artificial Intelligence (AI) has made a significant impact on the world, promising to streamline daily life. However, the gap between that theoretical potential and reality has also revealed challenges. In recent months, we’ve seen AI’s enormous capabilities while simultaneously being exposed to its risks, such as the spread of misinformation, deepfakes, and threats to employment.
From the controversy surrounding ChaosGPT to the illicit uses of AI on the dark web, numerous such stories have made the news in recent months. WormGPT, an AI tool known for assisting cybercriminals, added a new facet to these risks. Now, reports indicate the emergence of an even more dangerous tool called FraudGPT, promoted on dark web marketplaces and Telegram channels as a generative AI for cybercrime.
FraudGPT, per the reports, is a bot implicated in activities such as creating cracking tools, crafting phishing emails, and more. It’s capable of writing malicious code, developing undetectable malware, and spotting leaks and vulnerabilities. Circulating on dark web forums and Telegram since July 22, the bot is priced at $200 for a monthly subscription, rising to $1,000 for six months and $1,700 for a year.
What Exactly is FraudGPT?
An image of the bot circulating online displays a chatbot screen with the text ‘Chat GPT Fraud Bot | Bot without limitations, rules, boundaries’. According to the text on the screen, the bot is positioned as an alternative to ChatGPT, offering an array of exclusive tools, features, and capabilities with no restrictions.
It’s been claimed that FraudGPT has already sold over 3,000 units.
What can FraudGPT do?
FraudGPT is being touted as a comprehensive tool for cybercriminals, with capabilities ranging from generating phishing pages to writing malicious code. Such a tool could make fraudsters appear more realistic and persuasive, potentially causing large-scale damage. Security experts have repeatedly stressed the importance of innovation in countering threats posed by rogue AI like FraudGPT. Many, however, believe this is merely the tip of the iceberg, and that the misuse of AI by malicious actors may know no bounds.
Earlier this month, another AI cybercrime tool, WormGPT, was unveiled. Dark web forums advertised it as a tool for launching sophisticated phishing and business email compromise attacks. As early as February, it was discovered that cybercriminals were circumventing ChatGPT’s restrictions by exploiting its APIs. The unbounded operation of both FraudGPT and WormGPT underscores the potential threats posed by unregulated generative AI.