ESET Uncovers AI-Powered Ransomware “PromptLock” Using OpenAI Model

Cybersecurity firm ESET has revealed the discovery of a new artificial intelligence (AI)-driven ransomware strain, codenamed PromptLock, which leverages OpenAI’s recently released gpt-oss:20b model to generate malicious scripts in real time. Written in Go, this proof-of-concept malware communicates with the model through the Ollama API and dynamically generates Lua scripts to carry out its attacks.

According to ESET, “PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption.” Because these Lua scripts are platform-agnostic, the ransomware can impact Windows, Linux, and macOS environments. The malware is also designed to generate tailored ransom notes depending on the types of files compromised, whether on personal systems, enterprise servers, or even industrial control devices such as power distribution controllers.

While the identity of the attackers remains unknown, forensic evidence indicates that samples of PromptLock were uploaded to VirusTotal from the United States on August 25, 2025. ESET researchers cautioned that the use of AI-generated scripts adds a layer of unpredictability. “PromptLock uses Lua scripts generated by AI, which means that indicators of compromise (IoCs) may vary between executions,” the company explained. “This variability introduces challenges for detection. If properly implemented, such an approach could significantly complicate threat identification and make defenders’ tasks more difficult.”

Currently assessed as a proof-of-concept rather than a fully deployed campaign, PromptLock relies on the SPECK 128-bit encryption algorithm to lock victim files. Researchers also found hints of data exfiltration and file destruction capabilities, though the destructive function appears to be unfinished. Importantly, PromptLock doesn’t require downloading the entire large language model. Instead, “the attacker can simply establish a proxy or tunnel from the compromised network to a server running the Ollama API with the gpt-oss-20b model,” ESET clarified.
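SPECK is a publicly specified lightweight block cipher published by the NSA in 2013; the 128-bit variant PromptLock uses operates on 128-bit blocks with a simple add-rotate-xor round function. For readers analyzing samples, a minimal Speck128/128 encryption sketch in Go follows, verified against the official test vector from the Simon and Speck paper — this is the standard published algorithm, not code recovered from the malware.

```go
package main

import "fmt"

const rounds = 32 // Speck128/128 uses 32 rounds

func ror(x uint64, r uint) uint64 { return (x >> r) | (x << (64 - r)) }
func rol(x uint64, r uint) uint64 { return (x << r) | (x >> (64 - r)) }

// expandKey derives the 32 round keys from a 128-bit key
// given as two 64-bit words (k0 low, l0 high).
func expandKey(k0, l0 uint64) [rounds]uint64 {
	var rk [rounds]uint64
	k, l := k0, l0
	for i := 0; i < rounds; i++ {
		rk[i] = k
		l = (k + ror(l, 8)) ^ uint64(i)
		k = rol(k, 3) ^ l
	}
	return rk
}

// encryptBlock applies the Speck round function to one 128-bit block (x, y).
func encryptBlock(x, y uint64, rk [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x = (ror(x, 8) + y) ^ rk[i]
		y = rol(y, 3) ^ x
	}
	return x, y
}

func main() {
	// Official Speck128/128 test vector from the Simon and Speck paper.
	rk := expandKey(0x0706050403020100, 0x0f0e0d0c0b0a0908)
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, rk)
	fmt.Printf("%016x %016x\n", x, y) // a65d985179783265 7860fedf5c570d18
}
```

Because the cipher is this small and fast, its constants and round structure are themselves useful static signatures when hunting for related samples.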

The emergence of PromptLock highlights growing concerns over AI misuse in cybercrime. Criminals with minimal expertise can now weaponize large language models to develop malware, phishing campaigns, and advanced ransomware. The news follows Anthropic’s disclosure earlier today that it banned threat actors who used Claude AI for extortion schemes and ransomware creation.

This development also underscores a broader security challenge: large language models remain susceptible to prompt injection attacks, which can bypass safety guardrails and enable malicious actions like data theft or file deletion. New research even exposed a downgrade exploit called PROMISQROUTE, showing how attackers can trick AI systems into falling back on less secure models to evade restrictions.
