
Threat researchers from ESET have identified a novel ransomware strain, designated PromptLock, which represents a significant shift in malware development by using a generative AI model as its core execution engine [2]. This proof-of-concept malware, written in Golang, feeds hard-coded prompts to a local instance of OpenAI’s `gpt-oss:20b` model to generate and execute polymorphic Lua scripts for data theft and encryption on Windows, macOS, and Linux systems [2, 10]. The discovery, coupled with concurrent reporting from Anthropic on active threat groups using AI for ransomware development, signals a new phase in cyber threats in which artificial intelligence lowers the technical barrier for attackers [7].
Technical Architecture and Execution
PromptLock’s architecture centers on its interaction with the Ollama API to access a local, open-weight AI model. The malware’s Golang code contains embedded prompts that instruct the AI to perform a sequence of malicious actions, creating a dynamic attack chain in which the specific Lua scripts executed on a victim’s machine are generated at runtime. Because the AI-generated code can vary with each execution, traditional static indicators of compromise become less reliable [2, 5]. The malware employs the SPECK 128-bit encryption algorithm for file encryption and includes data exfiltration functionality, though its data destruction capability remains unimplemented in the current version [10].
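ESET has not published the sample’s prompts or code, but the interaction it describes relies on nothing more exotic than Ollama’s standard, publicly documented generate endpoint. The minimal Go sketch below illustrates that request pattern for context; the prompt text is a placeholder rather than anything recovered from the malware, and no generated output is executed.

```go
// Illustrative only: a minimal non-streaming request to a locally hosted
// model via the Ollama API (POST /api/generate). The prompt is a placeholder.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	body, err := json.Marshal(generateRequest{
		Model:  "gpt-oss:20b",                   // model named in ESET's report
		Prompt: "<hard-coded task description>", // placeholder, not the real prompt
		Stream: false,
	})
	if err != nil {
		panic(err)
	}

	// Ollama listens on 127.0.0.1:11434 by default; no SDK or bundled model
	// weights are needed on the machine issuing the request.
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("model endpoint unreachable:", err)
		return
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(out.Response) // generated text is returned as an ordinary string
}
```

The simplicity of the request is the point: any binary that can reach an Ollama endpoint, whether local or tunneled, can obtain freshly generated script code at runtime.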
Operational Security and Deployment
To circumvent the challenge of deploying a large AI model (approximately 13 GB) directly onto compromised systems, the malware employs a tunneling method. This technique, which maps to the MITRE ATT&CK sub-technique T1090.001 (Proxy: Internal Proxy), lets the malware communicate from the victim’s device to a remote, attacker-controlled server where the Ollama API and model are hosted [2]. The approach works around a practical deployment constraint without sacrificing the AI-driven functionality. The sample was discovered on VirusTotal, uploaded from a source in the United States, and contains a ransom note with a Bitcoin address that reportedly belongs to Bitcoin creator Satoshi Nakamoto [2, 5].
Broader Threat Landscape Context
The emergence of PromptLock coincides with findings from other security organizations indicating that AI-assisted malware development is already occurring. Anthropic’s recent threat intelligence report details how threat actors have used Claude models and the Claude Code tool to develop, market, and distribute ransomware [7]. One group, tracked as GTG-5004 and based in the UK, was found selling ransomware-as-a-service packages despite appearing to lack the technical skills to build such malware without AI assistance. Another group, GTG-2002, used Claude Code in a heavily automated attack chain that impacted at least 17 organizations in government, healthcare, and emergency services [7]. Industry reports from WIRED and Dark Reading reinforce the trend, with WIRED’s headline declaring “The Era of AI-Generated Ransomware Has Arrived” [6, 8].
Expert Analysis and Industry Response
Security researchers emphasize how far this development lowers the bar for attackers. Anton Cherepanov of ESET noted that “This lowers the barrier to entry… a well-configured AI model is now enough to create complex, self-adapting malware,” reducing the need for teams of skilled developers [2, 10]. John Scott-Railton of Citizen Lab warned that “We are in the earliest days of regular threat actors leveraging local/private AI. And we are unprepared” [6]. AI vendors have responded: OpenAI thanked ESET researchers for their findings and said it is “continually improving safeguards to make our models more robust against exploits” [2, 5], while Anthropic has banned the identified threat actors and implemented new detection methods, including YARA rules, to prevent malware generation on its platforms [7].
Defensive Considerations and Recommendations
For security teams, the emergence of AI-powered malware requires updated defensive strategies. The polymorphic nature of AI-generated code shifts the emphasis toward behavioral detection rather than signature-based approaches. Monitoring for unusual outbound connections to potential AI-model hosting servers, particularly over tunneled channels, becomes important. Strict application allow-listing can help prevent execution of unauthorized scripts, including Lua scripts that such malware may generate. Network segmentation and egress filtering can limit the malware’s ability to reach external AI services. Security teams should also watch for unusual process relationships, such as Golang executables spawning Lua interpreters, which may indicate execution of similar threats.
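As a concrete example of the connection-monitoring idea above, the sketch below is a hypothetical hunting script written for this article rather than vendor-published guidance: it walks the Linux /proc/net/tcp and /proc/net/tcp6 tables and flags established connections to remote port 11434, the Ollama API’s default port. The port heuristic is an assumption; an attacker-hosted endpoint can listen on any port, so output like this should feed broader telemetry rather than stand alone.

```go
// Hypothetical hunting sketch (Linux-only): flag established TCP connections
// whose remote port matches the default Ollama API port. Heuristic only.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

const ollamaDefaultPort = 11434 // assumption: the endpoint uses the stock Ollama port

func scan(path string) {
	f, err := os.Open(path)
	if err != nil {
		return // table not present (e.g., IPv6 disabled); skip silently
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Scan() // skip the header line
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 4 {
			continue
		}
		// fields[2] is the remote endpoint as HEXIP:HEXPORT;
		// fields[3] is the socket state (01 == ESTABLISHED).
		rem := strings.Split(fields[2], ":")
		if len(rem) != 2 || fields[3] != "01" {
			continue
		}
		port, err := strconv.ParseUint(rem[1], 16, 16)
		if err != nil {
			continue
		}
		if port == ollamaDefaultPort {
			fmt.Printf("%s: established connection to remote port %d (%s)\n",
				path, port, fields[2])
		}
	}
}

func main() {
	scan("/proc/net/tcp")
	scan("/proc/net/tcp6")
}
```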
Conclusion
While ESET assesses PromptLock as a proof of concept not yet observed in active attacks, its technical approach demonstrates a viable method for operating AI-powered malware [2, 10]. Combined with Anthropic’s evidence of active threat groups using AI for ransomware development, PromptLock serves as an important indicator of evolving threats [7]. The security industry must prepare for increased automation and adaptation in malware campaigns as AI capabilities become more accessible to threat actors. Continued research into detecting AI-generated malicious code and developing appropriate defenses will be essential for maintaining an effective security posture against this evolving threat landscape.
References
1. Johnson, D. B. (2025, August 26). Researchers flag code that uses AI systems to carry out ransomware attacks. CyberScoop.
2. Cherepanov, A., & Strýček, P. (2025, August 26). First known AI-powered ransomware uncovered by ESET Research. WeLiveSecurity.
3. ESET discovers PromptLock, the first AI-powered ransomware [Press release]. (2025, August 27). GlobeNewswire.
4. Kan, M. (2025, August 26). Mysterious ‘PromptLock’ Ransomware Is Harnessing OpenAI’s Model. PCMag.
5. Lakshmanan, R. (2025, August 27). Someone Created First AI-Powered Ransomware Using OpenAI’s gpt-oss:20b Model. The Hacker News.
6. Newman, L. H., & Burgess, M. (2025, August 27). The Era of AI-Generated Ransomware Has Arrived. WIRED.
7. Anthropic. (2025, August 27). Detecting and Countering Misuse: August 2025.
8. Bracken, B. (2025, August 27). AI-Powered Ransomware Has Arrived With ‘PromptLock’. Dark Reading.
9. Researchers Discover First AI-Powered Ransomware ‘PromptLock’. (2025, August 26). OECD.AI.
10. PromptLock: First AI-driven ransomware discovered. (2025, August 27). IT-Daily.net.