
The AI-powered website creation platform Lovable is facing widespread abuse by threat actors who are leveraging its capabilities to generate and deploy malicious infrastructure at an industrial scale. Security researchers from Proofpoint have observed tens of thousands of malicious URLs generated on the platform since February 2025, facilitating sophisticated phishing campaigns, malware distribution, and financial fraud1. This exploitation highlights a significant shift in the cybercrime landscape, where technical skill is supplanted by access to generative AI and effective prompting techniques.
Lowering the Barrier to Entry for Cybercrime
Lovable’s core functionality, which allows users to generate and instantly deploy functional web applications through descriptive language prompts (“vibe coding”), aligns perfectly with the needs of cybercriminals. The platform’s susceptibility to manipulation, termed “VibeScamming,” was quantified in research by Guardio Labs. Their benchmark analysis gave Lovable a concerning score of 1.8 out of 10 for vulnerability to these jailbreak attacks, a stark contrast to ChatGPT’s 8.0 and Claude’s 4.32. This low barrier allows even low-skill actors to create convincing fraudulent sites, complete with auto-deployment on `lovable.app` subdomains and integrated data exfiltration to services like Telegram and Firebase.
Sophisticated Campaigns and Techniques
The abuse of Lovable is not limited to simple phishing pages. Threat actors have orchestrated complex, multi-faceted campaigns. A prominent campaign used file-sharing lures to present CAPTCHAs on Lovable URLs, which then redirected victims to pages hosting the Tycoon Phishing-as-a-Service (PhaaS) kit. This setup was designed to perform adversary-in-the-middle (AiTM) attacks, effectively stealing credentials, multi-factor authentication (MFA) tokens, and session cookies to bypass security measures1. Other campaigns have included impersonating brands like UPS to harvest credit card information and personal data, which was then posted to Telegram channels, and creating fake DeFi platforms like Aave to trick users into connecting cryptocurrency wallets3.
State-Sponsored and Financial Threat Actor Adoption
While financially motivated criminals heavily abuse tools like Lovable, state-sponsored Advanced Persistent Threat (APT) groups are also actively experimenting with generative AI. According to a January 2025 report from Google’s Threat Intelligence Group (GTIG), which analyzed prompts from APT and Information Operation (IO) groups across over 20 countries, these actors are primarily using AI for productivity gains rather than developing novel capabilities4. Iranian APT groups were the heaviest users, employing AI for reconnaissance on defense experts, vulnerability research, and generating phishing content. Chinese APTs focused on researching US military and IT firms and scripting for post-compromise activities. A notable finding was North Korean groups using AI to support their clandestine IT worker schemes, including drafting cover letters and researching job markets.
The Evolving Threat: From Code Generation to Agent Hijacking
The threat landscape is rapidly evolving beyond simple code generation. New techniques like “PromptFix” demonstrate how attackers are targeting the next generation of AI tools. This sophisticated prompt injection method embeds malicious instructions inside seemingly benign webpage elements, like fake CAPTCHA checks. This can trick AI-powered browsers, such as Perplexity’s Comet, into automatically interacting with phishing pages, clicking invisible buttons, and even auto-filling saved credit card details on fraudulent sites without any user intervention5. This automation of the entire victimization process heralds a new era of efficiency for scams, a concept some researchers are calling “Scamlexity.”
Platform Response and the Cat-and-Mouse Game
In response to the widespread reporting of abuse, Lovable has implemented new security measures. The company has taken down malicious clusters and unveiled an AI-powered safety program, including a “Security Checker 2.0” system. Lovable claims these new protections now block approximately 1,000 malicious projects daily1. This represents a classic cat-and-mouse game in cybersecurity, where platforms race to implement safeguards while threat actors continuously adapt their methods to circumvent them. The effectiveness of these measures against determined and evolving adversarial tactics remains a critical area for ongoing observation.
Relevance and Remediation
The weaponization of generative AI platforms poses a direct challenge to defensive security postures. The volume and quality of malicious infrastructure that can be generated automatically reduce the time defenders have to react. To mitigate these risks, organizations should enhance user awareness training to identify sophisticated lures, implement robust email security solutions capable of detecting and blocking malicious links, and enforce strict application allow-listing policies where feasible. Monitoring network traffic for connections to newly registered domains and known malicious infrastructure, including `lovable.app` subdomains, is also advised. For platform developers, this underscores the non-negotiable requirement to build robust, AI-driven ethical guardrails and content moderation systems from the ground up.
The abuse of Lovable is a stark case study in the dual-use nature of powerful AI tools. While designed to democratize web development, they have simultaneously democratized cybercrime. This trend is unlikely to reverse, meaning the security community must adapt its strategies to counter AI-generated threats that are both scalable and sophisticated. Continuous monitoring, sharing of threat intelligence, and the development of AI-powered defensive solutions will be paramount in staying ahead of this evolving attack vector.