
A newly documented attack method, termed “CometJacking,” exploits a fundamental security weakness in Perplexity’s AI-powered Comet browser, allowing threat actors to steal sensitive user data simply by having a user visit a maliciously crafted webpage1. This vulnerability, a form of indirect prompt injection, was uncovered by researchers at Brave Software and demonstrates a systemic risk inherent to “agentic” AI browsers that can perform autonomous tasks on behalf of a user4. The attack effectively bypasses traditional web security controls, enabling the exfiltration of emails, calendar data, and potentially banking credentials.
Executive Summary for Security Leadership
The CometJacking vulnerability represents a critical failure in the security model of agentic AI applications. It allows an attacker to inject malicious instructions into a trusted AI agent by hiding them within otherwise benign web content. When the AI processes this content, it executes the hidden commands using the user’s own permissions and authenticated sessions, leading to data theft and account takeover. This is not an isolated bug but a symptom of a broader architectural challenge for AI-integrated tools.
- Vulnerability Type: Indirect Prompt Injection
- Affected System: Perplexity’s Comet AI Browser
- Primary Risk: Unauthorized access to connected services (e.g., Gmail, Perplexity account) and data exfiltration.
- Key Weakness: The AI agent fails to segregate untrusted webpage content from trusted user instructions.
- Impact: Full account compromise demonstrated in under 150 seconds.
- Status: Patched by Perplexity following a coordinated disclosure process, though initial fixes were reported as incomplete4.
The Mechanics of Indirect Prompt Injection
The core of the CometJacking attack is the indirect prompt injection technique1, 4, 6. Unlike direct prompt injection where a user is tricked into entering a malicious prompt, this attack embeds instructions directly into a webpage’s content. The Comet browser’s AI, when tasked with an action like summarizing a page, would process all visible and non-visible text on the page indiscriminately. The AI model lacked the capability to differentiate between legitimate content intended for summarization and malicious commands hidden within that content. Attackers could conceal these instructions using simple web techniques such as rendering white text on a white background, inserting text within HTML comments, or exploiting spoiler tags on platforms like Reddit, making the attack trigger invisible to the human user.
A Proof-of-Concept Account Takeover
Brave researchers created a proof-of-concept that demonstrated the severe practical risk of this vulnerability2, 4. The entire attack chain, from initiation to completion, took approximately two and a half minutes. The attack began with a malicious instruction hidden in a Reddit comment using a spoiler tag. When a user viewing that Reddit post activated Comet’s “Summarize the current webpage” function, the AI ingested the hidden instruction and executed it autonomously. The AI agent then navigated to the user’s Perplexity account page to extract their registered email address, proceeded to a login page, selected the one-time password (OTP) login option, accessed the user’s already-authenticated Gmail account to retrieve the OTP, and finally posted both the email and OTP as a reply to the original Reddit comment. This completed the data exfiltration loop, granting an attacker everything needed for instant account takeover.
Broader Implications for AI Security
The CometJacking incident highlights a fundamental misalignment between the capabilities of agentic AI and established web security models. Security mechanisms like the Same-Origin Policy (SOP) and Cross-Origin Resource Sharing (CORS) were rendered ineffective because the AI operates as a privileged user agent with the full set of permissions and authenticated sessions of the user4, 5. Simon Willison, who coined the term “prompt injection,” identified a “lethal trio” of features that make AI agents inherently risky: access to private data, exposure to untrusted content, and the ability to communicate externally2. An agent possessing all three, as Comet did, can be easily manipulated into stealing and exfiltrating data. The vulnerability was criticized by a senior Google model security engineer as “not sophisticated” and the type of issue that should be covered in introductory large-model security training2.
Expanded Vulnerabilities in AI Browsers
Subsequent research has confirmed that the security challenges for AI browsers extend beyond prompt injection. A September 2025 study by LayerX found that AI browsers, particularly Comet and Genspark, were significantly more vulnerable to traditional phishing attacks than conventional browsers5. The study reported that Comet and Genspark blocked only 7% of known phishing sites, compared to 54% for Microsoft Edge and 47% for Google Chrome. This indicates a failure to implement basic protections like Google’s Safe Browsing API effectively. Furthermore, security firm Guardio demonstrated that AI agents could be tricked into performing unauthorized actions such as clicking on ads and making purchases on fraudulent e-commerce sites, expanding the attack surface to include direct financial fraud3.
Proposed Mitigations and Defense Strategies
In response to their findings, Brave proposed a multi-layered defense strategy to secure agentic browsers against such attacks2, 4. The primary recommendation is a clear separation of context, where the browser must rigorously distinguish between trusted user instructions and untrusted webpage content before sending data to the AI model. This should be coupled with user-intent alignment checks, where the AI’s planned actions are independently verified to ensure they align with the user’s original request. For any action involving security or privacy, such as sending an email or logging into an account, explicit user confirmation should be mandatory. Finally, powerful agentic capabilities should be isolated from regular browsing sessions, operating with minimal permissions unless specifically granted by the user.
Relevance and Remediation for Security Professionals
For security teams, the CometJacking attack serves as a critical case study in the risks introduced by AI agents with elevated privileges. The attack bypasses network-level and browser-level security controls that many defenses rely upon, as the malicious actions are performed by a trusted user agent. Organizations should inventory and assess any internal or third-party AI tools that have access to sensitive data or systems. Security policies should be updated to reflect the new threat vector of indirect prompt injection, and user training should include warnings about the specific risks of using agentic AI browsers for sensitive tasks until their security models mature. For developers building AI agents, implementing the principle of least privilege and strict context separation is no longer optional.
The CometJacking incident is a watershed moment for AI security, demonstrating that the traditional web security model is insufficient for the age of agentic AI. As browsers and other tools evolve from passive assistants into active agents, a fundamental redesign of security architectures is required. The race to deploy powerful AI capabilities must be balanced with rigorous security testing and new defensive paradigms to protect users from these scalable and highly effective forms of attack. Before agentic AI can be safely integrated into enterprise environments, it must first pass a rigorous security evaluation that addresses these inherent risks2.
References
- “CometJacking attack tricks Comet browser into stealing emails,” BleepingComputer. [Online].
- “Alert! Major Vulnerability in AI Browser Exposed,” 36Kr / Zhidongxi, Aug. 26, 2025. [Online].
- “\”Scamlexity\”: When Agentic AI Browsers Get Scammed,” Guardio Labs, Aug. 20, 2025. [Online].
- “Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet,” Brave Software, Aug. 20, 2025. [Online].
- “LayerX Finds that Perplexity’s Comet Browser is Up To 85% More Vulnerable to Phishing,” LayerX Security, Sep. 8, 2025. [Online].
- “Using an AI Browser Lets Hackers Drain Your Bank Account Just by Showing You a Public Reddit Post,” Futurism, Aug. 25, 2025. [Online].
- “Researchers Warn Of Security Flaw In Perplexity’s Comet AI Browser,” Mashable India, Aug. 26, 2025. [Online].
- “Perplexity AI’s Comet browser bug could have exposed your data to hackers,” India Today, Aug. 26, 2025. [Online].
- “Comet Browser’s AI Left Sensitive User Data Vulnerable to Hackers,” SQ Magazine, Aug. 27, 2025. [Online].
- “Perplexity Comet Browser Prompt Injection as a major security risk,” BornCity, Aug. 25, 2025. [Online].