Article Details
Scrape Timestamp (UTC): 2025-10-04 14:40:48.096
Source: https://thehackernews.com/2025/10/cometjacking-one-click-can-turn.html
Original Article Text
Click to Toggle View
CometJacking: One Click Can Turn Perplexity's Comet AI Browser Into a Data Thief. Cybersecurity researchers have disclosed details of a new attack called CometJacking targeting Perplexity's agentic AI browser Comet by embedding malicious prompts within a seemingly innocuous link to siphon sensitive data, including from connected services, like email and calendar. The sneaky prompt injection attack plays out in the form of a malicious link that, when clicked, triggers the unexpected behavior unbeknownst to the victims. "CometJacking shows how a single, weaponized URL can quietly flip an AI browser from a trusted co-pilot to an insider threat," Michelle Levy, Head of Security Research at LayerX, said in a statement shared with The Hacker News. "This isn't just about stealing data; it's about hijacking the agent that already has the keys. Our research proves that trivial obfuscation can bypass data exfiltration checks and pull email, calendar, and connector data off-box in one click. AI-native browsers need security-by-design for agent prompts and memory access, not just page content." The attack, in a nutshell, hijacks the AI assistant embedded in the browser to steal data, all while bypassing Perplexity's data protections using trivial Base64-encoding tricks. The attack does not include any credential theft component because the browser already has authorized access to Gmail, Calendar, and other connected services. It takes place over five steps, activating when a victim clicks on a specially crafted URL, either sent in a phishing email or present in a web page. Instead of taking the user to the "intended" destination, the URL instructs the Comet browser's AI to execute a hidden prompt that captures the user's data from, say, Gmail, obfuscates it using Base64-encoding, and transmits the information to an endpoint under the attacker's control. The crafted URL is a query string directed at the Comet AI browser, with the malicious instruction added using the "collection" parameter of the URL, causing the agent to consult its memory rather than perform a live web search. While Perplexity has classified the findings as having "no security impact," they once again highlight how AI-native tools introduce new security risks that can get around traditional defenses, allow bad actors to commandeer them to do their bidding, and expose users and organizations to potential data theft in the process. In August 2020, Guardio Labs disclosed an attack technique dubbed Scamlexity wherein browsers like Comet could be tricked by threat actors into interacting with phishing landing pages or counterfeit e-commerce storefronts without the human user's knowledge or intervention. "AI browsers are the next enterprise battleground," Or Eshed, CEO of LayerX, said. "When an attacker can direct your assistant with a link, the browser becomes a command-and-control point inside the company perimeter. Organizations must urgently evaluate controls that detect and neutralize malicious agent prompts before these PoCs become widespread campaigns."
Daily Brief Summary
Cybersecurity researchers have identified a new attack, CometJacking, targeting Perplexity's Comet AI browser to extract sensitive data through malicious prompts embedded in URLs.
The attack leverages a crafted URL to trigger unauthorized data access from connected services like email and calendar, bypassing traditional security measures.
CometJacking operates without credential theft, exploiting the browser's existing authorized access to services, and uses Base64 encoding to obfuscate and transmit data.
The attack is initiated when a user clicks a malicious link, redirecting the AI browser to execute hidden commands that capture and exfiltrate data.
Perplexity has downplayed the security impact, but the incident reveals vulnerabilities in AI-native tools that can circumvent conventional defenses.
The attack underscores the need for security-by-design in AI browsers, focusing on agent prompts and memory access rather than just page content.
Organizations are urged to implement controls to detect and neutralize malicious agent prompts, as AI browsers become potential command-and-control points within enterprise environments.