Article Details

Scrape Timestamp (UTC): 2026-02-10 18:04:33.098

Source: https://www.theregister.com/2026/02/10/ai_agents_messaging_apps_data_leak/

Original Article Text

Click to Toggle View

AI agents spill secrets just by previewing malicious links. Zero-click prompt injection can leak data when AI agents meet messaging apps, researchers warn. AI agents can shop for you, program for you, and, if you're feeling bold, chat for you in a messaging app. But beware: attackers can use malicious prompts in chat to trick an AI agent into generating a data-leaking URL, which link previews may fetch automatically. Messaging apps commonly use link previews, which let the app query links dropped in a message to extract a title, description, and thumbnail to display in place of a plain URL. As discovered by AI security firm PromptArmor, link previews can turn URLs generated by an AI agent and controlled by an attacker into a zero-click data-exfiltration channel, allowing sensitive information to be leaked without any user interaction. As PromptArmor notes in its report, indirect prompt injection via malicious links isn't unheard of, but typically requires the victim to click a link after an AI system has been tricked into appending sensitive user data to an attacker-controlled URL. When the same technique is used against an AI agent operating inside messaging platforms such as Slack or Telegram, where link previews are enabled by default or in certain configurations, the problem gets a whole lot worse. "In agentic systems with link previews, data exfiltration can occur immediately upon the AI agent responding to the user, without the user needing to click the malicious link," PromptArmor explained.  Without a link preview, an AI agent or a human operator has to follow a link, triggering a network request after the AI system has been tricked into appending sensitive user data to an attacker-controlled URL. As mentioned, this type of prompt injection attack can extract various types of sensitive data, such as API keys and the like, by tricking an AI agent into appending the info onto the URL.  Because a link preview pulls metadata from the target website, that whole attack chain can be accomplished with zero interaction: once an AI agent has been tricked into generating a URL containing sensitive data, the preview system automatically fetches it. The only difference is where the data-exposing URL is found - in this case in the attacker's request log.  It won't shock you to learn that vibe-coded agentic AI disaster platform OpenClaw is vulnerable to this attack when using default configurations in Telegram, which PromptArmor notes can be fixed by making a change in OpenClaw's config file as detailed in the article, but it seems from the data that PromptArmor provided that OpenClaw isn't the biggest offender. The company created a website where users can test AI agents integrated into messaging apps to see whether they trigger insecure link previews. Based on reported results from those tests, Microsoft Teams accounts for the largest share of preview fetches, and in the logged cases, it is paired with Microsoft's own Copilot Studio. Other reported at-risk combinations include Discord with OpenClaw, Slack with Cursor Slackbot, Discord with BoltBot, Snapchat with SnapAI, and Telegram with OpenClaw. Reported safer setups include the Claude app in Slack, OpenClaw running via WhatsApp, and OpenClaw deployed "in Docker via Signal in Docker," if you really want to complicate things. While this is an issue with how AI agents handle the processing of link previews, PromptArmor notes that it's going to largely be up to messaging apps to fix the issue.  "It falls on communication apps to expose link preview preferences to developers, and agent developers to leverage the preferences provided," the security firm explained. "We'd like to see communication apps consider supporting custom link preview configurations on a chat/channel-specific basis to create LLM-safe channels."  Until that happens, consider this yet another warning against adding an AI agent into an environment where confidentiality is important. 

Daily Brief Summary

VULNERABILITIES // AI Agents in Messaging Apps Pose Data Exfiltration Risks

Researchers at PromptArmor identified a vulnerability where AI agents in messaging apps can be exploited to leak sensitive data via link previews.

This zero-click prompt injection flaw allows attackers to exfiltrate data without user interaction, affecting platforms like Slack, Telegram, and Microsoft Teams.

AI agents can be tricked into appending sensitive information, such as API keys, to URLs that are automatically fetched by link previews.

PromptArmor suggests that the responsibility lies with messaging apps to allow developers to customize link preview settings to mitigate this risk.

Vulnerable combinations include Microsoft Teams with Copilot Studio and Discord with OpenClaw, while safer setups involve Claude in Slack and OpenClaw via Signal in Docker.

Organizations using AI agents in messaging environments should reassess their configurations to prevent unintended data exposure.

The report serves as a caution against deploying AI agents in sensitive environments until messaging apps enhance their security frameworks.