Article Details

Original Article Text

Click to Toggle View

Unofficial Postmark MCP npm silently stole users' emails. A npm package copying the official ‘postmark-mcp’ project on GitHub turned bad with the latest update that added a single line of code to exfiltrate all its users' email communication. Published by a legitimate-looking developer, the malicious package was a perfect replica of the authentic one in terms of code and description, appearing as an official port on npm for 15 iterations. Model Context Protocol (MCP) is an open standard that allows AI assistants to interface with external tools, APIs, and databases in a structured, predefined, and secure manner. Postmark is an email delivery platform, and Postmark MCP is the MCP server that exposes Postmark’s functionality to AI assistants, letting them send emails on behalf of the user or app. As discovered by Koi Security researchers, the malicious package on npm was clean in all versions through1.0.15, but in the 1.0.16 release, it added a line that forwarded all user emails to an external address at giftshop[.]club linked to the same developer. This extremely risky functionality may have exposed personal sensitive communications, password reset requests, two-factor authentication codes, financial information, and even customer details. The malicious version on npm was available for a week and recorded around 1,500 downloads. By Koi Security's estimations, the fake package might have exfiltrated thousands of emails from unsuspecting users. For those who downloaded postmark-mcp from npm, it is recommended to remove it immediately and rotate any potentially exposed credentials. Also, audit all MCP servers in use and monitor them for suspicious activity. BleepingComputer has contacted the npm package publisher to ask about Koi Security’s findings, but we received no reply. The following day, the developer removed the malicious package from npm. Koi Security’s report highlights a broken security model where servers are implemented in critical environments without oversight or sandboxing, and AI assistants executing malicious commands without filtering for malicious behavior. Because MCPs run with very high privileges, any vulnerability or misconfiguration carries a significant risk. Users should verify the source of the project and make sure it's an official repository, review the source code and changelogs, and look carefully for changes in every update. Before using a new version in production, run MCP servers in isolated containers or sandboxes and monitor their behavior for suspicious actions like data exfiltration or unauthorized communication. Picus Blue Report 2025 is Here: 2X increase in password cracking 46% of environments had passwords cracked, nearly doubling from 25% last year. Get the Picus Blue Report 2025 now for a comprehensive look at more findings on prevention, detection, and data exfiltration trends.

Daily Brief Summary

MALWARE // Malicious npm Package Exfiltrates User Emails via Postmark MCP

A malicious npm package mimicking the legitimate Postmark MCP project was discovered exfiltrating user emails to an external address, affecting around 1,500 downloads.

The package, identical to the authentic version in appearance, added a harmful line of code in its 1.0.16 update, compromising sensitive communications and personal data.

Koi Security researchers identified the breach, which potentially exposed password reset requests, two-factor authentication codes, and customer details.

Users are advised to immediately remove the affected package, rotate exposed credentials, and conduct thorough audits of MCP servers for any suspicious activity.

The developer has since removed the malicious package from npm, but the incident reveals critical security lapses in server implementation and AI assistant command execution.

Recommendations include verifying project sources, reviewing code changes, and running MCP servers in isolated environments to prevent unauthorized data exfiltration.

This incident underscores the importance of stringent oversight and sandboxing in environments where AI assistants operate with high privileges.