Article Details

Scrape Timestamp (UTC): 2025-12-03 09:33:06.807

Source: https://thehackernews.com/2025/12/picklescan-bugs-allow-malicious-pytorch.html

Original Article Text


Picklescan Bugs Allow Malicious PyTorch Models to Evade Scans and Execute Code

Three critical security flaws have been disclosed in an open-source utility called Picklescan that could allow malicious actors to execute arbitrary code by loading untrusted PyTorch models, effectively bypassing the tool's protections.

Picklescan, developed and maintained by Matthieu Maitre (@mmaitre314), is a security scanner designed to parse Python pickle files and detect suspicious imports or function calls before they are executed.

Pickle is a widely used serialization format in machine learning, including in PyTorch, which uses it to save and load models. But pickle files can also pose a serious security risk, as they can automatically trigger the execution of arbitrary Python code when they are loaded. This makes it necessary for users and organizations to load only trusted models, or to load model weights using formats such as those of TensorFlow and Flax.

The issues discovered by JFrog make it possible to bypass the scanner, present scanned model files as safe, and enable malicious code to be executed, which could pave the way for a supply chain attack.

"Each discovered vulnerability enables attackers to evade PickleScan's malware detection and potentially execute a large-scale supply chain attack by distributing malicious ML models that conceal undetectable malicious code," security researcher David Cohen said.

Picklescan, at its core, works by examining pickle files at the bytecode level and checking the results against a blocklist of known hazardous imports and operations. This approach, as opposed to allowlisting, prevents the tool from detecting new attack vectors and requires its developers to account for every possible malicious behavior.

The identified flaws amount to three bypass techniques: concealing malicious pickle payloads within files that use common PyTorch extensions, deliberately introducing CRC errors into ZIP archives containing malicious models, and crafting malicious PyTorch models with embedded pickle payloads that slip past the scanner.

Following responsible disclosure on June 29, 2025, the three vulnerabilities were addressed in Picklescan version 0.0.31, released on September 9.

The findings illustrate key systemic issues, including reliance on a single scanning tool and discrepancies in file-handling behavior between security tools and PyTorch, which leave security architectures vulnerable to attack.

"AI libraries like PyTorch grow more complex by the day, introducing new features, model formats, and execution pathways faster than security scanning tools can adapt," Cohen said. "This widening gap between innovation and protection leaves organizations exposed to emerging threats that conventional tools simply weren't designed to anticipate."

"Closing this gap requires a research-backed security proxy for AI models, continuously informed by experts who think like both attackers and defenders. By actively analyzing new models, tracking library updates, and uncovering novel exploitation techniques, this approach delivers adaptive, intelligence-driven protection against the vulnerabilities that matter most."
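To make the pickle risk concrete, here is a minimal, self-contained Python sketch (the Payload class and its printed message are illustrative, not taken from the article) showing how unpickling alone runs attacker-chosen code:

```python
import pickle

# Minimal demonstration of why pickle is unsafe to load from
# untrusted sources: any class can define __reduce__, and the
# unpickler calls whatever callable it returns, with no opt-in
# from the loading code. Here the payload is a harmless print,
# but it could be os.system or any other importable callable.
class Payload:
    def __reduce__(self):
        # Return (callable, args); invoked automatically on load
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message without any explicit call
```

Loading a checkpoint with a plain pickle.load, or with torch.load in its legacy unrestricted mode, executes such a payload in exactly this way.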
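The blocklist-style bytecode scanning described above can be sketched with the standard library's pickletools module, which decodes the opcode stream without executing it. This is a hypothetical approximation, not Picklescan's actual implementation, and the BLOCKLIST set is illustrative:

```python
import pickletools

# Illustrative blocklist; a real scanner tracks module.attribute
# pairs (e.g. builtins.eval) rather than whole modules.
BLOCKLIST = {"os", "posix", "nt", "subprocess", "builtins"}

def looks_malicious(data: bytes) -> bool:
    """Heuristic scan of a pickle's opcodes without executing it."""
    recent_strings = []  # STACK_GLOBAL pulls module/name off the stack
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocol 0/1 imports: arg is "module name"
            if arg.split()[0] in BLOCKLIST:
                return True
        elif isinstance(arg, str):
            recent_strings.append(arg)
        if opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Protocol 2+: module string is pushed just before the name
            if recent_strings[-2] in BLOCKLIST:
                return True
    return False
```

Run against the Payload blob from the previous sketch, this flags the builtins import. The article's structural point holds regardless of implementation detail: any dangerous import the blocklist omits sails through, whereas an allowlist would reject it by default.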
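On the mitigation side, PyTorch itself offers a narrower loading path. A defensive sketch, assuming a hypothetical local checkpoint named model.pt:

```python
import torch

# weights_only=True restricts unpickling to tensors and primitive
# containers and rejects arbitrary callables, so a payload like the
# __reduce__ example above raises an error instead of executing.
# The flag was added in PyTorch 1.13 and became the default in 2.6.
state_dict = torch.load("model.pt", map_location="cpu", weights_only=True)
```

This parallels the article's advice to prefer weight-only formats, though it does not remove the need to source models from trusted publishers in the first place.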

Daily Brief Summary

VULNERABILITIES // Critical Flaws in Picklescan Expose PyTorch Models to Code Execution

JFrog researchers discovered three critical vulnerabilities in Picklescan, an open-source tool designed to detect malicious code in Python pickle files used by PyTorch models.

These flaws allow attackers to bypass Picklescan's protections, enabling arbitrary code execution and potentially facilitating large-scale supply chain attacks.

Picklescan works by examining pickle files at the bytecode level and checking them against a blocklist, an approach that cannot catch previously unseen attack vectors, leaving systems vulnerable.

The vulnerabilities can be exploited by embedding malicious payloads in PyTorch models, introducing CRC errors, or using common PyTorch extensions to evade detection.

Following responsible disclosure, the vulnerabilities were patched in Picklescan version 0.0.31, released on September 9, 2025.

The incident underscores the risks associated with relying on a single security tool and highlights the need for adaptive, intelligence-driven protection strategies in AI model security.

Organizations are advised to ensure they load only trusted models and consider additional security measures beyond existing scanning tools to mitigate emerging threats.