Article Details
Scrape Timestamp (UTC): 2024-12-06 11:31:31.243
Source: https://thehackernews.com/2024/12/researchers-uncover-flaws-in-popular.html
Original Article Text
Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could pave the way for code execution.

The vulnerabilities, discovered by JFrog, are part of a broader collection of 22 security shortcomings the supply chain security company first disclosed last month. Unlike the first set, which involved server-side flaws, the newly detailed ones allow exploitation of ML clients and reside in libraries that handle safe model formats like Safetensors.

"Hijacking an ML client in an organization can allow the attackers to perform extensive lateral movement within the organization," the company said. "An ML client is very likely to have access to important ML services such as ML Model Registries or MLOps Pipelines."

This, in turn, could expose sensitive information such as model registry credentials, effectively permitting a malicious actor to backdoor stored ML models or achieve code execution.

The list of vulnerabilities is below -

JFrog noted that ML models shouldn't be blindly loaded even in cases where they are loaded from a safe type, such as Safetensors, as they have the capability to achieve arbitrary code execution.

"AI and Machine Learning (ML) tools hold immense potential for innovation, but can also open the door for attackers to cause widespread damage to any organization," Shachar Menashe, JFrog's VP of Security Research, said in a statement. "To safeguard against these threats, it's important to know which models you're using and never load untrusted ML models even from a 'safe' ML repository. Doing so can lead to remote code execution in some scenarios, causing extensive harm to your organization."
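To illustrate why loading untrusted models is dangerous, here is a minimal, hedged sketch (not taken from JFrog's research): many ML serialization formats are built on Python's pickle, which can execute attacker-chosen code the moment a file is deserialized. The `MaliciousModel` class and the harmless `eval` payload below are hypothetical stand-ins; a real attacker would invoke something like `os.system` instead.

```python
import pickle

# Hypothetical illustration: pickle lets an object dictate how it is
# reconstructed via __reduce__, so a poisoned "model file" can run code
# during loading. The payload here is a harmless arithmetic expression.
class MaliciousModel:
    def __reduce__(self):
        # Instructs pickle: "to rebuild this object, call eval('40 + 2')".
        return (eval, ("40 + 2",))

blob = pickle.dumps(MaliciousModel())  # the poisoned "model" on disk
result = pickle.loads(blob)            # merely loading it runs the payload
print(result)                          # prints 42: code ran during load
```

This is why JFrog's guidance stresses knowing which models you use: the danger lies in the act of loading itself, not in calling the model afterwards, and (per the article) even ostensibly safe formats can be undermined by flaws in the client libraries that parse them.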
Daily Brief Summary
Cybersecurity researchers at JFrog have identified multiple vulnerabilities in various open-source machine learning frameworks including MLflow, H2O, PyTorch, and MLeap.
The security flaws could potentially allow attackers to execute code by hijacking machine learning clients within organizations.
The flaws reside in client-side libraries that process even ostensibly safe model formats such as Safetensors, increasing the risk of malicious interference.
Attackers exploiting these vulnerabilities can access machine learning services such as model registries and MLOps pipelines, perform lateral movement, and leak sensitive information.
The disclosed flaws are part of a larger set of 22 security issues previously reported by JFrog, emphasizing the ongoing risks in machine learning supply chains.
JFrog advises against loading ML models from untrusted sources, even from repositories considered 'safe,' as this can lead to remote code execution.
Shachar Menashe, VP of Security Research at JFrog, stresses the importance of knowing which models are in use and never loading untrusted ones, to prevent widespread damage from exploitation.