Article Details

Scrape Timestamp (UTC): 2026-02-03 16:49:17.587

Source: https://thehackernews.com/2026/02/docker-fixes-critical-ask-gordon-ai.html

Original Article Text

Click to Toggle View

Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata. Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon, an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data. The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025. "In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. "Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture." Successful exploitation of the vulnerability could result in critical-impact remote code execution for cloud and CLI systems, or high-impact data exfiltration for desktop applications. The problem, Noma Security said, stems from the fact that the AI assistant treats unverified metadata as executable commands, allowing it to propagate through different layers sans any validation, allowing an attacker to sidestep security boundaries. The result is that a simple AI query opens the door for tool execution. With MCP acting as a connective tissue between a large language model (LLM) and the local environment, the issue is a failure of contextual trust. The problem has been characterized as a case of Meta-Context Injection. "MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction," Levi said. "By embedding malicious instructions in these metadata fields, an attacker can hijack the AI's reasoning process." In a hypothetical attack scenario, a threat actor can exploit a critical trust boundary violation in how Ask Gordon parses container metadata. To accomplish this, the attacker crafts a malicious Docker image with embedded instructions in Dockerfile LABEL fields.  While the metadata fields may seem innocuous, they become vectors for injection when processed by Ask Gordon AI. The code execution attack chain is as follows - The data exfiltration vulnerability weaponizes the same prompt injection flaw but takes aim at Ask Gordon's Docker Desktop implementation to capture sensitive internal data about the victim's environment using MCP tools by taking advantage of the assistant's read-only permissions. The gathered information can include details about installed tools, container details, Docker configuration, mounted directories, and network topology. It's worth noting that Ask Gordon version 4.50.0 also resolves a prompt injection vulnerability discovered by Pillar Security that could have allowed attackers to hijack the assistant and exfiltrate sensitive data by tampering with the Docker Hub repository metadata with malicious instructions. "The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat," Levi said. "It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model."

Daily Brief Summary

VULNERABILITIES // Docker Patches Critical AI Flaw Allowing Code Execution via Metadata

Docker has addressed a critical vulnerability in its AI assistant, Ask Gordon, with the release of version 4.50.0, mitigating risks of code execution and data exfiltration.

The flaw, named DockerDash by Noma Labs, exploited unverified metadata in Docker images to execute malicious code through a three-stage attack process.

Attackers could leverage this vulnerability to perform remote code execution or exfiltrate sensitive data, impacting both cloud and desktop environments.

The issue stemmed from the AI assistant's failure to differentiate between standard metadata and executable commands, leading to a Meta-Context Injection risk.

The vulnerability was exploited by embedding malicious instructions in Docker image metadata, compromising the AI's reasoning process and security boundaries.

Docker's patch also addresses a related prompt injection vulnerability, highlighting the importance of zero-trust validation for AI systems.

Organizations are urged to treat AI supply chain risks seriously, as trusted input sources can be manipulated to control AI execution paths.