Security researchers have disclosed three critical vulnerabilities in mcp-server-git, the official Git-based Model Context Protocol (MCP) server maintained by Anthropic. The flaws expose systems to unauthorized file access and, when combined with prompt injection techniques, potential remote code execution.
The flaws were identified by researchers at Cyata, an AI security firm, and responsibly disclosed to Anthropic in mid-2025. Patches were released later in the year, but experts warn the findings raise broader concerns about security assumptions in AI tooling ecosystems.
What Is mcp-server-git?
The mcp-server-git package is a Python-based MCP server that allows large language models (LLMs) to interact with Git repositories. It enables AI assistants to read, search, and modify repositories programmatically, making it a foundational component for agentic AI workflows.
Because it operates at the intersection of AI reasoning and system-level tooling, weaknesses in its design can have far-reaching consequences.
Vulnerabilities Exploitable via Prompt Injection
According to Cyata, the vulnerabilities can be triggered without direct system access. An attacker only needs to influence the content an AI assistant processes—such as a malicious README file, poisoned GitHub issue, or compromised webpage.
The disclosed vulnerabilities include:
- CVE-2025-68143 – A path traversal flaw in the `git_init` function allowing arbitrary filesystem paths during repository creation, enabling attackers to convert any directory into a Git repository
- CVE-2025-68144 – An argument injection issue caused by unsanitized user input passed to Git CLI commands via `git_diff` and `git_checkout`
- CVE-2025-68145 – A second path traversal flaw due to insufficient validation of repository paths when using the `--repository` flag
Successful exploitation could allow attackers to read, overwrite, or delete files and gain access to repositories beyond intended boundaries.
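To illustrate the argument injection class (CVE-2025-68144): when a user-supplied string reaches the Git CLI unsanitized, a value such as `--output=/tmp/owned` is parsed as an option rather than a revision. A minimal defensive sketch, with a hypothetical helper name rather than the patched code, looks like this:

```python
import subprocess

def safe_git_diff(repo_path: str, target: str) -> str:
    """Run `git diff` against a user-supplied target, refusing option-like input.

    Hypothetical hardening sketch: reject values Git would parse as flags,
    and append the `--` separator so remaining arguments are treated as
    revisions or paths, never as options.
    """
    if target.startswith("-"):
        raise ValueError(f"refusing option-like argument: {target!r}")
    result = subprocess.run(
        ["git", "-C", repo_path, "diff", target, "--"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

The key design point is that list-form `subprocess.run` already prevents shell injection; the remaining risk is Git itself interpreting an argument as a flag, which the leading-dash check and `--` separator address.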
Chaining the Flaws for Code Execution
Cyata demonstrated that these vulnerabilities could be chained together with the Filesystem MCP server to achieve remote code execution. In the documented attack scenario, a malicious prompt could guide an AI agent to:
- Initialize a Git repository in an attacker-controlled directory
- Write a malicious `.git/config` file containing a custom filter
- Create a `.gitattributes` file to activate the filter
- Drop a shell script payload
- Trigger the payload execution during a Git operation
Because the process relies on legitimate AI tool usage, traditional security controls may not detect the attack.
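The chain abuses Git's clean/smudge filter mechanism, which runs an arbitrary command when matching files are processed. Illustratively, the two files an agent would be steered into writing could look like this (filter name and payload path are generic placeholders, not the exact proof-of-concept):

```
# .git/config — defines a filter whose "clean" command is an arbitrary shell command
[filter "inject"]
    clean = sh .git/payload.sh

# .gitattributes — applies the filter to every file, so a later Git operation
# that processes tracked content (e.g. staging a file) runs the payload
* filter=inject
```

Because filter commands are ordinary repository configuration, nothing in this chain requires an exploit binary; each step is a legitimate file write followed by a legitimate Git command.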
Fixes and Security Response
Anthropic addressed the issues in versions 2025.9.25 and 2025.12.18 of the package. As part of the remediation:
- The vulnerable `git_init` tool was removed entirely
- Additional path validation checks were introduced
- Input sanitization was strengthened across Git command wrappers
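The path-validation mitigation can be sketched as resolving every user-supplied path and confirming it stays inside a configured root, which blocks both `../` traversal and symlink escapes. This is a generic sketch with an assumed root directory, not the project's actual implementation:

```python
from pathlib import Path

# Assumed server-configured boundary for illustration
ALLOWED_ROOT = Path("/srv/repos").resolve()

def validate_repo_path(user_path: str) -> Path:
    """Resolve a user-supplied repository path and confirm it stays inside
    ALLOWED_ROOT. `.resolve()` normalizes `..` segments and follows
    symlinks, so escapes surface before the containment check."""
    candidate = (ALLOWED_ROOT / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):  # Python 3.9+
        raise PermissionError(f"path escapes allowed root: {user_path!r}")
    return candidate
```

Validating after resolution, rather than string-matching on the raw input, is what defeats encodings and indirections that a naive `".." in path` check would miss.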
Users are strongly advised to upgrade to the latest version immediately.
Broader Implications for AI Tooling Security
Security experts caution that the findings highlight systemic risks in AI agent architectures.
“This is the reference Git MCP server that developers are encouraged to adopt,” said Shahar Tal, CEO of Cyata. “If security boundaries fail in the canonical implementation, it suggests the wider MCP ecosystem needs deeper security review. These are default behaviors—not edge cases.”
The research underscores how prompt injection, when combined with privileged AI tools, can bridge the gap between untrusted input and system-level execution.
Looking Ahead
As AI assistants gain deeper access to development workflows and infrastructure, security teams are urged to reassess trust boundaries, limit tool permissions, and treat AI-driven automation as part of the attack surface—not just a productivity feature.
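As one concrete way to narrow a trust boundary, MCP clients that launch mcp-server-git can pin the server to a single repository via the `--repository` flag (on a patched version) instead of granting it filesystem-wide reach. The shape below follows the common client configuration format; exact keys may vary by client:

```json
{
  "mcpServers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "/home/dev/projects/my-repo"]
    }
  }
}
```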