MCP Security: The Protocol Nobody Secured Before Shipping

MCP is not optional. Agentic AI without a tool integration protocol is an AI that cannot act. Enterprises deploying AI agents are deploying MCP whether they have inventoried it or not. The 86% of organizations with no visibility into their AI data flows are running MCP servers they have not audited.
MCP was designed primarily for functionality rather than security, creating fundamental vulnerabilities that cannot be easily patched: session IDs mandated in URLs, which expose sensitive identifiers in logs and enable session hijacking; no authentication standards, leading to inconsistent and often weak implementations; and no required message signing or verification mechanisms, allowing message tampering. These are protocol-level decisions. Patching individual servers does not close them.
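Deployments cannot fix the protocol, but they can contain one symptom: if session identifiers travel in URLs, scrub them before those URLs reach logs. A minimal sketch, assuming a `sessionId` query parameter (the actual parameter name depends on your transport and server version):

```python
import logging
import re

# "sessionId" is an assumed parameter name; adjust the pattern to whatever
# identifier your MCP transport actually puts in the URL.
SESSION_PARAM = re.compile(r"(sessionId=)[^&\s]+", re.IGNORECASE)

class RedactSessionFilter(logging.Filter):
    """Scrub session identifiers from log records before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SESSION_PARAM.sub(r"\1[REDACTED]", str(record.msg))
        return True  # keep the record, just with the ID masked

logger = logging.getLogger("mcp.access")
logger.addFilter(RedactSessionFilter())
```

This does nothing about session hijacking itself; it only keeps the identifier out of one common leak path.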
Endor Labs surveyed 2,614 MCP implementations and published the results. Among those implementations: 82% use file system operations prone to path traversal, 67% use sensitive APIs related to code injection, and 34% use sensitive APIs related to command injection. Those percentages apply specifically to open-source MCP implementations analyzed in the Endor Labs dependency management research. They are not audited figures for the full production MCP ecosystem. Treat them as directional, not definitive. The direction is bad.
That is a different security posture than traditional application security. A SQL injection attack exploits a code path. A tool poisoning attack exploits the model’s inference. You cannot patch inference. You have to govern the inputs.

What the numbers actually measure

JFrog’s Agent Skills Registry, announced at Nvidia GTC 2025, addresses the registry side of the supply chain problem: scanning, signing, and governing MCP skills before agents adopt them. The gap that remains is the last mile. Proving that the model executing in production is the exact artifact scanned and signed in the registry requires cryptographic continuity from registry to runtime. That is still an engineering problem. No vendor has closed it.
CVE-2025-6514, carrying a CVSS score of 9.6, was a command injection vulnerability in mcp-remote, a widely used package for connecting to remote MCP servers. Attackers could craft malicious remote MCP server URLs that would execute arbitrary commands on the client machine. The package had over 437,000 downloads at the time of disclosure. That is not a niche research finding. That is a supply chain event.
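The general failure pattern is handing an attacker-influenced URL to a shell. A defensive sketch of the opposite approach (the `mcp-proxy` command name is a placeholder, not the patched mcp-remote interface): validate the URL first, then exec with an argument list so shell metacharacters are inert.

```python
import re
import subprocess
from urllib.parse import urlparse

# Conservative hostname shape; deliberately excludes IPv6 literals.
HOSTNAME = re.compile(r"[a-z0-9.-]+")

def validate_server_url(server_url: str) -> str:
    """Accept only well-formed HTTPS URLs with a plain hostname."""
    parsed = urlparse(server_url)
    if parsed.scheme != "https":
        raise ValueError(f"refusing non-HTTPS MCP server URL: {server_url!r}")
    if not parsed.hostname or not HOSTNAME.fullmatch(parsed.hostname):
        raise ValueError(f"refusing malformed host in: {server_url!r}")
    return server_url

def launch_remote_client(server_url: str) -> subprocess.Popen:
    # Argument list, never shell=True: the URL cannot smuggle shell syntax.
    return subprocess.Popen(["mcp-proxy", "--url", validate_server_url(server_url)])
```

Allow-listing the scheme and host shape is the point; the exact command wrapped around the URL will differ per client.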
The question is not whether to use MCP. The question is whether the security posture around it matches the access it brokers. An MCP server with GitHub credentials, filesystem access, and email integration is not a low-value target. It is a privileged identity with compute-speed execution and no shift change.

When the reference implementation fails

Run the inventory before the next disclosure does it for you.
The vulnerabilities in individual MCP servers are serious. The architectural problem underneath them is worse.
Anthropic introduced the Model Context Protocol in November 2024. They called it the USB-C for AI applications. The framing was accurate. Like USB-C, it standardized a connection layer that everyone immediately plugged everything into. Like USB-C in its first generation, nobody thought hard enough about what happens when the port is compromised.
The honest answer is: not much that is verifiable at scale.
BlueRock Security analyzed over 7,000 MCP servers and found that 36.7% carry a server-side request forgery exposure similar to the one disclosed in Microsoft’s MarkItDown MCP server. That figure applies to the specific SSRF pattern BlueRock scanned for, across the specific corpus of servers they analyzed. It is not a certified industry-wide rate. It is a representative sample. At 36.7%, representative is damaging enough.
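The SSRF class is checkable at the tool boundary. A minimal guard, sketched in Python (illustrative, not MarkItDown's actual code): resolve the hostname and refuse private, loopback, link-local, and reserved targets before fetching. A second DNS resolution at request time can differ (rebinding), so production code should also pin the resolved address.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def assert_public_target(url: str) -> None:
    """Reject URLs that resolve to non-public addresses (basic SSRF guard)."""
    host = urlparse(url).hostname
    if host is None:
        raise ValueError(f"no host in URL: {url!r}")
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            raise ValueError(f"{url!r} resolves to non-public address {addr}")
```

This blocks the cheap version of the attack: a tool argument pointing at `http://127.0.0.1/` or an internal metadata endpoint.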

The trust model that was never built

Anthropic accepted the disclosures in September 2025. Fixes shipped in December. The gap between discovery and patch was six months. During that window, every developer who built on the reference implementation was building on a broken foundation.
Emerging best practices for MCP deployments include regular threat intelligence updates, patching cadences specific to MCP environments, and continuous security audits combined with staff training on agentic AI risks. Those are directionally correct recommendations. None of them close the protocol-level trust gaps. They reduce exposure. They do not eliminate the underlying architectural debt.
The most operationally significant disclosure came on January 20, 2026. It did not involve a third-party MCP server built by an unknown developer. It involved Anthropic’s own reference implementation.
MCP is now the backbone infrastructure for connecting AI models to external tools, data sources, and automated business workflows. That is the correct framing of the problem. This is not a peripheral protocol. It is the connective tissue of enterprise AI deployments. When that tissue is vulnerable, the blast radius is not bounded by a single service. It extends to every tool, database, and downstream agent the model can reach.
Anthropic’s Model Context Protocol became the connective tissue of enterprise AI in under 18 months. Security came later. The CVE count is now the evidence.

The mass-scale moment

Red Hat’s security guidance for MCP deployments covers the basics: static application security testing in CI/CD pipelines, software composition analysis for dependency vulnerabilities, cryptographic server verification for cloud-hosted MCP services, and integrity scanning for all dependencies. Those controls are not MCP-specific. They are standard application security practices applied to a new attack surface. The fact that they need to be stated as recommendations in early 2026 tells you where baseline adoption sits.
“If Anthropic gets it wrong, in their official MCP reference implementation for what good should look like, then everyone can get MCP security wrong. That is where we are today.”
The trust problem runs deeper than authentication. AI agents trust tool descriptions implicitly. That assumption is the attack surface. Researchers demonstrated that the WhatsApp MCP server was vulnerable to tool poisoning: by injecting malicious instructions into tool descriptions, attackers could trick AI agents into exfiltrating entire chat histories. The attack required no authentication bypass and no code exploitation. It required only the ability to influence what the model reads.
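A partial mitigation is to treat tool descriptions like code: review them once, pin a fingerprint, and refuse to expose any tool whose description later changes. A sketch (the field names follow MCP's tool-listing shape; `verify_tools` is an illustrative helper, not a standard API):

```python
import hashlib
import json

def description_fingerprint(tool: dict) -> str:
    """Stable hash of a tool's name, description, and input schema."""
    canonical = json.dumps(
        {k: tool.get(k) for k in ("name", "description", "inputSchema")},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_tools(tools: list, approved: dict) -> list:
    """Return only tools whose descriptions match previously reviewed fingerprints."""
    return [
        tool for tool in tools
        if approved.get(tool["name"]) == description_fingerprint(tool)
    ]
```

This does not judge whether a description is malicious; it only guarantees the model sees the exact text a human reviewed, which also defeats the rug-pull variant where a benign description turns hostile after approval.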
Thirty CVEs in 60 days is not a sign that MCP is uniquely broken. It is a sign that security research attention finally arrived. More CVEs are coming. The architectural gaps in authentication standards, message integrity, and the trust model are not patched. They are documented.

What governance exists today

The supply chain angle matters because of how MCP servers propagate. MCP servers and client libraries ship as packages on npm and PyPI, and they run with whatever permissions the AI system holds. An attacker who compromises a widely used MCP library does not need to target individual deployments. They reach every deployment that pulls the package. The attack scales automatically.
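The standard countermeasure is digest pinning, which package managers already support (pip's `--require-hashes`, npm lockfile `integrity` fields). The idea, sketched with a hypothetical artifact name and a stand-in digest:

```python
import hashlib
from pathlib import Path

# Reviewed digests for artifacts you allow. The value below is the SHA-256
# of an empty file, used purely as a stand-in; pin real digests in practice.
PINNED = {
    "mcp-remote-0.1.0.tgz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(path: Path) -> bool:
    """Compare a downloaded package against its reviewed digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return PINNED.get(path.name) == digest
```

A poisoned release then fails closed at install time instead of executing with the agent's permissions.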
Eighteen months later, the evidence is no longer theoretical. Between January and February 2026, security researchers filed over 30 CVEs targeting MCP servers, clients, and infrastructure. The root causes were not exotic zero-days. They were missing input validation, absent authentication, and blind trust in tool descriptions. The fundamentals. The things that web application security learned to enforce twenty years ago.
Astrix Security ran a parallel study. Their team analyzed over 5,200 unique open-source MCP server implementations and found that 88% require credentials, 53% rely on static API keys or personal access tokens that are long-lived and rarely rotated, only 8.5% use OAuth, and 79% of the API keys are passed as plain environment variables. A credential management posture that would be flagged in a basic SOC 2 audit is the majority practice across the MCP ecosystem right now.
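A first triage step is simply finding the static keys. A heuristic sketch, matching on names only (so it misses oddly named secrets; the patterns are assumptions to tune to your own conventions):

```python
import os
import re

# Heuristic variable-name patterns; extend for your environment.
SECRET_NAME = re.compile(r"(TOKEN|API_KEY|SECRET|PASSWORD|_PAT)$", re.IGNORECASE)

def find_static_credentials(env=None):
    """List environment variable names that look like static credentials."""
    env = os.environ if env is None else env
    return sorted(name for name in env if SECRET_NAME.search(name))
```

Run it in every container and host that serves MCP traffic; each hit is a token that a single server compromise would leak.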

Operational note

The OWASP Agentic Security Top 10, published in late 2025, maps directly onto confirmed MCP CVEs. Every one of the top 10 risks has at least one documented MCP exploit. For practitioners auditing MCP deployments, the OWASP list is the current best available checklist. It is not a compliance framework. It is a vulnerability inventory.

The question every CISO should be asking

Cyata researcher Yarden Porat disclosed three vulnerabilities in mcp-server-git, Anthropic’s official Git MCP server: CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145. CVE-2025-68143 allowed the git_init tool to create repositories at arbitrary filesystem paths because the configured boundary was never validated. CVE-2025-68144 allowed attackers to overwrite arbitrary files via unsanitized arguments to the git_diff command. CVE-2025-68145 was a path validation bypass on repository restrictions. All three worked out of the box on default installations.
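The missing control is a boundary check before any filesystem-touching tool runs. A sketch of the idea (illustrative, not the shipped patch): resolve the tool-supplied path and refuse anything that lands outside the configured boundary.

```python
from pathlib import Path

def resolve_inside(boundary: Path, requested: str) -> Path:
    """Resolve a tool-supplied path and refuse anything outside the boundary.

    The kind of validation CVE-2025-68143 reported missing from git_init.
    """
    boundary = boundary.resolve()
    # Joining collapses to the absolute path if `requested` is absolute;
    # resolve() then normalizes any ../ sequences before the containment check.
    target = (boundary / requested).resolve()
    if not target.is_relative_to(boundary):
        raise PermissionError(f"{requested!r} escapes {boundary}")
    return target
```

Both attack shapes in the disclosure, absolute paths and `../` traversal, fall out of the same containment check.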
By Doug Wallace
Individual CVEs create bounded incidents. The mcp-remote disclosure created a different category of event.
That assessment came from Shahar Tal, CEO of Cyata, speaking to Dark Reading. It is worth sitting with. A reference implementation is not just code. It is the documented standard that other developers are expected to replicate. When the canonical example ships vulnerable, the vulnerability propagates by design.
Static API keys are not a nuanced risk. A single server compromise leaks every token that server holds. In an MCP context, that means Gmail credentials, GitHub tokens, database access, and any other integration the server brokers. That is not a theoretical blast radius. That is the actual one.
