
MCP Standardizes Tools. It Doesn't Secure Them.

The Model Context Protocol solves agent-to-tool communication. But who authorized the agent to use that tool, with what constraints, for which transaction?


Bar-El Tayouri, Head of Mend AI at Mend.io, recently asked whether MCP is overhyped. His conclusion: MCP is useful—it standardizes how agents communicate with tools—but it’s “one step in agent tool evolution” rather than a security solution.

That’s exactly right. And it’s worth being precise about what MCP does and doesn’t solve.

What MCP Solves

Before MCP, every agent-to-tool integration required custom code. Want your agent to query a database? Write a connector. Want it to call Stripe? Write another connector. Want it to access your CRM? Another connector.

MCP standardizes this. One protocol, many tools. Agents can discover and invoke tools through a consistent interface. This is genuinely useful infrastructure.
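
To make that concrete, here is a minimal sketch of what "one protocol, many tools" looks like in code. The shapes mirror MCP's list-tools / call-tool pattern, but the names (ToolDescriptor, ToolServer, runAgentStep) are simplified illustrations, not the actual MCP SDK API.

// Illustrative sketch of the "one protocol, many tools" idea.
// These shapes echo MCP's tool-listing and tool-call pattern; the
// names are simplified, not the real MCP SDK.
interface ToolDescriptor {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema describing the tool's arguments
}

interface ToolServer {
  listTools(): Promise<ToolDescriptor[]>;                  // discovery
  callTool(name: string, args: unknown): Promise<unknown>; // invocation
}

// Database, Stripe, CRM: each sits behind the same interface, so the
// agent needs one client loop instead of one bespoke connector per tool.
async function runAgentStep(server: ToolServer, goal: string) {
  const tools = await server.listTools();
  const chosen = tools.find((t) => t.description.includes(goal)) ?? tools[0];
  if (!chosen) throw new Error("no tools available");
  return server.callTool(chosen.name, { query: goal });
}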

What MCP Doesn’t Solve

Tayouri frames the risk along an autonomy scale:

“Every step up in this scale means exponentially more risk—roughly ten times more with each autonomy increase.”

MCP tells agents how to call tools. It doesn’t answer:

  • Who authorized this agent to use this tool?
  • What constraints apply to this invocation?
  • Which transaction is this action part of?
  • Can we prove this action was the legitimate continuation of an approved workflow?
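
Concretely, those four questions describe context that MCP messages simply don't carry. Here is one hypothetical shape for it; none of these fields exist in MCP itself.

// Hypothetical authorization context: what a separate layer would have
// to attach to every tool invocation. MCP defines none of this.
interface AuthorizationContext {
  principal: string;         // who authorized this agent to use this tool
  delegationChain: string[]; // how that authority reached this agent
  constraints: {             // what limits apply to this invocation
    allowedActions: string[];
    resourceScope: string;
    expiresAt: number;
  };
  transactionId: string;     // which transaction this action is part of
  workflowProof: string;     // evidence it continues an approved workflow
}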

Tayouri describes a concrete attack: an agent browsing the web encounters hidden content—“emojis or invisible characters”—containing malicious instructions. The agent, which also has database access, executes those instructions on company servers.

MCP doesn’t prevent this. The agent has valid tool access. The tool executes the transaction. Nothing in the protocol layer distinguishes between “agent executing legitimate workflow” and “agent executing injected instructions.”

The Risk Multiplication Problem

Tayouri makes a key observation about combining capabilities:

“Risk vectors requiring separation: Agents with both web-browsing and database access. Systems combining code execution with payment capabilities.”

This is the ambient authority problem. When an agent holds credentials for multiple tools, every tool is exposed to every attack vector the agent faces. Prompt injection in a web-browsing context becomes a database attack because the agent has database credentials.
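
A toy sketch of that failure mode, with hypothetical types (BrowserClient, DbClient, OverPrivilegedAgent); nothing here comes from a real agent framework.

// Ambient authority in miniature: the agent object holds live
// credentials for every tool, so any instruction it ends up following,
// including an injected one, can reach any of them.
interface BrowserClient { fetchPage(url: string): Promise<string>; }
interface DbClient { execute(sql: string): Promise<void>; }

declare function modelDecidesNextAction(page: string): string;

class OverPrivilegedAgent {
  constructor(private browser: BrowserClient, private db: DbClient) {}

  async browse(url: string) {
    const page = await this.browser.fetchPage(url);
    // If hidden content in `page` steers the model, nothing structural
    // stops the resulting statement from hitting the database:
    const action = modelDecidesNextAction(page);
    await this.db.execute(action);
  }
}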

If anything, MCP makes the attack easier: tool discovery and invocation become standardized for exploitation just as they do for legitimate use.

x402 and Autonomous Tool Discovery

Tayouri also mentions Coinbase’s x402 protocol, which enables agents to autonomously discover and pay for tools using USDC. Agents don’t need human pre-configuration—they find tools, pay for access, and use them.

This is the logical endpoint of agent autonomy. And it breaks every assumption in traditional security models:

Traditional Model:
Human configures tool access → Agent uses configured tools

x402 Model:
Agent discovers tools → Agent pays for access → Agent uses tools
                        ↑ No human in the loop
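
A hypothetical sketch of that flow, assuming an x402-style exchange over HTTP 402; the header name and payload shapes below are illustrative assumptions, not the actual spec.

// Hypothetical x402-style flow: the server prices the tool via HTTP 402,
// the agent pays in USDC and retries. Header and payload shapes are
// assumptions for illustration.
async function callPaidTool(
  url: string,
  payWithUSDC: (quote: unknown) => Promise<string>, // returns payment proof
): Promise<unknown> {
  let res = await fetch(url);
  if (res.status === 402) {
    const quote = await res.json();         // what the server will accept
    const proof = await payWithUSDC(quote); // agent pays; no human involved
    res = await fetch(url, { headers: { "X-PAYMENT": proof } });
  }
  return res.json();
}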

If agents can autonomously acquire tool access, the question becomes: what constrains that access? Not “does the agent have the API key”—it can get one. But “is this specific tool invocation authorized for this specific transaction by this specific principal?”

The Missing Layer

MCP is a transport protocol. It moves transactions between agents and tools. What’s missing is the authorization protocol—the layer that answers whether a specific transaction should execute.

Current Stack:
Agent → MCP → Tool
        ↑ No authorization check

Required Stack:
Agent → Authorization Layer → MCP → Tool
        ↑ Verify: Who delegated this capability?
          What constraints apply?
          Is this the designated executor for this transaction?

Tayouri recommends “automated risk-based action approval systems” and “permission detection tools.” These are runtime checks—necessary but insufficient. They detect anomalies after credentials are already distributed.

Capability-based authorization works differently. Instead of giving agents credentials and then monitoring for misuse, you give agents scoped capability tokens that cryptographically encode what they’re allowed to do. The tool-side gateway verifies the token before execution. No valid token, no execution.
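
A minimal sketch of that pattern, assuming an HMAC-signed token; the names (Capability, mintCapability, verifyCapability) are illustrative, not from an existing library.

import { createHmac, timingSafeEqual } from "node:crypto";

interface Capability {
  tool: string;      // which tool the token is valid for
  action: string;    // e.g. "read", never "delete"
  resource: string;  // e.g. "customers/order-12345"
  expiresAt: number; // unix ms; the token is useless after this
}

const SECRET = process.env.CAP_SECRET ?? "dev-only-secret";

const sign = (payload: string): string =>
  createHmac("sha256", SECRET).update(payload).digest("hex");

// The issuer encodes the constraints directly into the signed token.
function mintCapability(cap: Capability): string {
  const payload = JSON.stringify(cap);
  return Buffer.from(payload).toString("base64url") + "." + sign(payload);
}

// The tool-side gateway checks the token before execution: a bad
// signature, an expired token, or a mismatched constraint all mean
// "no execution", with no anomaly detection involved.
function verifyCapability(
  token: string,
  req: { tool: string; action: string; resource: string },
): boolean {
  const [body, mac] = token.split(".");
  if (!body || !mac) return false;
  const payload = Buffer.from(body, "base64url").toString();
  const expected = sign(payload);
  if (mac.length !== expected.length) return false;
  if (!timingSafeEqual(Buffer.from(mac), Buffer.from(expected))) return false;
  const cap: Capability = JSON.parse(payload);
  return (
    cap.tool === req.tool &&
    cap.action === req.action &&
    cap.resource === req.resource &&
    Date.now() < cap.expiresAt
  );
}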

Prompt Injection Meets Capability Constraints

Consider Tayouri’s hidden-content attack scenario. An agent browsing the web encounters injected instructions to “delete all customer records.”

With bearer tokens: The agent has database credentials. It executes the delete. The monitoring system (maybe) catches it after the fact.

With capability chains: The agent has a capability token scoped to “read customer records for order #12345.” The delete transaction doesn’t match the token’s constraints. The gateway rejects it. The attack fails—not because we detected malice, but because the capability didn’t authorize that action. Authorization is evaluated at the transaction boundary, not guessed from context.
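
Using the mint/verify sketch from the previous section, the scenario plays out like this:

// Reusing mintCapability / verifyCapability from the sketch above.
const token = mintCapability({
  tool: "customers-db",
  action: "read",
  resource: "customers/order-12345",
  expiresAt: Date.now() + 60_000, // one-minute lifetime
});

// Legitimate step of the approved workflow:
verifyCapability(token, {
  tool: "customers-db",
  action: "read",
  resource: "customers/order-12345",
}); // true: the gateway forwards the call

// Injected "delete all customer records":
verifyCapability(token, {
  tool: "customers-db",
  action: "delete",
  resource: "customers/*",
}); // false: the token never authorized this, so nothing executes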

This is the difference between detection and prevention. Detection asks “did something bad happen?” Prevention asks “was this action authorized?”

MCP + Authorization

MCP and capability-based authorization aren’t competing. MCP standardizes tool communication. Authorization ensures that communication is legitimate.

Even identity standards bodies acknowledge the gap. MIT Media Lab’s “Authenticated Delegation” paper distinguishes authentication (who) from authorization (what, for this request) and extends OAuth with explicit delegation chains. The OpenID Foundation’s “Identity Management for Agentic AI” admits current IAM “wasn’t designed for recursive delegation chains or the scale of authorization decisions.” Christian Posta sums it up: “Most enterprises can list which users have access… very few can explain why they have it.”

The agent ecosystem needs both:

  1. MCP for tool discovery and invocation
  2. Authorization infrastructure for proving that each invocation is the authorized continuation of a legitimate workflow or transaction

The missing piece is the trust plane: a gateway that enforces capability chains (designated executors, attenuation, expiry, non-replay) before forwarding MCP calls. That’s how you keep MCP’s interoperability while preventing confused-deputy crossings.
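
A sketch of those four checks, with hypothetical types; a real gateway would also verify a signature on each link, as in the token sketch above.

// Trust-plane sketch: each link may only narrow (attenuate) the scope
// of its parent, and the leaf names the one executor allowed to act.
interface ChainLink {
  executor: string; // who may exercise this link
  scope: string[];  // allowed actions; must be a subset of the parent's
  expiresAt: number;
  nonce: string;    // single-use, for non-replay
}

const seenNonces = new Set<string>();

function enforceChain(
  chain: ChainLink[],
  call: { executor: string; action: string },
): boolean {
  let allowed: string[] | null = null;
  for (const link of chain) {
    if (Date.now() >= link.expiresAt) return false;  // expiry
    const prev = allowed;
    if (prev && !link.scope.every((a) => prev.includes(a))) {
      return false;                                  // attenuation only
    }
    allowed = link.scope;
  }
  const leaf = chain[chain.length - 1];
  if (!leaf || !allowed) return false;
  if (leaf.executor !== call.executor) return false; // designated executor
  if (seenNonces.has(leaf.nonce)) return false;      // non-replay
  seenNonces.add(leaf.nonce);
  return allowed.includes(call.action);              // only then forward the MCP call
}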

The first exists. The second is what we’re building.


For how capability chains prevent these attacks, see 5 Ways Your AI Agents Will Get Hacked. For the technical foundation, see Proof of Continuity and Capabilities 101.
