
LangChain State of Agent Engineering: Security Is Now a Top Barrier

1,340 practitioners surveyed. 57% have agents in production. Security ranks as the #1 concern for large enterprises. What this means for agent infrastructure.

Tags: market research · langchain · security · production

LangChain just published their State of Agent Engineering report, surveying 1,340 practitioners between November 18 and December 2, 2025. The headline number: 57.3% now have agents in production, up from 51% in their previous survey.

But the more interesting finding is what’s blocking the other 43%—and what’s keeping the 57% up at night.

The Production Barrier Stack

Quality remains the top blocker at 32%, encompassing accuracy, consistency, and hallucinations. No surprise there.

What’s notable is the second-tier blockers:

Barrier     % Citing
Security    24.9% (among enterprises with 2,000+ employees)
Latency     20%

Cost concerns have declined compared to previous surveys—it’s no longer a top barrier.

Security is now the #1 concern for large enterprises. Not quality. Not cost. Security.

This aligns with what we’re hearing from teams deploying agents against production systems. When an agent can issue refunds, modify records, or trigger downstream workflows, “who authorized this action?” becomes a non-negotiable question.

Observability Is Table Stakes

The report shows 89% have implemented observability for their agents, with 62% having detailed tracing. Among production deployments, that jumps to 94% observability and 71.5% full tracing.

This makes sense. You can’t debug what you can’t see. But observability answers “what happened?”—it doesn’t answer “was this action authorized?” or “who delegated this capability?”

Tracing tells you Agent B called the refund API at 3:42pm. It doesn’t tell you whether Agent B was the legitimate continuation of the customer service workflow that Agent A initiated.
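To make that gap concrete, here is a minimal sketch contrasting a trace span with a capability record. All field names here are illustrative assumptions, not the schema of any specific tracing product or authorization system:

```python
# A hypothetical trace span: it records what happened and in what order.
trace_span = {
    "agent": "agent-b",
    "tool": "refund_api",
    "timestamp": "2025-12-02T15:42:00Z",
    "parent_span": "agent-a-session-7",  # shows call lineage, not authority
}

# A hypothetical capability record: it carries a verifiable grant.
capability = {
    "granted_by": "agent-a",
    "grantee": "agent-b",
    "scope": "refund:order-123:<=50",
    "proof": "<signature over the grant>",  # checkable, not merely logged
}

# The trace can show *that* agent-b called the refund API;
# only the capability can prove agent-b was *allowed* to.
assert "proof" not in trace_span
assert "proof" in capability
```

The difference is that the trace is descriptive while the capability is verifiable: anyone holding the signing key material can check the grant after the fact.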

Multi-Model, Multi-Agent Reality

Two numbers stood out:

  • 75%+ use multiple models across their agent deployments
  • 57% rely on prompt engineering + RAG rather than fine-tuning

The multi-model reality means authorization can’t be model-specific. When Agent A generates a capability token using GPT-4’s context, that token needs to remain valid when Agent B, running on Claude, consumes it. Model-agnostic authorization becomes essential.
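A minimal sketch of what model-agnostic verification looks like: the token's claims and signature say nothing about which model minted or consumes it. The shared HMAC secret and the `mint_capability`/`verify_capability` helpers are assumptions for illustration; a real system would likely use asymmetric keys:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for illustration only; production systems
# would use per-issuer asymmetric keys, not a shared HMAC secret.
SECRET = b"illustrative-shared-secret"

def mint_capability(issuer: str, scope: str, constraints: dict) -> dict:
    """Mint a capability whose claims carry no model-specific state."""
    claims = {"issuer": issuer, "scope": scope, "constraints": constraints}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_capability(token: dict) -> bool:
    """Any consumer verifies the same way, whatever model it runs on."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

# Agent A (on one model) mints; Agent B (on another model) verifies.
token = mint_capability("agent-a", "refund", {"max_amount": 50})
assert verify_capability(token)
```

Because verification depends only on the claims and the key material, swapping the model behind either agent changes nothing about whether the token checks out.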

The RAG-over-fine-tuning preference suggests teams want flexibility. They’re building systems that can swap components without retraining. Authorization infrastructure needs the same composability—portable across models, frameworks, and deployment boundaries.

What’s Missing

The report catalogs what teams are building: coding agents, research agents, customer service agents, internal workflow automation. It documents the tools they’re using: LangChain, LangGraph, OpenAI, Anthropic.

What it doesn’t document—because it doesn’t exist yet—is the authorization layer these agents need.

When a customer service agent delegates to a refund agent, which delegates to a payment processor, the current answer is: bearer tokens, API keys, and hope.

Bearer Token Approach:
Agent A (API Key) → Agent B (same API Key) → Payment API
                    ↑ No audit trail of delegation

Capability Chain:
Agent A → grants scoped refund capability → Agent B → executes with proof
                    ↑ Cryptographic chain of authorization

Every agent in the bearer token chain holds the same credentials. Every agent can do anything those credentials permit.

The 24.9% of enterprises citing security as their top barrier aren’t worried about prompt injection (though that’s real). They’re worried about the question that keeps compliance teams awake: “Can you prove this agent was authorized to take this specific action, by this specific principal, under these specific constraints?”

Current infrastructure can’t answer that. Cryptographic capability chains can.


See how Proof of Continuity solves the agent authorization problem in our technical deep-dive. For the threat model that production agents face, see 5 Ways Your AI Agents Will Get Hacked.