
Context Engineering for Agent Trust

Everyone's optimizing what agents know. Nobody's solving what agents are permitted to do. The context engineering revolution is incomplete without trust.

Tags: context-engineering · security · authorization · multi-agent · architecture

The AI infrastructure community has discovered context engineering.

The signs are everywhere: Karpathy endorsed the term, calling it “the delicate art and science of filling the context window with just the right information.” LangChain published a defining piece. Anthropic released engineering guides on building effective agents. Academia is catching up—one paper traces the field’s 20-year evolution from GUI-era sensors to agentic LLMs, another introduces Agentic Context Engineering as a formal framework, treating contexts as “evolving playbooks” that accumulate and refine over time. Weaviate published an entire ebook codifying six core components: agents, query augmentation, retrieval, prompting techniques, memory, and tools.

The core insight is sound: context engineering is entropy reduction. You’re transforming high-entropy human signals into low-entropy machine-interpretable representations. The better your context, the better your agent performs.

But there’s a blind spot in this conversation. A big one.

Everyone’s engineering operational context—what agents know and remember. Nobody’s engineering trust context—what agents are permitted to do and why.

The Four Eras of Context Engineering

The GAIR-NLP paper maps context engineering across four eras:

Era   Paradigm                 Example
1.0   Context as Translation   GUIs, location sensors
2.0   Context as Instruction   Prompt engineering, few-shot
3.0   Context as Scenario      Agents understand goals
4.0   Context as World         AI builds your environment

We’re in the 2.0 → 3.0 transition. The jump from “context-aware” to “context-cooperative” systems. Memory architectures like MemGPT treat the context window as virtual memory. RAG systems retrieve relevant knowledge on demand. Long-context models push the boundaries of what fits in a single pass.

The ACE researchers identified two failure modes in current approaches: brevity bias (systems discard domain-specific insights when summarizing) and context collapse (iterative rewrites progressively erode detailed information). Their solution: structured, incremental updates that preserve knowledge across iterations.
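To make those failure modes concrete, here's a minimal sketch (our illustration, not the ACE authors' implementation; the class and field names are ours) contrasting summarize-and-rewrite with structured, incremental updates:

from dataclasses import dataclass, field

@dataclass
class Insight:
    topic: str
    detail: str  # the domain-specific knowledge worth preserving

@dataclass
class Playbook:
    insights: list[Insight] = field(default_factory=list)

    def rewrite(self, summary: str) -> None:
        # Brevity bias and context collapse: each rewrite replaces the
        # whole playbook with one summary, eroding detail over time.
        self.insights = [Insight(topic="summary", detail=summary)]

    def update(self, new_insights: list[Insight]) -> None:
        # ACE-style structured, incremental update: append new entries;
        # existing domain-specific insights survive every iteration.
        self.insights.extend(new_insights)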

This is real progress. But it’s solving half the problem.

The Missing Dimension: Authorization Context

Consider what happens when your refund agent spawns a sub-agent to verify a customer’s purchase history, which then queries an inventory service, which then calls a payment gateway.

At each hop, the system needs to answer two questions:

  1. What does this agent know? (Operational context)
  2. What is this agent permitted to do? (Trust context)

The entire context engineering discourse focuses on question one. Memory management. Retrieval augmentation. Prompt optimization. Context window gymnastics. Weaviate’s 41-page ebook covers all six components in detail—chunking strategies, semantic vs. episodic memory, the Thought-Action-Observation cycle for tool orchestration—and the word “authorization” appears exactly zero times.

Question two gets a hand-wave: “Use OAuth.” “Add an API key.” “The orchestrator handles it.”

But bearer tokens can’t safely traverse message queues—they carry no transaction context and work for anyone who possesses them. API keys grant static permissions regardless of transaction state. And orchestrators create bottlenecks that limit the autonomy many agent architectures require.

The trust context problem is unsolved.

And it suffers from the same failure modes the ACE researchers identified for operational context. Bearer tokens exhibit brevity bias—they collapse rich authorization context (“refund up to $200 for order #12345, because the customer requested it”) into a single scope string (“refunds:write”). Multi-hop delegation causes context collapse—each handoff loses transaction-specific information until only a generic permission remains.
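To see that collapse in data terms, compare the two representations side by side (the field names are illustrative, not any standard token format):

# What survives in a bearer token: one flat scope string.
oauth_grant = {"scope": "refunds:write"}

# The authorization context the scope discarded. A capability-style
# grant carries all of it through every hop of the delegation.
structured_grant = {
    "action": "refund",
    "max_amount_usd": 200,
    "order_id": "12345",
    "reason": "customer_request",
    "granted_by": "customer-service-gateway",
    "designated_executor": "verification-agent",
}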

What Trust Context Actually Requires

Operational context needs to answer: What information is relevant to this transaction?

Trust context needs to answer something harder: Why is this agent authorized to act, right now, in this specific transaction?

That “why” has structure:

  • Provenance: Who originally granted this authority?
  • Delegation chain: How did authority flow to this agent?
  • Constraints: What limits accumulated along the way?
  • Continuity: Is this agent the legitimate next step?
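One way to picture that structure is as data. A hedged sketch (the field names are ours, not a standard):

from dataclasses import dataclass

@dataclass
class DelegationBlock:
    issuer: str               # who granted authority at this hop
    designated_executor: str  # continuity: who may legitimately act next
    constraints: dict         # limits added at this hop; they can only tighten
    signature: bytes          # cryptographically binds this block to the chain

@dataclass
class CapabilityChain:
    root_grantor: str              # provenance: who originally granted authority
    blocks: list[DelegationBlock]  # the delegation chain, in hop order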

Traditional auth systems collapse all of this into a binary: valid token or not. They answer “what do you possess?” when the real question is “who are you in this transaction?”

This is the confused deputy problem at scale. And multi-agent choreography makes it impossible to ignore.

From Context-Aware to Context-Cooperative Authorization

The paper’s era framework applies to authorization too:

Era 1.0: Static ACLs. Permissions hardcoded in config files. No awareness of transaction state.

Era 2.0: OAuth Scopes. Tokens carry permission claims. Better, but scopes are coarse and tokens are bearer instruments—anyone who possesses them can use them.

Era 3.0: Capability Chains. Authorization context travels with the transaction. Permissions attenuate at each hop. The system understands why this agent is authorized, not just that it's authorized.

Era 4.0: Self-Attenuating Agents. Agents derive minimal permissions from transaction descriptions. Authorization context emerges from the goal, not from pre-configured policies.

Most enterprise systems are stuck at Era 2.0. Some are experimenting with 3.0. The gap between operational context (rapidly advancing) and trust context (largely static) is widening.

What Context-Cooperative Authorization Looks Like

Here’s the concrete difference.

Era 2.0 (OAuth):

Agent A receives: Bearer token with "refunds:write" scope
Agent A passes to Agent B: Same bearer token
Gateway checks: Is token valid? Does scope match?

The token carries no transaction context. If intercepted, any bearer can use it. If the original authorization was for “refund up to $200 for order #12345,” that context is lost.

Era 3.0 (Capability Chains):

Agent A receives: Capability chain with:
  - Root: Gateway grants refund capability, ≤$500, designates Agent A
  - Constraints: order_id=12345, reason=customer_request

Agent A delegates to Agent B: Extended chain with:
  - Block 2: Agent A attenuates to ≤$200, designates Agent B

Gateway checks:
  1. Is chain cryptographically valid?
  2. Is requester the designated executor?
  3. Do accumulated constraints permit this action?

The authorization context travels with the transaction. Constraints accumulate (they can only tighten, never loosen). The designated executor is bound into the chain—interception is useless because attackers can’t prove they’re the continuation.
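A minimal sketch of the gateway's three checks, reusing the chain structure sketched earlier (verify_signatures is a hypothetical helper standing in for the cryptographic validation):

def effective_limit(chain: CapabilityChain) -> float:
    # Constraints accumulate: the binding limit is the tightest
    # max_amount_usd declared anywhere along the chain.
    limits = [b.constraints.get("max_amount_usd") for b in chain.blocks]
    return min((l for l in limits if l is not None), default=float("inf"))

def authorize(chain: CapabilityChain, requester: str, amount: float) -> bool:
    if not verify_signatures(chain):                       # 1. chain is valid
        return False
    if requester != chain.blocks[-1].designated_executor:  # 2. requester is the
        return False                                       #    designated executor
    return amount <= effective_limit(chain)                # 3. constraints permit it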

This is trust context engineering.

Why This Matters Now

Three converging pressures:

1. Choreography is winning. Orchestration doesn’t scale. The industry is moving toward event-driven, message-based agent communication. But OAuth was designed for request-response, not pub-sub—the authorization ceremony requires redirects, synchronous handshakes, and HTTP flows that don’t exist in message queues. The auth model breaks.

2. Agents are crossing boundaries. A healthcare referral agent talks to an insurance verification agent talks to a scheduling agent. Different organizations, different trust domains. You can’t just pass tokens around.

3. Regulators are watching. Article 12 of the EU AI Act requires record-keeping that ensures “traceability” for high-risk AI systems—the ability to reconstruct how a decision was reached. SOC 2 CC6.1 requires logical access controls with audit trails. When your agents cross organizational boundaries to make consequential decisions, “the bearer token was valid” doesn’t satisfy either requirement.

The Integration Point

Context engineering for operations and context engineering for trust aren’t competing approaches. They’re complementary layers.

Your MemGPT-style memory system manages what the agent knows across sessions. Your RAG pipeline retrieves relevant information for the current transaction. Your capability chain manages what the agent is permitted to do in this transaction.

The operational context says: “Based on the customer’s history and our refund policy, a $150 refund is appropriate.”

The trust context says: “This agent is cryptographically authorized to issue refunds up to $200 for this specific order, as the designated continuation of a workflow initiated by an authenticated customer service transaction.”
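In a message-based system, the two layers can literally ride in the same envelope. A sketch (the format and values are illustrative):

message = {
    "task": "issue_refund",
    "operational_context": {
        # what the agent knows: retrieved history, policy, reasoning
        "customer_history_summary": "3 orders, no prior refunds",
        "policy_basis": "standard refund policy",
        "recommended_amount_usd": 150,
    },
    "trust_context": {
        # what the agent may do: the capability chain, serialized so it
        # survives the queue and can be verified at the next hop
        "capability_chain": "<signed chain: refunds up to $200, order 12345>",
    },
}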

Both are necessary. Only one is getting serious engineering attention.

What We’re Building

At Amla, we’re building the trust context layer for multi-agent systems. Cryptographic capability chains that:

  • Carry authorization context through async workflows
  • Attenuate permissions at each delegation
  • Bind execution rights to designated agents
  • Provide auditable provenance for every action

We call the core primitive Proof of Continuity—because the question isn’t “what do you possess?” but “are you the authorized continuation of this transaction?”
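As a hedged sketch of what such a check can look like (not Amla's implementation; it assumes the chain's final block carries the executor's Ed25519 public key bytes, and uses the PyNaCl library):

from nacl.exceptions import BadSignatureError
from nacl.signing import VerifyKey

def proves_continuity(chain, challenge: bytes, response_sig: bytes) -> bool:
    # Possession proof: the requester signs a fresh challenge with the
    # key the chain designates. A stolen chain is useless without it.
    key = VerifyKey(chain.blocks[-1].designated_executor)
    try:
        key.verify(challenge, signature=response_sig)
        return True
    except BadSignatureError:
        return False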

The context engineering revolution is real. But it’s incomplete without trust. Agents that know everything but can do anything aren’t intelligent systems—they’re liabilities.


For the technical deep-dive on how capability chains work, see Proof of Continuity. For the threat model, see 5 Ways Your AI Agents Will Get Hacked.