Why Microsoft Copilot's Audit Log Failure Was Inevitable
The Copilot vulnerability that allowed silent file access exposes a structural flaw: ambient authority plus bolt-on audit logging. Capability chains make that class of bug impossible.
The Microsoft Copilot incident isn’t a weird edge case. It’s the confused deputy in production—and the exact failure mode capability-based authorization is built to prevent.
What Happened
On August 19, 2025, security researcher Zack Korman (CTO at Pistachio) disclosed that Microsoft 365 Copilot could access and summarize files without generating audit log entries. The exploit was embarrassingly simple: tell Copilot not to include a reference link in its response, and it would still read the file—silently, with no audit trail.
From the disclosure:
> Just like that, your audit log is wrong. For a malicious insider, avoiding detection is as simple as asking Copilot.
Microsoft quietly patched it on August 17th. No CVE. No customer notification. Every tenant running Copilot before that date has audit logs they cannot trust.
For teams under HIPAA, SOX, or any control framework that depends on audit integrity, that’s catastrophic.
Timeline
| Date | Event |
|---|---|
| July 4 | Korman discovers the vulnerability |
| July 7 | Microsoft MSRC status: “reproducing” |
| August 2 | Microsoft notifies of August 17 fix date |
| August 14 | Microsoft confirms no plans for public disclosure |
| August 17-18 | Fix deployed; Korman permitted to disclose |
| August 19 | Public disclosure |
Notably, Michael Bargury (CTO at Zenity) had reported the same issue at Black Hat 2024—a full year before Korman’s discovery.
Why This Was Inevitable
An HN commenter put it bluntly:
> As long as not ALL the data the agent has access to is checked against the rights of the current user placing the request, there WILL be ways to leak data.
This is the confused deputy in the wild. Copilot is the deputy. It holds ambient authority to everything a user can touch. Audit logging is a sidecar—bolted on, not built in.
```
User transaction → Copilot (ambient authority) → SharePoint
                        ↓
              Audit log (separate system)
```

The AI can be asked to use a code path that never triggers the logging hook. As long as authorization and audit are separable, attackers will find a gap.
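A minimal Python sketch makes the separable pattern concrete. All names here are illustrative, not Microsoft's actual code: the point is that the read path and the logging hook are independent calls, so any alternate code path reads silently.

```python
AUDIT_LOG = []

def read_file(path):
    # Ambient authority: the deputy can read anything the user can.
    return f"contents of {path}"

def log_access(path):
    # The bolt-on sidecar: a separate, optional call.
    AUDIT_LOG.append(path)

def summarize_with_link(path):
    text = read_file(path)
    log_access(path)        # normal path: audit entry written
    return text

def summarize_without_link(path):
    # Alternate path ("don't include a reference link"): same read, no log.
    return read_file(path)

summarize_with_link("/reports/q3.docx")
summarize_without_link("/reports/q3.docx")
# The file was read twice, but the audit log holds only one entry.
```

Nothing in `read_file` forces the log entry to exist; the gap is structural, not a coding mistake in one handler.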
Patches Won’t Fix the Architecture
The current model stays the same:
- Copilot keeps ambient authority to user data.
- Policy checks and audit logging sit outside the authorization boundary.
- Another code path will bypass the hook, because the hook is optional.
You can whack each mole, but the board is built to spawn more moles.
The Alternative: Capability Chains
Capability-based systems eliminate ambient authority. Each transaction receives a specific, cryptographically signed capability that both authorizes and records the action. Authorization is the audit trail.
```
User transaction → capability token ("summarize /reports/q3.docx", constraints, expiry, nonce)
        ↓
Gateway validates the entire capability chain
        ↓
Access executes only if the chain is valid
        ↓
The chain itself is the audit record
```

Key properties (see Capabilities 101 for the full model):
- No ambient authority. The agent doesn’t “have access”—it must present a capability for each transaction.
- Attenuation only. Capabilities can narrow scope (document X) but can’t widen it (all documents).
- Non-transferable, non-replayable. Each capability is designated to a specific agent or process, bound to a nonce, and expires.
- Authorization equals audit. There’s no parallel logging hook to bypass; the proof you needed to act is the record that you acted.
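These properties can be sketched in a few lines of Python. This is a hedged illustration, not a production design: a real control plane would use asymmetric signatures rather than a shared HMAC key, and the field names (`designated`, `nonce`, `expires`) mirror the example token above.

```python
import hashlib
import hmac
import json
import secrets
import time

KEY = b"control-plane-signing-key"   # illustrative; real systems use asymmetric keys
SEEN_NONCES = set()

def issue_capability(action, resource, agent, ttl=60):
    """Mint a signed, single-use, expiring capability designated to one agent."""
    cap = {"action": action, "resource": resource, "designated": agent,
           "nonce": secrets.token_hex(8), "expires": time.time() + ttl}
    body = json.dumps(cap, sort_keys=True).encode()
    cap["sig"] = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return cap

def validate(cap, agent):
    """Check signature, designee, expiry, and nonce; burn the nonce on success."""
    body = json.dumps({k: v for k, v in cap.items() if k != "sig"},
                      sort_keys=True).encode()
    expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    ok = (hmac.compare_digest(cap["sig"], expected)
          and cap["designated"] == agent
          and cap["expires"] > time.time()
          and cap["nonce"] not in SEEN_NONCES)
    if ok:
        SEEN_NONCES.add(cap["nonce"])   # non-replayable: each nonce is spent once
    return ok

cap = issue_capability("summarize", "/reports/q3.docx", "copilot_ctx_123")
assert validate(cap, "copilot_ctx_123")       # valid exactly once
assert not validate(cap, "copilot_ctx_123")   # replay rejected

cap2 = issue_capability("summarize", "/reports/q3.docx", "copilot_ctx_123")
assert not validate(cap2, "another_agent")    # wrong designee rejected
```

Tampering with any field breaks the signature, and reuse breaks on the nonce, so the agent cannot widen or replay what it was given.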
What Capability Chains Would Have Done
- User asks Copilot to summarize a file.
- Control plane issues a capability: `{action: "summarize", resource: "/reports/q3.docx", constraints: [...], nonce, expires, designated: copilot_ctx_123}`.
- Copilot presents the capability to the gateway.
- Gateway validates the entire chain; if valid, the action executes and the chain is stored.
- No capability? No access. No separate audit hook to skip.
“Don’t log this” becomes meaningless because you can’t access without producing the proof that you accessed.
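A self-contained sketch shows why. The `Gateway` class and its fields are hypothetical, but the structural point is real: recording the chain and granting access are one step, so there is no separate logging call for a prompt to route around.

```python
import secrets
import time

class Gateway:
    """Illustrative gateway where the chain store doubles as the audit log."""

    def __init__(self):
        self.chain_store = []   # the audit trail IS this store
        self.seen = set()

    def execute(self, cap, agent):
        # No valid capability, no access -- there is no other code path.
        if (cap["designated"] != agent
                or cap["expires"] <= time.time()
                or cap["nonce"] in self.seen):
            raise PermissionError("no valid capability, no access")
        self.seen.add(cap["nonce"])
        self.chain_store.append(cap)   # recording is inseparable from granting
        return f"contents of {cap['resource']}"

gw = Gateway()
cap = {"action": "summarize", "resource": "/reports/q3.docx",
       "designated": "copilot_ctx_123", "nonce": secrets.token_hex(8),
       "expires": time.time() + 60}
gw.execute(cap, "copilot_ctx_123")
# Exactly one access, exactly one audit record -- by construction.
```

Asking the agent to skip logging changes nothing, because the agent never calls a logger; the gateway's record is a side effect of the only path to the data.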
The Takeaway
Copilot’s audit gap wasn’t a one-off bug. It was a predictable outcome of ambient authority plus bolt-on observability. As enterprises adopt agentic systems, they’ll keep rediscovering this failure until they move to capability chains with transaction-bounded authority.
If you’re shipping AI agents that touch real data and real money, this isn’t hypothetical. It’s today’s problem.
References
- Korman, Z. (2025): Copilot Broke Your Audit Log, but Microsoft Won’t Tell You
- The Register (2025): Microsoft mum about M365 Copilot on-demand security bypass
- Heise (2025): Microsoft’s Copilot falsified access logs for months
- HN Discussion: news.ycombinator.com/item?id=44957454