# 79% of Organizations Have No Guardrails for AI Agents
Akto surveyed 100+ CISOs and security leaders. The findings: agents are in production, but inventory, governance, and runtime controls are missing. The gap is now measurable.
Akto just published their State of Agentic AI Security 2025 report, surveying 100+ security leaders—CISOs, Heads of AI Security, and technical architects—at companies ranging from the Fortune 500 to mid-sized firms.
The headline: 69% of organizations are already piloting or running production agent deployments. But the security infrastructure to support them doesn’t exist.
## The Visibility Crisis
The most striking finding: only 21% of organizations have full visibility into agent actions, MCP tool invocations, or data access.
| Gap | % Affected |
|---|---|
| No complete inventory of agents and MCP connections | 79% |
| No security/governance policy for agents | 79% |
| No formal risk assessment in past 12 months | 60% |
| No continuous monitoring of agent-to-agent interactions | 83% |
Suhel Khan, CISO at Chargebee, put it directly:
> Visibility is the biggest gap today. You can’t govern or enforce guardrails if you don’t know what your agents are doing. Without observability, every control is guesswork.
This isn’t surprising. When agents can invoke tools, authenticate with enterprise systems, and trigger downstream actions, they become part of the attack surface—regardless of whether they’re “experimental” or “production-grade.”
## The Top 6 Threats
Security leaders consistently identified six threat categories:
| Rank | Threat | Why It Matters |
|---|---|---|
| 1 | Supply chain risks | Every MCP integration is an unvetted execution surface |
| 2 | Data leakage | Agents touch sensitive systems; data leaks through outputs, logs, cross-agent workflows |
| 3 | Prompt injection | Attackers steer agents into unsafe actions, bypassing guardrails |
| 4 | Uncontrolled autonomous actions | Escalation, infinite loops, unintentional state modification |
| 5 | Regulatory violations | Agents access restricted data without traceability |
| 6 | Agent impersonation | Weak identity boundaries allow token replay and permission spoofing |
Note what’s at the top: supply chain. When an agent connects to an MCP server, it’s executing code from an external source. A compromised integration can redirect agents, steal data, or trigger unsafe actions. This is why MCP security has become a critical concern.
## Current Controls Are Reactive
The report catalogs what organizations have deployed:
| Control | Adoption | Reality |
|---|---|---|
| Logging & behavioral audits | 44% | Post-incident only |
| Policy-based runtime guardrails | 41% | Uneven coverage across tools |
| AI firewalls | 39% | Early, brittle, poorly integrated |
| AI traffic monitoring | 38% | Most lack end-to-end visibility |
| Homegrown tools | 30% | Don’t scale, limited enforcement |
| No controls | ~19% | Unrestricted agent autonomy |
The pattern: organizations can tell you what happened after an incident. They can’t prevent unsafe actions in real time.
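The difference between reactive and preventive controls can be made concrete with a small sketch. Here, every tool call passes a policy check *before* execution; a purely reactive setup would only write the call to an audit log afterwards. The `Policy` and `ToolCall` names and the allowlist/blocked-argument rules are illustrative assumptions, not drawn from any specific product.

```python
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    """One proposed tool invocation by an agent."""
    agent_id: str
    tool: str
    args: dict = field(default_factory=dict)


class Policy:
    """A toy policy: a tool allowlist plus forbidden argument keys."""

    def __init__(self, allowed_tools, blocked_arg_keys=()):
        self.allowed_tools = set(allowed_tools)
        self.blocked_arg_keys = set(blocked_arg_keys)

    def check(self, call):
        if call.tool not in self.allowed_tools:
            return False, f"tool '{call.tool}' not on allowlist"
        leaked = self.blocked_arg_keys & set(call.args)
        if leaked:
            return False, f"blocked argument(s): {sorted(leaked)}"
        return True, "ok"


def guarded_invoke(policy, call, execute):
    """Enforce the policy before the side effect, not after it."""
    ok, reason = policy.check(call)
    if not ok:
        # Denied pre-execution -- the reactive alternative would only
        # surface this in a post-incident log review.
        raise PermissionError(reason)
    return execute(call)
```

The point of the sketch is placement, not sophistication: the check sits between the agent's intent and the side effect, which is exactly where the surveyed organizations' logging-only controls do not.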
Jackie Mak from KPMG US frames the shift required:
> The only way to stay ahead of the curve is to embed security into the entire lifecycle of Agentic AI systems. A ‘secure-by-design’ approach is not just a best practice; it’s a prerequisite for responsible AI adoption.
## Why Teams Are Struggling
| Challenge | % Citing |
|---|---|
| Tools too early/immature | 32% |
| Lack of internal expertise | 24% |
| Integration challenges (SIEM/IAM/DLP don’t map to agent behavior) | 20% |
| Governance gaps (no ownership/policies) | 13% |
| Budget/prioritization | 11% |
The integration challenge deserves attention. Existing security infrastructure—SIEM, IAM, DLP—was built for human-speed, request-response patterns. Agent workflows are recursive, bursty, and span multiple systems in milliseconds. When a single agentic “goal” triggers a fan-out of sub-tasks, database queries, and API calls, legacy systems see something that looks like an attack.
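To see why human-speed heuristics misfire, consider a naive per-identity rate rule of the kind a legacy detection pipeline might apply. The threshold and event shapes below are assumptions for illustration, not taken from any real SIEM: a legitimate agent fan-out trips the rule instantly, while a human session never does.

```python
from collections import deque


def flags_burst(timestamps, max_per_second=20):
    """Return True if any 1-second sliding window exceeds max_per_second
    events -- a stand-in for a human-speed anomaly rule."""
    window = deque()
    for t in sorted(timestamps):
        window.append(t)
        # Drop events older than one second before the current event.
        while t - window[0] > 1.0:
            window.popleft()
        if len(window) > max_per_second:
            return True
    return False


# One agentic "goal" fanning out into 50 sub-task calls within ~100 ms:
agent_burst = [i * 0.002 for i in range(50)]

# A human clicking through a UI at a few-second cadence:
human_session = [i * 2.5 for i in range(24)]
```

Under this rule, `agent_burst` is flagged and `human_session` is not, even though both are benign. Tuning the threshold up to tolerate agents then blinds the rule to genuine abuse, which is the integration dilemma the survey respondents describe.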
## What Security Leaders Want
The report captures what CISOs are prioritizing for 2026:
- 94% plan to evaluate purpose-built agentic AI security platforms
- 60%+ prioritize policy guardrails, runtime enforcement, and AI traffic monitoring
Specifically, security leaders want:
- Full action logs with cryptographic proof
- Strict execution boundaries (least-privilege)
- Sandboxing for untrusted operations
- Strong identity enforcement
- Human-in-the-loop for sensitive operations
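The first item on that list—action logs with cryptographic proof—can be sketched with a standard hash chain: each entry commits to the previous entry's digest, so any retroactive edit breaks verification. This is a generic tamper-evidence construction under my own assumptions, not any vendor's scheme.

```python
import hashlib
import json

GENESIS = "0" * 64  # digest placeholder for the first entry


def append_entry(log, action):
    """Append an action, committing to the previous entry's digest."""
    prev = log[-1]["digest"] if log else GENESIS
    payload = json.dumps({"action": action, "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    log.append({"action": action, "prev": prev, "digest": digest})


def verify_chain(log):
    """Recompute every digest; any edit to any entry fails verification."""
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev:
            return False
        payload = json.dumps({"action": entry["action"], "prev": prev},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True
```

A log like this only proves integrity, not completeness—an attacker who controls the writer can still omit entries—which is why the wish list pairs it with identity enforcement and runtime boundaries rather than treating it as sufficient on its own.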
The phrase that captures the requirement: “make autonomous systems deterministic.”
Autonomy is valuable only when paired with transparency and control. Confidence comes from seeing what agents do, enforcing constraints in real time, tracing decisions end-to-end, and intervening when judgment or compliance demands it.
What’s missing from today’s control sets is transaction-bounded authorization—proving that each tool invocation is the authorized continuation of a specific request, not just “this agent has a key.” That’s where capability chains and Proof of Continuity fit: the permission is the proof, and it travels with the transaction.
## The Governance Vacuum
Perhaps the starkest finding: 79% of organizations have no security or governance policy for how AI agents and MCP connections should be onboarded, permissioned, monitored, or managed.
This means:
- No consistent standards for agent permissions
- No identity or authentication requirements
- No approval workflows for tool or connector onboarding
- No baseline expectations for monitoring or audits
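For teams starting from zero, even a minimal onboarding gate closes part of this vacuum. The required fields below are illustrative, derived from the gaps listed above rather than from any published standard.

```python
# Fields a proposed agent or MCP-connector manifest must declare
# before going live (illustrative policy, not a formal standard).
REQUIRED_FIELDS = {
    "owner",            # accountable team
    "identity",         # how the agent authenticates
    "permissions",      # declared least-privilege scope
    "approver",         # who signed off on onboarding
    "monitoring_sink",  # where its actions are logged
}


def onboarding_gaps(manifest):
    """Return the policy fields a proposed manifest is missing."""
    return REQUIRED_FIELDS - set(manifest)
```

Trivial as it is, a gate like this forces the four missing baselines—permissions, identity, approval, and monitoring—to be answered per agent before deployment rather than reconstructed after an incident.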
Enterprises are deploying autonomous systems into governance vacuums.
## What This Means
The Akto report quantifies what practitioners have been saying anecdotally: agent adoption is outpacing security maturity. The gap is now measurable—79% without guardrails, 83% without continuous monitoring, 60% without risk assessments.
But the report also signals where the industry is headed. When 94% of organizations plan to evaluate dedicated agentic AI security platforms in 2026, that’s market formation. The question isn’t whether agent authorization infrastructure will exist—it’s who will build it.
The organizations that establish agent inventories, privilege policies, and runtime controls now will set the standard for safe autonomy. Those that wait will inherit compounding, silent risk.
For more on the structural security gaps in agent systems, see The Missing Layer. For how capability-based authorization addresses these challenges, see Capabilities 101 and Proof of Continuity.