Amla Labs · 8 min read

Quickstart: Using Capabilities for Agent Delegation

This guide walks you through implementing capability-based delegation in a multi-agent system. You’ll learn how to create root capabilities, delegate them to sub-agents, and enforce security constraints—all with working code examples.

Prerequisites

# Install the Amla SDK
pip install amla-sdk

# Set up your gateway (see docs for deployment)
# Or use our hosted beta: https://gateway.amlalabs.com

Step 1: Create Your Root Capability

First, initialize the Amla gateway and create a root capability for your orchestrator:

from amla_sdk import AmlaClient, CapabilitySpec
from datetime import timedelta

# Initialize client with gateway URL
client = AmlaClient(
    gateway_url="https://gateway.example.com",
    private_key_path="./keys/root.key",
    public_key_path="./keys/root.pub"
)

# Create root capability for orchestrator
root_capability = client.create_root_capability(
    interfaces=["database:read", "database:write", "api:execute"],
    resources=["*"],  # All resources
    ttl=timedelta(hours=8),
    max_uses=None,  # Unlimited for root
    metadata={
        "agent": "orchestrator",
        "purpose": "document processing pipeline"
    }
)

print(f"Root capability created: {root_capability.id}")
print(f"Token: {root_capability.token[:50]}...")

This gives you a capability that:

  • Can access all resources (*)
  • Has unlimited uses
  • Works for 8 hours
  • Requires no external authorization server

Step 2: Create a Delegated Capability for Your Agent

Now, create a delegated capability for your AI agent without any network calls:

class AIAgent:
    def __init__(self, name, parent_capability):
        self.name = name
        self.parent_capability = parent_capability

        # Create attenuated capability for this agent
        self.capability = parent_capability.attenuate(
            interfaces=["database:read"],  # Reduced: read-only
            resources=["documents", "metadata"],  # Reduced: specific tables only
            ttl_seconds=3600,  # Reduced: 1 hour
            max_uses=100,  # Limited uses
            metadata={
                "agent": name,
                "delegated_from": parent_capability.id
            }
        )

    def execute_task(self, query):
        # Use the attenuated capability to authorize requests
        result = self.capability.authorize_and_execute(
            operation="read",
            resource="documents",
            category="database",
            action=lambda: self._perform_query(query)
        )
        return result

This creates a new capability that:

  • Cannot write (only read interface)
  • Cannot access all tables (only documents and metadata)
  • Has limited uses (100 operations max)
  • Expires sooner (1 hour vs 8 hours)
  • Requires no server communication (cryptographic attenuation)

Step 3: Sub-Agents Can Delegate Further (With Even Fewer Permissions)

Each sub-agent can create its own delegated capabilities:

class DocumentAnalyzer(AIAgent):
    def spawn_extractor(self, section_name):
        # Create even more restricted capability for extractor
        extractor_capability = self.capability.attenuate(
            interfaces=["database:read"],  # Still read-only
            resources=["documents"],  # Further reduced: only documents table
            ttl_seconds=300,  # 5 minutes for sub-tasks
            max_uses=10,  # Very limited uses
            metadata={
                "agent": f"{self.name}-extractor-{section_name}",
                "delegated_from": self.capability.id
            }
        )

        return DataExtractor(
            name=f"{self.name}-extractor-{section_name}",
            capability=extractor_capability
        )

The Security Model: Automatic Privilege Attenuation

The key insight: Each delegation automatically narrows the scope. A child capability:

  ✅ Cannot escalate privileges - mathematically impossible to gain more permissions than the parent
  ✅ Cannot extend expiration - can only expire sooner, never later
  ✅ Cannot increase usage limits - can only have fewer uses
  ✅ Cannot access new resources - can only access a subset of the parent's resources
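Conceptually, each attenuation step is monotone: interfaces and resources can only shrink, expiry can only move earlier, and the use limit can only decrease. Here is a stdlib-only sketch of those checks (illustrative only, not the SDK's implementation; the `Scope` type and `attenuate` helper are invented for this example):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class Scope:
    interfaces: frozenset
    resources: frozenset
    expires_at: datetime
    max_uses: float  # float("inf") = unlimited

def attenuate(parent: Scope, interfaces, resources, ttl: timedelta, max_uses) -> Scope:
    """Return a narrowed child scope, refusing any widening."""
    child_ifaces = frozenset(interfaces)
    child_res = frozenset(resources)
    if not child_ifaces <= parent.interfaces:
        raise ValueError("child cannot add interfaces")
    if not child_res <= parent.resources:
        raise ValueError("child cannot add resources")
    if max_uses > parent.max_uses:
        raise ValueError("child cannot raise the use limit")
    # Expiry can only move earlier, never past the parent's.
    expires = min(parent.expires_at, datetime.now(timezone.utc) + ttl)
    return Scope(child_ifaces, child_res, expires, max_uses)

now = datetime.now(timezone.utc)
root = Scope(frozenset({"database:read", "database:write"}),
             frozenset({"documents", "metadata"}),
             now + timedelta(hours=8), float("inf"))

# Narrowing succeeds: fewer interfaces, fewer resources, shorter life.
child = attenuate(root, {"database:read"}, {"documents"}, timedelta(hours=1), 100)

# Widening fails: the child cannot regain the write interface.
try:
    attenuate(child, {"database:write"}, {"documents"}, timedelta(minutes=5), 10)
except ValueError as e:
    print(f"blocked: {e}")
```

In the real system these invariants are not just in-process checks; as described below, they are bound into the token cryptographically.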

This is enforced cryptographically using Ed25519 signatures and Biscuit tokens. Any attempt to modify the capability breaks the signature:

# ❌ This will raise AttenuationViolationError
try:
    malicious_cap = child_capability.attenuate(
        interfaces=["database:write"],  # ❌ Parent doesn't have this!
        max_uses=1000  # ❌ More than parent's 100!
    )
except AttenuationViolationError as e:
    print(f"Attack prevented: {e}")
    # Error: Child cannot escalate privileges
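Verifying Ed25519 signatures requires a crypto library, but the tamper-evidence property itself is easy to demonstrate with the standard library alone. This HMAC sketch (symmetric, unlike Ed25519, and purely illustrative) shows why any modification to a signed payload invalidates its tag:

```python
import hashlib
import hmac
import json

key = b"demo-signing-key"  # stand-in for the issuer's signing key

def sign(payload: dict) -> bytes:
    # Canonical serialization so the same content always hashes the same way
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).digest()

def verify(payload: dict, tag: bytes) -> bool:
    return hmac.compare_digest(sign(payload), tag)

token = {"interfaces": ["database:read"], "max_uses": 100}
tag = sign(token)
assert verify(token, tag)  # untampered token verifies

token["interfaces"].append("database:write")  # attempted privilege escalation
assert not verify(token, tag)  # signature no longer matches
```

The same idea carries over to the asymmetric case: with Ed25519, anyone holding the public key can check the tag, but only the signer can produce a valid one.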

Practical Example: Document Processing Pipeline

Let’s walk through a real scenario. You have an AI system that processes documents:

1. Main Orchestrator

from amla_sdk import AmlaClient, CapabilitySpec

# Initialize with root capability (key paths omitted here; see Step 1)
orchestrator = AmlaClient(gateway_url="https://gateway.example.com")
root_cap = orchestrator.create_root_capability(
    interfaces=["database:*", "api:*"],
    resources=["*"],
    ttl_seconds=28800,  # 8 hours
    metadata={"agent": "orchestrator"}
)

# Process a document
def process_document(doc_id):
    # Create analyzer with reduced permissions
    analyzer = DocumentAnalyzer(
        name="analyzer",
        parent_capability=root_cap.attenuate(
            interfaces=["database:read", "api:execute"],
            resources=["documents", "metadata", "nlp_service"],
            ttl_seconds=3600,
            max_uses=100
        )
    )

    return analyzer.analyze(doc_id)

2. Document Analyzer

class DocumentAnalyzer:
    def __init__(self, name, parent_capability):
        self.name = name
        self.capability = parent_capability

    def analyze(self, doc_id):
        # Spawn specialized extractors
        text_extractor = self.spawn_extractor("text", max_uses=10)
        metadata_extractor = self.spawn_extractor("metadata", max_uses=5)

        # Each extractor has even more limited access
        text = text_extractor.extract(doc_id)
        meta = metadata_extractor.extract(doc_id)

        return self.combine_results(text, meta)

    def spawn_extractor(self, task_type, max_uses):
        # Create highly restricted capability for worker
        extractor_cap = self.capability.attenuate(
            interfaces=["database:read"],  # Read-only
            resources=["documents"] if task_type == "text" else ["metadata"],
            ttl_seconds=300,  # 5 minutes
            max_uses=max_uses
        )

        return DataExtractor(
            name=f"{self.name}-{task_type}",
            capability=extractor_cap
        )

3. Data Extractors

class DataExtractor:
    def __init__(self, name, capability):
        self.name = name
        self.capability = capability

    def extract(self, doc_id):
        # This will be tracked and limited
        return self.capability.authorize_and_execute(
            operation="read",
            resource="documents",
            category="database",
            additional_facts={"doc_id": doc_id},
            action=lambda: self._fetch_data(doc_id)
        )

    def _fetch_data(self, doc_id):
        # Actual database query here
        # If this agent is compromised, attacker can only:
        # - Read (not write)
        # - Access specific tables (not all)
        # - Make 10 requests max (then token exhausted)
        # - For 5 minutes (then token expired)
        pass

The Audit Trail

Every action creates a cryptographically verifiable audit trail:

# Query audit logs
audit_logs = client.get_audit_trail(capability_id=root_cap.id)

for log in audit_logs:
    print(f"""
    Agent: {log.agent}
    Action: {log.operation} on {log.resource}
    Delegation Chain: {' → '.join(log.delegation_chain)}
    Uses: {log.uses_count}/{log.max_uses}
    Status: {'✅ Authorized' if log.success else '❌ Denied'}
    Reason: {log.error_message or 'Success'}
    """)

Output:

Agent: text-extractor
Action: read on documents
Delegation Chain: root → analyzer → text-extractor
Uses: 3/10
Status: ✅ Authorized

Agent: text-extractor
Action: write on documents
Delegation Chain: root → analyzer → text-extractor
Uses: 4/10
Status: ❌ Denied
Reason: Capability lacks 'database:write' interface
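The gateway's exact log format isn't shown here, but a common way to make an audit trail tamper-evident is to hash-chain entries, so rewriting any past record invalidates every hash after it. A stdlib sketch of that idea (the `append_entry`/`verify_chain` helpers are invented for illustration):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

def append_entry(chain, entry):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"entry": entry, "hash": digest})

def verify_chain(chain) -> bool:
    """Recompute every hash; any edited entry breaks the chain."""
    prev = GENESIS
    for record in chain:
        body = json.dumps(record["entry"], sort_keys=True)
        if hashlib.sha256((prev + body).encode()).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

log = []
append_entry(log, {"agent": "analyzer", "op": "read", "resource": "documents"})
append_entry(log, {"agent": "text-extractor", "op": "read", "resource": "documents"})
assert verify_chain(log)

log[0]["entry"]["op"] = "write"  # attempt to rewrite history
assert not verify_chain(log)     # every later hash is now inconsistent
```

A production system would additionally sign the chain head, so an attacker cannot simply recompute all the hashes after an edit.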

Error Handling

Capabilities can be verified offline, making it easy to check delegation chains:

try:
    # Verify locally without network call
    is_valid = capability.verify_signature(public_key)

    if not is_valid:
        raise SecurityError("Invalid signature - token tampered")

    if capability.is_expired():
        raise SecurityError("Capability expired")

    if capability.is_exhausted():
        raise SecurityError("Usage limit exceeded")

    # Proceed with operation
    result = capability.authorize(...)

except AttenuationViolationError as e:
    print(f"Privilege escalation attempt detected: {e}")
except SecurityError as e:
    print(f"Security check failed: {e}")

Next Steps

Now that you understand the basics, explore the rest of the Amla documentation, including gateway deployment and the related guides.

Interested in Amla Labs?

We're building the future of AI agent security with capability-based credentials. Join our design partner program or star us on GitHub.