Least-Agency Provisioning and Just-in-Time Capability Governance requires that AI agents be provisioned with the minimum set of capabilities, permissions, and system access required for their current task — and no more. Capabilities that are not needed for the immediate operation must not be available to the agent. Capabilities that are needed temporarily must be granted just in time, scoped to the specific task, and automatically revoked on task completion or timeout. Standing privileges — broad, persistent access granted at deployment and never reduced — are the primary attack surface for AI agent exploitation. This dimension eliminates standing privileges by requiring that every capability grant be justified, scoped, time-bounded, and automatically reversible.
Scenario A — Standing Database Privileges Enable Data Exfiltration: An enterprise workflow agent is deployed to process expense reports. At deployment, the agent is granted read-write access to the full financial database to simplify integration. The agent needs access to only the expense_claims table and the employee_profiles table (read-only for employee details, write for expense claim status updates). Eight months after deployment, a prompt injection attack instructs the agent to query the salary_bands table and the executive_compensation table, then format the results as a CSV and attach them to an email. The agent complies because it has standing read access to all tables. The data is exfiltrated to an external email address. The breach affects 2,300 employees including 47 senior executives.
What went wrong: The agent was provisioned with broad database access at deployment — far exceeding what was needed for expense processing. The standing privileges were never reviewed or reduced. The prompt injection exploited the gap between needed access (2 tables) and provisioned access (entire database). Consequence: GDPR breach notification affecting 2,300 data subjects. ICO investigation. £890,000 in legal, notification, and remediation costs. Executive compensation data published online within 48 hours.
Scenario B — Just-in-Time Provisioning Contains Blast Radius: A procurement agent needs to access the supplier database, the purchase order system, and the payment gateway. Under least-agency provisioning, the agent has no standing access to any of these systems. When a purchase request arrives, the workflow engine grants the agent read access to the supplier database for the specific supplier referenced in the request, write access to the purchase order system for the specific order, and no payment gateway access (payment requires separate approval). The grants expire after 15 minutes or upon task completion, whichever comes first. When the same agent is targeted by a prompt injection attempting to modify payment routing details, the attack fails because the agent has no current access to the payment gateway — the capability was never granted for this task.
What went right: Just-in-time provisioning ensured the agent had only the capabilities needed for the current task. The attempted attack targeted a capability the agent did not possess. The blast radius of any successful attack was bounded by the narrow, time-limited capability grant.
Scenario C — Capability Accumulation Through Pipeline Progression: A multi-agent pipeline processes loan applications. Agent 1 (intake) gathers applicant data. Agent 2 (credit check) queries the credit bureau. Agent 3 (underwriting) makes the approval decision. Agent 4 (fulfilment) disburses funds. Each agent is provisioned independently, but a design flaw grants each agent the cumulative capabilities of all preceding agents — Agent 4 has access to applicant data gathering, credit bureau queries, underwriting decisions, and fund disbursement. When Agent 4 is compromised, the attacker has access to the full pipeline's capabilities. They submit 12 fraudulent loan applications, process them through credit checks using real applicant data, approve them at the underwriting stage, and disburse £1.4 million.
What went wrong: Capabilities accumulated through the pipeline rather than being scoped to each agent's function. Agent 4 needed only disbursement capability but had inherited capabilities from all upstream agents. Consequence: £1.4 million in fraudulent disbursements. 12 affected individuals whose credit data was misused. FCA enforcement action for inadequate systems and controls.
Scope: This dimension applies to every AI agent that accesses any system, data store, API, communication channel, or resource. The scope includes all forms of capability: database access, API permissions, network access, file system access, inter-agent communication privileges, and any other form of system access or operational authority. The scope extends to both direct capabilities (the agent's own credentials and permissions) and indirect capabilities (access inherited through pipeline position, delegated by other agents, or granted through shared infrastructure). Development, staging, and testing environments are within scope when they share infrastructure, credentials, or data with production systems.
4.1. A conforming system MUST provision each agent with the minimum capabilities required for its defined function and MUST NOT grant capabilities beyond those required for the agent's current operational scope.
4.2. A conforming system MUST implement just-in-time capability provisioning, granting capabilities only when needed for a specific task and revoking them automatically upon task completion or expiry of a defined time window.
4.3. A conforming system MUST enforce a maximum capability grant duration for each capability type, after which the capability is automatically revoked regardless of task status: financial system access, maximum 15 minutes; data store read access, maximum 30 minutes; communication channel access, maximum 10 minutes; all other capabilities, maximum 60 minutes.
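The duration ceilings in 4.3 can be enforced as a policy table consulted at grant time. The sketch below is illustrative, not normative: the capability type names and function name are assumptions, and a real system would attach this check to its grant pathway.

```python
from datetime import timedelta

# Ceilings from requirement 4.3; the type names are illustrative assumptions.
MAX_GRANT_DURATION = {
    "financial_system": timedelta(minutes=15),
    "data_store_read": timedelta(minutes=30),
    "communication_channel": timedelta(minutes=10),
}
DEFAULT_CEILING = timedelta(minutes=60)  # "all other capabilities"

def clamp_grant_duration(capability_type: str, requested: timedelta) -> timedelta:
    """Return the effective grant duration: the requested duration,
    capped at the ceiling defined for this capability type."""
    ceiling = MAX_GRANT_DURATION.get(capability_type, DEFAULT_CEILING)
    return min(requested, ceiling)
```

Clamping at grant time (rather than trusting the requester) means an over-broad request is silently narrowed instead of creating an over-long grant.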
4.4. A conforming system MUST maintain an immutable log of all capability grants and revocations, including: the capability granted, the justification (task reference), the grant timestamp, the planned revocation timestamp, and the actual revocation timestamp.
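One way to capture the fields listed in 4.4 is an append-only record per grant event. The frozen dataclass below is a sketch under assumptions: field and function names are illustrative, and immutability of the log store itself (requirement: an immutable log) is left to the logging backend, since a frozen record only prevents in-place mutation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)  # frozen: a record cannot be altered after creation
class CapabilityGrantRecord:
    capability: str                 # the capability granted
    task_reference: str             # justification: the task that required it
    granted_at: datetime            # grant timestamp
    planned_revocation: datetime    # scheduled expiry of the grant
    # Revocation is recorded by appending a completed record, not by
    # editing this one; None means the grant is still active.
    actual_revocation: Optional[datetime] = None

def record_grant(log: list, capability: str, task_ref: str,
                 granted_at: datetime, planned_revocation: datetime) -> CapabilityGrantRecord:
    """Append a grant record to the log and return it."""
    rec = CapabilityGrantRecord(capability, task_ref, granted_at, planned_revocation)
    log.append(rec)
    return rec
```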
4.5. A conforming system MUST prevent capability accumulation in multi-agent pipelines: each agent's capabilities MUST be independently scoped and MUST NOT inherit or accumulate from upstream agents.
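Requirement 4.5 can be checked structurally: each pipeline stage carries its own capability profile, and a grant request is validated only against the requesting agent's profile, never against the union of upstream profiles. A minimal sketch follows; the agent and capability names echo Scenario C but are illustrative assumptions.

```python
# Per-agent capability profiles; names are illustrative.
PIPELINE_PROFILES = {
    "intake":       {"applicant_data:read", "applicant_data:write"},
    "credit_check": {"credit_bureau:query"},
    "underwriting": {"underwriting:decide"},
    "fulfilment":   {"disbursement:execute"},
}

def authorise(agent: str, capability: str) -> bool:
    """Authorise only capabilities in the agent's own profile.
    Upstream profiles are never consulted, so nothing accumulates
    as work progresses through the pipeline."""
    return capability in PIPELINE_PROFILES.get(agent, set())
```

Under this structure, a compromised fulfilment agent cannot reach credit bureau or underwriting capabilities, bounding the blast radius described in Scenario C.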
4.6. A conforming system MUST implement automatic capability revocation that does not depend on the agent requesting revocation — the infrastructure layer revokes capabilities on schedule regardless of agent state.
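Requirement 4.6 places revocation in the infrastructure layer: a scheduled sweep compares each active grant's expiry against the clock and revokes overdue grants whether or not the agent is responsive. The sketch below assumes a simple in-memory grant table; a production system would revoke tokens or permissions in the underlying systems.

```python
from datetime import datetime

def revoke_expired(active_grants: dict, now: datetime) -> list:
    """Revoke every grant whose expiry has passed. Runs in the
    infrastructure layer on a schedule; it never asks the agent
    before revoking, so a hung or compromised agent cannot delay it."""
    revoked = []
    for grant_id, grant in list(active_grants.items()):
        if grant["expires_at"] <= now:
            del active_grants[grant_id]  # the permission is withdrawn here
            revoked.append(grant_id)
    return revoked
```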
4.7. A conforming system SHOULD implement capability request review for high-risk capabilities, requiring human or governance-system approval before granting capabilities that exceed defined risk thresholds (e.g., access to payment systems, personal data stores, or critical infrastructure controls).
4.8. A conforming system SHOULD conduct periodic capability audits comparing each agent's provisioned capabilities against its actual usage, identifying and removing unused capabilities.
4.9. A conforming system MAY implement adaptive capability provisioning that adjusts the scope and duration of capability grants based on real-time risk signals (e.g., narrower grants during detected anomalies).
The principle of least privilege is well-established in information security. Least-agency extends this principle to the specific challenges of AI agents: agents that reason, adapt, and can be manipulated into using capabilities for purposes other than those intended. The extension matters because AI agents create a fundamentally different threat model from traditional software.
Traditional software uses capabilities in deterministic, predictable ways — a database query service queries the database in the ways its code specifies. An AI agent uses capabilities based on its reasoning, which can be influenced by inputs, prompt injections, jailbreaking, and emergent behaviour. An agent with access to a payment system and a communication system might be manipulated into using the payment capability to fund an account and the communication capability to send the account details to an external party. Neither capability was intended for this combined use, but both are available because they were provisioned as standing privileges.
Just-in-time provisioning transforms the security posture from "the agent always has these capabilities and we hope it uses them correctly" to "the agent has only the capabilities needed for the current task and they expire automatically." This reduces the blast radius of any agent compromise from the full set of standing privileges to the narrow set of just-in-time grants active at the moment of compromise.
The time-bounded nature of just-in-time grants is critical. An agent compromise that occurs at 14:23 has access only to capabilities granted for tasks active at 14:23. Capabilities for tasks completed at 14:15 have already been revoked. Capabilities for tasks not yet initiated have not yet been granted. The window of exposure is measured in minutes rather than months.
Standing privileges also create governance drift. Capabilities granted at deployment are rarely reviewed, rarely reduced, and frequently expanded through incremental requests. Over the agent's lifetime, standing privileges tend to grow — each new use case adds capabilities, but previous capabilities are never removed. Least-agency provisioning eliminates this drift by requiring that every capability grant be justified against a specific, current task.
The implementation requires a capability broker that mediates all agent access to systems and resources. The agent does not hold credentials directly — the capability broker holds credentials and grants scoped, time-bounded access tokens to the agent upon verified request.
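A capability broker of this kind can be sketched as a mediator that holds the real credentials and issues scoped, expiring access tokens. Everything below (class and method names, the token format) is an illustrative assumption, not a reference implementation: a real broker would verify the requesting task, consult the duration policy, and back tokens with the target systems' own access controls.

```python
import secrets
from datetime import datetime, timedelta, timezone

class CapabilityBroker:
    """Holds the real credentials; agents never see them. Agents
    receive only opaque tokens scoped to one capability and expiring
    at a fixed time."""

    def __init__(self):
        self._tokens = {}  # token -> (agent, capability, expires_at)

    def grant(self, agent: str, capability: str, ttl: timedelta) -> str:
        """Issue a scoped, time-bounded token for a verified request."""
        token = secrets.token_urlsafe(16)
        expires_at = datetime.now(timezone.utc) + ttl
        self._tokens[token] = (agent, capability, expires_at)
        return token

    def check(self, token: str, capability: str) -> bool:
        """A token is valid only for the capability it names
        and only before its expiry."""
        entry = self._tokens.get(token)
        if entry is None:
            return False
        _, granted_cap, expires_at = entry
        return granted_cap == capability and datetime.now(timezone.utc) < expires_at

    def revoke(self, token: str) -> None:
        """Infrastructure-side revocation; idempotent."""
        self._tokens.pop(token, None)
```

Because the agent holds only the token, revocation is a broker-side operation: deleting the entry invalidates the agent's access immediately, with no cooperation from the agent required.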
Recommended patterns:
Anti-patterns to avoid:
Financial Services. Financial system access should use the shortest practical grant durations. Payment gateway access: maximum 5 minutes per transaction. Trading system access: maximum per-trade grants with automatic revocation after execution or cancellation. Credit bureau queries: single-use tokens that expire after the query response is received. The FCA expects that AI agent access controls are at least equivalent to human trader access controls, which typically enforce per-transaction authorisation.
Healthcare. HIPAA's minimum necessary standard maps directly to least-agency provisioning. An AI agent accessing patient records must access only the minimum necessary information for the current clinical task. Just-in-time provisioning should grant access to specific patient records (not the full patient database) for specific data fields (not the full record) for the duration of the clinical interaction.
Crypto/Web3. Wallet signing capabilities are particularly sensitive. An agent should never have standing access to wallet signing keys. Just-in-time provisioning should grant signing capability for a specific transaction, for a specific amount, to a specific recipient. The signing key should be held in a hardware security module (HSM) and the agent receives a single-use signing authorisation rather than the key itself.
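The single-use signing authorisation described above can be modelled as a token bound to one transaction (amount and recipient) that is consumed on use. The sketch below stands in for a real HSM front-end, which it is not; names and the transaction fields are illustrative assumptions.

```python
import secrets

class SingleUseSigningAuthority:
    """Stands in for an HSM front-end: the signing key never leaves
    this object, and each authorisation is bound to exactly one
    transaction and consumed on first use."""

    def __init__(self):
        self._pending = {}  # auth_id -> (amount, recipient)

    def authorise(self, amount: int, recipient: str) -> str:
        """Issue a single-use authorisation for one specific transaction."""
        auth_id = secrets.token_hex(8)
        self._pending[auth_id] = (amount, recipient)
        return auth_id

    def sign(self, auth_id: str, amount: int, recipient: str) -> bool:
        """Sign only if the request matches the authorised transaction
        exactly; the authorisation is consumed whether or not it matches."""
        expected = self._pending.pop(auth_id, None)
        return expected == (amount, recipient)
```

Binding the authorisation to amount and recipient means a manipulated agent cannot redirect an approved payment: any deviation from the authorised transaction fails, and replay fails because the authorisation is already consumed.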
Basic Implementation — Each agent has a documented capability profile specifying its intended access. Capabilities are provisioned at deployment based on the profile. Standing privileges exist but are aligned with the capability profile. Capability grants are logged. No automated revocation exists — capabilities persist until manually removed. Periodic manual reviews (e.g., quarterly) identify and remove unused capabilities.
Intermediate Implementation — A capability broker mediates all agent access. Just-in-time provisioning grants capabilities for specific tasks with defined maximum durations. Automatic revocation occurs at expiry regardless of agent state. Capability grants are logged with task references and justifications. Pipeline agents have independently scoped capabilities. Capability usage monitoring identifies grants that exceed actual usage and recommends narrowing.
Advanced Implementation — All intermediate capabilities plus: capability grants for high-risk resources require governance-system approval. Adaptive provisioning adjusts grant scope and duration based on real-time risk signals. Formal verification has confirmed that no agent can accumulate capabilities beyond its current task scope. Hardware security modules protect high-value credentials. The capability broker has been independently tested for bypass vulnerabilities and all tests passed.
Required artefacts:
Retention requirements:
Access requirements:
Testing AG-162 compliance requires verification that capabilities are minimally scoped, time-bounded, and automatically revoked.
Test 8.1: Minimum Capability Verification
Test 8.2: Just-in-Time Grant Timing
Test 8.3: Maximum Duration Enforcement
Test 8.4: Pipeline Capability Isolation
Test 8.5: Revocation Independence from Agent
Test 8.6: Capability Grant Logging Completeness
Test 8.7: Credential Broker Bypass Prevention
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| EU AI Act | Article 15 (Accuracy, Robustness and Cybersecurity) | Direct requirement |
| GDPR | Article 25 (Data Protection by Design and by Default) | Direct requirement |
| GDPR | Article 32 (Security of Processing) | Supports compliance |
| HIPAA | Minimum Necessary Standard (45 CFR 164.502(b)) | Direct requirement |
| FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance |
| NIST AI RMF | MANAGE 2.2 (Risk Controls) | Supports compliance |
| DORA | Article 9 (ICT Risk Management Framework) | Supports compliance |
Article 25 requires that by default, only personal data which is necessary for each specific purpose of the processing is processed. This maps directly to least-agency provisioning: an AI agent should not have access to personal data beyond what is necessary for its current task. Just-in-time provisioning ensures that the agent's access to personal data is bounded by the current task's scope and duration — satisfying the "by default" requirement that data access is minimised rather than maximised.
The HIPAA minimum necessary standard requires covered entities to make reasonable efforts to limit the use of, disclosure of, and requests for protected health information to the minimum necessary to accomplish the intended purpose. For AI agents in healthcare, this means the agent must not have standing access to the full patient database. Least-agency provisioning grants access to specific patient records, specific data fields, for specific clinical tasks, for specific durations — the structural implementation of the minimum necessary standard.
Article 15 requires high-risk AI systems to be resilient against attempts by unauthorised third parties to exploit system vulnerabilities. Over-provisioned capabilities are a vulnerability: an agent with access to 50 systems when it needs 3 has a 50-system attack surface. Least-agency provisioning reduces the attack surface to only the systems needed for the current task, directly improving the system's cybersecurity posture as required by Article 15.
DORA requires financial entities to manage ICT risks proportionally. Standing privileges for AI agents represent an ICT risk that is disproportionate to the agent's operational needs. Just-in-time provisioning implements proportional access control — the agent's access footprint matches its operational needs at every point in time.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Proportional to the over-provisioning gap — the wider the gap between provisioned and needed capabilities, the larger the blast radius of any agent compromise |
Consequence chain: Over-provisioned capabilities directly determine the blast radius of any agent compromise. A prompt injection attack against an agent with standing access to 50 systems can exploit any of those 50 systems. The same attack against an agent with just-in-time access to 2 systems for 15 minutes can exploit only those 2 systems within the 15-minute window. The financial impact scales accordingly: the expense processing agent with full database access enabled exfiltration of 2,300 employee records; the same agent with just-in-time access to 2 tables could not have reached the compensation data. Standing privileges also create cascading risk in multi-agent environments: an agent with accumulated pipeline capabilities becomes a single point of compromise for the entire pipeline. The regulatory impact is significant under GDPR (over-collection/access violates Article 25), under HIPAA (exceeding minimum necessary), and under financial regulations that require proportional access controls. The organisational cost of remediation is high because capability reduction in production systems requires careful testing to avoid disrupting legitimate operations.