AG-720

Standing Privilege Recertification Governance

Supplementary Core & Adversarial Model Resistance · AGS v2.1 · April 2026
Regulatory context: EU AI Act · GDPR · FCA · NIST · ISO 42001

Section 2: Summary

This dimension governs the requirement that all standing permissions, tool authorisations, role assignments, and delegated credentials held by AI agents are subject to a mandatory, time-bounded recertification cycle: no privilege may persist indefinitely without explicit re-approval by a named, accountable human owner. The requirement matters because AI agents accumulate permissions over time through project expansions, emergency grants, integration additions, and role inheritance, producing a compounding authorisation surface that no single stakeholder continuously reviews. Without structured recertification, agents routinely retain access to systems and data they no longer legitimately require, violating least-privilege principles and creating dormant attack vectors and compliance exposure. Failure manifests as agents operating for months or years with stale, over-broad, or entirely orphaned privileges, enabling insider-threat-equivalent data access, regulatory non-compliance under access-control mandates, and adversarial exploitation of permissions that were never formally withdrawn.

Section 3: Example

Example A — Financial Workflow Agent with Accumulated Payment Authorisations

A mid-sized payment processing firm deploys an enterprise workflow agent in Q1 to automate reconciliation across three internal ledgers. The agent is granted read/write access to the accounts-payable API, the treasury cash-management system, and the fraud-alert inbox. Over the following nine months, the agent's scope is extended to cover two acquired subsidiaries, a new card-scheme integration, and an emergency payroll-correction workflow that requires write access to employee bank-account records. Each extension is approved individually as a one-off authorisation, but no recertification cycle exists. By Q4 the agent holds standing write permissions across six financial systems representing a combined liability of approximately €340 million in daily transaction volume. A security review triggered by an unrelated compliance audit discovers that two of the original integration endpoints were decommissioned five months earlier — but the agent's credentials to those endpoints were never revoked, meaning the credentials remained valid in the identity provider and could be replayed by any attacker who obtained the agent's token material. The audit further identifies that the emergency payroll-write permission, granted for a 72-hour incident, has been standing for 26 weeks with no renewal or review. The firm faces a Category 2 breach notification obligation under PSD2 Article 96 because the standing credential constitutes an unmanaged access pathway to payment account data, and the data-protection authority assesses a €1.2 million fine under GDPR Article 83(4) for failure to implement appropriate technical access controls.

Example B — Safety-Critical Robotics Agent Retaining Deprecated Actuator Commands

A logistics operator deploys an embodied robotic agent in a partially automated fulfilment warehouse to manage pick-and-place sequencing and conveyor-belt routing. At deployment the agent is authorised to issue velocity commands to 14 conveyor segments and override emergency-stop signals for a planned maintenance window. The maintenance window ends, but the emergency-stop override permission is never explicitly revoked — it remains in the agent's authorisation profile because no recertification trigger exists for one-time operational grants. Eight months later, during a shift changeover, the agent receives a sensor anomaly that its decision model interprets as a blockage condition. Following its authorised logic, it attempts to clear the blockage by issuing a velocity increase combined with the still-valid emergency-stop suppression signal. A warehouse associate entering the conveyor zone during the suppression window sustains a crush injury to their forearm requiring surgical intervention. Post-incident investigation confirms that had the emergency-stop override been revoked at the close of the maintenance window — which a recertification checkpoint would have enforced — the safety interlock would have halted the conveyor within 0.3 seconds of the associate's entry detection. The operator faces regulatory enforcement under the EU Machinery Directive 2006/42/EC and national workplace safety law, plus a product-liability exposure estimated at €800,000.

Example C — Customer-Facing Agent Retaining Elevated Tier-3 Support Permissions After Pilot Closure

A telecommunications provider runs a six-month pilot of a customer-facing AI agent authorised to perform Tier-3 support actions: SIM swap, account ownership transfer, international roaming override, and credit-limit adjustment up to £500. The pilot closes formally, the agent is re-scoped to Tier-1 only in the product team's documentation, but the underlying API-gateway permission profiles are not updated because the recertification process does not include a tool-authorisation inventory step. Seventeen months after pilot closure, the agent continues to exercise Tier-3 capabilities in production because its token scope was never narrowed. A fraud ring exploits this by social-engineering the agent through crafted customer interactions to authorise 43 SIM swaps across high-value mobile banking customers over a 72-hour window, enabling account takeovers totalling £290,000 in fraudulent withdrawals. The provider's regulatory obligations under the UK Electronic Communications (Security Measures) Regulations 2022 are breached, OFCOM opens a formal investigation, and the Financial Conduct Authority issues a supervisory notice regarding inadequate controls on AI-assisted account modification tooling. The absence of any recertification record makes it impossible for the provider to demonstrate that privileged access was ever reviewed or justified post-pilot — a compounding factor in the regulatory assessment.

Section 4: Requirement Statement

4.0 Scope

This dimension applies to all AI agents operating under any deployment profile where the agent holds, inherits, or exercises one or more of the following: standing API credentials or OAuth scopes; tool-use authorisations granting the ability to read, write, modify, or delete data in external or internal systems; delegated role assignments from human principals; cryptographic signing keys or wallet authorisations; system-level process permissions (shell access, file-system write, network egress to specified endpoints); or any permission granted outside the agent's base deployment manifest for operational, emergency, or pilot purposes. Recertification obligations apply regardless of whether the agent is active or dormant. A permission is "standing" for the purposes of this dimension if it persists beyond the session or task context in which it was originally granted and is not automatically revoked by a time-bound technical control. Recertification requirements apply to permissions held at the agent level, the agent-class level, and any shared permission pools from which individual agents draw authorisation.

4.1 Recertification Cycle Establishment

4.1.1 The deploying organisation MUST define and document a maximum recertification interval for all categories of standing agent privilege. Intervals MUST be risk-stratified: permissions with write or execute access to financial systems, safety-critical actuators, identity infrastructure, or cryptographic key material MUST have a recertification interval not exceeding 90 days; permissions with read-only access to sensitive personal data MUST have an interval not exceeding 180 days; all other standing permissions MUST be recertified at least annually.

4.1.2 The recertification schedule MUST be formalised in a Privilege Recertification Policy document that is version-controlled, dated, and approved by a named governance owner holding an organisational role with accountability for AI risk.

4.1.3 The recertification schedule MUST NOT rely on informal or ad hoc review processes. A recertification is only valid if it produces a durable artefact (see Section 7) that can be produced on demand to auditors or regulators.

4.1.4 The organisation MUST establish an automated alert or workflow trigger that fires no later than 14 calendar days before a privilege's recertification deadline, directed to the named permission owner and the agent governance function.
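The risk-stratified intervals in 4.1.1 and the pre-deadline alert in 4.1.4 can be sketched as a small scheduling check. This is an illustrative sketch only; the tier names and function names are hypothetical, not part of the requirement text.

```python
from datetime import date, timedelta

# Maximum recertification intervals by risk tier, per 4.1.1 (tier names are illustrative).
MAX_INTERVAL_DAYS = {
    "write_critical": 90,   # write/execute: financial, safety, identity, crypto systems
    "read_sensitive": 180,  # read-only access to sensitive personal data
    "standard": 365,        # all other standing permissions
}

ALERT_LEAD_DAYS = 14  # alert fires no later than 14 calendar days before the deadline (4.1.4)

def next_deadline(last_recertified: date, risk_tier: str) -> date:
    """Next recertification deadline for a permission in the given tier."""
    return last_recertified + timedelta(days=MAX_INTERVAL_DAYS[risk_tier])

def alert_due(last_recertified: date, risk_tier: str, today: date) -> bool:
    """True once the pre-deadline alert window has opened."""
    return today >= next_deadline(last_recertified, risk_tier) - timedelta(days=ALERT_LEAD_DAYS)
```

A write-critical permission recertified on 1 January 2026 would fall due on 1 April 2026, with the alert window opening on 18 March.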

4.2 Permission Inventory Completeness

4.2.1 The deploying organisation MUST maintain a continuously updated Agent Permission Inventory that records, for each agent or agent class: the identity of the agent; every standing permission, tool authorisation, role assignment, and delegated credential it holds; the date each permission was granted; the name and role of the human principal who authorised it; the stated business justification; the recertification interval assigned to it; and the date of its most recent recertification.
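The fields required by 4.2.1 can be modelled as a simple record structure. The following is a minimal sketch; field names and types are hypothetical and would be adapted to the organisation's identity tooling.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PermissionRecord:
    """One Agent Permission Inventory entry, carrying the 4.2.1 fields."""
    permission_id: str         # unique identifier for the permission
    agent_id: str              # identity of the agent or agent class
    granted_on: date           # date the permission was granted
    granted_by: str            # name and role of the authorising human principal
    justification: str         # stated business justification
    risk_tier: str             # risk category driving the interval (4.1.1)
    recert_interval_days: int  # recertification interval assigned to this permission
    last_recertified: Optional[date] = None  # None until first recertification
```

A `last_recertified` of `None` is itself a reviewable signal: the permission has never been through a cycle.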

4.2.2 The Agent Permission Inventory MUST be reconciled against the actual permission state recorded in all identity providers, API gateways, tool registries, and system access control lists at least once per recertification cycle and in any case within 24 hours of any change to the agent's permission profile.

4.2.3 Any discrepancy between the Agent Permission Inventory and the actual observed permission state MUST be treated as a governance exception, logged as a finding, and resolved within 5 business days.
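The reconciliation in 4.2.2 and the exception handling in 4.2.3 reduce to a set comparison between the inventory and the observed permission state. A minimal sketch, with hypothetical permission identifiers:

```python
def reconcile(inventory_perms: set[str], observed_perms: set[str]) -> dict[str, set[str]]:
    """Compare inventory state against the permission state actually observed
    in identity providers, API gateways, tool registries, and ACLs.
    Any non-empty difference is a governance exception (4.2.3)."""
    return {
        # held in reality but missing from the inventory: ungoverned access
        "unrecorded": observed_perms - inventory_perms,
        # recorded in the inventory but absent in reality: stale records
        "stale": inventory_perms - observed_perms,
    }

# Example: a payroll write grant exists in the gateway but was never inventoried,
# while a ledger read grant lingers in the inventory after being removed.
findings = reconcile({"ap-api:write", "ledger:read"}, {"ap-api:write", "payroll:write"})
```

Each non-empty set would be logged as a finding for resolution within 5 business days.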

4.2.4 The Agent Permission Inventory MUST include permissions inherited through group membership, role chaining, or policy inheritance — not only permissions explicitly assigned to the agent's primary identity.

4.3 Recertification Decision Process

4.3.1 Each recertification review MUST be conducted by a named human approver who holds organisational accountability for the business function the permission supports. Automated rubber-stamping or bulk-approval workflows that do not require the approver to review individual permission justifications MUST NOT be used.

4.3.2 During recertification, the approver MUST make one of three explicit decisions for each permission under review: Approve-Continue (the permission is still required and proportionate); Modify (the permission scope, level, or conditions must be changed before continuation); or Revoke (the permission is no longer required or justified).

4.3.3 Any permission for which no approver acts within the recertification deadline MUST be automatically suspended pending review. The suspension MUST be technically enforced — not merely flagged — within 48 hours of the deadline passing.

4.3.4 The approver conducting recertification MUST have access to, and MUST review, a usage log showing the frequency and context of the permission's exercise during the preceding recertification period before making an approve/modify/revoke decision.

4.3.5 Permissions that have not been exercised at all during the preceding recertification period MUST trigger an enhanced justification requirement: the approver MUST provide a specific forward-looking business rationale for retention, not merely affirm continuation.
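The decision gate described in 4.3.2 through 4.3.5 can be sketched as a single validation function: three permitted outcomes, technically enforced suspension on a missed deadline, and an enhanced-justification rule for unused permissions. All names here are illustrative.

```python
from datetime import date
from typing import Optional

DECISIONS = {"approve_continue", "modify", "revoke"}  # the three permitted outcomes (4.3.2)

def review_permission(decision: Optional[str], usage_count: int,
                      forward_rationale: Optional[str],
                      deadline: date, today: date) -> str:
    if decision is None:
        # No approver action by the deadline: suspension is technically
        # enforced rather than merely flagged (4.3.3).
        return "suspended" if today > deadline else "pending"
    if decision not in DECISIONS:
        raise ValueError(f"not a permitted outcome: {decision}")
    if decision == "approve_continue" and usage_count == 0 and not forward_rationale:
        # A permission unused in the preceding period requires a specific
        # forward-looking rationale, not mere affirmation (4.3.5).
        raise ValueError("unused permission: enhanced justification required")
    return decision
```

Note that revoking an unused permission needs no extra rationale; only continuation does.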

4.4 Emergency and Time-Limited Grant Controls

4.4.1 Any permission granted outside the standard deployment manifest for emergency, incident, or pilot purposes MUST be tagged at creation with an explicit expiry timestamp not exceeding 30 days, a named approver, and a stated incident or project reference.

4.4.2 Emergency-granted permissions MUST be automatically revoked at the expiry timestamp unless they have been formally promoted to a standing permission through the standard recertification process by a named governance owner prior to expiry.
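The hard-expiry behaviour in 4.4.1 and 4.4.2 can be illustrated with a small sketch: the grant is capped at 30 days at creation, and auto-revocation at expiry is the default unless the grant was formally promoted. Function and field names are hypothetical.

```python
from datetime import datetime, timedelta

MAX_EMERGENCY_DAYS = 30  # ceiling on any emergency grant's lifetime (4.4.1)

def tag_emergency_grant(granted_at: datetime, requested_days: int,
                        approver: str, incident_ref: str) -> dict:
    """Create an emergency grant tagged with expiry, approver, and incident reference."""
    days = min(requested_days, MAX_EMERGENCY_DAYS)  # cap even over-long requests
    return {"expires_at": granted_at + timedelta(days=days),
            "approver": approver, "incident_ref": incident_ref,
            "promoted": False}

def enforce_expiry(grant: dict, now: datetime) -> str:
    """Auto-revoke at expiry unless the grant was promoted to standing status (4.4.2)."""
    if grant["promoted"]:
        return "standing"   # promoted via the standard recertification process
    if now >= grant["expires_at"]:
        return "revoked"    # hard technical expiry; no review required to revoke
    return "active"
```

The 72-hour payroll grant in Example A would have expired automatically under this scheme instead of standing for 26 weeks.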

4.4.3 The deploying organisation MUST maintain an Emergency Permission Register separate from the main Agent Permission Inventory, listing all currently active emergency permissions, their expiry timestamps, and the current revocation status.

4.4.4 Renewal of an emergency permission beyond its initial 30-day window MUST require escalated approval from a governance role senior to the original approver, together with a documented rationale for why the permission has not been through the standard promotion process.

4.5 Revocation Execution and Verification

4.5.1 When a recertification decision results in a Revoke or Modify outcome, technical revocation or modification MUST be executed in all affected systems — identity providers, API gateways, tool registries, cryptographic key stores, and any system-level access control lists — within 24 hours of the decision.

4.5.2 The executing party MUST perform a post-revocation verification check confirming that the permission is no longer exercisable by the agent in each affected system, and MUST record the verification result with a timestamp.

4.5.3 The deploying organisation MUST NOT consider a revocation complete until post-revocation verification has confirmed removal across all affected systems. Partial revocation — removal from one system but not others where the same credential operates — MUST be treated as an unresolved exception.
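The post-revocation verification in 4.5.2 and the partial-revocation rule in 4.5.3 amount to probing every affected enforcement point and refusing to close the revocation while any probe still succeeds. A minimal sketch; the probe callables and system names are hypothetical stand-ins for real control-plane checks.

```python
from typing import Callable

def verify_revocation(permission_id: str,
                      systems: dict[str, Callable[[str], bool]]) -> dict:
    """Probe each affected system; each probe returns True if the permission
    is still exercisable there. Any surviving grant is an unresolved exception."""
    results = {name: still_exercisable(permission_id)
               for name, still_exercisable in systems.items()}
    status = "exception" if any(results.values()) else "complete"
    return {"permission": permission_id, "per_system": results, "status": status}

# Example: the identity provider revoked the grant, but an API-gateway
# scope still honours it, so the revocation is not complete (4.5.3).
outcome = verify_revocation("payroll:write", {
    "idp": lambda p: False,
    "api_gateway": lambda p: True,
})
```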

4.6 Agent Lifecycle Triggers for Immediate Recertification

4.6.1 The following events MUST trigger an out-of-cycle recertification of all standing permissions held by the affected agent, irrespective of when the last scheduled recertification occurred: agent redeployment to a new environment or business function; assignment of a new model version where the model's capability profile differs materially from its predecessor; transfer of the agent's operational ownership to a different team or business unit; any security incident in which the agent's credentials are suspected of compromise; and any regulatory finding or internal audit observation relating to the agent's access.

4.6.2 The out-of-cycle recertification triggered under 4.6.1 MUST be completed within 10 business days of the triggering event.

4.6.3 During the period between a triggering event and completion of the out-of-cycle recertification, the deploying organisation SHOULD apply a precautionary privilege reduction, restricting the agent to its minimum necessary permissions for continued operation.

4.7 Cross-System and Federated Permission Governance

4.7.1 Where an agent operates across multiple organisational boundaries, cloud environments, or jurisdiction-separated infrastructure segments, the deploying organisation MUST designate a single authoritative permission owner who holds cross-system recertification responsibility and has the authority and technical access to initiate revocation in all affected systems.

4.7.2 The deploying organisation MUST NOT allow a standing permission to persist in a system that is outside the visibility of the Agent Permission Inventory on the grounds that the system is managed by a third party or external environment. Contractual or technical controls MUST be established to ensure that third-party-hosted permissions are included in the recertification scope.

4.7.3 For agents operating under Cross-Border / Multi-Jurisdiction profiles, recertification intervals MUST be set to satisfy the most restrictive applicable regulatory requirement across all jurisdictions in which the agent operates.

4.8 Recertification Governance for Agentic Pipelines and Multi-Agent Systems

4.8.1 In multi-agent architectures where one agent delegates permissions to another — including orchestrator-to-subagent delegation, tool-sharing pools, and peer-agent credential passing — each delegated permission MUST be separately registered in the Agent Permission Inventory of the receiving agent and MUST be subject to its own recertification cycle.

4.8.2 The delegating agent's own recertification MUST NOT be treated as satisfying the recertification requirement for permissions it has delegated to downstream agents. Each link in a delegation chain MUST be independently certified.

4.8.3 The deploying organisation MUST map all permission delegation chains prior to initiating a recertification campaign, and MUST ensure the campaign covers all nodes in the chain, not only the top-level agent.
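The delegation-chain mapping required by 4.8.3 is, structurally, a graph traversal from the top-level agent. The following sketch uses a breadth-first walk over a hypothetical delegation map; real systems would derive the map from identity and credential records.

```python
from collections import deque

def delegation_chain(delegations: dict[str, list[str]], root: str) -> list[str]:
    """Enumerate every agent reachable through delegation from `root`, so a
    recertification campaign covers all nodes, not only the top-level agent."""
    seen, order, queue = {root}, [root], deque([root])
    while queue:
        agent = queue.popleft()
        for downstream in delegations.get(agent, []):
            if downstream not in seen:
                seen.add(downstream)
                order.append(downstream)
                queue.append(downstream)
    return order

# An orchestrator delegates to two subagents; one passes a credential onward.
chain = delegation_chain({"orchestrator": ["subagent_a", "subagent_b"],
                          "subagent_a": ["tool_runner"]}, "orchestrator")
```

Each node returned, not just the orchestrator, carries its own independent recertification obligation under 4.8.1 and 4.8.2.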

4.9 Documentation, Reporting, and Board-Level Visibility

4.9.1 The deploying organisation MUST produce a Privilege Recertification Summary Report at least quarterly, covering: total number of standing permissions across all agents; number of permissions recertified in the period; number of revocations and modifications executed; number of overdue recertifications and their current status; and number of emergency permissions active at period-end.

4.9.2 The Privilege Recertification Summary Report MUST be reviewed by the AI governance function and SHOULD be reported to the board-level risk committee or equivalent oversight body where the organisation operates agents under High-Risk/Critical tier designations.

4.9.3 The deploying organisation MUST maintain a documented exception register for any standing permission that has exceeded its recertification deadline and has not yet been suspended, including the reason for non-suspension and the expected resolution date. This register MUST be available to internal audit and to regulators on request.

Section 5: Rationale

Structural Necessity: Why Permissions Drift Without Recertification

The fundamental problem recertification governance addresses is not malicious intent; it is structural accumulation. AI agents, unlike human employees, do not experience the organisational friction that limits permission accumulation. A human employee who moves roles typically loses access to prior systems because IT helpdesk and HR workflows are triggered by the role change. An agent accumulates permissions across project phases, emergency incidents, pilot programmes, and integration additions without any equivalent organisational signal that causes automatic narrowing. The result is that almost every agent operating for more than six months in a production environment holds at least some permissions it no longer requires, because granting events outnumber revoking events in almost all real-world deployment patterns.

This structural drift creates three categories of compounding risk. First, the blast radius of a credential compromise grows monotonically with permission accumulation — an agent holding permissions to twelve systems exposes all twelve to a single token-theft event, whereas an agent holding only the three permissions genuinely required at any given time limits exposure proportionally. Second, regulatory access-control obligations — including those under GDPR Article 25 (data protection by design), PSD2 Article 95, DORA Article 9, and the EU AI Act's provisions on high-risk system oversight — require that access to sensitive data and systems be limited to what is necessary for the stated purpose; standing permissions that are never reviewed cannot satisfy a necessity test. Third, adversarial exploitation of stale permissions — whether by external attackers, insider threats, or prompt-injection attacks that cause an agent to exercise latent permissions it should not have — depends precisely on the gap between what an agent is authorised to do and what it actually needs to do. Recertification closes that gap on a cadence that limits the window of exposure.

Behavioural Enforcement vs. Technical Controls

Recertification governance is not a substitute for technical least-privilege enforcement (see AG-031), dynamic session-scoped permissions, or just-in-time access provisioning — all of which are recommended complementary controls. Rather, recertification provides the periodic human accountability layer that ensures technical controls remain calibrated to current operational reality. Technical controls prevent an agent from exercising permissions outside its current grant; recertification ensures the current grant itself remains appropriate. Both are necessary because technical controls cannot self-evaluate whether a permission that was appropriate six months ago is still appropriate today — that judgment requires human contextual knowledge of business purpose, organisational change, and risk tolerance.

The distinction between a recertification regime and a simple audit trail is important. An audit trail (AG-101) records what permissions exist and how they were used; recertification creates an affirmative obligation to justify their continued existence. The absence of adverse events in an audit trail is not equivalent to a recertification decision — a permission can be unused for months and an audit trail will simply show zero activity, whereas a recertification process will flag zero usage as a signal that the permission may no longer be needed and will require a human to either justify retention or accept revocation.

Why High-Risk/Critical Tier Designation is Appropriate

This dimension carries a High-Risk/Critical tier designation because the failure mode is not bounded to the dimension itself — stale, over-broad, or unreviewed permissions are the enabling condition for a large proportion of the most severe AI agent incidents. An agent that retains write access to a financial ledger after its legitimate use case has narrowed is not merely an access control problem; it is a financial integrity risk, a fraud-enabling condition, and a regulatory compliance failure simultaneously. An agent retaining emergency actuation overrides in a physical environment is a safety risk whose consequence can be irreversible. The criticality of recertification governance therefore derives not from what recertification itself does, but from what it prevents by ensuring that every other access control dimension remains correctly calibrated over time.

Section 6: Implementation Guidance

Pattern 1 — Tiered Recertification Cadence with Automated Scheduling

Implement a centralised privilege management register that automatically calculates and schedules recertification deadlines per permission based on its risk tier, sends structured review requests to named approvers, enforces a hard suspension at deadline if no action is taken, and records all decisions with timestamps and approver identifiers. The scheduling logic should be driven by the permission's risk category as defined in 4.1.1, not by a uniform calendar — a single organisation-wide annual recertification campaign applied identically to all permissions is insufficient because it under-manages high-risk permissions and over-manages low-risk ones simultaneously.

Pattern 2 — Usage-Signal Integration

Before a recertification review is presented to an approver, the system should automatically pull a usage summary for the permission under review from the audit log (AG-101), showing frequency of exercise, last exercise date, systems accessed, and any anomalous invocation patterns. Presenting this data to the approver at the point of decision transforms recertification from a paperwork exercise into a substantive access-appropriateness review. Permissions with zero usage in the preceding cycle should be pre-flagged as revocation candidates requiring enhanced justification, as required by 4.3.5.

Pattern 3 — Permission Tagging at Grant

Every permission grant event should embed structured metadata at creation: a machine-readable expiry or review-by date, a business-justification code linked to a project or process reference, the identity of the granting principal, and a risk category tag. This metadata should propagate with the permission through identity provider records, API gateway configurations, and tool-registry entries so that the recertification system can discover and schedule review of all permissions without depending on manual inventory updates. Permissions lacking this metadata should be treated as ungoverned and subject to immediate recertification or revocation.
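A minimal sketch of the grant-time check this pattern implies: a grant missing any of the required tags is classified as ungoverned. The tag names are hypothetical; an organisation would align them with its own identity-provider schema.

```python
# Required structured metadata on every grant, per Pattern 3 (names illustrative).
REQUIRED_TAGS = {"review_by", "justification_code", "granting_principal", "risk_category"}

def classify_grant(tags: dict) -> str:
    """Grants missing any required tag are flagged for immediate
    recertification or revocation rather than silently accepted."""
    missing = REQUIRED_TAGS - tags.keys()
    return "governed" if not missing else "ungoverned"
```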

Pattern 4 — Delegation Chain Mapping Prior to Campaign Execution

Before initiating a recertification campaign for an agent operating in a multi-agent or orchestrated pipeline, execute an automated delegation-chain discovery query that enumerates all downstream agents or processes that hold permissions derived from the target agent's grants. Recertification campaigns should cover the full chain, with revocations propagating downstream automatically — so that revoking a permission at the orchestrator level does not leave dangling copies of the same permission in subagent credential stores.

Pattern 5 — Out-of-Cycle Trigger Automation

Integrate the privilege recertification system with the organisation's change-management, incident-management, and model-deployment pipelines so that lifecycle events defined in 4.6.1 automatically trigger an out-of-cycle review workflow. This prevents the common failure mode where operational teams execute a model version upgrade or environment migration without realising that this event constitutes a recertification trigger, and the permission set inherited from the previous version is never reviewed in the new context.

Pattern 6 — Separation of Review and Execution

The person who conducts the recertification review and approves continuation or revocation should not be the same person who executes the technical revocation action. This separation creates a second verification touchpoint, reduces the risk of approvers approving continuation to avoid the effort of arranging revocation, and ensures that revocations are actually executed in all affected systems rather than being recorded as intended but not completed.

Explicit Anti-Patterns

Anti-Pattern 1 — Bulk Approval Without Per-Permission Review

Many organisations implement recertification as a single email to an approver listing all permissions held by an agent, with a single approve-all button. This pattern fails the requirement of 4.3.1 because it does not require the approver to engage with individual permission justifications. Approvers presented with a bulk list of 40 permissions will approve all 40 regardless of whether each is still needed, because the cognitive cost of reviewing each is too high in the absence of a structured per-permission workflow.

Anti-Pattern 2 — Recertification Scoped Only to the Identity Provider

Organisations frequently limit their recertification scope to the permissions recorded in the central identity provider and do not reconcile against API gateways, tool registries, embedded API keys, or system-level ACLs where the agent also has permissions. This creates an inventory gap that means revocations executed in the identity provider do not propagate to other enforcement points, leaving the agent functionally able to exercise revoked permissions through alternative paths.

Anti-Pattern 3 — Treating Inactivity as Implicit Revocation

Some governance programmes assume that if an agent has not used a permission for a long period, the permission is effectively dormant and poses no risk. This is incorrect: dormant permissions remain exploitable by attackers who compromise the agent's credentials, and they remain exercisable by the agent if it receives a prompt or instruction that causes it to act on them. The absence of usage is a signal for enhanced review, not a substitute for revocation.

Anti-Pattern 4 — Emergency Permission Normalisation

The practice of granting emergency permissions with the intention of reviewing them later, but then treating the review step as optional once the incident has passed, is one of the most common sources of privilege accumulation. Emergency permissions granted under operational pressure are the least likely to be reviewed because the business urgency that justified the grant has passed and the operational team has moved on. Without an automated hard-expiry enforcing revocation, emergency permissions routinely become permanent standing permissions without governance review.

Anti-Pattern 5 — Recertification Ownership Assigned to the Agent's Development Team

Assigning recertification ownership to the team that built and operates the agent creates a conflict of interest: the development team is motivated to retain permissions that make the agent more capable and reduce operational friction. Recertification ownership should be assigned to the business or risk function that owns the data and systems being accessed, not the function that built the agent, so that the approver's incentives align with access minimisation rather than capability maximisation.

Anti-Pattern 6 — Ignoring Inherited and Indirect Permissions

Agents that are members of groups, assigned to roles, or operating under policy sets that grant permissions through inheritance are among the hardest to recertify because the permissions are not directly visible on the agent's identity record. Recertification programmes that examine only direct grants and ignore inherited permissions can leave an agent with significant access that has never been reviewed and is not represented in the Agent Permission Inventory.

Maturity Model

Level 1 — Initial: Recertification is ad hoc, triggered only by security incidents or audits. No inventory exists. Most permissions are undocumented. Reviews are informal and produce no durable artefacts.

Level 2 — Developing: A basic inventory exists for the highest-risk agents. Annual recertification is performed for some permissions. Emergency permissions are sometimes tagged with expiry dates. Revocations are manually executed with inconsistent verification.

Level 3 — Defined: A formal policy exists with documented intervals by risk tier. The Agent Permission Inventory covers all production agents. Recertification workflows are tooled and send automated reminders. Revocations are tracked through to verification. Quarterly summary reporting is produced.

Level 4 — Managed: Usage signals are integrated into recertification review workflows. Delegation chains are mapped and recertification campaigns cover full chains. Lifecycle event triggers are automated. All emergency permissions have hard technical expiries. Exception registers are maintained and reviewed by the governance function.

Level 5 — Optimising: Continuous permission monitoring detects drift between inventory and actual state in real time. Recertification intervals are dynamically adjusted based on usage patterns and risk signals. Board-level visibility of privilege recertification metrics is embedded in AI governance reporting. Recertification outcomes feed back into agent design standards to reduce initial over-provisioning.

Section 7: Evidence Requirements

7.1 Agent Permission Inventory

The current Agent Permission Inventory must be producible on demand. It must contain every standing permission held by every agent in scope, including fields for permission identifier, agent identity, granting principal, grant date, business justification, risk tier, recertification interval, and last recertification date. The inventory must be version-controlled so that historical states can be reconstructed. Retention period: minimum 7 years from the date of the inventory record, or as required by the most restrictive applicable regulatory retention obligation.

7.2 Recertification Decision Records

For each completed recertification, a structured record must exist capturing: permission identifier, review date, reviewer identity and role, decision made (Approve-Continue / Modify / Revoke), the usage summary reviewed, the business justification stated by the reviewer (mandatory for zero-usage permissions), and any conditions attached to continuation. Retention period: 7 years from review date.

7.3 Privilege Recertification Policy Document

The current version of the Privilege Recertification Policy must be held under version control with dated approval records. All prior versions must be retained. Retention period: 10 years, or the life of the organisation's AI deployment programme if longer.

7.4 Emergency Permission Register

The current Emergency Permission Register must show all active emergency permissions with grant date, expiry timestamp, current status, approver identity, and project reference. Historical entries — including records of expired and revoked emergency permissions — must be retained. Retention period: 7 years.
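The register's hard-expiry requirement can be checked mechanically. A minimal sketch, assuming a hypothetical list-of-dicts register layout (the entry IDs, field names, and timestamps are invented for illustration): any entry still marked active after its expiry timestamp is a register breach requiring immediate revocation.

```python
from datetime import datetime, timezone

# Illustrative register entries; field names and values are assumptions.
register = [
    {"id": "EM-014", "granted": "2026-03-01T10:00:00+00:00",
     "expires": "2026-03-04T10:00:00+00:00", "status": "active",
     "approver": "j.doe", "project": "PRJ-881"},
    {"id": "EM-015", "granted": "2026-04-10T09:00:00+00:00",
     "expires": "2026-04-13T09:00:00+00:00", "status": "active",
     "approver": "a.khan", "project": "PRJ-882"},
]

def expired_but_active(entries, now):
    """Emergency grants whose hard expiry has passed yet remain active --
    each one requires immediate revocation and an exception-register entry."""
    return [e["id"] for e in entries
            if e["status"] == "active"
            and datetime.fromisoformat(e["expires"]) <= now]

now = datetime(2026, 4, 12, tzinfo=timezone.utc)
print(expired_but_active(register, now))  # → ['EM-014']
```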

7.5 Post-Revocation Verification Records

For each revocation executed, a verification record must exist confirming that the permission was removed from each affected system, together with the date and time of verification and the identity of the person who performed it. Retention period: 7 years.

7.6 Privilege Recertification Summary Reports

All quarterly Privilege Recertification Summary Reports must be retained with metadata showing the date of production, the governance function that reviewed the report, and any escalations made as a result. Retention period: 7 years.

7.7 Exception Register

The exception register for overdue recertifications must be maintained as a live document with a full change history. Retention period: 7 years.

7.8 Delegation Chain Maps

Documentation of permission delegation chains for multi-agent systems must be retained for the period during which the multi-agent configuration is active, plus 3 years thereafter.
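Covering a full delegation chain in a recertification campaign amounts to computing the set of agents reachable from the chain's root. A minimal sketch, assuming a hypothetical adjacency-map representation of delegation edges (the agent names are invented for illustration):

```python
# Hypothetical delegation edges: delegator -> delegatees. A campaign that
# recertifies an agent must also cover everything reachable downstream.
delegations = {
    "orchestrator": ["worker-a", "worker-b"],
    "worker-a": ["sub-agent-1"],
}

def full_chain(root: str, edges: dict) -> set:
    """All agents reachable from `root` through delegation, root included."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(edges.get(node, []))
    return seen

print(sorted(full_chain("orchestrator", delegations)))
# → ['orchestrator', 'sub-agent-1', 'worker-a', 'worker-b']
```

A campaign scoped to `orchestrator` alone would miss `sub-agent-1`, which is exactly the gap the full-chain requirement closes.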

Section 8: Test Specification

Test 8.1 — Recertification Policy Existence and Interval Compliance

Maps to: 4.1.1, 4.1.2, 4.1.3

Objective: Confirm that a documented recertification policy exists with risk-stratified intervals meeting the thresholds specified in 4.1.1.

Method: Request the current Privilege Recertification Policy document. Verify it is version-controlled, dated, and carries a named governance owner's approval. Extract the stated recertification intervals for each risk tier. Verify that write/execute permissions to financial, safety-critical, identity, or cryptographic systems are assigned an interval of 90 days or less; sensitive personal data read-only permissions are assigned 180 days or less; all other standing permissions are assigned 365 days or less.
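The tier-threshold check in this method can be expressed directly. A minimal sketch, assuming hypothetical tier labels; the thresholds (90 / 180 / 365 days) are those stated in the method above, and a policy interval is compliant when it meets or is more restrictive than the maximum:

```python
# Thresholds from 4.1.1 as stated in the test method; tier labels are assumptions.
MAX_INTERVAL_DAYS = {
    "write_critical": 90,    # write/execute to financial, safety, identity, crypto
    "sensitive_read": 180,   # read-only sensitive personal data
    "standard": 365,         # all other standing permissions
}

def interval_violations(policy_intervals: dict) -> list:
    """Return tiers whose policy interval exceeds the allowed maximum."""
    return [tier for tier, days in policy_intervals.items()
            if days > MAX_INTERVAL_DAYS.get(tier, 0)]

# A policy that sets 120 days for critical-write permissions fails the test.
print(interval_violations({"write_critical": 120,
                           "sensitive_read": 180,
                           "standard": 365}))  # → ['write_critical']
```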

Pass Criteria: Policy document exists, is approved, and all three tier intervals meet or are more restrictive than the thresholds in 4.1.1.

Conformance Scoring:

Test 8.2 — Agent Permission Inventory Completeness and Reconciliation

Maps to: 4.2.1, 4.2.2, 4.2.3, 4.2.4

Objective: Verify that the Agent Permission Inventory is complete, accurate, and reconciled against actual system state.

Method: Select a sample of at least 3 agents from different profiles (minimum: one Financial-Value Agent, one Safety-Critical or Embodied Agent, one Customer-Facing Agent). For each selected agent, extract the Agent Permission Inventory record and compare it against the permissions actually observed in the identity provider, API gateway, and any system-level ACLs. Check that group-inherited and role-chained permissions are reflected. Verify that the most recent reconciliation was performed within the last recertification cycle. Review the governance exception log for any unresolved discrepancies older than 5 business days.

Pass Criteria: Inventory records for all sampled agents are complete; the permissions actually observed match the inventory with zero undisclosed additions; the last reconciliation timestamp falls within the applicable recertification cycle; and no unresolved discrepancies older than 5 business days exist.
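The zero-tolerance reconciliation in this test reduces to a set comparison between the inventory and the observed state. A minimal sketch, assuming permissions are represented as opaque strings (the permission names used are hypothetical):

```python
def reconcile(inventory: set, observed: set) -> dict:
    """Compare inventory permissions against those actually observed in the
    identity provider / gateway. Any undisclosed addition fails Test 8.2;
    any stale entry indicates the inventory no longer matches reality."""
    return {
        "undisclosed": sorted(observed - inventory),  # held but not recorded
        "stale": sorted(inventory - observed),        # recorded but not held
    }

inv = {"ap-api:write", "treasury:read", "fraud-inbox:read"}
obs = {"ap-api:write", "treasury:read", "payroll:write"}
print(reconcile(inv, obs))
# → {'undisclosed': ['payroll:write'], 'stale': ['fraud-inbox:read']}
```

Either non-empty list is a finding: undisclosed entries fail the pass criteria outright, and stale entries must be cleared within the 5-business-day discrepancy window.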

Conformance Scoring:

Test 8.3 — Emergency Permission Controls: Tagging, Expiry, and Revocation

Maps to: 4.4.1, 4.4.2, 4.4.3, 4.4.4

Objective: Confirm that emergency-granted permissions are created with expiry tags, are automatically revoked or formally promoted before expiry, and are tracked in the Emergency Permission Register.

Method: Query the Emergency Permission Register for all entries in the preceding

Section 9: Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Direct requirement
EU AI Act | Article 15 (Accuracy, Robustness and Cybersecurity) | Direct requirement
NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance
ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance

EU AI Act — Article 9 (Risk Management System)

Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Standing Privilege Recertification Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-720 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.

EU AI Act — Article 15 (Accuracy, Robustness and Cybersecurity)

Article 15 requires high-risk AI systems to achieve appropriate levels of accuracy, robustness, and cybersecurity. Standing Privilege Recertification Governance directly supports the robustness and cybersecurity requirements by implementing structural controls that resist adversarial manipulation and ensure system integrity under attack conditions.

NIST AI RMF — GOVERN 1.1, MAP 3.2, MANAGE 2.2

GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-720 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.

ISO 42001 — Clause 6.1, Clause 8.2

Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Standing Privilege Recertification Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.

Section 10: Failure Severity

Field | Value
Severity Rating | Critical
Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure
Escalation Path | Immediate executive notification and regulatory disclosure assessment

Consequence chain: Without standing privilege recertification governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation — it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-720, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.

Cite this protocol
AgentGoverning. (2026). AG-720: Standing Privilege Recertification Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-720