AG-174

Capability Profile and Dynamic Applicability Governance

Protocolised Ecosystems, Long-Running Tasks & Tomorrow's Agents
AGS v2.1 · April 2026
EU AI Act · FCA · NIST · HIPAA · ISO 42001

2. Summary

Capability Profile and Dynamic Applicability Governance requires that every AI agent publishes a machine-readable capability profile describing the actions it can perform, the protocols it supports, and the governance dimensions that apply to it — and that the applicability of governance rules adjusts dynamically as the agent's capabilities change at runtime. The capability profile is the authoritative declaration of what an agent can do; the dynamic applicability engine is the mechanism that ensures governance obligations track those capabilities in real time. Without this dimension, governance frameworks either over-apply controls to agents that lack the relevant capabilities (creating friction without value) or under-apply controls to agents that have silently acquired new capabilities (creating ungoverned exposure). This is a meta-governance dimension: it governs how governance itself is applied, ensuring that the right controls attach to the right agents at the right time.

3. Example

Scenario A — Capability Creep Without Governance Re-Assessment: An enterprise deploys a workflow agent with a capability profile declaring text summarisation and calendar management. Over three months, the development team adds payment initiation, email sending, and database write capabilities through plugin installation. No governance re-assessment occurs because the original deployment was classified as low-risk based on the initial capability profile. The agent now has financial transaction authority with no AG-001 mandate enforcement, no AG-004 action rate governance, and no AG-010 time-bounded authority. A prompt injection causes the agent to initiate 47 payments totalling £312,000 to an external account before anyone notices.

What went wrong: The capability profile was a point-in-time snapshot taken at deployment. No mechanism existed to detect that the agent's actual capabilities had diverged from its declared profile. Governance applicability was static — assessed once and never updated. Consequence: £312,000 in unauthorised payments, regulatory investigation for inadequate systems and controls, and inability to demonstrate that governance was proportionate to actual agent capabilities.

Scenario B — Over-Application of Controls Blocking Legitimate Operations: A safety-critical deployment applies all 218 governance dimensions to every agent in its fleet, including a read-only log analysis agent that cannot write to any external system. The agent fails 14 governance checks related to financial transaction limits, counterparty restrictions, and action rate enforcement — none of which are applicable because the agent has no write capabilities. The operations team spends 6 hours per week investigating false governance violations. After two months, the team creates blanket exemptions that inadvertently also exempt agents that do have write capabilities.

What went wrong: Governance applicability was not driven by the agent's actual capability profile. The blanket application created noise that obscured real violations. The exemption process, created to manage the noise, introduced genuine governance gaps. Consequence: 48 hours of wasted operational effort, plus undetected governance gaps for agents that genuinely required the exempted controls.

Scenario C — Dynamic Plugin Loading Changes Applicability Mid-Session: An agent running a customer service session dynamically loads a refund-processing plugin at minute 12 of the interaction when the customer requests a return. The plugin grants the agent financial write capability. The governance framework, which evaluated applicability at session start, does not re-evaluate. The agent processes a refund of £8,500 — exceeding the £5,000 limit that would have applied had the governance framework detected the new financial capability. The refund is legitimate, but the lack of governance enforcement means no audit trail, no mandate check, and no counterparty verification occurred.

What went wrong: Governance applicability was evaluated once at session initialisation. The dynamic capability change at minute 12 was invisible to the governance framework. Consequence: £8,500 refund processed without governance controls, audit finding for missing controls on financial operations, inability to demonstrate that the refund was authorised within the agent's mandate.

4. Requirement Statement

Scope: This dimension applies to all AI agent deployments where agents possess or may acquire capabilities dynamically — including through plugin loading, tool registration, model upgrades, permission escalation, or delegation from other agents. It applies to orchestration platforms that manage multiple agents with heterogeneous capability sets, to agent marketplaces where agents advertise capabilities to potential consumers, and to any system that must determine which governance dimensions apply to a given agent at a given point in time. Single-purpose, statically configured agents with no ability to acquire new capabilities at runtime are within scope for the capability profile requirement (they must declare what they can do) but may implement static rather than dynamic applicability assessment.

4.1. A conforming system MUST require every agent to publish a machine-readable capability profile before the agent is permitted to execute any action.

4.2. A conforming system MUST define the capability profile schema to include, at minimum: permitted action types, supported protocols, data access scopes, write targets, integration endpoints, and the governance dimensions that the agent's current capabilities trigger.
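The minimum schema in 4.2 can be sketched as a typed record. This is an illustrative shape only — the field names and Python representation are assumptions, not a normative wire format (the profile itself would typically be serialised as JSON, YAML, or a protocol buffer):

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityProfile:
    """Hypothetical sketch of the 4.2 minimum capability profile schema."""
    agent_id: str
    version: int
    action_types: set[str]    # permitted action types, e.g. {"financial.payment.initiate"}
    protocols: set[str]       # supported protocols, e.g. {"mcp", "a2a"}
    read_scopes: set[str]     # data access scopes the agent may read
    write_targets: set[str]   # external systems the agent may write to
    endpoints: set[str]       # integration endpoints the agent may reach
    # governance dimensions triggered by the current capabilities, e.g. {"AG-001"}
    triggered_dimensions: set[str] = field(default_factory=set)
```

A real deployment would attach this record to a central registry entry and version it on every change, per 4.6.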

4.3. A conforming system MUST re-evaluate governance applicability whenever an agent's capability profile changes — including through plugin loading, tool registration, permission changes, delegation acceptance, or model upgrade.
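The re-evaluation in 4.3 reduces to diffing the governance dimensions triggered before and after a capability change. A minimal sketch, assuming an illustrative capability-to-dimension taxonomy (the mappings shown are examples, not the organisation's real taxonomy):

```python
# Illustrative taxonomy: capability -> governance dimensions it triggers (see 4.7).
TAXONOMY = {
    "financial.payment.initiate": {"AG-001", "AG-004", "AG-010"},
    "email.send": {"AG-004"},
    "text.summarise": set(),
}

def applicable_dimensions(action_types: set[str]) -> set[str]:
    """Union of all dimensions triggered by the declared capabilities."""
    dims: set[str] = set()
    for cap in action_types:
        dims |= TAXONOMY.get(cap, set())
    return dims

def on_profile_change(old_actions: set[str], new_actions: set[str]):
    """Return (newly applicable, no longer applicable) dimensions after a change."""
    before = applicable_dimensions(old_actions)
    after = applicable_dimensions(new_actions)
    return after - before, before - after

# A plugin install adds payment capability mid-session (cf. Scenario C):
added, removed = on_profile_change(
    {"text.summarise"},
    {"text.summarise", "financial.payment.initiate"},
)
# added == {"AG-001", "AG-004", "AG-010"}; removed == set()
```

The same hook fires on plugin loading, tool registration, permission changes, delegation acceptance, and model upgrade — any event that mutates the profile.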

4.4. A conforming system MUST block any action that falls outside the agent's declared capability profile, even if the agent has technical access to perform it.
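The enforcement point for 4.4 is deliberately simple: the declared profile, not the agent's underlying credentials, is the gate. A minimal sketch with hypothetical action-type names:

```python
def authorise(action_type: str, declared_actions: set[str]) -> bool:
    """Permit an action only if it appears in the declared profile (4.4).
    Technical access (valid credentials, reachable endpoint) is not sufficient."""
    return action_type in declared_actions

declared = {"text.summarise", "calendar.manage"}
assert authorise("text.summarise", declared)
assert not authorise("financial.payment.initiate", declared)  # blocked
```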

4.5. A conforming system MUST reject capability profile declarations that omit required fields or contain contradictions, rather than inferring missing information.
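Validation under 4.5 rejects rather than repairs. A sketch of one way to surface missing fields and one example contradiction check — the field names and the specific contradiction rule are illustrative assumptions:

```python
REQUIRED_FIELDS = {"agent_id", "action_types", "protocols",
                   "read_scopes", "write_targets", "endpoints"}

def validate_profile(profile: dict) -> list[str]:
    """Return rejection reasons; an empty list means the profile is accepted.
    Missing information is never inferred (4.5)."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - profile.keys()]
    # Example contradiction: write-capable action types with no declared write targets.
    writes = any(a.endswith(".write") or "payment" in a
                 for a in profile.get("action_types", []))
    if writes and not profile.get("write_targets"):
        errors.append("contradiction: write-capable actions with no write targets")
    return errors
```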

4.6. A conforming system MUST log every capability profile change with a timestamp, the identity of the change initiator, the previous profile version, and the new profile version.

4.7. A conforming system SHOULD implement capability profile validation against an organisational capability taxonomy that maps each capability to the governance dimensions it triggers.

4.8. A conforming system SHOULD support capability profile inheritance for agent hierarchies, where a delegated agent's profile is constrained to a subset of the delegating agent's profile per AG-009.
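The subset constraint in 4.8 can be expressed as a set intersection. This sketch silently narrows the delegate's profile to the parent's; a stricter variant (an assumption, not specified here) would reject the delegation request outright:

```python
def constrain_delegate(parent_actions: set[str], requested: set[str]) -> set[str]:
    """Per 4.8 / AG-009: a delegated agent's capabilities must be a subset of
    the delegating agent's. Anything outside the parent's profile is dropped."""
    return requested & parent_actions

# A delegate asks for more than its parent holds; the excess is stripped:
granted = constrain_delegate({"text.summarise", "email.send"},
                             {"email.send", "financial.payment.initiate"})
# granted == {"email.send"}
```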

4.9. A conforming system SHOULD detect discrepancies between an agent's declared capability profile and its observed behaviour, flagging agents that exercise capabilities not in their profile.
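Discrepancy detection under 4.9 compares the declared profile against observed behaviour. A minimal sketch over a log of exercised action types (the log format is an assumption):

```python
def undeclared_capabilities(declared: set[str],
                            observed_actions: list[str]) -> set[str]:
    """Flag capabilities the agent exercised but never declared (4.9)."""
    return {a for a in observed_actions if a not in declared}

flags = undeclared_capabilities(
    declared={"text.summarise"},
    observed_actions=["text.summarise", "email.send"],
)
# flags == {"email.send"} — a candidate for quarantine and governance re-assessment
```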

4.10. A conforming system MAY implement capability profile versioning with rollback, allowing an agent's capabilities to be reverted to a previous known-good configuration.

4.11. A conforming system MAY support provisional capability grants that automatically expire after a defined period, requiring re-approval for continued use.
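A provisional grant per 4.11 is just a capability with an expiry attached. A minimal sketch using a monotonic clock; the grant record shape is illustrative:

```python
import time

def grant_provisional(capability: str, ttl_seconds: float) -> dict:
    """Issue a capability grant that expires after ttl_seconds (4.11).
    Continued use past expiry requires explicit re-approval."""
    return {"capability": capability,
            "expires_at": time.monotonic() + ttl_seconds}

def is_active(grant: dict) -> bool:
    return time.monotonic() < grant["expires_at"]
```

The applicability engine would treat expiry as a profile change event, triggering the re-evaluation required by 4.3.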

5. Rationale

Capability Profile and Dynamic Applicability Governance addresses a fundamental challenge in scaling AI agent governance: how to ensure the right governance controls apply to the right agents at the right time without either over-applying controls (creating unworkable friction) or under-applying them (creating ungoverned exposure).

Traditional software governance maps controls to applications at deployment time. An application's capabilities are known, static, and documented. AI agents break this model in three ways. First, agents can acquire new capabilities at runtime through plugin loading, tool registration, or delegation from other agents. Second, the same agent framework can be instantiated with radically different capability sets depending on configuration. Third, agent capabilities may change mid-session based on user interactions or environmental conditions.

Without capability-driven applicability, organisations face a binary choice: apply all governance controls to all agents (creating the over-application problem from Scenario B) or manually classify each agent and maintain static governance mappings (creating the under-application problem from Scenario A). Neither approach scales.

The capability profile is the bridge between what an agent can do and what governance obligations it carries. By requiring agents to declare their capabilities in a machine-readable format and by requiring the governance framework to re-evaluate applicability whenever those capabilities change, AG-174 ensures that governance is always proportionate — neither more nor less than what the agent's actual capabilities demand.

This dimension is classified as meta-governance because it does not itself impose operational controls on agent behaviour. Instead, it governs how other governance dimensions are applied. It is the mechanism by which the governance framework determines that AG-001 applies to agents with financial write capability, that AG-012 applies to agents that interact with other agents, and that AG-015 applies to agents that cross organisational boundaries.

6. Implementation Guidance

The capability profile is a structured document — JSON, YAML, or protocol buffer — that an agent publishes to its governance framework before it can execute any action. The profile declares: what action types the agent can perform, what protocols it supports, what data stores it can read from and write to, what external systems it can reach, and what governance dimensions its capabilities trigger.

Recommended patterns:

- Maintain the capability taxonomy as a versioned, auditable artefact owned by the governance function, so capability-to-dimension mappings survive team and regulatory changes.
- Treat every plugin load, tool registration, permission change, and delegation acceptance as a profile change event that triggers re-evaluation per 4.3.
- Store profiles in a central registry with full version history, so any point-in-time governance decision can be reconstructed.

Anti-patterns to avoid:

- Point-in-time profiles: assessing applicability once at deployment and never re-evaluating (the failure in Scenario A).
- Blanket application of all dimensions regardless of capability, followed by blanket exemptions to manage the resulting noise (the failure in Scenario B).
- Inferring missing profile fields instead of rejecting incomplete declarations, contrary to 4.5.

Industry Considerations

Financial Services. Capability profiles should map to existing FCA permission categories and MiFID II activity classifications. An agent with financial.payment.initiate capability triggers the full suite of transaction governance controls. An agent with only financial.balance.read capability triggers data access controls but not transaction controls. This mapping should be maintained by the compliance function and updated when regulatory classifications change.

Healthcare. Capability profiles should distinguish between clinical and administrative capabilities. An agent with clinical.prescription.recommend capability triggers clinical governance controls including AG-019 human escalation requirements. An agent with only admin.appointment.schedule capability triggers operational but not clinical governance. HIPAA minimum necessary maps directly to capability scoping.

Critical Infrastructure. Capability profiles for agents in OT environments must include physical actuation capabilities with explicit safety boundaries. An agent with actuator.valve.control capability triggers IEC 62443 safety controls. The capability profile should specify the physical range of each actuator capability (e.g., valve position 0-100%, temperature setpoint 15-85°C).
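The physical ranges above can be enforced as part of the profile itself. A sketch of a safety-boundary check, using the example bounds from the text (valve position 0-100%, temperature setpoint 15-85°C); the capability names are illustrative:

```python
# Per-capability physical bounds declared in the profile (IEC 62443 context).
ACTUATOR_BOUNDS = {
    "actuator.valve.control": (0.0, 100.0),          # valve position, %
    "actuator.temperature.setpoint": (15.0, 85.0),   # setpoint, °C
}

def within_bounds(capability: str, value: float) -> bool:
    """Reject any actuation command outside the profile's declared safe range."""
    lo, hi = ACTUATOR_BOUNDS[capability]
    return lo <= value <= hi

assert within_bounds("actuator.valve.control", 42.0)
assert not within_bounds("actuator.temperature.setpoint", 90.0)  # out of range
```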

Maturity Model

Basic Implementation — Each agent has a static capability profile document. Governance applicability is determined at deployment based on the profile. Profile changes require manual re-assessment. The profile format is standardised within the organisation. Profiles are stored in a central registry.

Intermediate Implementation — Capability profiles are machine-readable and consumed by an automated governance applicability engine. The engine re-evaluates applicability on every profile change. Profile changes are logged with full attribution. A capability taxonomy maps each capability to triggered governance dimensions. Discrepancy detection flags agents exercising undeclared capabilities.

Advanced Implementation — All intermediate capabilities plus: capability profiles are cryptographically signed and verified per AG-176. The applicability engine operates in real time, re-evaluating within 500ms of any capability change. Capability grants can be provisional with automatic expiry. The capability taxonomy is versioned and auditable. Cross-organisational capability profile exchange follows a standardised protocol per AG-175. Independent adversarial testing has verified that capability under-declaration cannot bypass governance controls.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Testing AG-174 compliance requires verifying both the capability profile mechanism and the dynamic applicability engine.

Test 8.1: Profile Completeness Enforcement

Test 8.2: Profile-Gated Action Enforcement

Test 8.3: Dynamic Applicability Re-Evaluation

Test 8.4: Capability Under-Declaration Detection

Test 8.5: Profile Change Logging Completeness

Test 8.6: Provisional Capability Expiry

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU AI Act | Article 11 (Technical Documentation) | Direct requirement
EU AI Act | Article 13 (Transparency) | Supports compliance
NIST AI RMF | GOVERN 1.2, MAP 1.1, MAP 3.2 | Supports compliance
ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance
FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance
DORA | Article 9 (ICT Risk Management Framework) | Supports compliance

EU AI Act — Article 11 (Technical Documentation)

Article 11 requires that technical documentation for high-risk AI systems include a detailed description of the system's capabilities, limitations, and intended purpose. Capability profiles directly satisfy this requirement by providing a machine-readable, versioned declaration of what each agent can do. The dynamic applicability mechanism ensures that when capabilities change, the documentation (in the form of the profile) changes in lockstep. This is more rigorous than the regulation requires — Article 11 envisions documentation as a static artefact, but AG-174 makes it a living, enforced declaration.

EU AI Act — Article 9 (Risk Management System)

Capability-driven governance applicability supports the Article 9 requirement that risk management measures be proportionate to the risk posed by the system. By mapping capabilities to governance dimensions, AG-174 ensures that higher-capability agents receive more governance controls — directly implementing the proportionality principle.

NIST AI RMF — GOVERN 1.2, MAP 1.1, MAP 3.2

GOVERN 1.2 addresses roles, responsibilities, and authorities within AI governance. MAP 1.1 addresses the intended purposes and contexts of use for AI systems. MAP 3.2 addresses the mapping of risk contexts. Capability profiles directly support these functions by providing a structured declaration of what each agent is designed to do and what governance obligations follow from those capabilities.

ISO 42001 — Clause 6.1, Clause 8.2

Clause 6.1 requires actions to address risks within the AI management system. Dynamic applicability ensures that risk treatments (governance controls) track the actual risk profile (capabilities) of each agent. Clause 8.2 requires AI risk assessment — capability profiles provide the input data for that assessment.

FCA SYSC — 6.1.1R

Systems and controls must be adequate for the regulated activities undertaken. Capability profiles ensure that governance controls are mapped to agent capabilities in a way that a regulator can verify — the profile shows what the agent can do, the taxonomy shows what governance follows, and the logs show that governance was applied.

DORA — Article 9

The ICT risk management framework must address all ICT-related risks. Capability-driven governance ensures that as agents acquire new ICT capabilities (new integrations, new protocols, new data access), the risk management framework automatically extends to cover them.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Organisation-wide — failure to track agent capabilities creates governance gaps that compound across all agents in the fleet

Consequence chain: Without capability profile governance, agents can acquire capabilities without corresponding governance controls. The failure mode is silent — no alarm fires when an agent gains payment capability without mandate enforcement. The governance gap persists until an incident reveals it, by which time the exposure may be substantial. In a fleet of 50 agents, even a 5% rate of capability-governance mismatch means 2-3 agents operating with ungoverned capabilities. The blast radius is organisation-wide because the failure is in the meta-governance layer: it is not one control that fails but the mechanism that determines which controls apply. Regulatory consequence includes inability to demonstrate proportionate governance to auditors, potential enforcement action for inadequate systems and controls, and loss of the ability to assert that governance is comprehensive.

Cross-references: This dimension is closely related to AG-175 (Protocol Handshake, Capability Negotiation and Downgrade Protection Governance) which governs how capability profiles are exchanged between agents; AG-176 (Signed Capability Manifest and Agent Card Authenticity Governance) which governs the authenticity of capability declarations; AG-009 (Delegated Authority Governance) which governs how capabilities are delegated between agents; AG-012 (Agent Identity Assurance) which governs the identity verification that underpins capability declarations; and AG-080 (Inter-Agent Trust and Attestation) which governs the trust relationships within which capability profiles are exchanged.

Cite this protocol
AgentGoverning. (2026). AG-174: Capability Profile and Dynamic Applicability Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-174