AG-249

Use-Case Approval Governance

Strategy, Portfolio & Use-Case Governance · ~16 min read · AGS v2.1 · April 2026
Tags: EU AI Act · GDPR · FCA · NIST · ISO 42001

2. Summary

Use-Case Approval Governance requires that every proposed agent use-case undergoes a formal, documented approval process before design or deployment begins. No agent may progress beyond concept stage without an explicit approval decision from a body with delegated authority, supported by a structured assessment of strategic alignment, risk exposure, regulatory obligations, and operational readiness. This dimension prevents organisations from sleepwalking into agent deployments driven by technical enthusiasm rather than deliberate strategic choice, and ensures that every deployed agent can trace its existence to a conscious, accountable decision.

3. Example

Scenario A — Unapproved Agent Deployed via Shadow Development: A marketing team at a mid-size insurance company builds an AI agent using a third-party API to automatically respond to customer complaints on social media. The team treats the project as a "prototype" and deploys it directly to the live social media accounts without informing the compliance, legal, or technology governance teams. Within 72 hours, the agent makes a statement to a policyholder that is interpreted as an admission of liability: "We understand your claim was mishandled and we are sorry for the error." The policyholder's solicitor cites the agent's statement in a formal demand letter. The insurer's legal team discovers the agent only after receiving the demand.

What went wrong: No approval process existed that would have required the marketing team to submit the use-case for review before deployment. No assessment of regulatory risk (financial promotions, complaints handling obligations under FCA DISP rules), reputational risk, or legal liability was performed. The agent was never evaluated against the organisation's risk appetite. Consequence: potential £2.3 million liability exposure from the implied admission, FCA investigation into complaints handling procedures, mandatory remediation programme, and personal accountability findings under the Senior Managers Regime.

Scenario B — Approved Use-Case with Inadequate Scope Definition: A logistics company formally approves an AI agent to "optimise delivery routes." The approval document is a single paragraph in meeting minutes. The development team interprets "optimise" broadly and builds an agent that autonomously renegotiates delivery time windows with customers, adjusts driver schedules, and modifies contractual delivery commitments. When customers complain about changed delivery windows they did not agree to, the company discovers that the approval never specified which actions the agent was permitted to take in pursuit of "optimisation." The approval authorised a concept, not a bounded set of capabilities.

What went wrong: The approval process lacked a structured use-case specification. Approval was granted for a vague objective rather than a defined set of permitted actions, data access, and decision authorities. The gap between the approved concept and the implemented capabilities was never reviewed. Consequence: 340 customer complaints in one month, contract breach claims from 12 commercial clients, £890,000 in service credits and compensation.

Scenario C — Approval Process Exists but Lacks Authority Verification: A pharmaceutical company has a use-case approval form that any department head can sign. A department head approves an AI agent to assist in adverse event reporting — a function that falls under pharmacovigilance obligations with specific regulatory requirements (EU GVP Module VI). The department head has no pharmacovigilance expertise and no delegated authority over pharmacovigilance functions. When a regulatory inspection finds that the agent was miscategorising adverse events, the inspector asks to see the approval chain. The approval was signed by someone without authority over the function, and the approval form did not require verification of the signer's authority scope.

What went wrong: The approval process existed but did not verify that the approver had delegated authority over the function the agent would perform. A form was signed, but the signature carried no legitimate authority. Consequence: EMA regulatory finding, mandatory re-review of 18 months of adverse event reports, potential variation to marketing authorisation, estimated remediation cost of £4.1 million.

4. Requirement Statement

Scope: This dimension applies to every AI agent use-case before it enters design, development, or deployment. A "use-case" is any proposal to deploy an agent — or to materially extend the capabilities of an existing agent — in a context where the agent will interact with production systems, real users, real data, or real-world processes. Proof-of-concept deployments that use production data or production systems are in scope. Internal tool-building where the agent accesses only synthetic data in an isolated sandbox is out of scope, but the moment the sandbox connects to any production resource, the use-case enters scope. The scope extends to third-party agents operated on behalf of the organisation and to agents embedded in purchased software where the organisation has configuration authority.
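The scope rule above reduces to a simple predicate: contact with any production resource or real users brings a use-case into approval scope, while a fully isolated, synthetic-data sandbox stays out. A minimal sketch in Python (the function and parameter names are illustrative assumptions, not a prescribed interface):

```python
def in_scope(touches_production_systems: bool,
             uses_production_data: bool,
             interacts_with_real_users: bool) -> bool:
    """Scope predicate: a use-case enters approval scope the moment it
    touches any production system, production data, or real users; an
    isolated sandbox on synthetic data remains out of scope."""
    return (touches_production_systems
            or uses_production_data
            or interacts_with_real_users)
```

Note that the predicate is deliberately one-way: the moment any argument becomes true — e.g. a sandbox gains a production connection — the use-case enters scope and stays there until re-assessed.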

4.1. A conforming system MUST require a formal use-case approval decision before any agent enters design, development, or deployment against production systems.

4.2. A conforming system MUST define the approval body or role with explicit delegated authority to approve agent use-cases, and MUST verify that the approver has authority over the functional domain the agent will operate in.

4.3. A conforming system MUST require a structured use-case specification as input to the approval decision, including at minimum: the business objective, the agent's permitted action types, the data the agent will access, the systems the agent will interact with, the intended user population, and the risk classification.

4.4. A conforming system MUST record the approval decision, the identity of the approver, the date, the version of the use-case specification reviewed, and any conditions attached to the approval.

4.5. A conforming system MUST treat material changes to an approved use-case — including expansion of action types, data access, user population, or system integrations — as a new approval requirement.

4.6. A conforming system MUST block deployment of any agent use-case that has not received approval, or whose approval has expired or been revoked.

4.7. A conforming system SHOULD require independent review (by a party other than the proposing team) as part of the approval process for use-cases classified as high-risk.

4.8. A conforming system SHOULD integrate use-case approval with the organisation's existing change management and risk acceptance frameworks.

4.9. A conforming system MAY implement tiered approval authorities, with higher-risk use-cases requiring more senior or broader approval.
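Requirements 4.3 through 4.6 together imply a minimal data model and a deployment gate. The following Python sketch is one hedged reading of those requirements — all class and field names are assumptions for illustration, not a mandated format:

```python
from dataclasses import dataclass
from datetime import date
from typing import FrozenSet, Optional, Tuple

# Illustrative schema for the structured use-case specification (4.3).
@dataclass(frozen=True)
class UseCaseSpec:
    business_objective: str
    permitted_action_types: FrozenSet[str]
    data_accessed: FrozenSet[str]
    systems_touched: FrozenSet[str]
    user_population: str
    risk_classification: str  # e.g. "low" | "medium" | "high"
    version: int = 1

# Approval record capturing the fields mandated by 4.4.
@dataclass(frozen=True)
class Approval:
    approver_id: str
    decided_on: date
    spec_version: int
    conditions: Tuple[str, ...] = ()
    expires_on: Optional[date] = None
    revoked: bool = False

    def is_current(self, today: date) -> bool:
        # Revoked or expired approvals no longer authorise anything (4.6).
        return not self.revoked and (
            self.expires_on is None or today <= self.expires_on)

def may_deploy(approval: Optional[Approval],
               spec: UseCaseSpec, today: date) -> bool:
    """Deployment gate (4.6): block unless a current approval covers
    exactly this version of the specification."""
    return (approval is not None
            and approval.is_current(today)
            and approval.spec_version == spec.version)

# Material change test (4.5): expansion of any of these fields
# constitutes a new approval requirement.
_MATERIAL_FIELDS = ("permitted_action_types", "data_accessed",
                    "systems_touched", "user_population")

def requires_reapproval(old: UseCaseSpec, new: UseCaseSpec) -> bool:
    return any(getattr(old, f) != getattr(new, f) for f in _MATERIAL_FIELDS)
```

Tying the approval to a specification version is what closes the Scenario B gap: an approval of version 1 cannot silently authorise a broader version 2.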

5. Rationale

Use-Case Approval Governance addresses the foundational question that precedes all technical governance: should this agent exist at all, and under what conditions? Without a formal approval gate, organisations accumulate agent deployments through organic, uncoordinated decisions — a team builds a prototype, it works, it goes live, and nobody outside the team knows it exists until something goes wrong. This pattern creates ungoverned exposure that scales with the number of teams and the accessibility of agent development tools.

The purpose of use-case approval is not to slow innovation but to ensure that every agent deployment is a conscious, accountable decision. When an agent operates in production — making decisions, accessing data, interacting with customers, or executing transactions — the organisation bears liability for its actions. Liability requires authority; authority requires a traceable decision. An agent that exists without a traceable approval decision is an agent for which nobody has accepted accountability.

This dimension intersects with AG-020 (Purpose-Bound Operation Enforcement) because the approved use-case defines the agent's purpose — the boundaries within which AG-020 enforces operation. Without AG-249, AG-020 enforces boundaries that were never formally approved. It intersects with AG-037 (Objective Alignment Verification) because the approved use-case specifies the objectives against which alignment is verified. It connects to AG-253 (Risk Appetite Binding Governance) because the approval decision must be informed by the organisation's risk appetite — a use-case that falls outside the board-approved risk appetite should not be approvable regardless of its potential value. It also connects to AG-142 (Autonomy Progression) because the approval should specify the initial autonomy level and the conditions under which it may be extended.

Regulatory frameworks increasingly require evidence of deliberate deployment decisions. The EU AI Act requires risk management before deployment. The FCA expects firms to demonstrate governance over AI adoption. ISO 42001 requires documented decisions about AI system deployment. Without formal use-case approval, the organisation cannot produce this evidence.

6. Implementation Guidance

Use-case approval operates as a gate in the agent lifecycle. No agent progresses past the gate without a documented decision. The gate has defined inputs (the use-case specification), a defined decision-maker (the approval body), and defined outputs (approval, conditional approval, or rejection with reasons).
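The gate's three defined outputs — approval, conditional approval, or rejection with reasons — can be sketched as a small decision function. This is an illustrative sketch, assuming two boolean gate inputs and a list of outstanding conditions; real gates would take the full specification as input:

```python
from enum import Enum
from typing import List, Tuple

class Decision(Enum):
    APPROVED = "approved"
    CONDITIONAL = "conditionally approved"
    REJECTED = "rejected"

def decide(spec_complete: bool, within_risk_appetite: bool,
           outstanding_conditions: List[str]) -> Tuple[Decision, List[str]]:
    """Gate decision: defined inputs in, one of three defined outputs
    out, always with reasons so a rejection is auditable."""
    if not spec_complete:
        return Decision.REJECTED, ["use-case specification incomplete"]
    if not within_risk_appetite:
        # Per AG-253, value does not override board-approved risk appetite.
        return Decision.REJECTED, ["outside board-approved risk appetite"]
    if outstanding_conditions:
        return Decision.CONDITIONAL, list(outstanding_conditions)
    return Decision.APPROVED, []
```

Returning reasons alongside the decision is the design choice that matters: a bare "rejected" flag cannot support the documented rationale that the intermediate maturity level requires.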

Recommended patterns:

- Mandate a structured use-case specification template covering every element in 4.3, so the level of detail does not vary by submitter.
- Tier approval authority by risk classification, and verify the approver's delegated authority over the functional domain before accepting the signature.
- Maintain a central registry of approved use-cases recording status, attached conditions, specification version, and expiry date.
- Treat material changes — new action types, broader data access, expanded user populations, additional system integrations — as triggers for re-approval.
- Require independent review, by a party outside the proposing team, for use-cases classified as high-risk.

Anti-patterns to avoid:

- Approving a concept ("optimise delivery routes") rather than a bounded set of permitted actions, data access, and decision authorities (Scenario B).
- Accepting any signature as approval without verifying the signer's authority over the function the agent will perform (Scenario C).
- Recording approvals only in meeting minutes or email threads, with no registry, versioning, or expiry mechanism.
- Treating "prototype" status as an exemption once the agent touches production systems, data, or users (Scenario A).

Industry Considerations

Financial Services. Use-case approval should integrate with the firm's new product approval process (NPAP). The FCA expects firms to treat AI agent deployments with the same rigour as new product launches when the agent interacts with customers or makes decisions affecting client outcomes. The approval should address conduct risk, market abuse risk, and operational risk. For agents with trading or payment authority, the approval must specify exact financial limits that flow into AG-001 mandates.

Healthcare. Use-case approval must address patient safety, clinical governance, and regulatory classification. An agent that provides clinical decision support may be classified as a medical device under MDR 2017/745, triggering CE marking obligations. The approval process must identify this classification early — before development resources are committed — because medical device classification fundamentally changes the development, testing, and deployment requirements. The approval body must include clinical governance representation.

Public Sector. Use-case approval must address equality impact, human rights impact, and democratic accountability. An agent that influences decisions about individuals (benefits eligibility, enforcement priorities, resource allocation) requires impact assessment under the Equality Act 2010 and may trigger the automated decision-making requirements of UK GDPR Article 22. The approval must specify whether the agent makes decisions or recommendations, and what human review applies.

Maturity Model

Basic Implementation — The organisation has a use-case approval form that must be completed before agent deployment. A designated approver signs the form. The form captures the business objective and a high-level description of the agent's capabilities. Approved forms are stored in a document repository. There is no structured template — the level of detail varies by submitter. There is no registry of active approvals and no expiry mechanism. This level prevents wholly unapproved deployments but does not ensure consistency, specificity, or ongoing validity of approvals.

Intermediate Implementation — A structured use-case specification template is mandatory, covering all elements listed in the recommended patterns above. Approval authority is tiered by risk classification. A central registry tracks all approved use-cases with status, conditions, and expiry dates. Material changes trigger re-approval. The approval process includes independent review for high-risk use-cases. Approval decisions include documented rationale. The registry is reviewed quarterly to identify expired or conditional approvals requiring follow-up.
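The quarterly registry review at this level is essentially a sweep for expired approvals and undischarged conditions. A hedged sketch, assuming registry entries are simple mappings with `use_case_id`, `expires_on`, and `open_conditions` keys (the schema is an assumption for illustration):

```python
from datetime import date
from typing import Iterable, List, Mapping

def approvals_needing_followup(registry: Iterable[Mapping],
                               today: date) -> List[str]:
    """Quarterly registry sweep: flag approvals that have passed their
    expiry date or still carry undischarged conditions."""
    flagged = []
    for entry in registry:
        expired = (entry["expires_on"] is not None
                   and entry["expires_on"] < today)
        if expired or entry["open_conditions"]:
            flagged.append(entry["use_case_id"])
    return flagged
```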

Advanced Implementation — All intermediate capabilities plus: the approval process is integrated with the organisation's enterprise risk management framework and change management system. Automated checks verify that the approver has delegated authority over the relevant function. The use-case registry feeds real-time dashboards showing portfolio composition, risk distribution, and approval pipeline. Pre-approval risk scoring uses quantitative models calibrated to the organisation's loss history. The approval process has defined SLAs (e.g., low-risk decisions within 5 business days, high-risk within 20 business days) to prevent governance becoming a bottleneck. Post-approval audits verify that deployed agents match approved specifications, feeding back into the approval process to improve specification quality.
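Two of the advanced capabilities — automated authority verification and post-approval audit — can be reduced to set operations. A minimal sketch under assumed inputs (delegated domains and action types modelled as string sets; real implementations would draw these from an authority register and production telemetry):

```python
from typing import List, Set

def approver_has_authority(approver_domains: Set[str],
                           agent_domain: str) -> bool:
    """Automated authority check (4.2): the approver's delegated
    functional domains must cover the domain the agent will operate
    in — the gap Scenario C's pharmacovigilance sign-off failed to close."""
    return agent_domain in approver_domains

def audit_deployment(approved_actions: Set[str],
                     observed_actions: Set[str]) -> List[str]:
    """Post-approval audit: actions observed in production that were
    never approved indicate drift between the approved specification
    and the deployed agent."""
    return sorted(observed_actions - approved_actions)
```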

7. Evidence Requirements

Required artefacts:

- The versioned use-case specification reviewed at approval, covering every element in 4.3 (business objective, permitted action types, data access, systems, user population, risk classification).
- The approval record required by 4.4: decision, approver identity, date, specification version reviewed, and any attached conditions.
- The registry entry for the use-case showing current status, expiry date, and the discharge status of any conditions.

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Unapproved Deployment Blocking — Attempt to deploy an agent use-case with no approval record. Expected: deployment is blocked (4.6).

Test 8.2: Approval Authority Verification — Submit an approval signed by a party without delegated authority over the agent's functional domain. Expected: the approval is rejected or flagged before it takes effect (4.2).

Test 8.3: Specification Completeness Enforcement — Submit a use-case specification missing one of the mandatory elements in 4.3. Expected: the approval process refuses to proceed until the specification is complete.

Test 8.4: Material Change Re-Approval Trigger — Expand an approved agent's action types, data access, user population, or system integrations. Expected: the change is classified as material and a new approval is required before deployment (4.5).

Test 8.5: Approval Expiry Enforcement — Allow an approval to pass its expiry date. Expected: further deployment under the expired approval is blocked until re-approval (4.6).

Test 8.6: Conditional Approval Tracking — Grant a conditional approval. Expected: the conditions are recorded with the approval decision, tracked in the registry, and flagged until discharged (4.4).

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Direct requirement
EU AI Act | Article 16 (Obligations of Providers) | Supports compliance
FCA SYSC | 3.2.20R (New Product Approval Process) | Direct requirement
ISO 42001 | Clause 6.1 (Actions to Address Risks) | Direct requirement
NIST AI RMF | GOVERN 1.2, MAP 1.1 | Supports compliance
UK GDPR | Article 35 (Data Protection Impact Assessment) | Supports compliance
DORA | Article 5 (ICT Governance) | Supports compliance

EU AI Act — Article 9 (Risk Management System)

Article 9 requires that a risk management system be established, implemented, documented, and maintained for high-risk AI systems. The risk management system must be a continuous, iterative process that covers the entire lifecycle. Use-case approval is the first lifecycle gate — the point at which the organisation formally decides to develop and deploy a system. Without a documented approval decision, the organisation cannot demonstrate that it has a risk management system covering the "before deployment" phase. The Act requires that risk management measures be proportionate to the level of risk; tiered approval authority directly implements this proportionality requirement.

FCA SYSC — 3.2.20R (New Product Approval Process)

SYSC 3.2.20R requires firms to have a new product approval process that identifies and manages risks arising from new products. The FCA has indicated through supervisory engagement that AI agent deployments constitute new products when they interact with customers or influence customer outcomes. The use-case approval process should integrate with or mirror the firm's existing NPAP, ensuring that agent deployments receive equivalent scrutiny to traditional product launches.

ISO 42001 — Clause 6.1

Clause 6.1 requires organisations to determine actions to address risks and opportunities in the AI management system. Formal use-case approval is the mechanism by which the organisation exercises deliberate risk acceptance or rejection for each agent deployment, directly satisfying the requirement for documented risk-based decision-making.

UK GDPR — Article 35

Where an agent use-case involves processing of personal data likely to result in high risk to individuals, a Data Protection Impact Assessment is required. The use-case approval process should include DPIA as a mandatory input for use-cases involving personal data processing, ensuring that data protection obligations are addressed before development resources are committed.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Organisation-wide — unapproved agents can proliferate across any function with access to agent development tools

Consequence chain: Without use-case approval governance, agent deployments accumulate through uncoordinated decisions across the organisation. Each unapproved agent represents unquantified risk exposure — regulatory, operational, reputational, and legal. The failure mode is not a single catastrophic event but a gradual accumulation of ungoverned exposure that becomes visible only when an incident occurs. At that point, the organisation discovers it cannot demonstrate who approved the agent, what it was approved to do, or whether anyone with authority over the function consented to the deployment. The regulatory consequence is severe: an inability to demonstrate governance over AI adoption triggers supervisory action, potential enforcement proceedings, and in regulated sectors, personal liability for senior managers. The strategic consequence is equally significant: without a central view of approved use-cases, the organisation cannot manage its agent portfolio, leading to redundant investments, conflicting agents, and unmanaged dependencies per AG-250 (Portfolio Concentration Governance).

Cross-references: AG-020 (Purpose-Bound Operation Enforcement) provides the enforcement mechanism for the boundaries defined by the approved use-case. AG-037 (Objective Alignment Verification) verifies that the agent's operation remains aligned with the objectives specified in the approved use-case. AG-142 (Autonomy Progression) governs how the agent's autonomy level may evolve after initial approval. AG-250 (Portfolio Concentration Governance) addresses concentration risks across the portfolio of approved use-cases. AG-253 (Risk Appetite Binding Governance) ensures that approved use-cases fall within board-approved risk appetite. AG-254 (Sunset Review Governance) manages the expiry and re-approval cycle. AG-256 (Shadow AI Discovery Governance) detects agents deployed without approval.

Cite this protocol
AgentGoverning. (2026). AG-249: Use-Case Approval Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-249