AG-257

Use-Case Prioritisation Governance

Strategy, Portfolio & Use-Case Governance · AGS v2.1 · April 2026
Regulatory tags: EU AI Act · FCA · NIST · ISO 42001

2. Summary

Use-Case Prioritisation Governance requires organisations to allocate agent implementation effort using explicit value, safety, and compliance criteria rather than on a first-come-first-served basis, by loudest voice, or by political seniority. When an organisation has more approved agent use-cases than it can implement simultaneously, it must prioritise based on a transparent, criteria-driven framework that considers expected value delivery, risk and safety impact, compliance urgency, implementation feasibility, and strategic alignment. This dimension prevents the common failure where agent implementation priorities are driven by which department has the most political influence rather than which use-cases deliver the most value per unit of risk and governance cost.

3. Example

Scenario A — Political Prioritisation Over Value-Based Prioritisation: A retail bank has 14 approved agent use-cases and capacity to implement 5 in the current planning cycle. The Chief Commercial Officer insists that the 3 use-cases from the commercial banking division be implemented first, citing revenue importance. The compliance team's use-case for automated regulatory reporting — which would eliminate a manual process consuming 4 FTEs and carrying significant regulatory risk — is deprioritised. The data science team's use-case for fraud detection enhancement — projected to prevent £1.8 million in annual fraud losses — is also deprioritised. Six months later, the bank receives a regulatory fine of £420,000 for errors in manual regulatory reporting (the process the deprioritised agent would have automated), and fraud losses are £940,000 higher than they would have been had the fraud-detection agent been operational.

What went wrong: No criteria-driven prioritisation framework existed. Implementation priorities were determined by political influence rather than value, risk, and compliance analysis. The use-cases that would have delivered the highest risk-adjusted value were deprioritised in favour of commercially visible but lower-value use-cases. Consequence: £420,000 regulatory fine, £940,000 incremental fraud losses, total £1.36 million in avoidable costs.

Scenario B — Compliance-Urgency Ignored in Favour of Revenue Generation: A healthcare technology company has 8 approved agent use-cases. A new regulation requires automated adverse event detection in post-market surveillance data within 18 months. The compliance use-case is approved and ready for implementation. However, the company prioritises 3 revenue-generating use-cases (AI-powered product features) because the revenue impact is immediately quantifiable. The compliance use-case is deferred to "the next planning cycle." When the regulatory deadline arrives, the company has not implemented the agent and must hire 12 temporary staff at a cost of £780,000 to perform manual adverse event detection while the agent is built — a process that takes 6 additional months. The regulatory authority issues a warning letter citing the firm's failure to have automated surveillance in place by the deadline.

What went wrong: The prioritisation framework did not weight compliance urgency appropriately. Revenue-generating use-cases were prioritised because their benefits were easily quantified, while the compliance use-case's benefit (avoiding regulatory non-compliance) was harder to express in monetary terms. Consequence: £780,000 in temporary staffing costs, regulatory warning letter, 6-month gap in automated surveillance, and potential future enforcement action.

Scenario C — Transparent Prioritisation Delivers Optimal Outcomes: A government department has 11 approved use-cases and capacity for 4. The department applies a weighted scoring model: value delivery (30%), risk reduction (25%), compliance urgency (20%), implementation feasibility (15%), strategic alignment (10%). Each use-case is scored by a cross-functional panel including technology, compliance, operations, and finance representatives. The top 4 use-cases include: 1 compliance-critical agent (regulatory reporting, compliance urgency score: 95/100), 1 high-value operational agent (citizen query handling, value score: 88/100), 1 risk-reduction agent (fraud detection, risk score: 82/100), and 1 strategically aligned agent (cross-departmental data analysis, strategic score: 90/100). The scoring and rationale are documented and communicated to all proposing teams.

What went right: The prioritisation was criteria-driven, transparent, and balanced across value, risk, compliance, and strategy. Teams whose use-cases were not selected understood the rationale and could improve their proposals for the next cycle.
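As a rough illustration of the arithmetic behind Scenario C, the sketch below (in Python) applies the department's published weights to one use-case's panel scores. Only the weights and the quoted compliance score of 95/100 are taken from the scenario; the remaining scores are hypothetical assumptions, and a real scoring sheet would cover all eleven use-cases.

# Scenario C weights (from the text); panel scores are hypothetical except the
# compliance score of 95 quoted for the regulatory reporting use-case.
WEIGHTS = {"value": 0.30, "risk": 0.25, "compliance": 0.20, "feasibility": 0.15, "strategy": 0.10}

def weighted_total(scores: dict[str, float]) -> float:
    """Weighted sum of 0-100 criterion scores; the result stays on a 0-100 scale."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Hypothetical score vector for the regulatory reporting use-case.
regulatory_reporting = {"value": 60, "risk": 70, "compliance": 95, "feasibility": 65, "strategy": 55}
print(weighted_total(regulatory_reporting))  # 0.30*60 + 0.25*70 + 0.20*95 + 0.15*65 + 0.10*55 = 69.75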

4. Requirement Statement

Scope: This dimension applies to any organisation with more approved agent use-cases (AG-249) than it can implement simultaneously. When implementation capacity is unconstrained — every approved use-case can be implemented immediately — prioritisation governance is not required. In practice, every organisation faces capacity constraints: engineering time, governance bandwidth, testing resources, or budget limitations. The scope extends to both initial prioritisation (which approved use-cases to implement first) and ongoing reprioritisation (how priorities change as conditions evolve). The scope includes both new use-cases and major modifications to existing agents that require significant implementation effort.

4.1. A conforming system MUST define a prioritisation framework with explicit, weighted criteria for ranking agent use-cases for implementation.

4.2. A conforming system MUST include at minimum the following criteria in the prioritisation framework: expected value delivery, risk and safety impact, compliance urgency, implementation feasibility, and strategic alignment.

4.3. A conforming system MUST apply the prioritisation framework to all approved use-cases competing for implementation capacity, producing a transparent priority ranking.

4.4. A conforming system MUST document the prioritisation decision including the scoring methodology, the scores assigned, the resulting ranking, and the rationale for any overrides.

4.5. A conforming system MUST review priorities at least quarterly and whenever a significant change affects the scoring (e.g., new regulatory deadline, incident in a related function, budget change).

4.6. A conforming system SHOULD use a cross-functional panel to apply the scoring criteria, ensuring that no single function dominates the prioritisation.

4.7. A conforming system SHOULD incorporate historical benefit realisation data (AG-255) into the scoring — use-cases in functions where previous agents have delivered strong benefit realisation should score higher on feasibility and value certainty.

4.8. A conforming system SHOULD define a "fast track" for compliance-critical use-cases with regulatory deadlines, allowing them to bypass the standard prioritisation queue when the compliance urgency is demonstrated.

4.9. A conforming system MAY publish the prioritisation queue to all proposing teams, providing transparency into where their use-cases rank and what would need to change for the ranking to improve.
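Requirements 4.3 through 4.8 are easier to satisfy when each prioritisation decision is captured as a structured record rather than reconstructed from meeting minutes. The Python sketch below shows one possible shape for such a record; the field names, types, and the fast-track mapping are illustrative assumptions rather than anything prescribed by this protocol.

# Hypothetical structure for a documented prioritisation decision.
# Field names are illustrative; comments map them to requirements 4.3-4.8.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class UseCaseScore:
    use_case_id: str                    # e.g. a reference to the AG-249 approval record
    criterion_scores: dict[str, float]  # criterion name -> 0-100 panel score
    weighted_total: float               # weighted sum under the published weights
    rank: int                           # position in the resulting priority ranking (4.3)

@dataclass
class PrioritisationDecision:
    cycle: str                          # planning cycle the ranking applies to
    decision_date: date
    weights: dict[str, float]           # scoring methodology: criteria and weights (4.4)
    panel_members: list[str]            # cross-functional panel composition (4.6)
    scores: list[UseCaseScore]          # scores assigned and resulting ranking (4.4)
    overrides: dict[str, str] = field(default_factory=dict)     # use_case_id -> override rationale (4.4)
    fast_tracked: dict[str, str] = field(default_factory=dict)  # use_case_id -> compliance urgency evidence (4.8)
    next_review: date | None = None     # reviewed at least quarterly (4.5)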

5. Rationale

Agent implementation capacity is finite. Even organisations with large technology budgets face constraints on engineering time, governance bandwidth, testing resources, and production infrastructure. When more use-cases are approved than can be implemented, prioritisation happens — the only question is whether it happens explicitly, based on defined criteria, or implicitly, based on political influence, squeaky wheels, or arbitrary sequencing.

Implicit prioritisation systematically favours the wrong use-cases. Departments with more political influence get their agents implemented first, regardless of value. Revenue-generating use-cases are prioritised over risk-reduction and compliance use-cases because revenue is easier to quantify. Technically simple use-cases are prioritised over high-value complex ones because quick wins are politically attractive. The result is a portfolio that reflects organisational politics rather than strategic value optimisation.

Explicit prioritisation does not eliminate difficult tradeoffs — it makes them visible, debatable, and accountable. When a compliance-critical use-case is deprioritised in favour of a revenue use-case, the decision and its rationale are documented. The responsible decision-maker is identifiable. The risk acceptance is explicit. This accountability changes behaviour: decision-makers who must document and justify their prioritisation choices make better choices.

This dimension connects to AG-255 (Benefit Realisation Tracking Governance) because historical benefit data should calibrate value projections in the scoring. It connects to AG-253 (Risk Appetite Binding Governance) because the risk criterion should reflect the contribution to the organisation's risk profile. It connects to AG-251 (Strategic Fit and Substitution Governance) because feasibility scoring should consider whether simpler alternatives exist.

6. Implementation Guidance

Prioritisation governance requires a repeatable, transparent process that produces defensible rankings. The process should be rigorous enough to prevent political capture but lightweight enough to operate within normal planning cycles.

Recommended patterns:

Criterion | Weight | Description
Expected value delivery | 25% | Projected net benefit (AG-255 methodology) calibrated by historical realisation rates
Risk reduction | 20% | Reduction in operational, financial, or safety risk from implementing the agent
Compliance urgency | 20% | Proximity to regulatory deadlines, severity of compliance gaps, enforcement risk
Implementation feasibility | 15% | Technical readiness, data availability, integration complexity, team capability
Strategic alignment | 10% | Alignment with organisational strategy, board priorities, and multi-year roadmap
Governance readiness | 10% | Readiness to meet governance requirements (monitoring, testing, evidence) at deployment

Each criterion is scored 0-100 by the cross-functional panel. Weighted scores produce a total. Use-cases are ranked by total score. The model should be calibrated periodically — if the model consistently produces rankings that the governance body overrides, the weights or criteria need adjustment.
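A minimal sketch of the scoring mechanics described above, including the AG-255 calibration recommended in 4.7: raw value scores are scaled by the proposing function's historical benefit realisation rate before the weighted total is computed. The weights follow the table above; the realisation rates, use-case names, and score vectors are illustrative assumptions rather than data from any real portfolio.

# Weighted scoring with value projections calibrated by historical benefit
# realisation rates (AG-255). All figures are illustrative assumptions.

WEIGHTS = {
    "value": 0.25, "risk": 0.20, "compliance": 0.20,
    "feasibility": 0.15, "strategy": 0.10, "governance_readiness": 0.10,
}

# Historical realisation rate per proposing function: delivered / projected benefit.
REALISATION_RATE = {"commercial": 0.55, "operations": 0.90, "compliance": 0.85}

def calibrated_value_score(raw_value_score: float, function: str) -> float:
    """Scale the raw value score by the function's historical realisation rate (4.7)."""
    return raw_value_score * REALISATION_RATE.get(function, 0.70)  # conservative default where no history exists

def total_score(scores: dict[str, float], function: str) -> float:
    """Weighted total on a 0-100 scale, using the calibrated value score."""
    adjusted = dict(scores, value=calibrated_value_score(scores["value"], function))
    return sum(WEIGHTS[c] * adjusted[c] for c in WEIGHTS)

use_cases = [
    ("Revenue feature agent", "commercial",
     {"value": 90, "risk": 30, "compliance": 20, "feasibility": 80, "strategy": 70, "governance_readiness": 60}),
    ("Adverse event detection agent", "compliance",
     {"value": 60, "risk": 75, "compliance": 95, "feasibility": 55, "strategy": 65, "governance_readiness": 70}),
]

# Rank by calibrated weighted total; the commercial function's weaker realisation
# history reduces the revenue use-case's value contribution before ranking.
for name, function, scores in sorted(use_cases, key=lambda u: total_score(u[2], u[1]), reverse=True):
    print(f"{name}: {total_score(scores, function):.1f}")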

Anti-patterns to avoid:

Industry Considerations

Financial Services. Financial services firms should weight compliance urgency heavily given the density of regulatory obligations and the severity of enforcement action. The FCA's expectations for implementation of new regulations (e.g., Consumer Duty, operational resilience) create hard deadlines that should fast-track compliance use-cases. The firm's Internal Capital Adequacy Assessment may also inform prioritisation — use-cases that reduce operational risk capital requirements deliver measurable financial value through capital release.

Healthcare. Healthcare prioritisation should weight patient safety and clinical governance heavily. A clinical decision support agent that could prevent adverse events should rank above an administrative efficiency agent, even if the administrative agent has a clearer financial return. Patient safety is not readily quantifiable in monetary terms, but the scoring framework should include it as a distinct criterion weighted appropriately.

Public Sector. Public sector prioritisation should include citizen impact as a criterion: how many citizens will benefit, how significantly, and how equitably. The Equality Act 2010 may require that use-cases serving underrepresented groups receive additional weighting to ensure equitable investment. Value-for-money considerations (HM Treasury Green Book) should also feature prominently.

Maturity Model

Basic Implementation — The organisation has an informal process for deciding which approved use-cases to implement first. Priorities are determined in planning meetings without structured criteria. No scoring model exists. Documentation of prioritisation decisions is limited to meeting minutes. This level produces decisions but not accountable, criteria-driven decisions.

Intermediate Implementation — A weighted scoring model with defined criteria is applied to all competing use-cases. A cross-functional panel scores and ranks use-cases. Priorities are reviewed quarterly. Overrides are documented with rationale. A compliance fast track exists for regulatory deadlines. Backlog items are re-scored each cycle. The scoring model is communicated to all proposing teams.

Advanced Implementation — All intermediate capabilities plus: historical benefit realisation data calibrates value projections. Governance capacity is an explicit constraint in the prioritisation model. The scoring model is periodically calibrated based on post-implementation outcomes (use-cases that scored high on value but delivered low benefit trigger model adjustment). Priority dashboards provide real-time visibility into the queue. The organisation can demonstrate that its agent portfolio composition reflects strategic value optimisation, not political influence.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Scoring Framework Application Completeness

Test 8.2: Cross-Functional Panel Composition

Test 8.3: Override Documentation Verification

Test 8.4: Quarterly Review Execution

Test 8.5: Compliance Fast-Track Justification

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
HM Treasury | Managing Public Money — Value for Money | Supports compliance
EU AI Act | Article 9 (Risk Management System) | Supports compliance
FCA | SYSC 6.1.1R (Adequate Policies and Procedures) | Supports compliance
ISO 42001 | Clause 6.1 (Actions to Address Risks) | Supports compliance
NIST AI RMF | GOVERN 1.2, MAP 1.1 | Supports compliance
UK Equality Act 2010 | Section 149 (Public Sector Equality Duty) | Supports compliance

HM Treasury — Managing Public Money

For public sector organisations, prioritisation of technology investments must demonstrate value for money. A criteria-driven prioritisation framework provides the evidence that implementation resources were allocated to maximise public value. The National Audit Office would expect to see documented prioritisation criteria, scoring, and decisions — not just a list of what was implemented.

UK Equality Act 2010 — Section 149

The Public Sector Equality Duty requires public authorities to have due regard to the need to advance equality of opportunity. When prioritising agent use-cases, public sector organisations should consider whether the prioritisation framework adequately weights citizen impact and equality considerations. A framework that consistently deprioritises use-cases serving disadvantaged groups in favour of internal efficiency use-cases may fail to meet the duty's requirements.

10. Failure Severity

Field | Value
Severity Rating | Medium
Blast Radius | Portfolio-wide — poor prioritisation affects the value delivery, risk posture, and compliance standing of the entire agent portfolio

Consequence chain: Without criteria-driven prioritisation, agent implementation resources are allocated based on political influence rather than value, risk, and compliance. The portfolio that results reflects organisational politics rather than strategic optimisation. High-value, high-risk-reduction, and compliance-critical use-cases are deprioritised. The financial consequence is the opportunity cost of implementing lower-value use-cases before higher-value ones — the difference between optimal and actual portfolio value can be substantial (as illustrated in Scenario A: £1.36 million in avoidable costs). The compliance consequence is that regulatory deadlines are missed because compliance use-cases were deprioritised in favour of revenue use-cases. The strategic consequence is that the organisation's agent portfolio does not align with its stated strategy, undermining confidence in the governance framework's ability to deliver strategic value.

Cross-references: AG-249 (Use-Case Approval Governance) produces the approved use-cases that enter the prioritisation queue. AG-255 (Benefit Realisation Tracking Governance) provides historical data that calibrates value projections. AG-253 (Risk Appetite Binding Governance) informs the risk scoring criterion. AG-251 (Strategic Fit and Substitution Governance) informs the feasibility scoring criterion. AG-045 (Economic Incentive Alignment Verification) ensures that economic factors are accurately reflected in value scoring.

Cite this protocol
AgentGoverning. (2026). AG-257: Use-Case Prioritisation Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-257