Strategic Fit and Substitution Governance requires organisations to assess whether an AI agent, rather than a simpler non-agent alternative, is the appropriate solution for a given business problem. Before committing to agent deployment, the organisation must demonstrate that the problem genuinely requires agent capabilities — autonomy, multi-step reasoning, tool use, or adaptive interaction — and that deterministic automation, rule-based systems, simple API integrations, or manual processes with better tooling would not achieve equivalent or superior outcomes at lower risk and governance cost. This dimension prevents the reflexive application of agent technology to problems that do not require it, ensuring that the governance overhead inherent in agent deployments is justified by genuine capability requirements.
Scenario A — Agent Deployed Where a Lookup Table Suffices: A telecommunications company deploys an AI agent to recommend mobile phone plans to customers based on their stated usage patterns. The agent uses a large language model to interpret customer descriptions ("I use about 5 GB of data and make maybe 30 calls a month") and recommend a plan. The company has 6 mobile plans with clearly defined tiers. A simple decision tree — or even a comparison table on the website — would produce the same recommendations with no governance overhead, no hallucination risk, and no model API cost. The agent occasionally recommends plans that do not exist (confusing promotional plans from competitor marketing material in its training data), generating 45 customer complaints per month and requiring a full-time employee to review agent recommendations.
What went wrong: No substitution analysis was performed. The problem — mapping stated usage to one of 6 plans — did not require natural language reasoning, autonomy, or adaptive behaviour. A deterministic solution would have been cheaper, more reliable, and governance-free. Consequence: £180,000 annual model API cost for a function that a static decision tree could perform for free, 540 customer complaints per year from hallucinated plan recommendations, 1 FTE dedicated to reviewing agent outputs, reputational risk from recommending non-existent products.
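For illustration only, the deterministic alternative described above is small enough to sketch in full. A minimal Python sketch, assuming hypothetical plan names and tier thresholds (the scenario does not specify the company's six actual plans):

```python
# Minimal sketch of the deterministic alternative in Scenario A. The plan
# names and tier thresholds below are hypothetical; the scenario does not
# specify the company's six actual plans.
PLANS = [
    # (max data per month in GB, max calls per month, plan name)
    (2,    50,   "Essential 2GB"),
    (5,    100,  "Standard 5GB"),
    (10,   200,  "Plus 10GB"),
    (20,   500,  "Max 20GB"),
    (50,   1000, "Pro 50GB"),
    (9999, 9999, "Unlimited"),
]

def recommend_plan(data_gb: float, calls_per_month: int) -> str:
    """Return the first (cheapest) plan whose tier covers the stated usage."""
    for max_data, max_calls, name in PLANS:
        if data_gb <= max_data and calls_per_month <= max_calls:
            return name
    return PLANS[-1][2]  # defensive fallback; the catch-all tier covers everything

# The customer from the scenario: "about 5 GB of data and maybe 30 calls a month"
print(recommend_plan(5, 30))  # -> Standard 5GB
```

The six tiers mirror the scenario's six plans; the only interpretation step left, turning a free-text usage description into two numbers, is a web form rather than a language model.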
Scenario B — Deterministic Alternative Attempted Where an Agent Is Genuinely Needed: A law firm attempts to automate contract review using a rule-based system with regex pattern matching and keyword detection. The system flags clauses containing specific keywords ("indemnity," "limitation of liability," "force majeure") and presents them to lawyers. However, the system cannot understand clause meaning — it flags every mention of "indemnity" regardless of whether the clause favours or disfavours the client, cannot identify unusual risk allocation in novel clause structures, and misses risks expressed without the expected keywords. Lawyers spend as much time reviewing false positives and false negatives as they would reviewing the contracts manually. After 8 months and £420,000 in development costs, the firm abandons the rule-based system and deploys an AI agent that can reason about clause meaning, identify unusual terms relative to market standard, and produce structured risk summaries.
What went wrong: The initial substitution assessment concluded that a non-agent solution was sufficient without adequately analysing the problem's complexity. Contract review genuinely requires natural language understanding, contextual reasoning, and comparison against learned norms — capabilities that are agent-appropriate. A proper substitution analysis would have identified this upfront. Consequence: the firm eventually reached the right solution, but only after wasting £420,000 and 8 months on an inappropriate alternative. The substitution analysis should work both ways — it should prevent unnecessary agent deployments and prevent unnecessary avoidance of agents where they are genuinely needed.
Scenario C — Agent Deployed for Prestige Rather Than Fit: A government department deploys an AI agent to answer citizen queries about bin collection schedules. The department receives approximately 200 such queries per month, all answerable from a structured database of collection schedules by postcode. The agent deployment costs £95,000 per year in API costs, hosting, and governance overhead. A simple postcode lookup on the department's website would cost approximately £2,000 per year to maintain and would produce accurate, consistent results with no hallucination risk. The agent was deployed because the department's digital strategy included a target of "deploying AI across citizen-facing services" — the deployment was driven by a strategic objective to demonstrate AI adoption rather than a genuine problem-fit analysis.
What went wrong: The deployment decision was driven by a desire to adopt AI rather than by an assessment of whether the problem required AI. The substitution analysis was either not performed or performed pro forma with a predetermined conclusion. Consequence: £93,000 per year in unnecessary cost, governance overhead for a function that requires no governance, and occasional incorrect bin collection information when the agent hallucinates (e.g., citing bank holiday schedule changes that do not apply to the queried postcode).
Scope: This dimension applies to every proposed agent use-case during the approval process (AG-249). Before an agent use-case can be approved, the proposing team must demonstrate that agent technology is the appropriate solution. The scope extends to periodic reassessment of deployed agents — as simpler alternatives improve (e.g., improved workflow automation tools, better structured data APIs), a use-case that genuinely required an agent in 2025 may not require one in 2027. The scope also includes the reverse assessment: ensuring that problems genuinely requiring agent capabilities are not artificially constrained to non-agent solutions due to organisational risk aversion.
4.1. A conforming system MUST require a substitution analysis as a mandatory input to the use-case approval process (AG-249), assessing whether the proposed agent use-case could be adequately addressed by a simpler non-agent alternative.
4.2. A conforming system MUST define the criteria for "genuinely requires agent capabilities" — the specific capability requirements (e.g., natural language understanding, multi-step reasoning, adaptive behaviour, tool orchestration) that justify agent deployment over deterministic alternatives.
4.3. A conforming system MUST document the substitution analysis conclusion with specific rationale, identifying which agent-specific capabilities are required and why non-agent alternatives are inadequate.
4.4. A conforming system MUST require that the substitution analysis considers the total cost of ownership including governance overhead — not just the development cost but the ongoing cost of monitoring, testing, audit, and incident response that agent deployments require under this governance framework.
4.5. A conforming system MUST reassess strategic fit at each sunset review (AG-254) to determine whether simpler alternatives have become viable since the original approval.
4.6. A conforming system SHOULD require a comparative analysis showing at least one non-agent alternative evaluated against the proposed agent solution on defined criteria (accuracy, cost, governance overhead, time-to-deploy, maintainability); a sketch of one such comparison follows this list.
4.7. A conforming system SHOULD define a "complexity threshold" below which agent deployment is presumptively inappropriate — for example, problems solvable by deterministic lookup, fixed decision trees, or static content delivery.
4.8. A conforming system MAY implement a "prove the agent" pilot process where both the agent and the leading non-agent alternative are trialled before the full deployment decision.
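To make 4.6 and 4.7 concrete, the sketch below shows one possible shape for the comparative analysis and the complexity-threshold gate. The criteria come from 4.6 and the capability flags from 4.2 and 4.7; the weights, scoring scale, and verdict logic are illustrative assumptions, not requirements of this dimension.

```python
from dataclasses import dataclass

# Criteria taken from requirement 4.6; the weights are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "accuracy": 0.30,
    "cost": 0.25,
    "governance_overhead": 0.20,
    "time_to_deploy": 0.10,
    "maintainability": 0.15,
}

@dataclass
class CapabilityNeeds:
    """Capability flags echoing requirements 4.2 and 4.7. If none hold, the
    use-case sits below the complexity threshold and agent deployment is
    presumptively inappropriate."""
    natural_language_understanding: bool
    multi_step_reasoning: bool
    adaptive_behaviour: bool
    tool_orchestration: bool

    def above_complexity_threshold(self) -> bool:
        return any([
            self.natural_language_understanding,
            self.multi_step_reasoning,
            self.adaptive_behaviour,
            self.tool_orchestration,
        ])

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10, higher is better) into one number."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

def substitution_verdict(needs: CapabilityNeeds,
                         agent_scores: dict[str, float],
                         alternative_scores: dict[str, float]) -> str:
    if not needs.above_complexity_threshold():
        return "REJECT: below complexity threshold (4.7); use the non-agent alternative"
    if weighted_score(alternative_scores) >= weighted_score(agent_scores):
        return "REJECT: the non-agent alternative scores at least as well (4.6)"
    return "PROCEED: agent capabilities required and the comparative analysis favours the agent"
```

Under a gate of this shape, Scenarios A and C would be rejected at the first check (no capability flag set), while Scenario B's contract review would pass it.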
The availability of powerful AI agent platforms creates an incentive to apply agent technology to every problem. Agent technology is genuinely transformative for problems requiring natural language understanding, multi-step reasoning, adaptive behaviour, or complex tool orchestration. But not every business problem has these characteristics. Many problems that are framed as requiring "AI" are actually structured data lookups, deterministic decision trees, templated workflows, or simple API integrations.
Deploying an agent where a simpler solution suffices creates unnecessary risk and cost. Every agent deployment carries governance obligations under this framework — monitoring, testing, audit, incident response, sunset review. These obligations are justified when the agent provides capabilities that no simpler alternative can match. They are unjustified waste when the agent is performing a function that a database query or a rules engine could perform more reliably, more cheaply, and with no governance overhead.
The substitution analysis also works in reverse. Some organisations, having experienced agent-related incidents, become excessively cautious and reject agent proposals for problems that genuinely require agent capabilities. This leads to expensive, brittle rule-based systems that fail when confronted with the variability and complexity that an agent could handle. The substitution analysis should be neutral — it should prevent both unnecessary agent deployment and unnecessary agent avoidance.
This dimension connects to AG-045 (Economic Incentive Alignment Verification) because the total cost comparison must include the ongoing economic costs of agent governance. It connects to AG-255 (Benefit Realisation Tracking Governance) because the projected benefits that justify the agent over simpler alternatives must be tracked post-deployment to verify they materialise.
The substitution analysis should be a structured, evidence-based assessment — not a checkbox exercise. The goal is to answer a specific question: does this problem genuinely require the capabilities that only an agent can provide?
Recommended patterns:
- Require a structured substitution analysis template with capability mapping against defined criteria (4.2), at least one evaluated non-agent alternative (4.6), and a total cost of ownership comparison that includes governance overhead (4.4).
- Define and publish a complexity threshold (4.7) so that lookup-table and decision-tree problems are screened out early.
- Give the approval body access to independent technical advice so that substitution conclusions can be challenged.
- For borderline cases, trial the agent and the leading non-agent alternative in parallel before committing (4.8).
- Reassess strategic fit at every sunset review (4.5) as simpler alternatives mature.
Anti-patterns to avoid:
- Treating the substitution analysis as a checkbox exercise, or performing it pro forma with a predetermined conclusion (Scenario C).
- Deploying agents to satisfy strategic "AI adoption" targets rather than genuine capability requirements.
- Comparing development costs only, omitting the ongoing governance, monitoring, testing, audit, and incident-response costs that agents carry.
- Blanket avoidance of agents after an incident, forcing rule-based systems onto problems that genuinely require natural language understanding (Scenario B).
Financial Services. Substitution analysis should explicitly assess whether a deterministic algorithm can achieve regulatory-equivalent outcomes. If a credit scoring model can achieve equivalent outcomes to an agent-based assessment with lower model risk (per SS1/23), the deterministic approach is preferable unless the agent provides demonstrable advantages in fairness, accuracy, or customer outcome.
Healthcare. Clinical applications require particular scrutiny in substitution analysis. A clinical decision support agent should be deployed only when clinical guidelines are too complex, nuanced, or rapidly evolving for a rules-based system to maintain. For well-defined clinical pathways with stable guidelines, a rules engine referencing structured clinical guidelines may be preferable — lower risk, easier to validate, and auditable at the rule level.
Public Sector. Public sector organisations face a dual mandate: demonstrating innovation adoption while ensuring value for money. The substitution analysis should explicitly address the value-for-money question — deploying an agent where a simpler solution suffices is a failure of value-for-money governance, regardless of the innovation objectives.
Basic Implementation — Proposing teams are required to describe why they believe an agent is needed rather than a simpler alternative. The description is free-form and the assessment is informal. No structured comparison criteria exist. The substitution analysis is a paragraph in the use-case approval form. This level creates awareness but does not ensure rigorous comparison.
Intermediate Implementation — A structured substitution analysis template is mandatory, requiring capability mapping, at least one non-agent alternative evaluation, and total cost of ownership comparison over 3 years (a worked sketch follows the Advanced level below). Complexity thresholds are defined. Strategic fit is reassessed at sunset reviews. The approval body has access to independent technical advice to evaluate the substitution analysis. 20-40% of proposals are redirected to simpler alternatives.
Advanced Implementation — All intermediate capabilities plus: a "prove the agent" pilot process is available for borderline cases, allowing parallel trials of agent and non-agent approaches. Substitution analysis outcomes are tracked and used to calibrate complexity thresholds over time. Post-deployment benefit realisation data (AG-255) feeds back into the substitution analysis framework to improve future assessments. The organisation maintains a library of "solved without agents" case studies to inform future proposals.
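The three-year total cost of ownership comparison referenced at the Intermediate level is simple arithmetic once governance costs are counted. A minimal sketch using the Scenario C figures; the run/governance split and both build costs are invented for illustration (only the annual totals of roughly £95,000 for the agent and £2,000 for the lookup come from the scenario):

```python
def three_year_tco(annual_run_cost: float,
                   annual_governance_cost: float,
                   build_cost: float) -> float:
    """Total cost of ownership over 3 years per requirement 4.4: build cost
    plus three years of running costs AND governance costs."""
    return build_cost + 3 * (annual_run_cost + annual_governance_cost)

# Illustrative figures loosely based on Scenario C. The run/governance split
# and both build costs are invented assumptions.
agent = three_year_tco(annual_run_cost=60_000,
                       annual_governance_cost=35_000,  # monitoring, testing, audit, incident response
                       build_cost=40_000)
lookup = three_year_tco(annual_run_cost=2_000,
                        annual_governance_cost=0,      # deterministic: no agent governance obligations
                        build_cost=10_000)
print(f"Agent: £{agent:,.0f}  Lookup: £{lookup:,.0f}  Difference: £{agent - lookup:,.0f}")
# -> Agent: £325,000  Lookup: £16,000  Difference: £309,000
```

The governance line item is the point of requirement 4.4: without it, the comparison understates the agent's true cost.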
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Substitution Analysis Completeness
Test 8.2: Below-Threshold Use-Case Identification
Test 8.3: Total Cost of Ownership Accuracy
Test 8.4: Sunset Reassessment Execution
Test 8.5: Governance Cost Inclusion Verification
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| EU AI Act | Article 13 (Transparency) | Supports compliance |
| HM Treasury | Managing Public Money — Value for Money | Direct requirement |
| ISO 42001 | Clause 6.1 (Actions to Address Risks) | Supports compliance |
| NIST AI RMF | MAP 1.1, MAP 2.1 | Supports compliance |
| UK Equality Act 2010 | Section 149 (Public Sector Equality Duty) | Supports compliance |
The risk management system required by Article 9 must include identification and analysis of known and reasonably foreseeable risks. Deploying an agent where a simpler solution would suffice is itself a risk — it introduces model risk, hallucination risk, prompt injection risk, and governance complexity that a deterministic system would not carry. The substitution analysis is a risk management measure that eliminates unnecessary risk by ensuring agents are deployed only where their capabilities are genuinely required.
For public sector organisations, Managing Public Money requires that spending delivers value for money. An agent deployment that costs £95,000 per year for a function that a £2,000 database lookup could perform does not meet the value-for-money standard. The substitution analysis provides the evidence base for demonstrating that the agent deployment represents proportionate use of public funds. The National Audit Office would expect to see this analysis for any significant agent deployment in the public sector.
Where simpler, more transparent alternatives exist that achieve equivalent outcomes with less risk of discriminatory impact, the Public Sector Equality Duty may favour the simpler approach. An agent that makes decisions affecting individuals introduces opacity that a deterministic system does not. The substitution analysis should consider whether the agent's opacity creates equality risk that a simpler alternative would avoid.
| Field | Value |
|---|---|
| Severity Rating | Medium |
| Blast Radius | Per-deployment — each inappropriately deployed agent wastes resources and introduces unnecessary risk, but the impact is contained to that deployment |
Consequence chain: Without strategic fit assessment, organisations accumulate agents deployed to problems that do not require agent capabilities. Each unnecessary agent deployment carries governance cost without corresponding governance value — the organisation is paying for monitoring, testing, audit, and incident response for a system that could have been a lookup table. The cumulative effect is governance fatigue: as the number of governed agents grows beyond what is justified, governance resources are spread thin, and the agents that genuinely require rigorous governance receive less attention. The financial consequence is quantifiable — the difference between agent cost and simpler-alternative cost, multiplied by every unnecessary deployment. The risk consequence is the introduction of agent-specific risks (hallucination, prompt injection, model drift) into functions that could have operated without those risks.
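Expressed as a simple formula (the notation is ours, not the framework's):

$$C_{\text{waste}} = \sum_{i=1}^{N} \left( C^{\text{agent}}_i - C^{\text{alt}}_i \right)$$

where the sum runs over the $N$ unnecessary deployments and each cost term is a total cost of ownership including governance overhead.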
Cross-references: AG-249 (Use-Case Approval Governance) is the gate at which the substitution analysis is performed. AG-045 (Economic Incentive Alignment Verification) ensures that the cost comparison includes all economic factors. AG-255 (Benefit Realisation Tracking Governance) verifies that the benefits projected in the substitution analysis actually materialise. AG-252 (Automation Ceiling Governance) may define functions where agent deployment is restricted regardless of strategic fit. AG-254 (Sunset Review Governance) triggers periodic reassessment of strategic fit.