Outcome Remediation and Redress Governance requires that every AI agent deployment includes a defined, tested, and accessible mechanism for remediating harmful outcomes and providing redress to affected parties. When an AI agent causes harm — whether through incorrect decisions, erroneous actions, biased outputs, or system failures — the affected individuals and organisations must have a clear path to report the harm, receive acknowledgement, obtain investigation, and receive appropriate remediation. AG-173 addresses the end-to-end redress lifecycle: from harm identification and reporting, through triage and investigation, to remediation delivery and verification. The principle is that deploying an AI agent without a remediation capability is like deploying a product without a recall process — the organisation may be optimistic that things will go well, but it must be structurally prepared for when they do not. Remediation is not an afterthought or a customer service function — it is a governance control that must be designed, tested, and resourced before deployment.
Scenario A — Incorrect Credit Decision With No Redress Path: A consumer bank's AI agent denies a mortgage application for a couple with a combined income of GBP 95,000 and a 25% deposit. The denial is based on the agent's risk model, which flags the applicants because one of them changed employers 3 months ago — the model treats recent employment change as a high-risk indicator. The denial letter states: "Your application has been declined. This decision was made using our automated assessment process." The letter provides no explanation of the specific reason, no mechanism to request a review, no contact point for discussion, and no timeline for resolution.
The couple contacts the bank's general customer service line. After 45 minutes on hold, they reach an agent who says "I can see your application was declined but I don't have access to the reason. I'll escalate this." The escalation goes to a queue with a 15-business-day response time. After 15 days, they receive a response: "We have reviewed your application and confirm the original decision." No additional explanation is provided. No alternative assessment is offered. The couple loses the property they were trying to purchase because the seller accepts another buyer during the 15-day wait.
What went wrong: No structured redress mechanism existed. The denial provided no specific reason, no review path, and no timeline. The general customer service team had no access to the AI system's decision rationale. The escalation process was generic and slow. No human with authority to override the AI's decision was accessible to the applicants. Consequence: an estimated GBP 45,000 in financial harm to the applicants (opportunity cost on the lost deposit, rental costs during the delay, a higher price on the subsequent property), FCA enforcement risk under MCOB (the Mortgages and Home Finance: Conduct of Business sourcebook) for inadequate decline communications, and a county court claim for damages.
Scenario B — Biased Hiring Agent With No Retrospective Remediation: A recruitment platform's AI agent screens CVs for a large employer. After 8 months and 12,000 applications screened, an internal audit reveals that the agent has been systematically disadvantaging candidates from certain postcodes that correlate with ethnic minority populations. The bias was not intentional — the model learned the correlation from historical hiring data. Approximately 800 candidates were disadvantaged by the bias.
The organisation acknowledges the bias and fixes the model going forward. However, no mechanism exists to identify the 800 affected candidates, notify them of the error, offer them re-screening with the corrected model, or provide any other form of redress. The organisation decides the cost of retrospective remediation (estimated at GBP 180,000 for re-screening, outreach, and candidate support) is not justified and announces only a forward-looking fix.
The Equality and Human Rights Commission (EHRC) investigates and determines that the failure to provide retrospective redress, combined with the original bias, constitutes unlawful discrimination. The EHRC argues that remediation is not optional when discrimination has occurred — affected individuals have a right to redress regardless of the organisation's cost-benefit analysis.
What went wrong: No retrospective remediation framework existed. The organisation could not identify affected individuals because the AI system's decisions were not logged with sufficient detail to reconstruct which candidates were disadvantaged. Even once the bias was identified, no process existed to re-evaluate historical decisions, notify affected candidates, or provide remediation. Consequence: EHRC enforcement action, Equality Act 2010 claims from identified affected candidates, reputational damage, and GBP 2.4 million in settlement costs (more than thirteen times the GBP 180,000 that proactive remediation would have cost).
Scenario C — Automated Penalty Without Appeal Mechanism: A local authority deploys an AI agent to process parking enforcement. The agent reviews camera footage and automatically issues penalty charge notices (PCNs). A resident receives a GBP 70 PCN for parking in a restricted zone. The resident was actually parked in an adjacent, unrestricted bay — the camera angle made it appear that the vehicle was in the restricted zone. The PCN notice provides a web link to appeal.
The web link leads to a form that asks the resident to "explain why you believe the penalty was issued in error." The form accepts only text input — no photographs, no video evidence, no supporting documents. The appeal is reviewed by another AI agent, which compares the appellant's text description against the original camera evidence and determines that the text description is "inconsistent with the photographic evidence." The appeal is rejected in 4 minutes. The rejection notice states: "Your appeal has been reviewed and the original decision is upheld." No second-level appeal is offered. The resident must pay or face a county court claim.
What went wrong: The appeal mechanism was inadequate — it accepted only text, not photographic counter-evidence. The appeal was reviewed by AI, not a human, despite the original decision being made by AI (creating an AI-reviewing-AI loop with no independent verification). No escalation to human review was available. The resident's legitimate grievance had no effective redress path. Consequence: GBP 70 in direct harm to the resident, but also a class action by 2,300 residents challenging the enforcement scheme, judicial review of the AI appeal process, GBP 450,000 in legal costs, and suspension of the enforcement programme pending redesign.
Scope: This dimension applies to all AI agent deployments where the agent's decisions or actions can cause harm to individuals or organisations. Harm includes financial loss, denial of services or benefits, reputational damage, privacy violation, physical harm, emotional distress, discrimination, and any other adverse outcome attributable to the agent's operation. The scope covers both direct harm (the agent's action directly causes the adverse outcome) and indirect harm (the agent's decision influences a downstream process that causes the adverse outcome). It covers both individual harm (one person affected) and collective harm (a class of people affected by a systematic error or bias). The test for inclusion is: can the agent's operation cause an outcome that an affected party would reasonably wish to have investigated and remediated? If yes, a remediation and redress mechanism must be in place.
4.1. A conforming system MUST provide an accessible, clearly communicated mechanism for affected parties to report harm attributed to an AI agent's decisions or actions, available through at least two channels (e.g., web form and telephone) with response acknowledgement within 2 business days.
4.2. A conforming system MUST triage reported harms within 5 business days, classifying each by severity (critical, high, medium, low), scope (individual or collective), and remediation type (reversal, correction, compensation, apology, or combination).
4.3. A conforming system MUST investigate reported harms using the AI agent's decision logs, input data, and reasoning traces — not by re-running the AI model, which may produce different results on re-execution — and provide the affected party with a specific explanation of what occurred and why.
4.4. A conforming system MUST ensure that the investigation and remediation decision is made by a human with appropriate authority, not by the same AI system or another AI system — preventing the AI-reviewing-AI anti-pattern.
4.5. A conforming system MUST provide remediation proportionate to the harm within defined timelines: critical harms remediated within 5 business days, high harms within 15 business days, medium harms within 30 business days, low harms within 60 business days.
4.6. A conforming system MUST support retrospective remediation — when a systematic error or bias is discovered, the system must be capable of identifying all affected parties, not just those who reported the harm, and proactively initiating remediation for the entire affected class.
4.7. A conforming system MUST provide an escalation path if the affected party is dissatisfied with the remediation outcome, culminating in review by a human with authority independent of the AI system's operational team (e.g., an ombudsman, a governance committee, or an independent reviewer).
4.8. A conforming system MUST log every remediation case from report to resolution in a tamper-evident record per AG-006, including the harm reported, the investigation findings, the remediation provided, and the affected party's acceptance or escalation (an illustrative, non-normative record structure is sketched after this requirements list).
4.9. A conforming system SHOULD analyse remediation case data to identify systemic patterns — recurring harm types, common failure modes, and disproportionate impact on specific populations — and feed these patterns back into the governance framework for preventive improvement.
4.10. A conforming system MAY establish a remediation fund or insurance reserve proportional to the deployment's risk profile, ensuring that financial remediation can be provided without delay when required.
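Requirement 4.8 defers the integrity mechanism to AG-006. Purely as a non-normative sketch of what a tamper-evident case log could look like, the following assumes a simple SHA-256 hash chain over append-only case events; the class name, field names, and event labels are invented for the example and are not prescribed by this dimension.

```python
# Illustrative only: one way a remediation case log could be made tamper-evident
# with a SHA-256 hash chain. AG-006 governs the actual integrity requirements;
# the class name, field names, and event labels here are assumptions.
import hashlib
import json
from datetime import datetime, timezone

class RemediationCaseLog:
    def __init__(self):
        self.entries = []          # append-only list of case events
        self.last_hash = "0" * 64  # genesis value for the chain

    def append(self, case_id: str, event: str, detail: dict) -> dict:
        record = {
            "case_id": case_id,
            "event": event,          # e.g. "harm_reported", "triaged", "remediated"
            "detail": detail,        # harm description, findings, remediation offered
            "recorded_at": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self.last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.last_hash = record["hash"]
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; any edited entry or broken link fails verification."""
        prev = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev_hash"] != prev or recomputed != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

Editing any entry, or removing one from the middle of the chain, breaks verification; detecting truncation of the most recent entries would additionally require anchoring the latest hash externally, which is the kind of property AG-006 addresses.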
AI agents will cause harm. This is not pessimism — it is the statistical certainty that comes from deploying automated systems at scale. A model that is 99.5% accurate across 100,000 decisions will produce 500 incorrect decisions. If each incorrect decision affects one person, 500 people are harmed. The question is not whether harm will occur, but whether the organisation has a structured, accessible, and effective mechanism to address it when it does.
The remediation challenge for AI systems is distinct from traditional product or service complaints. First, the harm may be invisible to the affected party — a person denied a loan, passed over for a job, or charged an incorrect price may not know that AI was involved, much less that the AI made an error. Second, the harm may be systematic — affecting an entire class of people who share a characteristic that the AI model correlates with a negative outcome. Third, the organisation may not be able to reproduce the error — AI models can produce different outputs on re-execution due to non-determinism, updated training data, or configuration changes, making "re-running the model" an inadequate investigation technique.
AG-173 addresses these challenges by requiring: accessible reporting (so affected parties can raise concerns), investigation from decision logs (not re-execution), human-led investigation and remediation (not AI-reviewing-AI), retrospective remediation (so the organisation proactively identifies and remediates harm to the entire affected class, not just those who complained), and escalation paths (so dissatisfied parties have meaningful recourse).
The requirement for retrospective remediation is particularly important. When a systematic error is discovered, the affected population typically extends far beyond those who reported the harm. Most affected individuals will never know they were harmed — they received a denial and moved on, or they received a biased outcome and had no basis for comparison. AG-173 requires the organisation to proactively identify and remediate the entire affected class, not merely respond to complaints. This aligns with regulatory expectations in multiple sectors: the FCA expects firms to proactively remediate widespread harm, the GDPR gives individuals the right to rectification of inaccurate personal data and to contest solely automated decisions, and the Equality Act requires remediation of discriminatory outcomes.
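As a minimal sketch of retrospective impact identification, the following assumes decisions were logged with the inputs, model version, and outcome needed to reconstruct them (as requirements 4.3 and 4.8 demand). The DecisionRecord schema, the function and field names, the model-version string, and the postcode-based predicate (echoing Scenario B) are illustrative assumptions, not prescribed structures.

```python
# Minimal sketch, not a normative design: identifying the full affected class
# from decision logs once a systematic issue is known. The schema, the predicate,
# and the example values are assumptions for illustration.
from dataclasses import dataclass
from datetime import date
from typing import Callable, Iterable

@dataclass
class DecisionRecord:
    decision_id: str
    subject_id: str
    decided_on: date
    model_version: str
    inputs: dict      # features the agent used, captured at decision time
    outcome: str      # e.g. "rejected", "approved"

def identify_affected_class(
    decisions: Iterable[DecisionRecord],
    affected_model_versions: set[str],
    issue_window: tuple[date, date],
    was_disadvantaged: Callable[[DecisionRecord], bool],
) -> list[DecisionRecord]:
    """Return every decision in scope for retrospective remediation,
    not just those whose subjects complained."""
    start, end = issue_window
    return [
        d for d in decisions
        if d.model_version in affected_model_versions
        and start <= d.decided_on <= end
        and was_disadvantaged(d)
    ]

# Example predicate for a postcode-linked bias (hypothetical prefixes and model version):
biased_prefixes = ("B8", "B9", "LS11")
affected = identify_affected_class(
    decisions=[],  # would be streamed from the production decision log in practice
    affected_model_versions={"cv-screen-1.4"},
    issue_window=(date(2024, 1, 1), date(2024, 8, 31)),
    was_disadvantaged=lambda d: d.outcome == "rejected"
    and str(d.inputs.get("postcode", "")).startswith(biased_prefixes),
)
```

The essential point is that the affected class is computed from the decision log itself, so the remediation population does not depend on who happened to complain.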
Remediation governance requires a combination of accessible reporting channels, structured triage and investigation processes, decision authority frameworks, retrospective identification capabilities, and continuous improvement mechanisms. The implementation must be designed and tested before the AI agent is deployed — remediation cannot be improvised after harm occurs.
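One way the triage classification of requirement 4.2 and the remediation timelines of requirement 4.5 might be encoded is sketched below. The enumeration values mirror the text of the requirements; the class names and the deadline helper are illustrative, and a production implementation would compute deadlines against a business-day calendar rather than calendar days.

```python
# Sketch under assumptions: one possible encoding of the triage classification
# (requirement 4.2) and remediation timelines (requirement 4.5). Names are
# illustrative; the day counts come from the requirement text.
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

class Scope(Enum):
    INDIVIDUAL = "individual"
    COLLECTIVE = "collective"

class RemediationType(Enum):
    REVERSAL = "reversal"
    CORRECTION = "correction"
    COMPENSATION = "compensation"
    APOLOGY = "apology"

# Business-day targets from requirement 4.5; the calendar-day arithmetic below
# is a simplification for the sketch.
REMEDIATION_DEADLINE_DAYS = {
    Severity.CRITICAL: 5,
    Severity.HIGH: 15,
    Severity.MEDIUM: 30,
    Severity.LOW: 60,
}

@dataclass
class TriageDecision:
    case_id: str
    severity: Severity
    scope: Scope
    remediation_types: list[RemediationType]
    triaged_on: date

    def remediation_due(self) -> date:
        return self.triaged_on + timedelta(days=REMEDIATION_DEADLINE_DAYS[self.severity])
```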
Recommended patterns:
- Communicate the redress route with every AI-influenced decision: the specific reason, the review channel, the contact point, and the response timeline.
- Log every decision with the inputs, model version, and reasoning trace needed to investigate it later without re-running the model.
- Route harm reports to humans with the authority and system access to override the agent's decision, rather than to a generic service queue.
- When a systematic issue is discovered, treat the entire affected class as the remediation population, not only those who complained.
Anti-patterns to avoid:
- AI-reviewing-AI: adjudicating appeals against an AI decision with another AI system and no independent human verification.
- Re-running the model as an investigation technique; re-execution may not reproduce the original decision.
- Forward-only fixes that correct the model but offer no retrospective remediation to those already harmed.
- Appeal channels that cannot accept the evidence needed to contest the decision, such as text-only forms for disputes that turn on photographic evidence.
Financial Services. FCA DISP (Dispute Resolution: Complaints) rules require firms to have a complaints handling procedure. AI-related complaints must be handled within the same framework, with the additional requirement that the firm can explain the AI's decision. The Financial Ombudsman Service is the final arbiter for consumer complaints. CONC 7.18 requires firms to treat customers in financial difficulty fairly — an AI agent that causes financial difficulty through an erroneous decision triggers the firm's obligation to provide remediation.
Healthcare. Patient harm from AI clinical decisions triggers the organisation's duty of candour (Regulation 20 of the Health and Social Care Act 2008 (Regulated Activities) Regulations 2014). The organisation must notify the patient, apologise, and provide an explanation. The NHS Complaints Procedure provides the formal complaints pathway. MHRA reporting obligations may apply if the AI system is classified as a medical device.
Public Sector. Citizens have the right to challenge decisions made by or with the assistance of AI systems. The Administrative Justice Council's principles require accessible, timely, and fair complaint and appeal mechanisms. Judicial review is available as a last resort for unlawful automated decision-making by public bodies.
Consumer Services. The Consumer Rights Act 2015 provides statutory remediation rights. The Alternative Dispute Resolution for Consumer Disputes (Competent Authorities and Information) Regulations 2015 require traders to inform consumers about ADR schemes. AI-related consumer complaints must be handled within these frameworks.
Basic Implementation — A harm reporting mechanism exists with at least two channels. Reports receive acknowledgement within 2 business days. Triage classifies reports by severity. Investigation is conducted by humans using available decision logs. Remediation is provided for individual cases within defined timelines. An escalation path exists. Coverage: all customer-facing AI agent deployments.
Intermediate Implementation — All basic capabilities plus: decision log-based investigation reconstructs the AI's decision from recorded data. Retrospective impact identification can query the decision log to identify all decisions affected by a discovered systematic issue. A remediation case management system tracks all cases from report to resolution. Remediation case data is analysed for systemic patterns. The independent review panel is established and operational. Coverage: all AI agent deployments, including internal-facing agents.
Advanced Implementation — All intermediate capabilities plus: proactive retrospective remediation has been exercised — the organisation has identified a systematic issue, proactively identified the affected class, and provided remediation without waiting for complaints. Remediation fund or insurance reserve is in place. Remediation metrics (time to acknowledgement, time to resolution, affected party satisfaction, systemic issue detection rate) are reported to the governance committee. The organisation can demonstrate to regulators a complete redress lifecycle from harm identification through remediation delivery for every reported case.
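The metrics named in the advanced tier fall directly out of the case records that requirement 4.8 mandates. The sketch below assumes a minimal case schema (the field names are illustrative) and shows how time to acknowledgement, time to resolution, open-case counts, and systemic-issue detection could be derived from it for governance reporting.

```python
# Illustrative sketch of remediation metrics derived from case records.
# The case fields are assumptions; the point is that the governance metrics
# fall out of the case management data the standard already requires.
from dataclasses import dataclass
from datetime import date
from statistics import median
from typing import Optional

@dataclass
class RemediationCase:
    reported_on: date
    acknowledged_on: Optional[date]
    resolved_on: Optional[date]
    linked_systemic_issue: Optional[str]  # set when the case exposed a pattern

def remediation_metrics(cases: list[RemediationCase]) -> dict:
    ack_days = [
        (c.acknowledged_on - c.reported_on).days
        for c in cases if c.acknowledged_on
    ]
    res_days = [
        (c.resolved_on - c.reported_on).days
        for c in cases if c.resolved_on
    ]
    systemic = {c.linked_systemic_issue for c in cases if c.linked_systemic_issue}
    return {
        "median_days_to_acknowledgement": median(ack_days) if ack_days else None,
        "median_days_to_resolution": median(res_days) if res_days else None,
        "open_cases": sum(1 for c in cases if c.resolved_on is None),
        "systemic_issues_detected": len(systemic),
    }
```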
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Reporting Channel Accessibility
Test 8.2: Triage Timeliness and Accuracy
Test 8.3: Decision Log-Based Investigation
Test 8.4: Human-Led Remediation Decision
Test 8.5: Retrospective Impact Identification
Test 8.6: Remediation Timeline Compliance
Test 8.7: Independent Escalation Path
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| EU AI Act | Article 86 (Right to Explanation of Individual Decision-Making) | Direct requirement |
| GDPR | Article 16 (Right to Rectification) | Direct requirement |
| GDPR | Article 22 (Automated Individual Decision-Making) | Supports compliance |
| FCA DISP | 1.3 (Complaints Handling Rules) | Direct requirement |
| Equality Act 2010 | Section 136 (Burden of Proof) | Supports compliance |
| Consumer Rights Act 2015 | Part 1 (Consumer Contracts for Goods, Digital Content and Services) | Supports compliance |
| NIST AI RMF | MANAGE 4.1, MANAGE 4.2 | Supports compliance |
| ISO 42001 | Clause 10.2 (Nonconformity and Corrective Action) | Direct requirement |
The EU AI Act provides individuals affected by high-risk AI system decisions with the right to an explanation of the decision. AG-173 implements this right by requiring decision log-based investigation that produces a specific explanation, not a generic response. The explanation requirement extends to the remediation context — the affected party must understand what happened and why before they can assess the adequacy of the remediation offered.
Article 16 gives data subjects the right to rectification of inaccurate personal data. When an AI agent makes a decision based on inaccurate data, the affected individual has the right to have the data corrected and the decision re-evaluated. AG-173's investigation and remediation framework provides the mechanism for this rectification.
FCA DISP requires firms to handle complaints fairly, consistently, and promptly. Complaints about AI agent decisions must be investigated with the same rigour as complaints about human decisions — which requires access to the AI's decision data and reasoning, not just a surface-level review. The 8-week maximum handling time for FCA-regulated complaints maps to AG-173's remediation timeline requirements.
Section 136 reverses the burden of proof in discrimination cases: once the claimant shows facts from which discrimination could be inferred, the respondent must prove that discrimination did not occur. For AI-related discrimination claims, this means the organisation must be able to reconstruct the AI's decision process and demonstrate that the protected characteristic did not influence the outcome. AG-173's decision log-based investigation provides the evidence base for this defence.
Clause 10.2 requires organisations to react to nonconformities and take corrective action. AI agent harms are nonconformities in the AI management system. AG-173 provides the corrective action framework — investigation, remediation, and systemic improvement — that Clause 10.2 requires.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Affected population — ranges from individual to class-wide depending on whether the harm is isolated or systematic |
Consequence chain: Without outcome remediation and redress governance, individuals harmed by AI agent decisions have no structured recourse. The immediate failure is inaccessibility — the affected party cannot find a reporting channel, or the channel provides no meaningful response. The operational failure is investigative — the organisation cannot reconstruct what the AI did because decision logs are incomplete or unavailable. The remediation failure is proportionality — the organisation provides a generic response rather than a specific investigation and proportionate remediation. The systemic failure is retrospective — when a systematic issue is discovered, the organisation cannot identify the full affected class and provides no proactive remediation. The business consequence includes litigation (affected parties who cannot obtain redress through the organisation's own mechanism will seek redress through courts and regulators), regulatory enforcement (regulators treat the absence of a redress mechanism as an aggravating factor in enforcement actions), and reputational harm (public perception that the organisation deploys AI without accountability). The governed exposure scales with the affected population: a systematic bias affecting 10,000 decisions at an average remediation cost of GBP 500 per decision represents GBP 5 million in remediation exposure — exposure that increases the longer the systematic issue persists without detection and remediation.
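A back-of-envelope rendering of the exposure arithmetic above: the affected-decision count and average remediation cost come from the consequence chain, while the monthly run rate is an added assumption used only to show how exposure grows with detection delay.

```python
# Illustrative exposure arithmetic. Figures are either from the consequence
# chain above or stated assumptions; this is not actuarial guidance.
affected_decisions = 10_000          # decisions touched by the systematic issue
avg_remediation_cost_gbp = 500       # average cost to remediate one decision
base_exposure = affected_decisions * avg_remediation_cost_gbp   # GBP 5,000,000

# Exposure grows while the issue goes undetected: assume the agent keeps making
# the same flawed decision at a steady rate until detection (assumed run rate).
flawed_decisions_per_month = 1_250
for months_undetected in (0, 3, 6, 12):
    extra = flawed_decisions_per_month * months_undetected * avg_remediation_cost_gbp
    print(f"{months_undetected:>2} months undetected: GBP {base_exposure + extra:,.0f}")
```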
Cross-references: AG-011 (Action Reversibility and Settlement Integrity) for determining which harmful actions can be reversed versus those requiring compensatory remediation; AG-006 (Tamper-Evident Record Integrity) for the decision logs that enable investigation; AG-049 (Governance Decision Explainability) for generating explanations of AI decisions for affected parties; AG-019 (Human Escalation & Override Triggers) for escalation to human review during the remediation process; AG-166 (Distributed Workflow Atomicity and Compensating-Action Governance) for compensating actions in multi-step workflows; AG-169 (Legal Commitment and Representation Authority Governance) for the legal implications of remediation commitments.