AG-118

Fair Treatment and Vulnerability Governance

Financial Services & Value Transfer · AGS v2.1 · April 2026
Regulatory tags: EU AI Act · FCA · NIST

2. Summary

Fair Treatment and Vulnerability Governance requires that every AI agent operating in financial services identifies, classifies, and adapts its behaviour for customers exhibiting characteristics of vulnerability — whether due to health conditions, life events, financial resilience limitations, or capability constraints. It further requires that the agent's actions across all customer segments demonstrate fair treatment that does not systematically disadvantage any group based on protected characteristics or vulnerability status. This dimension is preventive: it requires the agent to detect vulnerability indicators and adjust its behaviour before taking actions that could cause disproportionate harm, rather than relying solely on post-hoc outcome monitoring (AG-117). The agent must not exploit vulnerability, must not apply differential treatment that disadvantages vulnerable customers, and must escalate to human specialists when the customer's needs exceed the agent's capability to deliver a fair outcome.

3. Example

Scenario A — Exploitation of Cognitive Vulnerability Through Complexity: An AI agent offering credit products presents all customers with identical product comparison information, including a 47-page terms and conditions document, a comparison of 12 product variants with different fee structures, and a recommendation requiring evaluation of APR, arrangement fees, early repayment charges, and penalty interest rates. For customers with high financial literacy, this information enables informed decision-making. For customers exhibiting indicators of cognitive vulnerability — such as repeated requests for clarification, inconsistent responses to comprehension checks, or interaction patterns suggesting difficulty processing complex information — the same presentation overwhelms the customer and effectively prevents informed consent. The agent's recommendation consistently guides cognitively vulnerable customers toward the product with the highest total cost of credit, because that product has the lowest headline monthly payment (which vulnerable customers anchor on) despite having the highest APR and arrangement fee.

What went wrong: The agent applied identical treatment to all customers regardless of vulnerability indicators. No vulnerability detection system identified customers who needed simplified information, additional support, or human escalation. The agent's recommendation logic optimised for acceptance rate (lowest monthly payment maximises acceptance) rather than fair outcome (lowest total cost of credit). The result was a systematic correlation between vulnerability and poor outcomes that constituted unfair treatment under the Consumer Duty. Consequence: FCA enforcement action under PRIN 2A.2.14R (fair treatment obligation), customer redress of £4,100,000 across 3,400 affected vulnerable customers, mandatory redesign of the agent's recommendation logic, requirement for real-time vulnerability detection and human escalation.
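As a sketch of the fix: a fair-outcome recommendation ranks products by total cost of credit rather than by the headline monthly payment that vulnerable customers anchor on. The product names, figures, and `CreditProduct` structure below are hypothetical illustrations, not part of any prescribed design.

```python
from dataclasses import dataclass

@dataclass
class CreditProduct:
    name: str
    monthly_payment: float   # headline figure customers tend to anchor on
    term_months: int
    arrangement_fee: float

    def total_cost_of_credit(self) -> float:
        # Total amount payable over the full term, including fees.
        return self.monthly_payment * self.term_months + self.arrangement_fee

products = [
    CreditProduct("A", monthly_payment=210.0, term_months=60, arrangement_fee=0.0),
    CreditProduct("B", monthly_payment=150.0, term_months=96, arrangement_fee=495.0),
]

# Optimising for acceptance rate picks the lowest headline payment ...
by_headline = min(products, key=lambda p: p.monthly_payment)
# ... while a fair-outcome ranking picks the lowest total cost of credit.
by_total_cost = min(products, key=lambda p: p.total_cost_of_credit())

print(by_headline.name)    # B (cheapest per month, most expensive overall)
print(by_total_cost.name)  # A (cheapest overall)
```

The two objectives disagree on exactly the products that drive Scenario A: the low-monthly-payment option costs £14,895 over its term against £12,600 for the alternative.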

Scenario B — Geographic Discrimination Through Proxy Variables: An AI agent performing credit risk assessment for a consumer lending product uses postcode as a feature in its scoring model. The model does not directly use ethnicity, but postcodes in the UK are strongly correlated with ethnic composition. The agent systematically assigns higher risk scores — and therefore higher interest rates or lower credit limits — to applicants from postcodes with predominantly ethnic minority populations. Individual credit decisions appear to be based on legitimate risk factors (postcode correlates with property values, employment density, and historical default rates), but the aggregate effect is that ethnic minority applicants receive materially worse terms: an average of 1.8 percentage points higher APR compared to applicants with equivalent credit profiles from different postcodes.

What went wrong: No fairness monitoring detected that the agent's credit decisions produced systematically different outcomes correlated with protected characteristics. The postcode feature acted as a proxy for ethnicity without explicit use of the protected characteristic. No pre-deployment bias testing evaluated whether model features created disparate impact. No ongoing fairness monitoring compared outcomes across protected characteristic groups. Consequence: Equality Act 2010 indirect discrimination finding, FCA enforcement action under the Consumer Duty and PRIN 2A, total customer redress of £6,800,000 (1.8% APR differential over average loan term of 4.2 years across 8,500 affected customers), requirement for comprehensive bias audit of all AI models used in consumer credit decisions, reputational damage requiring public disclosure.
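The differential in this scenario is detectable with like-for-like outcome comparison: group decisions by credit band so that legitimate risk factors are held constant, then measure the spread of mean APR across demographic groups within each band. The records, band labels, and alert threshold below are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical decision records: (credit_band, demographic_group, apr_offered)
decisions = [
    ("band_2", "group_x", 6.1), ("band_2", "group_x", 6.3),
    ("band_2", "group_y", 7.9), ("band_2", "group_y", 8.1),
    ("band_3", "group_x", 9.0), ("band_3", "group_y", 9.1),
]

ALERT_THRESHOLD_PP = 0.5  # percentage points; an illustrative tolerance

def apr_gaps_by_band(records):
    """Within each credit band, report the spread of mean APR across groups.

    Comparing like-for-like credit bands isolates the group effect from
    legitimate risk factors such as credit history."""
    by_band = defaultdict(lambda: defaultdict(list))
    for band, group, apr in records:
        by_band[band][group].append(apr)
    gaps = {}
    for band, groups in by_band.items():
        means = [mean(v) for v in groups.values()]
        gaps[band] = max(means) - min(means)
    return gaps

alerts = {b: g for b, g in apr_gaps_by_band(decisions).items()
          if g > ALERT_THRESHOLD_PP}
print(alerts)  # band_2 is flagged: a ~1.8pp gap between equivalent profiles
```

A real implementation would add sample-size and significance checks before alerting, but the core comparison is this simple; the failure in Scenario B was that it was never run.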

Scenario C — Failure to Adapt for Life-Event Vulnerability: An AI agent managing customer accounts observes that a customer has missed three consecutive mortgage payments — a pattern that signals potential financial difficulty, which the FCA classifies as a driver of vulnerability — but treats it purely as an arrears event. The agent follows its standard arrears process: it sends automated payment reminders with escalating urgency, applies late payment fees of £45 per occurrence (£135 total), reports the arrears to credit reference agencies, and initiates a collections referral after 60 days. A human mortgage specialist would have recognised the payment pattern as a potential indicator of financial difficulty, paused enforcement action, and initiated a supportive conversation to understand the customer's circumstances and explore forbearance options (payment holidays, term extensions, interest-only periods). The agent's treatment — while technically compliant with the account terms — caused the customer additional financial harm (fees), reputational harm (credit report damage), and psychological harm (escalating threatening communications) that a vulnerability-aware approach would have prevented.

What went wrong: The agent had no vulnerability detection model that recognised missed payment patterns as a potential indicator of financial difficulty. The agent applied standard arrears processes without evaluating whether the customer's circumstances required adapted treatment. No trigger existed to escalate vulnerable customers to human specialists. The agent optimised for recovery efficiency rather than fair treatment of customers in difficulty. Consequence: FCA supervisory action under CONC 7 (arrears, default, and recovery), requirement to reverse fees (£135) and correct credit reporting for the affected customer, mandatory implementation of vulnerability detection across all automated customer processes, potential for systemic review if the pattern extends to other customers.

4. Requirement Statement

Scope: This dimension applies to all AI agents that interact with retail customers or make decisions affecting retail customer outcomes in financial services. This includes agents that: communicate directly with customers (chatbots, virtual assistants, customer service agents), make credit decisions or set pricing that affects individual customers, manage customer accounts including arrears and collections, recommend or advise on financial products, and process customer claims or complaints. The scope includes both synchronous interactions (real-time chat, voice) and asynchronous interactions (email, letters, notifications). Agents that operate exclusively in wholesale or institutional markets without retail customer exposure are excluded. Agents that make decisions affecting retail customers indirectly — such as pricing engines that set rates applied to customer segments — are within scope because their decisions determine customer outcomes.

4.1. A conforming system MUST implement vulnerability detection that identifies customers exhibiting characteristics associated with the four FCA vulnerability drivers: health (physical disability, severe or long-term illness, mental health conditions, addiction, low mental capacity), life events (bereavement, job loss, relationship breakdown, new caring responsibilities, retirement), resilience (low or erratic income, over-indebtedness, low savings, low emotional resilience), and capability (low literacy or numeracy, low English language skills, low financial literacy, low digital literacy, learning difficulties).
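The driver taxonomy in 4.1 can be held as structured data that a detection layer classifies indicators against. A minimal sketch, in which the `drivers_for` helper is a hypothetical illustration rather than a prescribed interface:

```python
# The four FCA vulnerability drivers (FG21/1) and example characteristics,
# expressed as a taxonomy a detection layer could classify against.
VULNERABILITY_DRIVERS = {
    "health": ["physical disability", "severe or long-term illness",
               "mental health condition", "addiction", "low mental capacity"],
    "life_events": ["bereavement", "job loss", "relationship breakdown",
                    "new caring responsibilities", "retirement"],
    "resilience": ["low or erratic income", "over-indebtedness",
                   "low savings", "low emotional resilience"],
    "capability": ["low literacy or numeracy", "low English language skills",
                   "low financial literacy", "low digital literacy",
                   "learning difficulties"],
}

def drivers_for(indicators: set[str]) -> set[str]:
    """Map detected characteristics back to their FCA driver categories."""
    return {driver for driver, chars in VULNERABILITY_DRIVERS.items()
            if indicators & set(chars)}

print(drivers_for({"bereavement", "low savings"}))
# -> {'life_events', 'resilience'} (set ordering may vary)
```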

4.2. A conforming system MUST adapt agent behaviour for customers identified as potentially vulnerable, including: simplifying communications, reducing information complexity, extending decision-making time, suspending enforcement actions pending assessment, and offering human escalation.

4.3. A conforming system MUST escalate to a qualified human specialist when the agent detects vulnerability indicators that exceed its capability to deliver a fair outcome — including but not limited to: indicators of mental health crisis, expressions of self-harm or suicidal ideation, indicators of coercive control or financial abuse, and situations where the customer's capacity to make informed decisions is in doubt.

4.4. A conforming system MUST monitor and report on outcome differentials between customer segments defined by protected characteristics under the Equality Act 2010 (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, sexual orientation), detecting and remediating any systematic disadvantage.

4.5. A conforming system MUST ensure that the agent does not use protected characteristics or proxy variables that are strongly correlated with protected characteristics as features in decision-making models, unless the use is objectively justified and proportionate — and must document and review the justification at least annually.

4.6. A conforming system MUST ensure that vulnerability indicators are not used to the customer's detriment — vulnerability detection must trigger supportive adaptation, not higher pricing, reduced access, or punitive treatment.

4.7. A conforming system SHOULD implement real-time vulnerability scoring that updates throughout the customer interaction based on behavioural signals — such as response time, comprehension indicators, emotional tone, and interaction pattern changes.
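One way to realise such scoring is an accumulating score that each behavioural signal nudges upward as the interaction unfolds. The signal names and weights below are illustrative assumptions, not calibrated values; a production system would fit them against labelled interaction data.

```python
from dataclasses import dataclass

# Illustrative behavioural signals and weights (hypothetical, uncalibrated).
SIGNAL_WEIGHTS = {
    "slow_response": 0.1,
    "repeated_clarification_request": 0.2,
    "failed_comprehension_check": 0.3,
    "distress_language": 0.4,
}

@dataclass
class VulnerabilityScore:
    value: float = 0.0

    def update(self, signal: str) -> float:
        """Accumulate evidence during the interaction, capped at 1.0."""
        self.value = min(1.0, self.value + SIGNAL_WEIGHTS.get(signal, 0.0))
        return self.value

score = VulnerabilityScore()
for s in ["slow_response", "repeated_clarification_request",
          "failed_comprehension_check"]:
    score.update(s)
print(round(score.value, 2))  # 0.6
```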

4.8. A conforming system SHOULD conduct pre-deployment bias testing on all models used in customer-facing agent decisions, evaluating disparate impact across protected characteristic groups using established fairness metrics (demographic parity, equalised odds, predictive parity).
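The named fairness metrics reduce to simple rate comparisons. A sketch of two of them, demographic parity (as a rate ratio) and equalised odds (as the larger error-rate gap between groups), using hypothetical figures:

```python
def demographic_parity_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of approval rates between two groups (1.0 = parity).

    The 'four-fifths rule' commonly treats ratios below 0.8 as evidence of
    adverse impact, though any specific threshold is a policy choice."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def equalised_odds_gap(tpr_a, fpr_a, tpr_b, fpr_b):
    """Largest gap in true-positive or false-positive rate between groups.

    Equalised odds requires both error rates to match across groups, so
    the maximum of the two gaps is the binding figure."""
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

print(round(demographic_parity_ratio(720, 1000, 540, 1000), 2))  # 0.75
print(round(equalised_odds_gap(0.82, 0.10, 0.74, 0.19), 2))      # 0.09
```

In this hypothetical, the parity ratio of 0.75 falls below the four-fifths rule of thumb, so the model would fail a pre-deployment gate pending investigation.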

4.9. A conforming system SHOULD implement a "vulnerability register" that records, with the customer's informed consent, known vulnerability characteristics to ensure consistent adapted treatment across all interaction channels and agents.

4.10. A conforming system MAY implement automated reasonable adjustments — such as large-text communications, simplified language variants, extended response timeouts, and alternative communication channels — that are applied automatically when vulnerability indicators are detected.

5. Rationale

Fair treatment and vulnerability governance addresses two related but distinct obligations that are fundamental to financial services regulation: the duty to treat customers fairly regardless of their characteristics, and the duty to identify and respond to vulnerability.

The FCA's Guidance for Firms on the Fair Treatment of Vulnerable Customers (FG21/1) establishes that firms must: understand the nature and scale of vulnerability in their customer base, ensure staff have the skills and capability to recognise and respond to vulnerability, respond to customer needs and develop products and services that take vulnerability into account, and monitor and assess whether they are meeting the needs of vulnerable customers. For AI agents, each of these requirements translates into specific technical capabilities: vulnerability detection models, behaviour adaptation logic, escalation mechanisms, and outcome monitoring segmented by vulnerability status.

The scale of vulnerability in the UK is material. The FCA's Financial Lives Survey 2022 found that 47% of UK adults — 24.9 million people — display one or more characteristics of vulnerability. This is not a marginal population requiring edge-case handling; it is nearly half the customer base. An AI agent that does not detect and adapt for vulnerability is failing nearly half its customers.

The fairness obligation extends beyond vulnerability to encompass non-discrimination. The Equality Act 2010 prohibits direct and indirect discrimination based on protected characteristics. AI agents are particularly susceptible to indirect discrimination because their models may learn correlations between non-protected features and protected characteristics from training data. A model that has never seen a customer's ethnicity may still discriminate by using postcode, name, or behavioural features that correlate with ethnicity. Detecting and preventing proxy discrimination requires ongoing bias monitoring that compares outcomes across protected characteristic groups — even when the protected characteristics are not used as model features.
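Proxy screening can begin with a simple test of how strongly a candidate feature separates protected groups: if the feature alone largely reconstructs group membership, it warrants objective-justification review before use. The feature values and threshold below are hypothetical.

```python
from statistics import mean, stdev

def standardised_mean_difference(values_a, values_b):
    """Cohen's-d-style gap in a candidate feature between two protected groups.

    A large gap means the feature can reconstruct group membership and
    should be treated as a potential proxy, however 'legitimate' it looks."""
    pooled = stdev(values_a + values_b)
    return abs(mean(values_a) - mean(values_b)) / pooled

# Hypothetical postcode-derived risk feature, split by demographic group.
feature_group_a = [0.61, 0.58, 0.64, 0.60, 0.63]
feature_group_b = [0.42, 0.40, 0.45, 0.44, 0.41]

PROXY_THRESHOLD = 0.8  # illustrative; any threshold is a policy decision
d = standardised_mean_difference(feature_group_a, feature_group_b)
print(d > PROXY_THRESHOLD)  # True: escalate for objective-justification review
```

This screens individual features; interaction effects between features can also encode protected characteristics, which is why the outcome-level monitoring described above remains necessary even after feature screening.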

The consequence of fairness and vulnerability failures in AI systems is amplified by scale. A human adviser who provides inadequate support to a vulnerable customer affects one person. An AI agent with the same failure pattern affects every vulnerable customer it serves — potentially thousands. The systematic nature of AI agent behaviour means that a single bias or vulnerability detection failure produces consistent, patterned detriment across the entire affected population.

6. Implementation Guidance

AG-118 requires both a detection capability (identifying vulnerability and fairness issues) and an adaptation capability (modifying agent behaviour in response). Both must operate in real time during customer interactions and across the customer lifecycle.
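The detect-then-adapt loop can be sketched as a decision function mapping the current vulnerability score and any high-severity indicators to an adaptation package. The thresholds, indicator names, and returned fields are illustrative assumptions, not prescribed values.

```python
# Hypothetical high-severity indicators that exceed the agent's capability
# to deliver a fair outcome (Requirement 4.3) and mandate human handover.
HIGH_SEVERITY_INDICATORS = {"self_harm", "financial_abuse", "capacity_doubt"}

def respond_to(score: float, indicators: set[str]) -> dict:
    """Decide the adaptation package for one interaction turn."""
    if indicators & HIGH_SEVERITY_INDICATORS:
        return {"action": "escalate_to_human", "pause_enforcement": True}
    if score >= 0.6:
        return {"action": "adapt", "simplify_language": True,
                "extend_timeouts": True, "offer_human": True,
                "pause_enforcement": True}
    if score >= 0.3:
        return {"action": "adapt", "simplify_language": True,
                "offer_human": True, "pause_enforcement": False}
    return {"action": "standard"}

print(respond_to(0.7, set())["action"])           # adapt
print(respond_to(0.1, {"self_harm"})["action"])   # escalate_to_human
```

Note that high-severity indicators override the score entirely: escalation is mandatory regardless of how little other evidence of vulnerability has accumulated.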

Recommended patterns:

Anti-patterns to avoid:

Industry Considerations

Consumer Credit. Vulnerability detection is particularly critical in credit decisions, where vulnerable customers may accept unsuitable products due to urgency or limited alternatives. The Consumer Credit Sourcebook (CONC) requires firms to have policies and procedures for the identification and fair treatment of vulnerable customers. AI agents making credit decisions must implement both vulnerability detection in the application process and ongoing vulnerability monitoring through the credit lifecycle. The FCA's forbearance requirements (CONC 7.3) specifically require firms to exercise forbearance when customers are in financial difficulty — an AI agent must detect difficulty indicators and suspend standard enforcement actions.

Insurance. Vulnerability governance in insurance includes: accessible claims processes for customers with disability or cognitive impairment, sensitivity in claims handling for bereavement-related claims, and fairness in automated underwriting decisions that may use health data. The Association of British Insurers' vulnerability guidance provides sector-specific expectations. Agents processing claims should detect vulnerability indicators (bereavement language, health references, distress indicators) and adapt accordingly.

Pensions. Vulnerability governance is particularly sensitive for pension customers due to their typically older demographic and the irreversible nature of pension decisions. An agent advising on pension transfers must detect: cognitive decline indicators, undue influence from third parties (potential financial abuse), and knowledge gaps about the irreversibility of pension decisions. The regulatory expectation (reinforced by the British Steel pension scandal) is that pension advice processes include robust vulnerability screening with a strong presumption toward human escalation.

Maturity Model

Basic Implementation — Vulnerability detection is limited to explicit customer disclosures and basic account data signals (e.g., missed payments, hardship applications). Adaptation is limited to offering human escalation when vulnerability is detected. Fairness monitoring is conducted periodically (quarterly) using protected characteristic data where available. Proxy variable analysis has been conducted at deployment but is not repeated on an ongoing basis. Adaptation decisions and outcomes are logged. This level meets minimum compliance requirements but relies heavily on customer self-disclosure and has limited capability to detect vulnerability from behavioural signals.

Intermediate Implementation — Real-time vulnerability detection incorporates behavioural signals from customer interactions alongside explicit disclosures and account data. Multi-dimensional vulnerability scoring across the four FCA drivers enables proportionate adaptation. The behaviour adaptation engine modifies communication complexity, pacing, information presentation, and enforcement actions based on vulnerability scores. Fairness monitoring runs monthly with automated alerting when outcome differentials exceed thresholds. Proxy variable analysis is repeated quarterly and on every model retrain. A vulnerability register (with customer consent) ensures consistent treatment across channels. Human escalation is automated for high-severity vulnerability indicators.

Advanced Implementation — All intermediate capabilities plus: real-time vulnerability scoring that updates throughout interactions using NLP-based risk analysis. Predictive vulnerability models identify customers likely to become vulnerable based on early indicators. Automated reasonable adjustments apply immediately upon detection. Fairness metrics are monitored in real time across all decision types and customer segments. Intersectional analysis evaluates outcomes for customers at the intersection of multiple protected characteristics (e.g., older women, disabled ethnic minority customers). Independent external audit of vulnerability detection accuracy and fairness metrics. The organisation can demonstrate to regulators that vulnerable customers receive outcomes at least as good as non-vulnerable customers, and that no protected characteristic group experiences systematically worse outcomes.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Vulnerability Detection Accuracy

Test 8.2: Behaviour Adaptation Proportionality

Test 8.3: Proxy Variable Discrimination Detection

Test 8.4: Vulnerability Exploitation Prevention

Test 8.5: Mandatory Human Escalation for High-Severity Vulnerability

Test 8.6: Fairness Monitoring Alert Generation

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
FCA Consumer Duty | PRIN 2A.2.14R (Fair Treatment) | Direct requirement
FCA Consumer Duty | PRIN 2A.4 (Outcomes Monitoring for Consumer Groups) | Direct requirement
FCA Guidance | FG21/1 (Fair Treatment of Vulnerable Customers) | Direct requirement
Equality Act 2010 | Sections 13, 19 (Direct and Indirect Discrimination) | Direct requirement
EU AI Act | Article 9 (Risk Management — Bias Prevention) | Direct requirement
EU AI Act | Article 10 (Data Governance — Bias in Training Data) | Supports compliance
DORA | Article 9 (ICT Risk Management Framework) | Supports compliance
NIST AI RMF | MAP 2.3, MEASURE 2.6, MANAGE 1.3 | Supports compliance

FCA Consumer Duty — PRIN 2A.2.14R (Fair Treatment)

PRIN 2A.2.14R requires firms to ensure that the design of products and services, including their distribution, meets the needs, characteristics, and objectives of customers in the identified target market, including customers with characteristics of vulnerability. For AI agents, this means the agent's behaviour must be designed to deliver fair outcomes for vulnerable customers, not merely to avoid actively harmful actions. The Consumer Duty raises the bar from "do not harm" to "deliver good outcomes" — an agent that treats vulnerable customers identically to non-vulnerable customers may fail this standard if identical treatment produces systematically worse outcomes for the vulnerable group.

FCA Guidance — FG21/1 (Fair Treatment of Vulnerable Customers)

FG21/1 sets out the FCA's expectations for firms' treatment of vulnerable customers. The guidance identifies four drivers of vulnerability (health, life events, resilience, capability) and expects firms to: understand the nature and scale of vulnerability in their customer base, equip systems and staff to recognise vulnerability, respond to customer needs through product design and service delivery, and monitor outcomes for vulnerable customers. AG-118 translates each of these expectations into technical requirements for AI agents. The guidance explicitly states that vulnerability should trigger additional support, not reduced service — mapping to Requirement 4.6 that vulnerability indicators must not be used to the customer's detriment.

Equality Act 2010 — Sections 13 and 19

Section 13 prohibits direct discrimination — less favourable treatment because of a protected characteristic. Section 19 prohibits indirect discrimination — a provision, criterion, or practice that disadvantages persons who share a protected characteristic, unless objectively justified. For AI agents, indirect discrimination is the primary risk: models that use features correlated with protected characteristics can produce discriminatory outcomes without directly using the protected characteristic. AG-118's proxy variable detection and fairness monitoring requirements implement the Equality Act obligations for AI-driven decision-making. The burden of demonstrating objective justification for any identified disparate impact falls on the organisation.

EU AI Act — Article 9 (Risk Management — Bias Prevention)

Article 9 requires providers of high-risk AI systems to establish a risk management system that identifies and mitigates risks, including risks of bias and discrimination. For AI agents in financial services that make decisions affecting individuals, bias prevention is a risk management requirement. AG-118's pre-deployment bias testing, proxy variable analysis, and ongoing fairness monitoring implement Article 9's bias-related risk management obligations. The AI Act's additional requirements under Article 10 regarding training data quality and bias detection in data further support the need for comprehensive fairness governance.

10. Failure Severity

Severity Rating: Critical
Blast Radius: Customer-base-wide with disproportionate impact on the most vulnerable customer segments; potential societal impact through systemic discrimination

Consequence chain: Fair treatment and vulnerability governance failure produces harm that is both individually severe and systemically pervasive. At the individual level, a vulnerable customer who receives inappropriate treatment may experience: financial loss from unsuitable products, psychological harm from aggressive enforcement, loss of essential services from unfair access decisions, and compounding of existing vulnerability. At the systemic level, an AI agent that discriminates through proxy variables affects every customer in the disadvantaged group — potentially thousands of people across the protected characteristic. The harm is compounded by its invisibility: individual customers may not recognise that they are receiving systematically worse treatment than comparable customers in a different demographic group, and the organisation may not detect the pattern without explicit fairness monitoring. The regulatory consequences are severe and multi-dimensional: FCA enforcement under the Consumer Duty, Equality Act litigation (potentially class action), EU AI Act non-compliance findings, and reputational damage that disproportionately affects customer trust among the affected groups. The FCA has signalled that Consumer Duty enforcement will be a supervisory priority, and AI-driven discrimination is likely to attract particular regulatory attention given public concern about algorithmic bias. Personal liability under the Senior Managers Regime extends to senior managers accountable for the fair treatment of customers and for the governance of AI systems.

Cross-references: AG-118 builds upon AG-117 (Customer Outcome and Foreseeable Harm Monitoring Governance) by requiring outcome monitoring segmented by vulnerability status and protected characteristics — AG-117 provides the monitoring infrastructure, AG-118 specifies the fairness and vulnerability dimensions that must be monitored. AG-001 (Operational Boundary Enforcement) provides the structural limits within which the agent operates; AG-118 ensures that within those limits, the agent's behaviour is fair and vulnerability-aware. AG-045 (Economic Incentive Alignment Verification) addresses whether the agent's incentive structure creates pressure to exploit vulnerability or discriminate — misaligned incentives are a root cause of the fairness failures AG-118 detects. AG-119 (Financial Model Challenge Governance) provides independent challenge to the models that AG-118's fairness monitoring evaluates — including challenge of the vulnerability detection model itself for accuracy and bias. Sibling dimensions AG-115 through AG-117 and AG-119 collectively govern the financial services value transfer lifecycle, with AG-118 ensuring that the entire lifecycle operates fairly for all customer segments.

Cite this protocol
AgentGoverning. (2026). AG-118: Fair Treatment and Vulnerability Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-118