AG-241

Accessibility and Disability Accommodation Governance

Rights, Ethics & Public Interest · ~17 min read · AGS v2.1 · April 2026
Tags: EU AI Act, FCA

2. Summary

Accessibility and Disability Accommodation Governance requires that AI agent interfaces, outputs, and decision processes remain usable, fair, and non-discriminatory for disabled persons. A conforming system does not merely avoid explicit discrimination — it proactively ensures that interaction modalities, response formats, timing assumptions, and decision criteria do not create barriers or produce systematically worse outcomes for persons with physical, sensory, cognitive, or mental health disabilities. This dimension mandates that reasonable adjustments are structurally embedded in the agent's design and operation, not offered as afterthoughts or optional add-ons.

3. Example

Scenario A — Voice-Only Authentication Excludes Deaf Users: A financial services AI agent implements voice biometric authentication as its sole identity verification method for high-value transactions. A deaf customer who communicates via British Sign Language (BSL) cannot complete voice authentication. The agent has no alternative verification pathway. The customer is locked out of their account for transactions above £500 and must visit a physical branch — the nearest of which is 45 miles away. Over a 6-month period, 340 deaf or hard-of-hearing customers experience the same barrier.

What went wrong: The agent was designed with a single authentication modality that inherently excludes users with hearing or speech disabilities. No alternative pathway was implemented. The agent did not detect the accessibility barrier or offer accommodation. Consequence: Equality Act 2010 Section 20 claim for failure to make reasonable adjustments. County court finding against the firm. £1.2 million remediation programme. FCA requirement to demonstrate equivalent service access across all modalities within 180 days.

Scenario B — Timed Interaction Penalises Cognitive Disability: A government benefits assessment agent conducts an online eligibility interview with a strict 30-minute session timeout. Users who do not complete the form within 30 minutes are logged out and must restart from the beginning. A claimant with a learning disability requires approximately 90 minutes to complete the form due to slower processing speed and the need to re-read questions multiple times. The claimant attempts the form 4 times, is timed out each time, and eventually abandons the claim. The benefit — worth £4,200 per year — goes unclaimed.

What went wrong: The session timeout was designed for the average user's completion speed with no accommodation for users who require more time due to disability. No extended time option was offered. The system did not detect repeated timeout patterns that might indicate an accessibility barrier. Consequence: Public Sector Equality Duty violation. Judicial review. Mandatory redesign of the assessment interface with adjustable timing. Retrospective identification and remediation of an estimated 2,800 affected claimants.

Scenario C — Decision Algorithm Penalises Disability-Correlated Behaviour: An AI hiring screening agent evaluates video interview responses using facial expression analysis, speech fluency metrics, and eye contact duration as predictive features. Candidates with autism spectrum conditions, facial paralysis, speech impediments, or visual impairments systematically score lower on these features — not because they are less capable, but because the features measure social presentation norms rather than job-relevant competence. Analysis reveals that candidates who disclose a disability are 2.7 times more likely to be screened out at the AI stage than non-disabled candidates.

What went wrong: The agent's predictive features were proxies for neurotypical social presentation, not job-relevant competence. The feature set inherently discriminated against multiple disability categories. No impact assessment evaluated whether the features produced disparate impact on disabled candidates. No alternative assessment pathway existed. Consequence: Employment tribunal finding of indirect disability discrimination. £3.8 million settlement across affected candidates. Requirement to withdraw the AI screening tool pending redesign and independent audit.

4. Requirement Statement

Scope: This dimension applies to all AI agents that interact with individuals or make decisions affecting individuals where disability could influence the interaction experience or the decision outcome. This includes agents with user-facing interfaces (chat, voice, visual), agents that evaluate individual behaviour or characteristics (screening, scoring, assessment), and agents that determine access to services, benefits, employment, or opportunities. The scope extends to agents embedded in physical environments (kiosks, robots, IoT devices) where physical accessibility is relevant. An agent that processes only machine-to-machine data with no individual interaction or decision impact is excluded. The definition of disability follows the social model: a disability is a long-term physical, sensory, cognitive, or mental health condition that, in interaction with barriers in the environment (including digital environments), hinders full and effective participation on an equal basis with others.

4.1. A conforming system MUST provide at least two independent interaction modalities (e.g., text and voice, visual and auditory) so that no single sensory or motor capability is required to complete any interaction.

4.2. A conforming system MUST ensure that all outputs — text, audio, visual, and multimodal — are accessible to users of assistive technologies, including screen readers, switch access devices, eye-tracking systems, and speech recognition tools, following WCAG 2.2 Level AA or equivalent.

4.3. A conforming system MUST NOT use interaction speed, response latency, session duration, or completion time as negative factors in any decision or scoring process, unless the time constraint is a genuine and proportionate requirement of the function being performed.

4.4. A conforming system MUST NOT use behavioural or biometric features that correlate with disability status — including but not limited to facial expression patterns, speech fluency metrics, eye contact duration, keystroke dynamics, and motor precision — as predictive features in decision-making, unless the feature measures a capability that is a genuine occupational requirement.

4.5. A conforming system MUST provide reasonable adjustments upon request or upon detection of accessibility barriers, including but not limited to: extended time allowances, simplified language modes, alternative input methods, high-contrast or large-text output, and human assistance pathways.

4.6. A conforming system MUST log accessibility barrier detections, adjustment requests, adjustments applied, and any instances where an adjustment could not be provided, with sufficient detail for compliance review.

4.7. A conforming system SHOULD proactively detect accessibility barriers — such as repeated timeouts, interaction abandonment patterns, or assistive technology identifiers — and offer adjustments without requiring explicit user request.

4.8. A conforming system SHOULD conduct disability impact assessments on decision-making features, evaluating whether each feature produces disparate outcomes for disabled persons compared to non-disabled persons.

4.9. A conforming system SHOULD provide an accessible feedback mechanism through which disabled users can report accessibility barriers and request accommodations.

4.10. A conforming system MAY implement user accessibility profiles that store preferred adjustments across sessions, reducing the need for repeated configuration.
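Requirements 4.5 through 4.7 and 4.10 together describe an adjustment pipeline: detect or receive a barrier signal, apply an adjustment, persist the preference, and log the outcome. A minimal Python sketch of that pipeline follows; the event names, adjustment catalogue, and detection thresholds are illustrative assumptions, not values mandated by this dimension.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative adjustment catalogue drawn from requirement 4.5.
ADJUSTMENTS = {"extended_time", "simplified_language", "alt_input",
               "high_contrast", "large_text", "human_assistance"}

@dataclass
class AccessibilityProfile:
    """Per-user stored preferences carried across sessions (requirement 4.10)."""
    user_id: str
    preferred: set = field(default_factory=set)

@dataclass
class AccessibilityLog:
    """Append-only record of barrier and adjustment events (requirement 4.6)."""
    entries: list = field(default_factory=list)

    def record(self, event: str, detail: str) -> None:
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,    # barrier_detected | adjustment_requested |
            "detail": detail,  # adjustment_applied | adjustment_unavailable
        })

def handle_adjustment_request(profile: AccessibilityProfile,
                              log: AccessibilityLog,
                              adjustment: str) -> bool:
    """Apply a requested adjustment; log the outcome whether or not
    it can be provided (requirements 4.5 and 4.6)."""
    log.record("adjustment_requested", adjustment)
    if adjustment in ADJUSTMENTS:
        profile.preferred.add(adjustment)
        log.record("adjustment_applied", adjustment)
        return True
    log.record("adjustment_unavailable", adjustment)
    return False

def detect_barrier(timeouts: int, abandonments: int, threshold: int = 2) -> bool:
    """Proactive barrier detection heuristic (requirement 4.7): repeated
    timeouts or abandonment patterns trigger an unprompted offer."""
    return timeouts >= threshold or abandonments >= threshold
```

In use, a positive `detect_barrier` result would prompt the agent to offer an adjustment such as `extended_time` without waiting for an explicit request, with every step appearing in the log for compliance review.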

5. Rationale

Accessibility and Disability Accommodation Governance addresses the structural risk that AI agents, designed primarily for the capabilities and interaction patterns of non-disabled users, create systematic barriers for disabled persons. This risk is not hypothetical — it is the default outcome when accessibility is not a design requirement.

AI agents introduce accessibility risks that differ from traditional software in three ways. First, AI agents often use natural language interaction, which appears inherently accessible but in practice makes assumptions about language processing speed, vocabulary, pragmatic inference, and social communication norms that disadvantage users with cognitive, learning, or communication disabilities. A chatbot that interprets slow or inconsistent responses as disengagement, non-compliance, or low confidence is making an ableist assumption that disadvantages multiple disability groups.

Second, AI agents increasingly use behavioural and biometric signals — facial expressions, voice characteristics, typing patterns, gaze direction — as inputs to decision-making. These signals correlate strongly with disability status. Facial expression analysis disadvantages persons with facial paralysis, autism spectrum conditions, or Parkinson's disease. Speech fluency analysis disadvantages persons with speech impediments, aphasia, or hearing impairment affecting speech production. Keystroke dynamics analysis disadvantages persons with motor impairments. Using these signals as predictive features in consequential decisions (hiring, credit scoring, benefits assessment) produces indirect disability discrimination by default.

Third, AI agents operate at scale and speed, meaning that an accessibility barrier affects every disabled user who encounters the system, simultaneously, with no opportunity for the case-by-case human judgment that traditionally provided ad hoc accommodation. When a human caseworker notices that a claimant is struggling, they may instinctively extend time, simplify language, or offer help. An AI agent does not do this unless it is specifically designed and required to do so.

The legal framework is unambiguous. The Equality Act 2010 imposes a duty to make reasonable adjustments. The Americans with Disabilities Act requires accessible design. The UN CRPD requires full and effective participation. The EU Accessibility Act requires accessible products and services. AG-241 translates these legal obligations into operational requirements for AI agent systems.

6. Implementation Guidance

AG-241 requires accessibility to be a structural property of the agent system, not an overlay or accommodation pathway. The implementation must address three domains: interface accessibility, interaction accessibility, and decision fairness.

Recommended patterns:

- Multi-modal parity: every function completable through at least two independent interaction modalities (text, voice, visual).
- Adaptive timing: session timeouts extend and preserve state rather than forcing a restart.
- Persistent accessibility profiles that carry preferred adjustments across sessions.
- Disability impact assessment of every decision-making feature before deployment, repeated on a defined cadence.
- Proactive barrier detection — repeated timeouts, abandonment patterns, assistive technology identifiers — that triggers an unprompted offer of adjustment.

Anti-patterns to avoid:

- A single authentication or interaction modality that depends on one sensory or motor capability (Scenario A).
- Fixed session timeouts calibrated to the average user's completion speed (Scenario B).
- Disability-correlated behavioural or biometric features — facial expression, speech fluency, eye contact, keystroke dynamics — used as decision inputs (Scenario C).
- Accessibility delivered as an overlay or on-request add-on rather than a structural property of the system.
- Interpreting slow or inconsistent responses as disengagement, non-compliance, or low confidence.

Industry Considerations

Financial Services. The FCA expects firms to ensure that disabled customers can access services on equivalent terms. AI agents in banking, insurance, and investment must support accessible interaction for all product functions. Payment authentication (SCA under PSD2) must include at least one modality accessible to each disability group — voice biometrics alone is insufficient.

Healthcare. Patient-facing AI agents must comply with NHS Accessible Information Standard (DCB1605), which requires that patients' communication needs are identified, recorded, flagged, shared, and met. Triage agents, appointment booking systems, and patient communication agents must support accessible formats. Decision-support agents must not use disability-correlated features as clinical risk proxies without clinical validation.

Employment. AI agents used in recruitment (CV screening, interview assessment, skills testing) are high-risk for disability discrimination. The Equality and Human Rights Commission has issued specific guidance on AI in recruitment and disability. Features such as video interview analysis must be independently assessed for disability impact before deployment.

Public Sector. The Public Sector Equality Duty (Section 149) requires proactive consideration of disability equality. Government-facing agents must meet WCAG 2.2 Level AA as a minimum. The Government Digital Service (GDS) accessibility requirements apply to all public-sector digital services, including AI agents.

Maturity Model

Basic Implementation — The agent interface meets WCAG 2.1 Level A. One alternative interaction modality is available but does not support all functions. Session timeouts are configurable but not adaptive. Accessibility adjustments are available on request. No disability impact assessment has been conducted on decision-making features. This meets minimum legal requirements in some jurisdictions but falls short of the standard required by AG-241.

Intermediate Implementation — The agent interface meets WCAG 2.2 Level AA. Two fully functional interaction modalities are available. Adaptive timing is implemented with barrier detection. Accessibility profiles are available. Disability impact assessments have been conducted on all decision-making features, with features producing adverse impact ratios below 0.80 either justified or removed. Accessibility testing includes at least 3 assistive technologies and user testing with disabled participants. Accessibility barrier logs are reviewed monthly.

Advanced Implementation — All intermediate capabilities plus: WCAG 2.2 Level AAA for critical functions. Proactive barrier detection offers accommodation without user request. Disability impact assessments are repeated quarterly with updated data. User testing with disabled participants from at least 5 disability groups is conducted annually. Accessibility feedback mechanism is actively monitored with median response time under 5 business days. The organisation publishes an accessibility statement with performance metrics. Independent accessibility audit is conducted annually by a disabled-person-led organisation.

7. Evidence Requirements

Required artefacts:

- Accessibility barrier detection and adjustment logs (per requirement 4.6).
- Disability impact assessment reports for each decision-making feature (per requirement 4.8).
- WCAG 2.2 Level AA conformance audit results for all user-facing interfaces.
- Records of reasonable adjustment requests and outcomes, including any adjustment that could not be provided.
- Reports from user testing with disabled participants and assistive technologies.

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Multi-Modal Interaction Completeness
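A sketch of the coverage check this test implies, verifying requirement 4.1's two-modality floor for every function. The function names and modality map are hypothetical; a real harness would derive the map from the agent's interface inventory.

```python
# Hypothetical function-to-modality coverage map for Test 8.1.
COVERAGE = {
    "authenticate":  {"voice", "text"},
    "check_balance": {"voice", "text", "visual"},
    "transfer":      {"voice"},   # single-modality gap (cf. Scenario A)
}

def modality_gaps(coverage: dict) -> list:
    """Return functions completable through fewer than two independent
    modalities — each one a violation of requirement 4.1."""
    return sorted(fn for fn, modes in coverage.items() if len(modes) < 2)

assert modality_gaps(COVERAGE) == ["transfer"]
```

A voice-only `transfer` function is exactly the Scenario A failure: the test passes only when the gap list is empty.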

Test 8.2: Assistive Technology Compatibility

Test 8.3: Adaptive Timing Verification
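One way to exercise adaptive timing is to confirm that a timeout extends the session and preserves entered state rather than restarting it, which is the behaviour Scenario B lacked. The `Session` model and 30-minute increments below are illustrative assumptions, not a normative test body.

```python
class Session:
    """Minimal session model with adaptive timing: expiry of the window
    extends the allowance and preserves state instead of discarding it."""
    def __init__(self, base_limit_min: int = 30):
        self.limit = base_limit_min
        self.answers: dict = {}
        self.extensions = 0

    def answer(self, question: str, response: str) -> None:
        self.answers[question] = response

    def on_timeout(self) -> None:
        # Extend rather than restart; entered answers survive.
        self.extensions += 1
        self.limit += 30

def test_adaptive_timing():
    s = Session()
    s.answer("q1", "yes")
    s.on_timeout()                      # simulate expiry of the base window
    assert s.answers == {"q1": "yes"}   # no data loss on timeout
    assert s.limit == 60                # extended allowance granted
    s.on_timeout()
    assert s.limit == 90                # reaches Scenario B's 90-minute need
```

A fixed-timeout implementation, which logs the user out and clears state, would fail the first assertion immediately.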

Test 8.4: Decision Feature Disability Impact
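A sketch of the disparity check this test implies, using the 0.80 adverse-impact-ratio threshold cited in the maturity model (the four-fifths heuristic). The function name and the example selection counts are illustrative.

```python
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of group A's selection rate to group B's. Under the
    four-fifths heuristic, a ratio below 0.80 flags the feature for
    justification or removal (cf. the intermediate maturity tier)."""
    return (selected_a / total_a) / (selected_b / total_b)

# Illustrative counts: 40/100 disclosed-disability candidates selected
# versus 70/100 other candidates.
ratio = adverse_impact_ratio(40, 100, 70, 100)
assert ratio < 0.80   # disparate impact flagged for this feature
```

In practice the computation would run per decision feature, on outcome data split by disclosed disability status, as part of the disability impact assessment required by 4.8.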

Test 8.5: Reasonable Adjustment Request Processing

Test 8.6: Accessibility Logging Completeness
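A sketch of a completeness check against the four event classes named in requirement 4.6: barrier detections, adjustment requests, adjustments applied, and adjustments that could not be provided. The event-name strings are illustrative assumptions.

```python
# The four event classes required by 4.6 (names are illustrative).
REQUIRED_EVENTS = {
    "barrier_detected",
    "adjustment_requested",
    "adjustment_applied",
    "adjustment_unavailable",
}

def missing_event_types(log_entries: list) -> set:
    """Return required event classes absent from a log sample; an empty
    set means the logging pipeline covers everything 4.6 requires."""
    seen = {entry["event"] for entry in log_entries}
    return REQUIRED_EVENTS - seen

sample = [{"event": "barrier_detected"},
          {"event": "adjustment_requested"}]
assert missing_event_types(sample) == {"adjustment_applied",
                                       "adjustment_unavailable"}
```

A passing run of Test 8.6 would exercise each pathway (detect, request, apply, refuse) and then assert that `missing_event_types` returns the empty set.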

Conformance Scoring

9. Regulatory Mapping

| Regulation | Provision | Relationship Type |
| --- | --- | --- |
| Equality Act 2010 | Sections 20-21 (Duty to Make Reasonable Adjustments) | Direct requirement |
| Equality Act 2010 | Section 149 (Public Sector Equality Duty) | Direct requirement |
| ADA | Title III (Public Accommodations) | Direct requirement |
| EU Accessibility Act | Directive 2019/882 | Direct requirement |
| UN CRPD | Articles 9, 21 (Accessibility, Freedom of Expression) | Supports compliance |
| EN 301 549 | Accessibility Requirements for ICT Products and Services | Supports compliance |
| WCAG 2.2 | Levels A, AA, AAA | Supports compliance |
| EU AI Act | Article 9 (Risk Management) | Supports compliance |

Equality Act 2010 — Sections 20-21 (Duty to Make Reasonable Adjustments)

The duty to make reasonable adjustments is a cornerstone of UK disability law. Service providers must take reasonable steps to avoid substantial disadvantage to disabled persons. For AI agents, this translates to: providing alternative interaction modalities where the primary modality creates a barrier; adjusting timing, pace, and complexity where fixed parameters disadvantage disabled users; and ensuring decision-making features do not penalise disability-correlated characteristics. The duty is anticipatory — organisations must anticipate the needs of disabled people in advance, not wait for individual requests. An AI agent deployed without multi-modal interaction, adaptive timing, or disability impact assessment has not complied with the anticipatory duty.

ADA — Title III (Public Accommodations)

Title III prohibits discrimination on the basis of disability in places of public accommodation. US courts have increasingly applied Title III to digital services, including AI-powered interfaces. The Department of Justice has issued guidance confirming that web and mobile accessibility obligations apply to public accommodations. AI agents that serve the US public must ensure accessibility equivalent to physical premises accessibility.

EU Accessibility Act — Directive 2019/882

The European Accessibility Act establishes accessibility requirements for products and services, including digital services. AI agents provided as, or embedded within, in-scope services must meet the accessibility requirements by June 2025. Those requirements align with EN 301 549, which incorporates WCAG 2.1 Level AA. AG-241's requirements meet or exceed the EAA's digital service accessibility standards.

UN CRPD — Articles 9 and 21

Article 9 requires States Parties to take appropriate measures to ensure persons with disabilities have access to information and communications technologies on an equal basis with others. Article 21 requires accessible formats and technologies. While the CRPD binds states rather than private actors directly, its principles inform regulatory interpretation and judicial reasoning across jurisdictions. AG-241 implements the CRPD's accessibility principles at the operational level.

10. Failure Severity

Severity Rating: High
Blast Radius: Cohort-level — systematically affecting all disabled users across the agent's user population, approximately 15-20% of users in most populations.

Consequence chain: Failure to implement accessibility and disability accommodation in AI agents produces systematic exclusion of disabled persons from services, decisions, and opportunities. The immediate technical failure is an accessibility barrier — a deaf user cannot authenticate, a user with a learning disability cannot complete a timed form, a candidate with autism is screened out by facial analysis. The operational impact is that disabled users receive systematically worse outcomes: lower service quality, reduced access, higher abandonment rates, and adverse decisions based on disability-correlated features. The scale is significant — approximately 16% of the global population has a significant disability (WHO, 2023), and the proportion is higher among older adults and lower-income groups who are also more likely to interact with government and financial service agents. The legal exposure is substantial: Equality Act claims, ADA lawsuits, regulatory enforcement under the EU Accessibility Act. US ADA digital accessibility settlements have ranged from $50,000 to $12 million. UK Equality Act findings have required remediation programmes costing £1-5 million. The reputational consequence is significant and sustained, as disability rights organisations are well-organised and effective at publicising systemic exclusion.

Cross-references: AG-239 (Vulnerable Person Protection Governance) provides the general vulnerability framework; AG-241 specialises this for disability-specific accommodation. AG-242 (Non-Discrimination Outcome Testing Governance) covers disparate impact testing, which complements AG-241's disability impact assessment. AG-051 (Fundamental Rights Impact Assessment) requires rights impact assessment including disability rights. AG-062 (Automated Decision Contestability) ensures disabled persons can contest adverse automated decisions. AG-246 (Cultural and Linguistic Fairness Governance) addresses related accessibility concerns for linguistic minorities. AG-240 through AG-248 are sibling dimensions.

Cite this protocol
AgentGoverning. (2026). AG-241: Accessibility and Disability Accommodation Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-241