Behavioural Biometrics Fairness Governance requires organisations to systematically test, monitor, and remediate bias and instability in behavioural biometric systems used by or integrated into AI agents. Behavioural biometrics — including keystroke dynamics, gait analysis, mouse movement patterns, touchscreen interaction patterns, and typing cadence — are increasingly used for continuous authentication, fraud detection, identity verification, and risk scoring. Unlike physiological biometrics such as fingerprints or iris scans, behavioural biometrics are inherently variable: they depend on the subject's physical condition, cognitive state, environmental context, assistive technology usage, and neuromuscular characteristics. This variability creates systematic fairness risks because the baseline behavioural patterns that these systems learn and enforce disproportionately reflect the motor capabilities, interaction styles, and physical norms of non-disabled, neurotypical users. When an agent relies on behavioural biometric signals to make authentication decisions, risk assessments, or access control determinations, bias in those signals propagates directly into consequential outcomes — account lockouts, fraud flags, service denials, and elevated scrutiny — that fall disproportionately on disabled users, elderly users, users with temporary injuries, users of assistive technology, and users whose neuromuscular profiles differ from the training population. This dimension mandates pre-deployment fairness testing, ongoing disparate impact monitoring, instability analysis across environmental and physiological conditions, and remediation obligations when bias is detected.
Scenario A — Keystroke Dynamics Bias Against Disabled Users: A digital banking platform deploys an AI agent that uses keystroke dynamics for continuous authentication during online banking sessions. The system builds a behavioural profile based on each user's typing rhythm — inter-key latency, key hold duration, digraph patterns, and typing speed. If the observed keystroke pattern deviates beyond a threshold from the stored profile, the agent triggers a step-up authentication challenge or locks the session. A customer with multiple sclerosis reports being locked out of her account four times in a single week. Her neurological condition causes variable hand tremor that alters her typing rhythm unpredictably — some days her inter-key latency is within her baseline profile, other days the tremor introduces 40-80ms of additional variance. The system interprets this variance as a potential impostor signal. Investigation reveals that the system's false rejection rate for users with motor disabilities is 11.3%, compared to 1.7% for the general population — a 6.6x disparity. The bank's complaint records show 847 similar lockout complaints over 14 months, disproportionately from users aged 65 and over and users who had disclosed disabilities. The bank had never disaggregated its false rejection rate by disability status, motor condition, or age cohort, and therefore had no visibility into the disparity.
What went wrong: The keystroke dynamics model was trained predominantly on data from non-disabled users and evaluated using aggregate accuracy metrics that did not disaggregate by disability status or motor condition. The system's deviation threshold was calibrated to the variance range of neurotypical users, treating the higher natural variance of users with motor disabilities as anomalous. No pre-deployment fairness testing examined differential false rejection rates across motor capability cohorts. No ongoing monitoring disaggregated lockout rates by protected characteristics. The 6.6x false rejection disparity persisted undetected for 14 months because the organisation measured only aggregate system accuracy.
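The remediation is analytically simple: the authentication logs the bank already held, disaggregated by cohort, would have surfaced the disparity immediately. A minimal sketch in Python, assuming each event carries a cohort label derived from voluntary disclosure or accommodation records (all field names are illustrative):

```python
from collections import defaultdict

def false_rejection_rates(auth_events):
    """Per-cohort false rejection rate from authentication logs.
    Each event is a dict with a 'cohort' label (e.g. 'motor_disability',
    'general') and 'false_reject' (True when a genuine user was challenged
    or locked out). Field names are illustrative."""
    attempts = defaultdict(int)
    rejects = defaultdict(int)
    for event in auth_events:
        attempts[event["cohort"]] += 1
        rejects[event["cohort"]] += bool(event["false_reject"])
    return {cohort: rejects[cohort] / attempts[cohort] for cohort in attempts}

def disparity_ratio(rates):
    """Ratio of the worst-served cohort's rate to the best-served cohort's."""
    return max(rates.values()) / min(rates.values())

# Rates mirroring Scenario A: 11.3% vs 1.7% yields the 6.6x disparity.
print(f"{disparity_ratio({'motor_disability': 0.113, 'general': 0.017}):.1f}x")
```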
Scenario B — Gait Analysis Instability in Physical Environments: A government building deploys an AI-powered continuous authentication system that uses gait analysis — captured via floor-mounted pressure sensors and corridor cameras — to verify the identity of authorised personnel as they move through secured areas. The system builds gait profiles based on stride length, cadence, ground contact time, and asymmetry ratios. An employee who uses a prosthetic leg repeatedly fails identity verification. His gait profile varies with the prosthetic's alignment, which his prosthetist adjusts every 6-8 weeks, and with environmental factors — his gait changes measurably when walking on the polished marble lobby floor versus the carpeted corridor. The system's instability threshold does not account for legitimate gait variation caused by prosthetic devices, orthotic supports, or mobility aids. Over three months, the employee is stopped and challenged 23 times by security personnel responding to the system's alerts. Other employees with temporary injuries — a torn ACL, a broken ankle in a walking boot — experience similar repeated challenges during their recovery periods. The system's false alert rate for users with mobility impairments is 14.7% per transit, compared to 0.9% for unimpaired users, a disparity of more than 16x.
What went wrong: The gait analysis system was designed and tested with a population that excluded users with prosthetics, orthotics, and temporary mobility impairments. The instability threshold was set based on the natural gait variability of unimpaired walkers, which is substantially narrower than the variability introduced by prosthetic devices, orthotics, or recovery from injury. No environmental instability testing examined how floor surfaces, footwear changes, or carrying loads affected gait profiles across different user populations. The system treated all gait variance beyond the threshold as a potential identity mismatch, with no mechanism to distinguish disability-related variance from impostor-related variance.
Scenario C — Typing Pattern Discrimination in Employment Screening: A recruitment platform integrates an AI agent that uses typing pattern analysis during online assessment tests to detect potential cheating or impersonation. The system monitors typing speed, error rate, pause patterns, and backspace frequency to generate a "behavioural authenticity score." Candidates whose typing patterns deviate significantly from patterns established during a calibration phase are flagged for potential fraud. A candidate with dyslexia is flagged because her typing pattern during the timed assessment exhibits substantially higher backspace frequency, longer pause-before-word patterns, and inconsistent typing speed compared to her calibration session — which was untimed and lower-stress. A candidate with carpal tunnel syndrome is flagged because he switches between typing with both hands and typing with one hand during the assessment as his pain fluctuates, creating a bimodal typing pattern the system interprets as two different people using the same account. Analysis of the platform's flagging data reveals that candidates who disclosed disabilities during the accommodation request process were flagged at 4.2x the rate of non-disclosing candidates. The typing pattern system had no accommodation mechanism for disability-related typing variation and no fairness testing against disability cohorts.
What went wrong: The behavioural authenticity scoring system conflated disability-related typing variation with fraud indicators. Dyslexia-associated patterns — increased corrections, longer word-retrieval pauses — were scored identically to cheating-associated patterns. Pain-driven modality switching was treated as evidence of impersonation. The calibration-versus-assessment comparison did not account for the differential impact of stress and time pressure on typing patterns across disability groups. No pre-deployment testing examined whether the system produced disparate flagging rates for disabled candidates. The 4.2x flagging disparity constituted indirect disability discrimination in the employment context.
Scope: This dimension applies to any AI agent deployment that uses behavioural biometric signals — including but not limited to keystroke dynamics, gait analysis, mouse or pointer movement patterns, touchscreen interaction patterns, typing cadence, swipe gestures, device handling patterns, or any other signal derived from the user's physical interaction with a device or environment — for authentication, identity verification, fraud detection, risk scoring, behavioural authenticity assessment, or any other consequential determination. The scope covers both primary behavioural biometric systems (where the behavioural signal is the sole basis for the determination) and supplementary systems (where the behavioural signal is one factor among several). The scope extends to third-party behavioural biometric components integrated into the agent's decision pipeline — the deploying organisation retains fairness obligations regardless of whether the biometric component is developed in-house or procured from a vendor.
4.1. A conforming system MUST conduct pre-deployment fairness testing of all behavioural biometric components, measuring differential error rates — including false rejection rates, false acceptance rates, false positive rates for fraud or anomaly detection, and scoring disparities — across demographic groups defined by at minimum: disability status and type (motor, neurological, cognitive, sensory), age cohort, and any other protected characteristic relevant to the deployment jurisdiction.
4.2. A conforming system MUST define and enforce maximum acceptable disparity ratios for behavioural biometric error rates between the highest-error demographic group and the lowest-error demographic group, with the disparity ratio not exceeding 3:1 for either false rejection rates or false positive fraud or anomaly flagging rates, unless a documented and independently reviewed justification demonstrates that a wider disparity is technically unavoidable and that compensating controls fully mitigate the impact on affected users.
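A sketch of how the 4.2 ceiling might operate as a pre-deployment release gate; the 3:1 limit comes from the clause, while the function and field names are illustrative:

```python
MAX_DISPARITY = 3.0  # the 3:1 ceiling from requirement 4.2

def disparity_gate(error_rates, metric, justification=None):
    """Pre-deployment gate: fail when the worst/best cohort error-rate ratio
    exceeds 3:1 and no documented, independently reviewed justification is
    attached (requirement 4.2). error_rates maps cohort -> rate."""
    worst = max(error_rates.values())
    best = min(error_rates.values())
    ratio = worst / max(best, 1e-9)  # guard against a zero best-case rate
    if ratio <= MAX_DISPARITY:
        return True, f"{metric}: {ratio:.1f}x within the 3:1 limit"
    if justification and justification.get("independently_reviewed"):
        return True, f"{metric}: {ratio:.1f}x over limit; reviewed justification on file"
    return False, f"{metric}: {ratio:.1f}x over limit with no reviewed justification"
```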
4.3. A conforming system MUST implement ongoing disparate impact monitoring that disaggregates behavioural biometric outcomes — lockouts, step-up challenges, fraud flags, authentication failures, risk score elevations — by disability status, age cohort, and other protected characteristics, with monitoring reports produced at minimum quarterly and reviewed by the governance function.
4.4. A conforming system MUST conduct instability testing that measures the behavioural biometric system's error rates under varied environmental and physiological conditions, including but not limited to: changes in assistive technology configuration, temporary injuries or medical conditions affecting motor function, environmental surface changes (for gait systems), device changes (for touch or keystroke systems), stress and fatigue states, and medication effects on motor control.
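One illustrative way to organise 4.4's instability testing is as a cross-product of condition axes, with an error rate recorded for each combination. The axes below are examples for a keystroke system, not a prescribed set:

```python
import itertools

# Illustrative condition axes for a keystroke dynamics system (requirement 4.4).
DEVICES = ["laptop_keyboard", "external_keyboard", "on_screen_keyboard"]
ASSISTIVE_TECH = ["none", "adaptive_keyboard", "voice_hybrid_input"]
PHYSIOLOGICAL = ["rested", "fatigued", "tremor_active", "post_medication"]

def instability_matrix(measure_frr):
    """Evaluate the false rejection rate for every condition combination.
    measure_frr is a callback supplied by the test harness that runs the
    biometric pipeline under one combination and returns its FRR."""
    return {combo: measure_frr(*combo)
            for combo in itertools.product(DEVICES, ASSISTIVE_TECH, PHYSIOLOGICAL)}

def unstable_conditions(matrix, ceiling=0.05):
    """Condition combinations whose FRR exceeds an illustrative 5% ceiling."""
    return sorted(combo for combo, frr in matrix.items() if frr > ceiling)
```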
4.5. A conforming system MUST provide an accommodation mechanism that allows users with disabilities or medical conditions affecting their behavioural biometric profile to request adjusted thresholds, alternative authentication pathways, or exemption from behavioural biometric assessment, without requiring disclosure of specific diagnosis and without imposing a less favourable user experience as a penalty for requesting accommodation.
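A minimal sketch of an accommodation record and routing decision consistent with 4.5; note the deliberate absence of any diagnosis field (all names are illustrative):

```python
from dataclasses import dataclass
from enum import Enum

class Accommodation(Enum):
    WIDENED_THRESHOLD = "widened_threshold"
    ALTERNATIVE_PATHWAY = "alternative_pathway"  # e.g. hardware token or OTP
    BIOMETRIC_EXEMPTION = "biometric_exemption"

@dataclass
class AccommodationRecord:
    user_id: str
    accommodation: Accommodation
    granted_at: str
    # Deliberately no diagnosis field: requirement 4.5 forbids requiring one.

def route_authentication(user_id, accommodations):
    """Select the authentication pathway, honouring any accommodation on
    file. Alternative pathways must be no slower or more burdensome than
    the default, so the accommodation does not become a penalty."""
    record = accommodations.get(user_id)
    if record is None:
        return "behavioural_biometric"
    if record.accommodation is Accommodation.WIDENED_THRESHOLD:
        return "behavioural_biometric_widened"
    return "alternative_factor"  # covers exemption and alternative pathway
```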
4.6. A conforming system MUST implement automatic re-enrolment or profile adaptation procedures that update the stored behavioural biometric baseline when a user's legitimate behavioural pattern changes due to medical events, ageing, new assistive technology, or other non-fraudulent causes, without requiring the user to prove the reason for the change.
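Profile adaptation under 4.6 can be sketched as an exponentially weighted update applied on genuine-accepted sessions, plus a re-enrolment trigger for abrupt legitimate changes that gradual drift cannot follow. The update rate and trigger limit are illustrative assumptions:

```python
def adapt_profile(mean, var, session, alpha=0.15):
    """Exponentially weighted update of the stored baseline (requirement 4.6),
    applied after each genuine-accepted session so the profile tracks gradual
    legitimate change without the user having to prove the cause."""
    new_mean = [(1 - alpha) * m + alpha * x for m, x in zip(mean, session)]
    new_var = [(1 - alpha) * v + alpha * (x - m) ** 2
               for v, m, x in zip(var, mean, session)]
    return new_mean, new_var

def needs_reenrolment(consecutive_false_rejects, limit=3):
    """Trigger a fresh enrolment flow after repeated rejections of a user who
    subsequently passes an alternative factor: an abrupt legitimate change,
    such as surgery or a new assistive device, that drift cannot follow."""
    return consecutive_false_rejects >= limit
```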
4.7. A conforming system MUST document and retain the training data composition for all behavioural biometric models, including the representation of disabled users, elderly users, assistive technology users, and users with temporary motor impairments in the training population, and demonstrate that underrepresented groups were either adequately represented or that the underrepresentation was identified and mitigated through targeted testing and threshold adjustment.
4.8. A conforming system MUST ensure that when a behavioural biometric system triggers a consequential action — account lockout, fraud flag, access denial, identity challenge — the affected user receives a timely and accessible notification explaining that a behavioural biometric assessment contributed to the action, and is informed of the accommodation and redress pathways available.
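A sketch of the notification payload that 4.8 implies, with illustrative field names and URLs:

```python
from dataclasses import dataclass

@dataclass
class ConsequentialActionNotice:
    """User-facing notice issued whenever a behavioural biometric assessment
    contributes to a consequential action (requirement 4.8)."""
    action: str                      # e.g. "session_lockout", "fraud_flag"
    plain_language_reason: str       # names the biometric factor, no jargon
    accommodation_pathway_url: str   # how to request adjusted treatment
    redress_pathway_url: str         # how to contest the action

notice = ConsequentialActionNotice(
    action="session_lockout",
    plain_language_reason=(
        "Your typing pattern differed from the profile on this account, "
        "so we paused the session as a precaution."),
    accommodation_pathway_url="https://example.org/accommodations",  # illustrative
    redress_pathway_url="https://example.org/appeal",                # illustrative
)
```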
4.9. A conforming system SHOULD implement adaptive thresholds that adjust deviation tolerances based on the user's historical variance profile rather than applying a single population-level threshold, so that users with naturally higher behavioural variance are not systematically penalised.
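The adaptive thresholding in 4.9 can be sketched as a per-user z-score: deviation is measured in units of the user's own historical variability rather than against a population constant (the tolerance k is an illustrative parameter):

```python
def deviation_in_user_units(observation, user_mean, user_std, floor=1e-6):
    """Per-user z-score: deviation measured in units of this user's own
    historical variability, so a naturally high-variance user is not judged
    against a low-variance population norm (requirement 4.9)."""
    return abs(observation - user_mean) / max(user_std, floor)

def accept(observation, user_mean, user_std, k=3.0):
    """Accept sessions within k of the user's own standard deviations.
    The same absolute deviation that is anomalous for a steady typist
    can be routine for a user with a tremor condition."""
    return deviation_in_user_units(observation, user_mean, user_std) <= k
```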
4.10. A conforming system SHOULD conduct intersectional fairness analysis that examines error rate disparities at the intersection of multiple characteristics — for example, elderly users with motor disabilities, or users with cognitive disabilities using assistive keyboards — rather than examining each characteristic in isolation.
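A sketch of 4.10's intersectional analysis, computing error rates for every pairwise combination of characteristics rather than one characteristic at a time (field names are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def intersectional_error_rates(events,
                               traits=("age_band", "motor_condition",
                                       "assistive_tech")):
    """False rejection rates at every pairwise intersection of characteristics
    (requirement 4.10). Each event carries the trait fields plus
    'false_reject'; all field names are illustrative."""
    rates = {}
    for a, b in combinations(traits, 2):
        attempts, rejects = defaultdict(int), defaultdict(int)
        for event in events:
            key = (event[a], event[b])
            attempts[key] += 1
            rejects[key] += bool(event["false_reject"])
        rates[(a, b)] = {key: rejects[key] / attempts[key] for key in attempts}
    return rates
```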
4.11. A conforming system SHOULD engage disabled users and disability advocacy organisations in the design, testing, and evaluation of behavioural biometric systems, including participation in usability studies, threshold calibration, and fairness metric definition.
4.12. A conforming system MAY implement behavioural biometric confidence scoring that reports the system's confidence in its assessment rather than a binary match/no-match determination, enabling downstream decision logic to apply graduated responses proportional to the degree of deviation.
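A sketch of the graduated response logic that 4.12 enables; the confidence bands are illustrative and would be calibrated per deployment:

```python
def graduated_response(confidence):
    """Map a continuous match confidence to a proportional response
    (requirement 4.12) instead of a binary accept/lockout decision."""
    if confidence >= 0.90:
        return "continue_session"
    if confidence >= 0.60:
        return "passive_recheck"         # gather more signal, no user friction
    if confidence >= 0.30:
        return "step_up_authentication"  # one extra factor, no lockout
    return "hold_and_notify"             # human review before any lockout
```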
Behavioural biometrics occupy a uniquely hazardous position in the fairness landscape because the very signals they measure — how a person types, walks, moves a pointer, or interacts with a device — are directly and causally affected by disability, age, medical conditions, and neurodiversity. Unlike demographic attributes that may correlate with model features through indirect pathways, behavioural biometric features are proximate measurements of motor function and neuromuscular coordination. A keystroke dynamics system literally measures the precision and consistency of finger movements. A gait analysis system literally measures the symmetry and regularity of leg movements. These measurements are inherently lower-precision and higher-variance for users with motor disabilities, tremor conditions, prosthetic devices, neurological conditions affecting coordination, repetitive strain injuries, and the natural motor decline associated with ageing. The bias is not a statistical artefact — it is a direct consequence of what the system measures.
This creates a fairness problem that is qualitatively different from bias in other AI domains. In credit scoring, bias typically arises from historical correlations in training data that may be proxies for protected characteristics. In behavioural biometrics, the protected characteristic (disability, age) directly affects the measured signal (typing rhythm, gait pattern). The system is not using a proxy — it is measuring the characteristic itself, or its immediate motor consequence. This means that standard debiasing techniques — removing protected attributes from the feature set, reweighting training samples — are insufficient because the signal-to-noise ratio is fundamentally different across populations. A non-disabled user's keystroke pattern has low natural variance; a user with Parkinson's disease has high natural variance that is intrinsic to the condition. No amount of data rebalancing changes the fact that the system's deviation threshold will reject the higher-variance user more frequently.
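The variance argument can be made concrete with a toy Gaussian model (an assumption for illustration only): a fixed deviation threshold calibrated to a low-variance user's distribution necessarily rejects a higher-variance user far more often, regardless of how the training data is balanced:

```python
import math

def false_reject_prob(sigma, threshold):
    """P(|x - mu| > threshold) when genuine sessions follow N(mu, sigma^2)."""
    return 1 - math.erf(threshold / (sigma * math.sqrt(2)))

# Threshold fixed at 3x the low-variance user's standard deviation:
t = 3.0
print(false_reject_prob(1.0, t))  # ~0.0027: 0.27% FRR for the low-variance user
print(false_reject_prob(3.0, t))  # ~0.3173: ~32% FRR when natural variance is 3x
```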
The consequences of behavioural biometric bias are immediate and tangible. Unlike a biased credit score that affects access to credit in the abstract, a biased behavioural biometric system locks people out of their bank accounts, flags them as frauds, denies them physical access to buildings, or brands their job applications as potentially fraudulent. These are acute harms that occur in real time and are experienced by the affected individual as personal accusation — being told that you are not who you claim to be, or that your behaviour is suspicious, because your disability causes you to type differently or walk differently.
The legal framework is unambiguous. Disability discrimination legislation in most jurisdictions — the Americans with Disabilities Act, the UK Equality Act 2010, the EU Employment Equality Directive, the European Accessibility Act — prohibits both direct discrimination and indirect discrimination (disparate impact). A behavioural biometric system that produces a 6.6x false rejection disparity for users with motor disabilities constitutes prima facie indirect discrimination unless the deploying organisation can demonstrate that the practice is a proportionate means of achieving a legitimate aim and that no less discriminatory alternative is available. The reasonable adjustment obligation (UK) and reasonable accommodation obligation (US, EU) further require that organisations proactively adjust practices that disadvantage disabled individuals. Failing to provide an alternative authentication pathway for users whose disabilities make behavioural biometrics unreliable is a failure to make reasonable adjustments.
The EU AI Act classifies biometric systems used for identification and categorisation as high-risk (Annex III, paragraph 1), and Article 10 requires that training and testing datasets are relevant, sufficiently representative, and as free of errors as possible — a requirement that is violated when training data systematically underrepresents disabled users. Article 9 requires that the risk management system identifies and analyses foreseeable risks to health, safety, and fundamental rights — disability discrimination through behavioural biometrics is a foreseeable risk that must be identified and mitigated.
Instability is the second critical dimension beyond demographic bias. Behavioural biometric signals are not stable over time or across contexts for any user, but the degree of instability varies enormously across populations. A non-disabled user's typing pattern may shift by 5-10% between a morning session and an evening session due to fatigue. A user with multiple sclerosis may exhibit 40-80% variance between sessions depending on disease activity. A user recovering from hand surgery may have a completely different typing pattern for weeks or months. Gait patterns change with footwear, floor surface, carrying loads, fatigue, and pain levels. If the system's stability assumptions are calibrated to the low-variance population, the high-variance population is systematically excluded. Instability testing — measuring system reliability across the range of conditions that real users experience — is therefore a fairness requirement, not merely a robustness requirement.
Behavioural Biometrics Fairness Governance requires a testing and monitoring infrastructure that spans the entire lifecycle of the biometric system — from training data composition through deployment, ongoing operation, and accommodation management. The core challenge is that fairness in behavioural biometrics cannot be achieved through post-hoc statistical adjustment alone; it requires design-level decisions about threshold architecture, variance modelling, and accommodation pathways.
Recommended patterns:
Anti-patterns to avoid:
Financial Services. Banks and payment providers are the most common deployers of keystroke dynamics and device interaction biometrics for fraud detection and continuous authentication. Financial regulators increasingly scrutinise whether authentication systems create barriers for disabled and elderly customers. The FCA's Consumer Duty requires firms to ensure that their products and services deliver good outcomes for all customer segments, including vulnerable customers — a behavioural biometric system with a 6x false rejection disparity for disabled customers is inconsistent with Consumer Duty obligations. PSD2 strong customer authentication requirements create pressure to deploy behavioural biometrics as a "frictionless" authentication factor, but this frictionlessness is only real for users whose behavioural patterns match the system's assumptions.
Public Sector. Government deployments of behavioural biometrics — for building access control, identity verification for benefit access, or continuous authentication for sensitive systems — carry heightened obligations under equality legislation. Public authorities are subject to the Public Sector Equality Duty (UK Equality Act 2010 Section 149), which requires proactive consideration of the impact of policies and practices on disabled individuals. Deploying a behavioural biometric system without disability-stratified fairness testing is prima facie non-compliant with the Public Sector Equality Duty.
Healthcare and Safety-Critical. Behavioural biometric authentication in clinical systems — used to verify clinician identity during medication administration or surgical system access — must not create barriers for healthcare professionals with disabilities. A clinician with essential tremor who is repeatedly locked out of the medication dispensing system because her keystroke pattern fails authentication faces a patient safety risk, not merely an inconvenience. Safety-critical deployments must implement immediate fallback authentication that does not degrade operational capability.
Employment. Behavioural biometrics in recruitment, employee monitoring, or performance assessment contexts face stringent anti-discrimination obligations. Using typing patterns to assess "cognitive focus" or "engagement" — as some workforce analytics products claim to do — risks conflating disability-related typing patterns with low performance or disengagement. Employers must conduct disparate impact analysis before deploying any behavioural biometric system that affects employment decisions.
Basic Implementation — Pre-deployment fairness testing has been conducted with disability-stratified cohorts, and error rate disparities are documented. Maximum acceptable disparity ratios are defined and enforced. An accommodation mechanism exists for users with disabilities. Training data composition is documented including representation of disabled users. Consequential action notifications include information about accommodation pathways. All mandatory requirements (4.1 through 4.8) are satisfied.
Intermediate Implementation — All basic capabilities plus: adaptive thresholds adjust to individual variance profiles rather than applying population-level thresholds. Ongoing disparate impact monitoring dashboards operate in production with quarterly review. Instability matrix testing has been conducted across environmental and physiological conditions. Intersectional fairness analysis examines compound demographic effects. Multi-session enrolment captures condition-varied behavioural data.
Advanced Implementation — All intermediate capabilities plus: disabled users and disability advocacy organisations participate in design and evaluation. Behavioural biometric systems produce confidence scores rather than binary outcomes. Real-time fairness monitoring triggers automatic threshold adjustment when disparity ratios approach limits. An annual independent external audit validates fairness testing methodology, monitoring effectiveness, and accommodation accessibility. The organisation can demonstrate through longitudinal data that disparity ratios have been maintained below defined limits throughout the operational period.
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Pre-Deployment Fairness Testing Completeness
Test 8.2: Disparity Ratio Compliance
Test 8.3: Ongoing Disparate Impact Monitoring Operation
Test 8.4: Instability Testing Coverage
Test 8.5: Accommodation Mechanism Accessibility and Effectiveness
Test 8.6: Automatic Re-Enrolment and Profile Adaptation
Test 8.7: Training Data Composition Documentation
Test 8.8: Consequential Action Notification Completeness
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 10 (Data and Data Governance) | Direct requirement |
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| EU AI Act | Annex III, Paragraph 1 (Biometric Systems) | Scope definition |
| UK Equality Act 2010 | Sections 19, 20, 149 (Indirect Discrimination, Reasonable Adjustments, PSED) | Direct requirement |
| Americans with Disabilities Act | Title II, Title III | Direct requirement |
| European Accessibility Act | Directive 2019/882 | Supports compliance |
| GDPR | Article 22 (Automated Decision-Making), Article 35 (DPIA) | Supports compliance |
| NIST AI RMF | MAP 2.3 (Fairness), MEASURE 2.6 (Bias Testing) | Supports compliance |
| ISO 42001 | Clause 6.1.2 (AI Risk Assessment) | Supports compliance |
Article 10 requires that training, validation, and testing datasets for high-risk AI systems are relevant, sufficiently representative, and free of errors as far as possible. For behavioural biometric systems, this means that training data must include adequate representation of users with disabilities, users with varied motor capabilities, elderly users, and assistive technology users. A training dataset that consists predominantly of non-disabled, working-age users is not "sufficiently representative" of the population that will interact with the system. AG-672 operationalises the representativeness requirement by mandating documentation of training data composition and targeted mitigation when underrepresentation is identified.
Article 9 requires that the risk management system identifies and analyses the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights. Disability discrimination through behavioural biometric bias is a reasonably foreseeable risk for any system that measures motor behaviour — it is foreseeable because the causal link between disability and motor variation is medically and scientifically established. AG-672 requires that this risk is identified through pre-deployment fairness testing and managed through ongoing monitoring, accommodation mechanisms, and disparity limits.
Section 19 prohibits indirect discrimination — applying a provision, criterion, or practice that puts persons sharing a protected characteristic at a particular disadvantage compared to persons who do not share it. A behavioural biometric system with a 6x false rejection disparity for disabled users constitutes a provision, criterion, or practice that puts disabled persons at a particular disadvantage. Section 20 requires reasonable adjustments where a provision puts a disabled person at a substantial disadvantage — the accommodation mechanism required by AG-672 Requirement 4.5 is an operationalisation of the reasonable adjustment duty. Section 149 imposes the Public Sector Equality Duty on public authorities, requiring proactive consideration of the need to advance equality of opportunity and eliminate discrimination — deploying a behavioural biometric system without disability-stratified fairness testing is inconsistent with this duty.
Title II (public entities) and Title III (places of public accommodation) prohibit discrimination on the basis of disability in access to services. A behavioural biometric system that denies access — through authentication failures, lockouts, or false fraud flags — at disproportionate rates for disabled users constitutes discrimination unless the entity provides an equally effective alternative. AG-672 Requirement 4.5 (accommodation mechanism) and Requirement 4.2 (disparity ratio limits) directly support ADA compliance by ensuring that disabled users are not systematically excluded and that alternative pathways are available.
Where behavioural biometric assessments constitute automated decision-making that produces legal effects or similarly significant effects, Article 22 safeguards apply — including the right to obtain human intervention and to contest the decision. AG-672 Requirement 4.8 (consequential action notification with redress pathways) supports Article 22 compliance. Article 35 requires a Data Protection Impact Assessment for processing that involves systematic evaluation of personal aspects based on automated processing, including profiling — behavioural biometric profiling falls squarely within this requirement, and the DPIA must assess the fairness and proportionality of the processing, including its impact on disabled individuals.
MAP 2.3 addresses the identification of fairness-related risks in AI systems, and MEASURE 2.6 addresses the measurement of bias in AI system outputs. AG-672 operationalises both by requiring pre-deployment fairness testing with disaggregated metrics and ongoing disparate impact monitoring. The NIST framework's emphasis on context-specific fairness assessment — rather than a single universal fairness metric — aligns with AG-672's requirement for disability-specific, modality-specific, and environment-specific fairness testing.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Population-scale — affects every user whose motor behaviour deviates from the system's assumed norm, disproportionately impacting disabled users, elderly users, and users with temporary medical conditions |
Consequence chain: A behavioural biometric system is deployed without disability-stratified fairness testing or ongoing disparate impact monitoring. The system's deviation thresholds are calibrated to the variance range of non-disabled users. Users with motor disabilities, tremor conditions, prosthetic devices, or neurological conditions experience false rejection rates 4-10x higher than the general population. These users are locked out of accounts, flagged as fraud suspects, denied physical access to buildings, or branded as potential cheats in employment assessments. The immediate harm is denial of service and false accusation — experiences that are individually distressing and collectively discriminatory. The affected population does not immediately recognise the systemic pattern because each individual experiences their own lockouts as isolated incidents. Complaints trickle in through customer service channels but are handled as individual authentication issues rather than recognised as a systematic fairness failure. Over months, the complaint volume grows and a pattern becomes visible — or an equality body, disability advocacy organisation, or investigative journalist identifies the disparity. The organisation faces a discrimination claim or regulatory enforcement action. The legal exposure is significant because the causal chain is direct and documentable: the system measures motor behaviour, motor behaviour is directly affected by disability, the system discriminates against people with disabilities. The organisation cannot claim the bias was unforeseeable because the causal mechanism is scientifically established. Remediation requires re-testing, threshold recalibration, accommodation infrastructure deployment, compensation for affected users, and retrospective review of all consequential actions taken on the basis of the biased system. In employment contexts, the remediation includes review of all hiring decisions where behavioural biometric flagging influenced the outcome. The reputational harm is amplified because the system's operation — measuring how people type or walk and using deviations to deny access or flag fraud — is readily understood by the public and media, making it a high-profile example of AI discrimination against disabled people.
Cross-references: AG-001 (Governance Accountability) establishes the organisational accountability structures under which behavioural biometric fairness obligations are assigned and enforced. AG-019 (Human Escalation & Override Triggers) defines when behavioural biometric determinations should be escalated for human review, particularly when the affected user reports a disability or accommodation need. AG-022 (Behavioural Drift Detection) monitors whether the biometric system's performance characteristics drift over time, including whether disparity ratios worsen as user populations change. AG-040 (Bias & Fairness Testing) provides the general fairness testing framework that AG-672 specialises for the behavioural biometric domain, where motor-function-linked bias creates uniquely direct discrimination pathways. AG-055 (Vulnerable Population Safeguards) establishes protections for users in vulnerable circumstances, including disabled and elderly users who are disproportionately affected by behavioural biometric bias. AG-084 (Accessibility & Inclusive Design) requires that agent interfaces are accessible to disabled users — AG-672 extends this to the biometric authentication and identity verification layer, which is part of the access pathway. AG-210 (Demographic Parity Governance) defines general demographic parity requirements that AG-672 instantiates for the specific context of behavioural biometric error rate disparities. AG-669 (Biometric Purpose Limitation) constrains the purposes for which biometric data may be used — behavioural biometric data collected for authentication must not be repurposed for disability inference or health status profiling. AG-671 (Emotion Inference Restriction) restricts emotion inference from biometric signals — behavioural biometric variance caused by disability must not be misinterpreted as emotional state indicators. AG-673 (Biometric Template Protection) protects stored behavioural biometric templates, which in the context of AG-672 include the variance profiles and accommodation flags that constitute sensitive disability-related data. AG-677 (Consent and Notice for Biometrics) requires informed consent for biometric processing — notice must include information about the behavioural biometric system's known limitations and accommodation availability. AG-678 (Biometric Redress) provides the redress mechanism for users harmed by biometric system errors — AG-672's consequential action notification requirement (4.8) connects affected users to this redress pathway.