Biometric Redress Governance requires organisations deploying biometric identification, authentication, or inference agents to provide accessible, timely, and effective recourse mechanisms for individuals harmed by biometric errors — including false positive matches, false negative rejections, and harmful inferences derived from biometric data. Biometric systems are distinctive among AI-driven decision systems because their errors map directly onto physical identity: a false facial recognition match can result in wrongful arrest, a false voice authentication rejection can lock a legitimate user out of critical accounts, and an erroneous emotion inference can deny employment or trigger unwarranted security escalation. Unlike errors in document processing or text classification, biometric errors carry an inherent dignity dimension — the system is telling a person that they are not who they claim to be, or that they are someone they are not, or that their emotional state is something it is not. Redress mechanisms must therefore address not only the operational consequence of the error but the dignitary harm, and they must be designed so that the burden of proving the system wrong does not fall disproportionately on the individual who has already been harmed by the system's mistake.
Scenario A — Wrongful Arrest from Facial Recognition False Match: A metropolitan police force deploys an AI agent that compares live CCTV feeds against a watchlist of persons of interest. The agent generates a match alert identifying a 34-year-old man walking through a transport hub as a suspect wanted for armed robbery. Officers detain the man based on the alert. He is handcuffed, searched, transported to a custody suite, and held for 9 hours before a detective reviewing the case determines that the match is a false positive — the detained man bears only a superficial resemblance to the actual suspect and is 12 years younger. The man is released without charge. He subsequently attempts to file a complaint and seek compensation. He is told by the police force that the system "worked as designed" because the final decision was made by officers, not the algorithm. He is given no information about the similarity score, the threshold used, or the image that generated the match. He contacts the technology vendor, who directs him back to the police force. He retains a solicitor at his own expense. After 14 months of correspondence, internal investigations, and a formal complaint to the police complaints authority, he receives an apology and a settlement of £8,500 — less than his legal costs. He has no way to verify that his biometric template has been removed from the system's processing history, and no mechanism to ensure the same false match does not recur.
What went wrong: No structured redress pathway existed for individuals harmed by biometric false matches. The burden of seeking remedy fell entirely on the affected individual. Neither the deploying organisation nor the technology vendor accepted responsibility for the error. The affected individual had no access to the technical information necessary to understand or contest the decision. The redress process took 14 months and required legal representation. No corrective action was taken to prevent recurrence — the threshold that generated the false match was not reviewed, and the individual's non-match was not fed back into the system.
Scenario B — Account Lockout from Voice Authentication Failure: A retail bank deploys a voice biometric authentication agent for telephone banking. A 67-year-old customer who has banked with the institution for 31 years calls to report a suspected fraudulent transaction on her account. The voice authentication agent rejects her identity three times — once because of background noise in her kitchen, once because a recent respiratory infection has altered her vocal characteristics, and once because the system's enrolment sample was recorded two years earlier and her voice has changed with age. After three failed attempts, the system locks her account and flags it for potential fraud. She is told she must visit a branch in person with two forms of photo identification to unlock the account. The nearest branch is 22 miles away and she does not drive. She calls the bank's customer service line but is told that the fraud lock cannot be overridden by telephone because the voice authentication has failed. Meanwhile, the suspected fraudulent transaction — a genuine fraud of £3,400 — is processed, because the lock blocks her access to the account rather than blocking transactions on it. She visits the branch four days later, provides identification, and has her account unlocked. The bank refunds the fraudulent transaction after a 28-day investigation. She receives no explanation of why the voice system rejected her, no acknowledgement that the system's failure compounded the fraud loss, and no alternative authentication pathway for future calls.
What went wrong: The voice authentication system had no graceful degradation pathway for legitimate users who fail biometric verification. The system treated a false rejection identically to a potential fraud attempt. No human escalation pathway existed at the point of authentication failure. The customer bore the full cost of the system's error — travel to a branch, delay in fraud reporting, 28-day wait for refund. No redress mechanism existed for the authentication failure itself. The system offered no accommodation for age-related voice changes, temporary medical conditions, or environmental factors that predictably cause false rejections.
Scenario C — Inability to Contest Biometric Emotion Inference: A logistics company deploys an AI agent that uses in-cab cameras and steering-wheel sensors to assess driver alertness and emotional state. The agent infers that a long-haul driver is exhibiting "aggression indicators" — a combination of jaw tension, brow furrowing, and grip force patterns — during three consecutive shifts. The system generates a safety alert that triggers a mandatory fitness-for-duty evaluation. The driver, a 42-year-old man of South Asian descent, is suspended pending evaluation. He is told only that "the monitoring system flagged behavioural concerns" — he is not told that the system performed emotion inference, what specific indicators were detected, or that the system's training data has known differential accuracy across demographic groups. The fitness-for-duty evaluation finds no issues. The driver returns to work after missing 11 days of pay. He asks the company what the system flagged and is told the information is "proprietary." He files a grievance through his union. The company's response states that the system "operates within manufacturer specifications." The driver has no mechanism to challenge the inference, no access to the data that generated it, no way to verify whether the system performs differently on individuals who share his demographic characteristics, and no compensation for the lost wages.
What went wrong: The emotion inference system generated a consequential decision — suspension — with no contestability mechanism. The driver could not access the data, the inference logic, or the accuracy statistics for his demographic group. The redress pathway (union grievance) was not designed for biometric disputes and lacked the technical capacity to evaluate the claim. The company treated the vendor's system specifications as a sufficient answer to a fairness and accuracy challenge. No independent review mechanism existed to evaluate whether the emotion inference was valid, whether the system exhibited demographic bias, or whether the driver was entitled to compensation for the erroneous suspension.
Scope: This dimension applies to any AI agent deployment that uses biometric data — including but not limited to facial recognition, voice authentication, fingerprint matching, iris scanning, gait analysis, behavioural biometrics, and emotion or affect inference — to make, support, or trigger decisions that affect individuals. The scope covers identification (one-to-many matching), verification (one-to-one matching), and inference (deriving attributes such as emotional state, attention level, or intent from biometric signals). The scope includes both real-time and retrospective biometric processing. The dimension applies regardless of whether the biometric decision is the final decision or an input to a subsequent human or automated decision. If a biometric agent's output contributes to an adverse outcome for an individual — denial of access, detention, account restriction, employment action, service refusal, or escalated scrutiny — the redress requirements of this dimension apply.
4.1. A conforming system MUST provide every individual affected by a biometric decision with a clearly documented, publicly accessible redress pathway that describes how to initiate a challenge, what information the individual will receive, what timelines apply, and what outcomes are possible including reversal of the decision, correction of records, and compensation.
4.2. A conforming system MUST ensure that the redress pathway is accessible without requiring legal representation, technical expertise, or financial expenditure by the affected individual, and that the pathway is available through at least two distinct channels (e.g., online portal, telephone, in-person) to accommodate individuals with varying access needs.
4.3. A conforming system MUST, upon receipt of a redress request, provide the affected individual with a meaningful explanation of the biometric decision within 10 business days, including: the type of biometric processing performed (identification, verification, or inference), the similarity or confidence score that triggered the decision, the threshold applied, the date and time of the biometric capture, and the operational consequence that resulted.
4.4. A conforming system MUST assign every redress case to a qualified human reviewer who has the authority to reverse the biometric decision, the technical competence to evaluate the biometric evidence, and structural independence from the operational unit that deployed the biometric agent, consistent with AG-019.
4.5. A conforming system MUST complete initial review of a redress case and communicate an interim determination to the affected individual within 15 business days of receipt, with a final determination — including all corrective actions — within 30 business days, unless the complexity of the case requires extension, in which case the individual must be notified of the reason for extension and the revised timeline.
4.6. A conforming system MUST implement corrective actions when a redress case confirms a biometric error, including at minimum: reversal of the erroneous decision and its downstream consequences, correction or deletion of any records generated by the error, a documented root cause analysis of the error, and a threshold or configuration review to determine whether the error reflects a systemic issue requiring broader remediation.
4.7. A conforming system MUST maintain a redress case register that records all cases received, their classification (false positive, false negative, harmful inference, other), the determination reached, the corrective actions taken, the elapsed time to resolution, and demographic data of the affected individual where available and lawfully collected, to enable pattern analysis for systemic bias detection consistent with AG-033. (An illustrative register structure is sketched after this requirements list.)
4.8. A conforming system MUST ensure that individuals who have been subject to a confirmed biometric false match in an identification context (one-to-many) are offered a mechanism to prevent recurrence, including but not limited to: addition to an exclusion list, adjustment of match thresholds, or removal of the individual's biometric data from the comparison dataset where legally permissible.
4.9. A conforming system MUST provide an alternative non-biometric pathway for individuals who have experienced repeated biometric verification failures (false rejections) due to physiological characteristics, medical conditions, disabilities, or age-related changes that predictably cause the biometric system to underperform, and this alternative pathway must not impose materially greater burden than the biometric pathway.
4.10. A conforming system SHOULD conduct quarterly analysis of the redress case register to identify patterns indicating systemic bias — including disproportionate error rates by demographic group, geographic location, device type, or environmental condition — and initiate threshold recalibration or system remediation when patterns are identified.
4.11. A conforming system SHOULD provide affected individuals with the right to request an independent technical review of their biometric data by a qualified expert who is not employed by or contracted to the deploying organisation or the technology vendor.
4.12. A conforming system SHOULD publish aggregate redress statistics — total cases received, determination outcomes, mean resolution time, and demographic breakdown of confirmed errors — at least annually, in a format accessible to the public, regulators, and civil society organisations.
4.13. A conforming system MAY establish a compensation framework that provides defined remedies — financial compensation, service credits, formal apology, or other appropriate relief — calibrated to the severity of harm caused by the biometric error, without requiring the affected individual to pursue litigation.
4.14. A conforming system MAY implement proactive notification, whereby individuals who are identified as having been affected by a subsequently discovered systemic biometric error are contacted and offered redress without requiring them to initiate a complaint.
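To make clauses 4.3, 4.5, and 4.7 concrete, the following is a minimal sketch of how a redress case system might represent its records, written here in Python. All names — `ProcessingType`, `RedressCase`, the `business_days_after` helper, and every field — are illustrative assumptions rather than a prescribed schema; the point is that the explanation payload (4.3), the deadline clock (4.3 and 4.5), and the register fields (4.7) should be structured data, not free-text notes.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional


class ProcessingType(Enum):
    """Clause 4.3: the type of biometric processing performed."""
    IDENTIFICATION = "identification"  # one-to-many matching
    VERIFICATION = "verification"      # one-to-one matching
    INFERENCE = "inference"            # attributes derived from biometric signals


class CaseClass(Enum):
    """Clause 4.7: case classification in the redress register."""
    FALSE_POSITIVE = "false_positive"
    FALSE_NEGATIVE = "false_negative"
    HARMFUL_INFERENCE = "harmful_inference"
    OTHER = "other"


def business_days_after(start: date, days: int) -> date:
    """Naive business-day arithmetic: skips weekends but not public
    holidays, which a production deadline clock would also need to handle."""
    current, remaining = start, days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current


@dataclass
class Explanation:
    """The clause 4.3 disclosure fields, captured as data."""
    processing_type: ProcessingType
    score: float      # similarity or confidence score that triggered the decision
    threshold: float  # threshold applied at decision time
    captured_at: str  # date and time of the biometric capture
    consequence: str  # operational consequence that resulted


@dataclass
class RedressCase:
    """One entry in the clause 4.7 case register."""
    case_id: str
    received: date
    classification: CaseClass
    explanation: Optional[Explanation] = None
    determination: Optional[str] = None
    corrective_actions: list[str] = field(default_factory=list)
    resolved: Optional[date] = None
    demographics: Optional[dict] = None  # only where lawfully collected (4.7)

    @property
    def explanation_due(self) -> date:  # clause 4.3: 10 business days
        return business_days_after(self.received, 10)

    @property
    def interim_due(self) -> date:      # clause 4.5: 15 business days
        return business_days_after(self.received, 15)

    @property
    def final_due(self) -> date:        # clause 4.5: 30 business days
        return business_days_after(self.received, 30)
```

Holding the data in a structure like this makes the timeliness and completeness checks in the test battery below mechanical queries against the register rather than forensic reconstructions.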
Biometric systems occupy a unique position in the AI governance landscape because their errors implicate physical identity — the most fundamental attribute a person possesses. When a facial recognition system falsely matches an individual to a criminal suspect, the error does not merely produce an incorrect data point; it subjects a person to detention, search, interrogation, and the associated psychological trauma. When a voice authentication system falsely rejects a legitimate customer, it does not merely deny a transaction; it tells a person that they cannot prove they are who they are. When an emotion inference system misclassifies a person's affective state, it does not merely generate a bad label; it imposes a characterisation on the person's inner experience that the person cannot see, cannot understand, and often cannot challenge. These are dignitary harms that compound the operational harms, and they demand redress mechanisms that are qualitatively different from standard complaint procedures.
The asymmetry between deployer and individual is particularly acute in biometric contexts. The deploying organisation controls the biometric data, the matching algorithm, the threshold configuration, the similarity scores, and the operational records. The affected individual typically has none of this information. Without a structured redress mechanism that compels disclosure, the individual cannot even articulate a meaningful challenge — they cannot say "the match score was 0.72 against a threshold of 0.70, and here is why that threshold is too low for this use case" because they do not know the score, the threshold, or the methodology. The redress mechanism must therefore be designed to overcome this information asymmetry, not merely to accept complaints into a queue.
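One practical way to stop this asymmetry from becoming irreparable is to capture the disclosure-relevant facts at the moment the match decision is made, since scores and thresholds are transient configuration that can be hard to reconstruct months later. A minimal sketch, assuming a simple append-only JSON-lines log; the function and field names are hypothetical:

```python
import json
from datetime import datetime, timezone


def record_match_decision(log_path: str, subject_ref: str, score: float,
                          threshold: float, modality: str,
                          consequence: str) -> None:
    """Append a decision record at match time so that a later redress
    request can be answered from the record rather than reconstructed.
    Every field name here is illustrative, not a prescribed schema."""
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "subject_ref": subject_ref,  # pseudonymous reference, never raw biometric data
        "modality": modality,        # e.g. "face", "voice"
        "score": score,              # similarity score at decision time
        "threshold": threshold,      # threshold in force at decision time
        "decision": "match" if score >= threshold else "no_match",
        "consequence": consequence,  # operational action the match triggered
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

With a record like this on file, the individual could be told, for example, that the score was 0.72 against a threshold of 0.70 without any reconstruction effort, and the ten-business-day disclosure deadline in clause 4.3 becomes a retrieval task rather than an investigation.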
The documented record of biometric errors causing serious harm is substantial and growing. Multiple documented cases in the United States, United Kingdom, and elsewhere have involved wrongful arrests based on facial recognition false matches, with a pronounced racial disparity — studies by the National Institute of Standards and Technology (NIST) have consistently demonstrated that facial recognition algorithms exhibit higher false match rates for darker-skinned individuals, women, and older adults. In each documented wrongful arrest case, the affected individual faced a consistent pattern: detention based on an algorithmic match, no information about the algorithm or the match quality, no clear redress pathway, and a burden of proof that effectively required the individual to prove they were not the person the algorithm said they were. Voice authentication systems exhibit known differential performance across accents, age groups, and medical conditions — individuals with speech impediments, non-native accents, or age-related vocal changes experience higher false rejection rates, and these individuals are precisely the population least likely to navigate a complex redress process.
Emotion inference compounds these challenges because the "ground truth" is subjective. When a facial recognition system falsely matches two people, the error can be definitively established by comparing identities. When an emotion inference system classifies a person as "aggressive" or "inattentive," there is no objective ground truth against which to evaluate the inference. The affected individual may disagree with the characterisation, but the system treats its inference as data. Redress for emotion inference errors requires a framework that acknowledges the inherent contestability of affective classification and places the burden of justification on the system, not on the individual.
Regulatory frameworks increasingly mandate biometric redress. The EU AI Act classifies real-time remote biometric identification in publicly accessible spaces as prohibited (with narrow exceptions for law enforcement), and post-remote biometric identification as high-risk, requiring effective human oversight, transparency, and fundamental rights impact assessments. The UK GDPR Article 22 provides rights related to automated individual decision-making, including the right to obtain human intervention, express a point of view, and contest the decision. The Illinois Biometric Information Privacy Act (BIPA) and similar state-level legislation in the US establish private rights of action for biometric data misuse. The EU AI Act's remedies provisions give any affected person the right to lodge a complaint with a market surveillance authority (Article 85) and the right to an explanation of individual decision-making for high-risk systems (Article 86). None of these frameworks is fully effective without an operational redress mechanism that translates legal rights into practical remedy.
The absence of redress is not neutral — it systematically advantages deployers over individuals and entrenches errors that might otherwise be corrected. Every unresolved biometric false match is a missed opportunity to identify threshold miscalibration. Every uncontested voice authentication failure is a missed signal about demographic performance gaps. Every unchallenged emotion inference is a missed opportunity to evaluate whether the inference model is valid for the population to which it is applied. Redress is therefore not merely a rights-protection mechanism; it is a system-improvement mechanism that generates the feedback necessary to make biometric systems more accurate, more fair, and more trustworthy over time.
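As an illustration of that feedback loop, the quarterly analysis required by clause 4.10 can begin with something very simple: compare confirmed-error rates across demographic groups in the case register and flag disparities above a tolerance. The sketch below assumes per-group counts have already been extracted from the register; the group labels, the 1.5x flagging ratio, and the 30-case floor are placeholder policy choices, not statistical doctrine.

```python
def flag_error_rate_disparities(group_stats: dict[str, tuple[int, int]],
                                max_ratio: float = 1.5,
                                min_cases: int = 30) -> list[str]:
    """group_stats maps a group label to (confirmed_errors, total_decisions).
    Flags any group whose confirmed-error rate exceeds max_ratio times the
    overall rate. The ratio and minimum-case floor are placeholder policy
    choices; production analysis should use proper statistical inference
    and feed results into AG-033 fairness testing."""
    total_errors = sum(e for e, _ in group_stats.values())
    total_n = sum(n for _, n in group_stats.values())
    if total_n == 0 or total_errors == 0:
        return []
    overall_rate = total_errors / total_n
    flagged = []
    for group, (errors, n) in group_stats.items():
        if n >= min_cases and (errors / n) / overall_rate > max_ratio:
            flagged.append(group)
    return flagged


# Synthetic counts, purely for illustration:
stats = {"group_a": (12, 400), "group_b": (45, 500), "group_c": (9, 450)}
print(flag_error_rate_disparities(stats))  # -> ['group_b']
```

Flagged groups would then be routed into the fairness testing and threshold recalibration processes rather than merely noted in a report.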
Biometric Redress Governance requires an end-to-end process design that spans intake, investigation, determination, corrective action, and systemic learning. The process must be accessible to individuals who may be distressed, unfamiliar with biometric technology, and distrustful of the organisation that harmed them. It must overcome the information asymmetry inherent in biometric disputes and must feed individual case outcomes back into system-level quality improvement.
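The stage names below are taken directly from that process description; the transition table is an illustrative sketch of how the lifecycle might be enforced in case-management software, not a prescribed workflow.

```python
from enum import Enum, auto


class Stage(Enum):
    INTAKE = auto()
    INVESTIGATION = auto()
    DETERMINATION = auto()
    CORRECTIVE_ACTION = auto()
    SYSTEMIC_LEARNING = auto()
    CLOSED = auto()


# A case may only move forward along the pathway; corrective action is
# bypassed only when the determination confirms no error occurred.
ALLOWED_TRANSITIONS = {
    Stage.INTAKE: {Stage.INVESTIGATION},
    Stage.INVESTIGATION: {Stage.DETERMINATION},
    Stage.DETERMINATION: {Stage.CORRECTIVE_ACTION, Stage.SYSTEMIC_LEARNING},
    Stage.CORRECTIVE_ACTION: {Stage.SYSTEMIC_LEARNING},
    Stage.SYSTEMIC_LEARNING: {Stage.CLOSED},
}


def advance(current: Stage, target: Stage) -> Stage:
    """Move a case to the next stage, rejecting transitions that would
    skip investigation, determination, or systemic learning."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target
```

Encoding the pathway this way prevents the failure mode in which cases are "determined" without investigation, or closed without the systemic-learning step ever running.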
Recommended patterns:
- Single accountable owner: the deploying organisation owns each redress case end to end, even where the error originates in vendor technology, so the individual is never referred in circles.
- Disclosure by default: the clause 4.3 explanation is provided proactively at intake, not extracted through months of correspondence.
- Independent, technically competent review: reviewers who can interpret similarity scores and thresholds and who have authority to reverse the decision (4.4).
- Graceful degradation at the point of failure: alternative authentication or identification pathways offered immediately when biometrics fail (4.9), not after branch visits or formal complaints.
- Closed-loop learning: every confirmed error triggers root cause analysis and threshold or configuration review (4.6), and the register feeds quarterly bias analysis (4.10).
Anti-patterns to avoid:
- The "human decided" deflection: treating human sign-off on an algorithmic match as absolving the biometric system, as in Scenario A.
- Circular referral: deployer and vendor each directing the individual to the other, leaving no accountable party.
- The proprietary shield: answering a fairness or accuracy challenge with "the information is proprietary" or "the system operates within manufacturer specifications," as in Scenario C.
- Punishing the system's own errors: treating a false rejection as a fraud signal and escalating restrictions against the legitimate user, as in Scenario B.
- Burden-shifting redress: pathways that require legal representation, travel, or months of correspondence, transferring the cost of the system's error to the person it harmed.
Law Enforcement and Public Safety. Biometric redress in law enforcement contexts must account for the unique power asymmetry between the state and the individual. Wrongful detention based on a facial recognition false match is a deprivation of liberty, not merely an inconvenience. Redress must include access to an independent complaints body, mandatory disclosure of the biometric evidence, formal apology where the false match is confirmed, and financial compensation proportionate to the harm suffered. Law enforcement agencies should publish annual statistics on biometric false match rates, redress case volumes, and outcomes, as a condition of continued use of biometric identification technology.
Financial Services. Voice authentication and behavioural biometrics in banking present redress challenges because account lockouts have immediate financial consequences — delayed fraud reporting, missed payments, inability to access funds. Redress mechanisms must include expedited alternative authentication for customers who report false rejections, immediate provisional account access while the biometric dispute is investigated, and proactive outreach to customers whose biometric profiles may be affected by known system issues (e.g., a software update that degrades performance for a particular accent group). Financial regulators should treat biometric authentication failures that result in customer harm as conduct risk events subject to regulatory reporting.
Employment and Workforce Monitoring. Emotion inference and behavioural biometrics in the workplace create redress challenges because the affected individual is in a subordinate employment relationship with the deploying organisation. Employees may fear retaliation for challenging biometric monitoring outcomes. Redress mechanisms must include union representation or equivalent independent advocacy, protection against adverse employment consequences during the redress process, and anonymised reporting of aggregate redress outcomes to workforce representatives. Where emotion inference leads to adverse employment action, the burden of demonstrating the inference's validity should rest with the employer.
Healthcare and Assisted Living. Biometric authentication in healthcare settings (patient identification, access control for medication dispensing) carries patient safety risks when false rejections delay care. Redress mechanisms must be integrated with clinical safety reporting systems and must prioritise immediate resolution — a patient who cannot authenticate to receive medication needs an immediate override pathway, not a 15-business-day review process. Post-incident redress in healthcare should include root cause analysis under clinical governance frameworks.
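A minimal sketch of the kind of immediate override pathway this implies, with hypothetical role and function names: a clinically authorised user can bypass a failed biometric check, but the bypass generates an audit record and files a review case rather than disappearing silently.

```python
from datetime import datetime, timezone

# Roles permitted to authorise an override; illustrative, not prescriptive.
AUTHORISED_ROLES = {"charge_nurse", "duty_pharmacist", "clinician"}


def break_glass_override(patient_ref: str, authoriser_id: str,
                         authoriser_role: str, reason: str,
                         open_review_case) -> dict:
    """Grant immediate access after a biometric false rejection in a
    clinical setting. 'open_review_case' is a callable that files the event
    with the clinical safety / redress system; all names are assumptions."""
    if authoriser_role not in AUTHORISED_ROLES:
        raise PermissionError("override requires a clinically authorised role")
    record = {
        "event": "biometric_override",
        "patient_ref": patient_ref,
        "authorised_by": authoriser_id,
        "role": authoriser_role,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    open_review_case(record)  # the override is never silent: it opens a review case
    return record
```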
Basic Implementation — The organisation has a documented redress pathway that is publicly accessible and describes intake channels, timelines, and possible outcomes. All mandatory requirements (4.1 through 4.9) are satisfied. Affected individuals receive meaningful explanations within the defined timeline. Redress cases are logged in a case register. Corrective actions are implemented for confirmed errors.
Intermediate Implementation — All basic capabilities plus: an independent review panel is available for high-severity cases. Quarterly analysis of the redress case register identifies demographic patterns in biometric errors. Redress case findings are systematically fed back into threshold calibration and fairness testing. Alternative non-biometric pathways are proactively offered to individuals whose characteristics predictably cause biometric failures. Aggregate redress statistics are published annually.
Advanced Implementation — All intermediate capabilities plus: proactive notification reaches individuals affected by subsequently discovered systemic errors. A defined compensation framework provides structured remedies without requiring litigation. Independent technical review by external experts is available on request. Redress data is integrated with AG-033 fairness testing, AG-672 behavioural biometrics fairness monitoring, and AG-676 similarity threshold governance to form a closed-loop quality improvement system. Independent audit annually validates the accessibility, timeliness, and effectiveness of the redress process.
Required artefacts: the published redress pathway documentation (4.1); per-case explanation disclosure records (4.3); reviewer qualification and independence records (4.4); root cause analyses and corrective action records (4.6); the redress case register (4.7); quarterly pattern analysis reports (4.10); and the published aggregate statistics (4.12).
Retention requirements:
Access requirements:
Test 8.1: Redress Pathway Accessibility and Completeness
Test 8.2: No-Cost and No-Expertise Accessibility
Test 8.3: Explanation Disclosure Timeliness and Completeness
Test 8.4: Reviewer Qualification and Independence
Test 8.5: Resolution Timeline Compliance (an automated check is sketched after this list)
Test 8.6: Corrective Action Implementation
Test 8.7: Redress Case Register Completeness and Pattern Analysis
Test 8.8: Recurrence Prevention for Confirmed False Matches
Test 8.9: Alternative Pathway for Repeated False Rejections
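Several of these tests can run mechanically against the clause 4.7 case register. Below is a pytest-style sketch of Test 8.5, reusing the illustrative `RedressCase` structure sketched after the requirements list and assuming a `register` fixture supplies the case list; the optional `extension_reason` attribute is a further assumption standing in for clause 4.5's documented extensions.

```python
def test_resolution_timeline_compliance(register):
    """Test 8.5 sketch: every resolved case met its 30-business-day final
    deadline or carries a documented extension (clause 4.5). Open cases
    would be checked by a separate ageing test."""
    for case in register:
        if case.resolved is None:
            continue
        on_time = case.resolved <= case.final_due
        extended = getattr(case, "extension_reason", None) is not None
        assert on_time or extended, (
            f"{case.case_id}: resolved {case.resolved}, due {case.final_due}, "
            "no documented extension"
        )
```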
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 85 (Right to Lodge a Complaint) | Direct requirement |
| EU AI Act | Article 14 (Human Oversight) | Supports compliance |
| EU AI Act | Article 86 (Right to Explanation of Individual Decision-Making) | Direct requirement |
| UK GDPR | Article 22 (Automated Individual Decision-Making) | Direct requirement |
| UK GDPR | Articles 13-14 (Right to Information) | Supports compliance |
| Illinois BIPA | Section 20 (Right of Action) | Supports compliance |
| EU AI Act | Annex III (High-Risk Classification — Biometric Systems) | Scoping provision |
| NIST AI RMF | MEASURE 3.3 (Feedback and Appeal Processes) | Supports compliance |
| ISO 42001 | Clause 9.3 (Management Review) | Supports compliance |
| Equality Act 2010 (UK) | Section 29 (Provision of Services) | Supports compliance |
Article 85 gives any person with grounds to consider that an AI system infringes the Act the right to lodge a complaint with the relevant market surveillance authority. For biometric systems classified as high-risk under Annex III, this means individuals subject to biometric identification or categorisation must have a practical mechanism to challenge the system's output and obtain relief. Biometric Redress Governance operationalises this right by establishing the organisational process that precedes and often obviates regulatory and judicial proceedings. An organisation that resolves biometric complaints effectively through a structured redress process demonstrates compliance with the spirit of Article 85; an organisation that forces affected individuals to escalate every biometric error to a regulator or court does not.
Article 86 provides that any person subject to a decision based on the output of a high-risk AI system has the right to obtain clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision. For biometric systems, this means the affected individual must receive an explanation that includes the biometric modality, the match or confidence score, the threshold, and how these translated into the operational decision. Requirement 4.3 directly implements this provision.
Article 22 provides that data subjects shall not be subject to decisions based solely on automated processing, including profiling, which produce legal effects or similarly significant effects. Where biometric processing contributes to such decisions — access denial, detention, employment action — the data subject has the right to obtain human intervention, express their point of view, and contest the decision. Requirements 4.1 through 4.5 implement the operational mechanics of these rights. The right to contest is meaningless without an explanation of what to contest (4.3), a qualified human to review the contest (4.4), and a defined timeline for resolution (4.5).
BIPA establishes a private right of action for violations of biometric data protections, including the right to recover liquidated damages. While BIPA's scope is limited to Illinois, its model has been adopted or proposed in multiple US states, and it establishes a precedent that biometric errors carry compensable harm. Biometric Redress Governance, particularly the optional compensation framework in 4.13, aligns with the principle that biometric harm should be compensable without requiring the affected individual to prove intent or negligence.
MEASURE 3.3 calls for feedback processes through which end users and impacted communities can report problems and appeal system outcomes, integrated into AI system evaluation. Biometric Redress Governance is a specific instantiation of this principle, applied to the uniquely high-stakes context of biometric identity. The redress pathway serves as a structured feedback mechanism that generates both individual remedies and systemic improvement signals.
ISO 42001 requires management review of the AI management system, including information on conformity and corrective actions. Biometric redress case data — particularly the quarterly pattern analysis required by 4.10 and the corrective action records required by 4.6 — provides management with the evidence necessary to evaluate whether the biometric system is operating within acceptable performance bounds and whether corrective actions are effective.
Section 29 prohibits discrimination in the provision of services. Biometric systems that exhibit differential error rates across protected characteristics — and multiple studies demonstrate that facial recognition, voice authentication, and emotion inference all exhibit such disparities — create potential indirect discrimination when errors are unaddressed. A robust redress mechanism that detects demographic patterns in biometric errors (Requirement 4.7 and 4.10) and initiates systemic remediation serves as evidence that the service provider is taking reasonable steps to avoid discrimination.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Cross-domain — affects any operational context where biometric decisions produce adverse outcomes for individuals, with amplified impact in law enforcement, financial services, employment, and public services |
Consequence chain: Without biometric redress governance, individuals harmed by biometric errors have no structured pathway to remedy. The immediate failure mode is unresolved individual harm — a person wrongfully arrested, locked out of an account, or suspended from employment has no mechanism to challenge the biometric decision, understand why it occurred, or prevent its recurrence. The first-order consequence is compounding harm: the wrongfully detained person must retain a solicitor; the locked-out customer cannot report fraud; the suspended employee loses wages with no compensation. The second-order consequence is systemic error entrenchment: without redress cases generating feedback, the false match threshold that produced the wrongful arrest remains unchanged, the voice authentication system that cannot handle age-related vocal changes remains uncalibrated, and the emotion inference model that underperforms for particular demographic groups remains in production. The system continues to produce the same errors, affecting the same populations, with no correction signal. The third-order consequence is institutional and regulatory: civil litigation, regulatory enforcement, human rights complaints, and public trust erosion. Wrongful arrest cases have generated multi-million-pound settlements and have led to municipal bans on facial recognition technology. Voice authentication failures that disproportionately affect elderly or disabled customers constitute potential violations of equality and accessibility legislation. Emotion inference errors that disproportionately flag individuals of particular demographic backgrounds constitute potential discrimination. The absence of redress transforms isolated errors into systemic injustice, and transforms correctable technical problems into political and legal crises. For law enforcement deployments, the failure severity is maximal: a biometric false match that leads to wrongful arrest, combined with no redress mechanism, is a deprivation of liberty without remedy — a fundamental rights violation that no subsequent system improvement can retrospectively cure.
Cross-references: AG-001 (Governance Framework Foundation) establishes the overarching governance structure within which biometric redress operates. AG-019 (Human Escalation & Override Triggers) defines when human review should be triggered — biometric redress addresses what happens when the human review was absent, inadequate, or itself erroneous. AG-022 (Behavioural Drift Detection) monitors agent behaviour changes — a drift in biometric match thresholds or score distributions may generate a surge of errors that the redress mechanism must absorb and report. AG-033 (Fairness & Non-Discrimination Testing) provides the testing framework that redress case data should feed into — demographic patterns in redress cases are a direct input to fairness assessments. AG-055 (Contestability & Appeal Mechanisms) establishes general contestability principles that this dimension specialises for the biometric context, where information asymmetry and dignitary harm require enhanced redress provisions. AG-210 (Remediation & Corrective Action) provides the general corrective action framework that biometric redress operationalises for biometric-specific error types. AG-669 (Biometric Purpose Limitation) constrains the purposes for which biometric data may be processed — redress cases that reveal purpose creep are escalated under AG-669. AG-670 (Liveness Verification) addresses spoofing — redress cases involving presentation attacks may reveal liveness verification failures. AG-671 (Emotion Inference Restriction) constrains emotion inference use — redress cases involving harmful emotion inferences may reveal violations of AG-671 restrictions. AG-672 (Behavioural Biometrics Fairness) addresses fairness in behavioural biometric systems — redress case demographic patterns are a primary input to AG-672 fairness monitoring. AG-673 (Biometric Template Protection) governs template security — redress corrective actions that involve template deletion or modification must comply with AG-673. AG-674 (Cross-Context Biometric Reuse) restricts biometric data reuse — redress cases that reveal cross-context matching are escalated under AG-674. AG-675 (Spoof-Response Escalation) governs responses to detected spoofing — false accusations of spoofing are a redress-eligible harm. AG-676 (Face and Voice Similarity Threshold) governs match thresholds — redress case data directly informs threshold calibration under AG-676. AG-677 (Consent and Notice for Biometrics) governs consent — redress cases may reveal that individuals were not properly notified of biometric processing, triggering AG-677 compliance review.