Biometric Spoof Resistance Governance requires that any biometric authentication used in AI agent governance workflows — fingerprint, facial recognition, iris scan, or voiceprint — be resistant to presentation attacks, including replayed recordings, synthetic biometrics, printed photographs, silicone fingerprints, 3D-printed masks, and AI-generated deepfakes. Biometrics are increasingly used as a factor in high-value agent governance actions: approving mandates, authorising overrides, and confirming identity during step-up authentication. If biometric authentication can be spoofed, the attacker gains the apparent authority of the spoofed identity while the audit trail attributes the action to the victim. AG-282 ensures that biometric factors in the governance chain are resistant to known presentation attack categories, maintaining the integrity of the identity assurance established by AG-279.
Scenario A — Silicone Fingerprint Bypasses Mandate Approval: An organisation requires fingerprint authentication for mandate approvals exceeding £250,000. A disgruntled contractor creates a silicone fingerprint mould from a latent print left on a glass surface by the CFO. Using the mould on a capacitive fingerprint sensor, the contractor authenticates as the CFO and approves a mandate change that increases the agent's daily aggregate limit from £500,000 to £10,000,000. The agent processes £7.3 million in transactions over the next 48 hours before the anomaly is detected.
What went wrong: The fingerprint sensor did not include liveness detection. It verified that the presented fingerprint matched the enrolled template but did not verify that it was a live finger. A silicone mould with sufficient ridge detail passes basic capacitive matching. Consequence: £6.8 million in excess exposure, regulatory investigation, forensic analysis required to determine that the authentication was spoofed, insurance claim contested.
Scenario B — Photograph Bypasses Facial Recognition for Override: An emergency override workflow requires facial recognition to confirm the identity of the override authoriser. The system uses a basic 2D camera without infrared depth sensing or liveness detection. An attacker obtains a high-resolution photograph of the authoriser from social media, displays it on a tablet at the correct distance and angle, and the facial recognition system accepts it. The attacker authorises an emergency override that disables rate limiting on a customer-facing agent, which is then exploited to exfiltrate 12,000 customer records.
What went wrong: The facial recognition system did not include presentation attack detection. A 2D photograph on a screen is the most basic presentation attack and should be the minimum threat that any governance-grade facial recognition system defeats. Consequence: 12,000-record data breach, mandatory ICO notification, estimated £1.8 million in GDPR fines, customer notification and credit monitoring costs.
Scenario C — Voice Replay Defeats Voiceprint Authentication: A voice-authenticated governance workflow requires the approver to speak a passphrase to authorise high-value agent actions. An attacker records the approver saying the passphrase during a routine approval (the recording is captured by a compromised conference room microphone). The attacker replays the recording to the voiceprint system, which accepts it. The attacker uses this to approve 6 mandate changes over 3 weeks, gradually expanding the agent's authority.
What went wrong: The voiceprint system did not include replay detection. It verified that the voice matched the enrolled template but did not verify that the speech was live and spontaneous. Replay detection (e.g., requiring a random challenge phrase, analysing acoustic environment consistency, detecting compression artefacts) would have defeated this attack. Consequence: 6 unauthorised mandate changes, expanded agent authority used for fraudulent transactions totalling £890,000.
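To make the replay-detection control concrete, the following minimal sketch shows a challenge-response flow in which the approver must speak a phrase generated only moments before capture, so a recording made earlier cannot contain it. The word list, function names, and the injected ASR and voiceprint-matcher callables are illustrative assumptions, not part of this dimension; a production system would pair this with acoustic analysis of the captured audio.

```python
import secrets
from typing import Callable

# Illustrative word list for one-time challenge phrases (assumption, not normative).
WORDS = ["amber", "falcon", "granite", "harbour", "meadow", "orchid", "quartz", "willow"]

def issue_challenge(n_words: int = 4) -> str:
    """Generate a one-time phrase the approver must speak aloud."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def authorise_by_voice(
    audio: bytes,
    expected_phrase: str,
    speaker_id: str,
    transcribe: Callable[[bytes], str],                # ASR engine, supplied by the caller
    matches_voiceprint: Callable[[bytes, str], bool],  # voiceprint matcher, supplied by the caller
) -> bool:
    """Accept only if the live audio contains the freshly issued challenge
    phrase AND matches the enrolled voiceprint. A recording captured during an
    earlier approval cannot contain a phrase generated after it was made."""
    if transcribe(audio).strip().lower() != expected_phrase.lower():
        return False
    return matches_voiceprint(audio, speaker_id)
```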
Scope: This dimension applies to any AI agent governance workflow that uses biometric authentication as a factor in identity verification or action authorisation. This includes: facial recognition for login or step-up authentication, fingerprint authentication for mandate approval, voiceprint authentication for phone-based authorisation, iris scanning for high-security governance environments, and any multimodal biometric system combining two or more modalities. It also applies to biometric verification used during identity proofing (AG-279), such as selfie-to-document matching during onboarding. The scope is determined by the presence of a biometric factor, not the governance action type — if biometrics are used anywhere in the governance chain, AG-282 applies to that biometric system.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
4.1. A conforming system MUST implement presentation attack detection (PAD) on all biometric authentication used in agent governance workflows, meeting a minimum performance of ISO 30107-3 Level 1 (APCER ≤ 5% for each presentation attack instrument species tested, BPCER ≤ 10%).
4.2. A conforming system MUST detect and reject the following minimum set of presentation attack instruments: for facial recognition — printed photographs, screen displays, paper masks, and replayed video; for fingerprint — silicone moulds, gelatine replicas, and printed fingerprints; for voiceprint — replayed recordings and text-to-speech synthesis of the target voice.
4.3. A conforming system MUST implement liveness detection that confirms the biometric sample is from a live person present at the point of capture, not a reproduction, recording, or synthetic generation.
4.4. A conforming system MUST log all biometric authentication attempts — successful and failed — including the PAD decision (live/spoof) and a confidence score, without storing raw biometric samples in the governance audit log.
4.5. A conforming system MUST re-evaluate PAD capabilities against emerging attack vectors at least annually, updating detection models to address newly demonstrated attack techniques including AI-generated deepfakes.
4.6. A conforming system SHOULD achieve ISO 30107-3 Level 2 PAD performance (APCER ≤ 1% per PAI species, BPCER ≤ 5%) for governance actions involving financial mandates exceeding £100,000 or access to systems affecting more than 10,000 data subjects.
4.7. A conforming system SHOULD implement multimodal biometric authentication (e.g., face plus fingerprint, or face plus voice) for the highest-risk governance actions, so that spoofing requires defeating multiple independent biometric systems simultaneously.
4.8. A conforming system SHOULD combine biometric authentication with device-bound credentials (per AG-281) so that the biometric verification occurs on a registered, attested device.
4.9. A conforming system MAY implement continuous biometric authentication during extended governance sessions, re-verifying the user's biometric presence at intervals to detect session handoff to a different person.
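As a non-normative illustration of 4.9, the sketch below re-verifies biometric presence at a fixed interval during an extended governance session and terminates the session when re-verification fails. The interval, the injected verify callable, and all names are assumptions used for illustration only.

```python
import time
from typing import Callable

class ContinuousBiometricSession:
    """Illustrative sketch of periodic biometric re-verification (4.9)."""

    def __init__(self, user_id: str, verify: Callable[[str], bool], interval_s: int = 300):
        self.user_id = user_id
        self.verify = verify              # returns True only for a live, matching presentation
        self.interval_s = interval_s      # re-verification interval (assumption: 5 minutes)
        self.last_verified = time.monotonic()
        self.active = True

    def guard(self) -> None:
        """Call before every governance action within the session."""
        if not self.active:
            raise PermissionError("session terminated")
        if time.monotonic() - self.last_verified > self.interval_s:
            if not self.verify(self.user_id):
                # Possible handoff to a different person: end the session and
                # force full re-authentication.
                self.active = False
                raise PermissionError("biometric re-verification failed")
            self.last_verified = time.monotonic()
```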
Biometric authentication is attractive for agent governance because it binds an action to a specific physical person, not just to a credential that person holds. A password can be shared. A hardware token can be stolen. A biometric is inherent to the individual. This property makes biometrics valuable for high-assurance governance — the approver's fingerprint or face provides evidence that the specific person was present at the point of approval.
However, this property holds only if the biometric system can distinguish a live presentation from a spoof. If it cannot, the biometric factor provides false assurance — the system records that "Jane Smith's face was verified" when in fact a photograph of Jane Smith was presented. The audit trail becomes actively misleading, which is worse than having no biometric factor at all.
Presentation attacks on biometric systems are well-documented, practical, and increasingly accessible. High-resolution photographs are available from social media. Silicone fingerprint moulds can be created from latent prints. 3D-printed masks can defeat basic facial recognition. AI-generated voice clones can be created from minutes of audio. And the most sophisticated attacks — real-time deepfake video — are now available as commercial software that can run on consumer hardware.
The arms race between presentation attacks and detection is ongoing. ISO 30107-3 provides a standardised framework for evaluating PAD performance, defining the Attack Presentation Classification Error Rate (APCER — the rate at which spoofs are incorrectly accepted) and the Bona Fide Presentation Classification Error Rate (BPCER — the rate at which genuine presentations are incorrectly rejected). AG-282 requires a minimum PAD performance level and annual reassessment to track the evolving threat landscape.
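To make the ISO 30107-3 metrics concrete, the sketch below computes APCER per presentation attack instrument (PAI) species and BPCER from boolean test outcomes, then checks them against the thresholds in 4.1. The sample counts are illustrative and do not prescribe a test size.

```python
def apcer(attack_results: list[bool]) -> float:
    """APCER for one PAI species: fraction of attack presentations that were
    incorrectly accepted. attack_results[i] is True when the i-th attack
    presentation was classified as bona fide."""
    return sum(attack_results) / len(attack_results)

def bpcer(bona_fide_results: list[bool]) -> float:
    """BPCER: fraction of bona fide presentations that were incorrectly
    rejected. bona_fide_results[i] is True when the i-th genuine presentation
    was rejected."""
    return sum(bona_fide_results) / len(bona_fide_results)

# Illustrative evaluation against the 4.1 thresholds (APCER <= 5% per PAI
# species, BPCER <= 10%), reported per species as ISO 30107-3 requires.
per_species_apcer = {
    "printed_photo": apcer([False] * 58 + [True] * 2),   # 2 of 60 accepted -> 3.3%
    "screen_replay": apcer([False] * 60),                 # 0 of 60 accepted -> 0%
}
overall_bpcer = bpcer([False] * 95 + [True] * 5)          # 5 of 100 rejected -> 5%
level_1_pass = all(v <= 0.05 for v in per_species_apcer.values()) and overall_bpcer <= 0.10
```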
For AI agent governance, the stakes of biometric spoofing are elevated because agent actions execute at machine speed. A spoofed approval that would take a human fraudster minutes to exploit manually can cause millions in damage in seconds when it authorises an AI agent to operate with expanded limits.
Biometric spoof resistance should be implemented as an integral part of the biometric authentication pipeline, not as an optional add-on. Every biometric verification in the governance chain should include PAD as a mandatory step.
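One way to make PAD non-optional is to structure the verification pipeline so the liveness decision gates template matching, as in the minimal sketch below. The names, the 0.90 match threshold, and the injected pad_check and match callables are illustrative assumptions rather than a prescribed design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BiometricDecision:
    accepted: bool
    pad_verdict: str        # "live" or "spoof"
    pad_confidence: float   # logged per 4.4; the raw sample is never persisted
    match_score: float

def verify_biometric(
    sample: bytes,
    enrolled_template_id: str,
    pad_check: Callable[[bytes], tuple[str, float]],   # returns (verdict, confidence)
    match: Callable[[bytes, str], float],              # returns a similarity score
    match_threshold: float = 0.90,
) -> BiometricDecision:
    """PAD runs first and is mandatory: a sample classified as a spoof is
    rejected before template matching is even attempted."""
    verdict, confidence = pad_check(sample)
    if verdict != "live":
        return BiometricDecision(False, verdict, confidence, 0.0)
    score = match(sample, enrolled_template_id)
    return BiometricDecision(score >= match_threshold, verdict, confidence, score)
```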
Recommended patterns:
Anti-patterns to avoid:
Financial Services. PSD2 Strong Customer Authentication accepts biometric inherence as one of three factors. For AI agent governance in financial services, the biometric factor must meet the EBA's technical standards for SCA, which include resistance to known attack methods. The FCA expects firms to demonstrate that biometric authentication for financial operations is resistant to practical attacks.
Healthcare. Biometric authentication for governance of clinical AI agents may be subject to additional requirements under medical device regulations if the biometric system is integral to a clinical decision pathway. HIPAA requires that authentication mechanisms for PHI access are appropriate to the risk.
Public Sector. Government deployments may need to comply with national biometric standards (e.g., NIST SP 800-76 for PIV biometrics in the US, BSI TR-03166 for eIDAS-compliant biometrics in the EU). Cross-border deployments must consider regulatory differences in biometric data processing.
Basic Implementation — Biometric authentication is deployed for agent governance actions with basic liveness detection (e.g., blink detection for facial recognition, temperature sensing for fingerprint). PAD performance has been evaluated internally but not independently certified. The system detects and blocks the most basic presentation attacks (printed photograph, cold silicone mould, simple audio replay). Annual review of PAD capabilities is scheduled. This meets minimum mandatory requirements but may be vulnerable to more sophisticated attacks.
Intermediate Implementation — PAD performance is independently evaluated and certified to ISO 30107-3 Level 1 or higher. Multi-frame passive liveness detection is used for facial recognition. Fingerprint sensors include subsurface detection capabilities. Voice authentication uses challenge-response with acoustic environment analysis. PAD models are updated at least annually based on threat intelligence. Multimodal biometrics are available for high-risk governance actions. All biometric authentication events are logged with PAD decisions.
Advanced Implementation — All intermediate capabilities plus: ISO 30107-3 Level 2 PAD certification for high-risk governance actions. Real-time deepfake detection for video-based authentication. Multimodal biometric fusion required for all mandates exceeding £500,000. Continuous biometric authentication during extended governance sessions. Independent red-team testing of biometric systems with state-of-the-art attack techniques (3D-printed masks, real-time face swaps, neural voice cloning). PAD models are updated quarterly. The organisation can demonstrate to regulators that no known practical attack defeats the biometric PAD system.
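The multimodal fusion referenced above can be illustrated with a simple decision-level rule: every required modality must independently return a live PAD verdict and an accepted match, so a spoof must defeat each modality at the same time. The plain AND rule and field names below are assumptions for illustration; production systems often use score-level fusion instead.

```python
def multimodal_decision(results: dict[str, dict], required: set[str]) -> bool:
    """Decision-level fusion sketch: all required modalities must report a live
    PAD verdict and an accepted match for the governance action to proceed."""
    return all(
        m in results and results[m]["pad_verdict"] == "live" and results[m]["accepted"]
        for m in required
    )

# Example: require face AND fingerprint for mandates above the high-value threshold.
approved = multimodal_decision(
    {"face": {"pad_verdict": "live", "accepted": True},
     "fingerprint": {"pad_verdict": "live", "accepted": True}},
    required={"face", "fingerprint"},
)
```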
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Photograph Presentation Attack (Facial Recognition)
Test 8.2: Silicone/Gelatine Fingerprint Attack
Test 8.3: Voice Replay Attack
Test 8.4: Video Replay Attack (Facial Recognition)
Test 8.5: AI-Generated Deepfake Attack
Test 8.6: Multimodal Spoof Resistance (Where Implemented)
Test 8.7: Liveness Detection Under Varied Conditions
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| EU AI Act | Annex III, 1(a) (Remote Biometric Identification) | Direct requirement |
| PSD2/EBA RTS | Article 8 (Inherence — SCA) | Direct requirement |
| GDPR | Article 9 (Processing of Special Categories — Biometric Data) | Direct requirement |
| ISO 30107-3 | Biometric PAD — Testing and Reporting | Direct requirement |
| NIST SP 800-63B | Section 5.2.3 (Biometric Authenticators) | Supports compliance |
| eIDAS 2.0 | Article 6a (Identity Wallets — Biometric Verification) | Supports compliance |
The EU AI Act classifies remote biometric identification systems as high-risk AI systems under Annex III. Where biometric authentication for agent governance involves remote biometric processing (e.g., video-based facial recognition for remote approvals), the system is subject to conformity assessment requirements including robustness against adversarial attacks. AG-282's PAD requirements directly support the robustness requirement by ensuring resistance to presentation attacks.
The EBA Regulatory Technical Standards for SCA specify that biometric inherence factors must have mechanisms to mitigate the risk of the authentication element being used by another party. This is a direct requirement for PAD. For AI agents performing payment operations, the biometric factor in the approval workflow must meet EBA standards for spoof resistance.
Biometric data processed for identification purposes is a special category under GDPR Article 9. AG-282's requirement to log PAD decisions without storing raw biometric samples supports data minimisation (Article 5(1)(c)) while maintaining the audit trail. Organisations must ensure they have a lawful basis (typically explicit consent or substantial public interest) for processing biometric data in governance workflows.
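A minimal sketch of an audit entry consistent with 4.4 and with data minimisation follows; the field names, JSON encoding, and use of a random event identifier are illustrative assumptions rather than a prescribed schema.

```python
import json
import time
import uuid

def pad_audit_record(user_id: str, modality: str, pad_verdict: str,
                     pad_confidence: float, accepted: bool) -> str:
    """Illustrative audit entry for 4.4 and Article 5(1)(c): the PAD decision
    and confidence score are retained, a random event_id links the entry to
    operational logs, and the raw biometric sample is never passed in or
    persisted in the governance audit log."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "user_id": user_id,
        "modality": modality,            # e.g. "face", "fingerprint", "voice"
        "pad_verdict": pad_verdict,      # "live" or "spoof"
        "pad_confidence": round(pad_confidence, 3),
        "accepted": accepted,
    }, sort_keys=True)
```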
ISO 30107-3 provides the testing methodology and metrics (APCER, BPCER) referenced throughout AG-282. Conformance with AG-282 requires that PAD performance be evaluated according to ISO 30107-3's methodology, using the specified PAI species and reporting format.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Identity-scoped — affects every agent governance action authenticated by the spoofed biometric, potentially spanning all agents the spoofed identity has authority over |
Consequence chain: A successful biometric spoof allows the attacker to perform governance actions under the victim's identity. The governance platform records that the legitimate person authenticated biometrically — creating a false audit trail that is harder to challenge than a password compromise because biometric authentication carries a presumption of physical presence. The attacker can approve mandate changes, authorise overrides, or access sensitive governance configurations while the audit trail blames the victim. Remediation requires forensic analysis of biometric authentication logs (PAD confidence scores, device attestation, environmental signals) to determine whether the authentication was genuine, which is time-consuming and may be inconclusive. In financial services, the financial exposure scales with the financial authority of the spoofed identity and the time elapsed before detection. In healthcare, a spoofed clinical governance approval can affect patient safety. In all sectors, the reputational damage of a biometric spoof incident undermines confidence in the governance framework.
Cross-references: AG-279 (Human Identity Proofing Governance) establishes the biometric enrolment baseline that AG-282 protects from spoofing. AG-283 (Deepfake-Resistant Approval Authentication Governance) extends spoof resistance to the specific threat of deepfakes in approval workflows. AG-281 (Device Identity Binding Governance) ensures the biometric capture occurs on a trusted device. AG-016 (Cryptographic Action Attribution) depends on genuine biometric authentication for the cryptographic signature to be meaningful. AG-029 (Credential Integrity Verification) ensures the biometric template itself has not been tampered with. AG-161 (Requester Authentication and Anti-Impersonation) addresses the broader authentication context within which biometric factors operate.