Liveness Verification Governance requires that any AI agent making, influencing, or gating identity decisions on the basis of biometric signals perform robust liveness detection before treating those signals as authentic. Liveness detection — also called presentation attack detection (PAD) — is the technical discipline of distinguishing a live, physically present human being from an artefact designed to impersonate that human: a printed photograph held before a camera, a replayed video on a screen, a silicone mask, a deepfake video stream injected into a capture pipeline, a synthesised voice sample played through a speaker, or a digitally generated synthetic identity that has no corresponding living person. Without liveness verification, every biometric gate is vulnerable to presentation attacks that bypass identity controls entirely, granting unauthorised actors access to financial accounts, government services, physical facilities, and safety-critical systems. This dimension mandates that liveness checks be architecturally enforced — not optional, not bypassable by configuration, and not degradable under load. It further mandates that the liveness verification mechanism itself be subject to continuous adversarial testing, threshold governance, and demographic fairness assessment: a liveness check that rejects legitimate users from certain demographic groups at disproportionate rates is itself a governance failure, even if it successfully blocks attacks.
Scenario A — Deepfake Video Injection Bypasses Remote Account Opening: A digital bank deploys an AI agent to handle remote customer onboarding. The agent captures a selfie video, compares it against the photograph on a government-issued identity document, and opens the account if the match exceeds a similarity threshold. The agent does not perform liveness detection — it relies solely on image similarity. An organised fraud ring obtains stolen identity documents and uses commercially available deepfake generation software to create realistic face-swap video streams. The attackers inject the deepfake video directly into the capture pipeline using a virtual camera driver, bypassing the physical camera entirely. The agent's facial comparison algorithm matches the deepfake video against the document photograph at a 96.2% similarity score — above the 90% threshold — because the deepfake was generated from the same document photograph. Over a 4-month period, the fraud ring opens 1,340 accounts using 890 unique stolen identities, drawing down overdrafts and credit facilities totalling £4.7 million before detection. The bank's fraud investigation reveals that none of the 1,340 onboarding sessions involved a physically present human being. Every session was a deepfake injection attack. The bank faces regulatory action for inadequate customer due diligence under anti-money-laundering regulations, a £3.2 million write-off on irrecoverable credit, and an £860,000 remediation programme to re-verify the identities of 23,000 customers onboarded through the same channel during the affected period.
What went wrong: The agent had no liveness verification. It could determine whether the face in the video resembled the face on the document, but it could not determine whether the face in the video belonged to a physically present person. The deepfake injection exploited this gap precisely — the attacker did not need to fool a human; they needed only to present a synthetic video stream that matched the document photograph. Without liveness detection, the facial comparison algorithm became a tool for the attacker, not a barrier against them. The attack was undetectable at the individual transaction level because each deepfake was visually convincing; detection required either liveness verification at the point of capture or statistical analysis of session metadata revealing that all 1,340 sessions originated from the same small set of devices using virtual camera drivers.
Scenario B — Silicone Mask Defeats Physical Access Control: A pharmaceutical manufacturing facility deploys an AI-controlled biometric access gate at the entrance to its controlled substance storage area. The agent uses facial recognition to verify that only authorised personnel enter the area. The system performs basic liveness detection — it checks for eye blink — but does not implement multi-factor liveness checks or depth-sensing analysis. A terminated employee, whose biometric template has been removed from the authorised list, obtains a hyper-realistic silicone mask modelled on the face of a current authorised employee. The mask includes embedded mechanical eyelid mechanisms that simulate blink on command. The terminated employee approaches the access gate wearing the mask, triggers a blink when prompted, and gains entry. Over three occasions across two weeks, the individual accesses the controlled substance storage area and removes pharmaceutical-grade opioids with a street value of £290,000. The facility discovers the breach only when a quarterly inventory audit reveals the discrepancy. The subsequent investigation determines that the single-factor liveness check — blink detection — was trivially defeated by the mechanical mask. The facility faces regulatory enforcement from the pharmaceutical regulator for inadequate controlled substance security, potential criminal liability for diversion of controlled substances, and a complete overhaul of its access control infrastructure at a cost of £1.4 million.
What went wrong: The liveness check was insufficiently robust — a single passive signal (blink detection) that could be replicated by a physical artefact. The system did not employ multi-modal liveness verification (depth sensing, texture analysis, infrared reflectance, challenge-response sequences) that would have detected the silicone mask. The liveness check created a false sense of security: the facility believed it had biometric access control, but the control was trivially bypassable by an attacker with modest resources and commercially available mask-making technology. The single-signal liveness approach treated liveness as a checkbox rather than a layered defence.
Scenario C — Synthetic Identity Fraud Exploits Voice Verification Without Liveness: A government benefits agency deploys an AI agent for telephone-based identity verification. Claimants call to verify their identity before receiving benefit payments. The agent captures a voice sample, compares it against a stored voiceprint, and authorises payment if the match exceeds the threshold. The agent does not perform voice liveness detection — it does not distinguish between a live speaker and a recorded or synthesised voice sample. A fraud network uses text-to-speech synthesis tools trained on publicly available audio — social media videos, podcast appearances, public meeting recordings — to generate synthetic voice samples for 67 benefit claimants. The attackers call the agency, play the synthesised voice samples through a speaker held to the telephone handset, and successfully authenticate as the claimants. Over 8 months, the network diverts £1.1 million in benefit payments to accounts controlled by the attackers. The fraud is detected only when legitimate claimants report that their payments have been redirected. The agency's investigation reveals that the voice verification system had no mechanism to detect that the voice samples were synthesised rather than spoken by a live person. The agency faces a parliamentary inquiry, a £2.3 million remediation programme, and significant reputational damage.
What went wrong: The voice verification system treated any voice sample matching the voiceprint as authentic, without verifying that the voice was produced by a living human in real time. Synthesised voice samples, generated from publicly available audio, were indistinguishable from live speech to the matching algorithm. The absence of voice liveness detection — challenge-response prompts, acoustic environment analysis, spectral artefact detection for synthesised audio — left the system completely vulnerable to replay and synthesis attacks. The agency's reliance on voice biometrics without liveness created a single point of failure that was exploitable at scale.
Scope: This dimension applies to every AI agent deployment where biometric signals — facial imagery, voice samples, fingerprint scans, iris patterns, gait analysis, vein patterns, or any other physiological or behavioural biometric — are used to make, influence, or gate identity decisions. Identity decisions include but are not limited to: authentication (verifying a claimed identity), identification (determining identity from a population), access control (granting or denying physical or logical access), onboarding (establishing a new identity record), and authorisation (permitting a transaction or action contingent on identity verification). The scope covers all capture modalities — in-person sensors, remote camera-based capture, telephone-based voice capture, and any channel where biometric data is acquired for identity purposes. The scope extends to agents operating at the edge, in embodied robotic platforms, in kiosk deployments, and in any environment where the biometric capture device is physically accessible to potential attackers. The scope includes both one-to-one verification (comparing a sample against a claimed identity template) and one-to-many identification (comparing a sample against a population of templates), because both are vulnerable to presentation attacks.
4.1. A conforming system MUST perform liveness verification on every biometric sample before that sample is used for any identity decision, with no exception for low-risk transactions, returning users, or operational load conditions.
4.2. A conforming system MUST implement liveness verification using at minimum two independent detection signals — such as depth analysis and texture analysis for facial biometrics, or challenge-response prompts and spectral artefact analysis for voice biometrics — so that compromise of a single liveness signal does not defeat the entire liveness check.
4.3. A conforming system MUST reject biometric samples that fail liveness verification and MUST NOT fall back to a weaker verification method (such as knowledge-based authentication alone) without logging the fallback as a liveness failure event and applying compensating controls defined in the Liveness Failure Policy.
4.4. A conforming system MUST enforce liveness verification at the architectural level — in the capture pipeline, in the matching service, or in a dedicated liveness service that cannot be bypassed by client-side configuration, API manipulation, or injection of pre-recorded or synthetic biometric data directly into the processing pipeline.
4.5. A conforming system MUST detect and reject injection attacks — where synthetic or pre-recorded biometric data is introduced into the capture pipeline without passing through the physical sensor — using mechanisms such as device attestation, sensor integrity verification, challenge-response binding, or capture-session cryptographic signing.
4.6. A conforming system MUST subject its liveness verification mechanism to adversarial testing at defined intervals — at minimum annually and after any significant change to the liveness detection algorithm, capture hardware, or threat landscape — using presentation attack instruments that reflect the current state of the art, including deepfake video, synthesised voice, silicone masks, printed photographs, and screen replay attacks.
4.7. A conforming system MUST define and document liveness verification thresholds — the confidence score or decision boundary above which a sample is accepted as live — and MUST calibrate those thresholds to achieve a presentation attack detection error rate that meets or exceeds ISO/IEC 30107-3 Level 1 requirements, or a stricter standard where the risk profile demands it.
4.8. A conforming system MUST assess the demographic impact of its liveness verification mechanism across relevant demographic groups — including but not limited to skin tone, age, sex, facial hair, head coverings, and disability-related factors — and MUST demonstrate that the false rejection rate for liveness does not disproportionately affect any protected group beyond a documented and justified differential.
4.9. A conforming system MUST log every liveness verification decision — pass, fail, and inconclusive — with sufficient metadata to support forensic investigation, including the capture device identifier, session timestamp, liveness signals evaluated, confidence scores, and the identity decision outcome.
4.10. A conforming system MUST maintain a Liveness Failure Policy that defines the operational response when liveness verification fails or is unavailable, including: the compensating controls that apply, the maximum duration of degraded operation without liveness, the escalation path for repeated liveness failures by the same identity, and the notification requirements for governance authorities.
4.11. A conforming system SHOULD implement adaptive liveness challenges that vary between sessions — randomised prompts for head movement, speech content, or gesture — to prevent pre-recorded attack sequences that replay a fixed liveness challenge response.
4.12. A conforming system SHOULD monitor liveness failure rates in real time and trigger alerts when failure rates exceed baseline thresholds, as a spike in liveness failures may indicate a coordinated presentation attack campaign.
4.13. A conforming system MAY implement continuous liveness verification for extended sessions — re-verifying liveness at intervals during a session rather than only at session initiation — to detect session hijacking where a live person authenticates and then yields the session to an attacker or an automated tool.
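The non-bypassable gating demanded by Requirements 4.1, 4.3, and 4.9 can be sketched as a server-side pipeline stage in which matching is simply unreachable until liveness has passed. This is an illustrative sketch, not a reference implementation; the names `LivenessResult`, `LivenessRejected`, and `verify_identity` are hypothetical, and a production system would add the compensating controls from the Liveness Failure Policy.

```python
import logging
from dataclasses import dataclass

logger = logging.getLogger("liveness")

@dataclass
class LivenessResult:
    passed: bool
    signals: dict[str, float]  # per-signal confidence scores (hypothetical shape)
    session_id: str

class LivenessRejected(Exception):
    """Raised when a sample fails liveness; no weaker fallback is attempted here."""

def verify_identity(sample, template, liveness_check, matcher, session_id):
    # Requirement 4.1: liveness runs on EVERY sample, before any matching,
    # with no exception for low-risk transactions or returning users.
    result = liveness_check(sample, session_id)
    # Requirement 4.9: log every liveness decision with forensic metadata.
    logger.info("liveness session=%s passed=%s signals=%s",
                session_id, result.passed, result.signals)
    if not result.passed:
        # Requirement 4.3: reject outright rather than silently falling back
        # to knowledge-based authentication; the caller applies the
        # Liveness Failure Policy.
        raise LivenessRejected(session_id)
    # Matching is only reachable once liveness has passed.
    return matcher(sample, template)
```

The structural point is that the gate lives in the server-side pipeline: a modified client cannot skip it, because the matcher is never invoked on a sample that has not carried a passing liveness result.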
Biometric authentication without liveness verification is not authentication — it is pattern matching against an unverified input. The biometric matching algorithm determines whether a presented sample resembles a stored template. It does not, and cannot, determine whether the presented sample originates from a living person. That determination requires a separate, purpose-built mechanism: liveness detection. Without it, every biometric gate is equivalently vulnerable to any artefact that replicates the biometric pattern with sufficient fidelity — a printed photograph, a video replay, a deepfake injection, a silicone mask, a synthesised voice, or a lifted fingerprint reproduced on a gelatin mould. The sophistication required for such artefacts has decreased dramatically and continues to decrease. Deepfake generation tools are freely available, require no technical expertise, and can produce face-swap videos from a single photograph in under 60 seconds. Voice synthesis tools can clone a voice from 15 seconds of sample audio. Silicone mask fabrication is a commercial service. The barrier to presentation attacks is no longer technical sophistication — it is the presence or absence of liveness detection.
The threat landscape has evolved from opportunistic individual fraud to industrialised attack operations. Organised fraud networks use deepfake injection at scale to open thousands of accounts, exploit government benefit systems, and conduct authorised push payment fraud. The UK's National Fraud Intelligence Bureau reported a 300% increase in deepfake-related fraud referrals between 2022 and 2024. Europol's Internet Organised Crime Threat Assessment identifies synthetic identity fraud — where entirely fabricated identities are created using AI-generated biometric artefacts — as a Tier 1 threat. In these attacks, there is no "real" person to verify against: the attacker generates a synthetic face, pairs it with fabricated identity documents, and presents the synthetic face to the biometric capture system. Without liveness detection, the system has no basis to reject the synthetic identity because the face matches the document — both were generated by the same tool.
The regulatory environment increasingly mandates liveness detection either explicitly or implicitly. The European Banking Authority's Guidelines on Remote Customer Onboarding require that video identification procedures include mechanisms to ensure that the person is physically present and not using pre-recorded or manipulated imagery. The EU Digital Identity Wallet Regulation (eIDAS 2.0) will require high-assurance identity verification for wallet issuance, which implies liveness detection as a minimum safeguard. ISO/IEC 30107 (the Biometric Presentation Attack Detection standard) provides the technical framework for evaluating liveness detection mechanisms, defining presentation attack instruments, error metrics, and testing methodologies. NIST SP 800-63B (Digital Identity Guidelines) specifies that biometric authentication at Authenticator Assurance Level 2 and above must include presentation attack detection. The direction is clear: liveness detection is transitioning from a best practice to a regulatory requirement across financial services, government identity, and critical infrastructure.
Single-signal liveness detection is insufficient against current attack sophistication. Early liveness systems relied on a single signal — eye blink detection, head movement detection, or smile detection — that was trivially defeated by presentation attack instruments incorporating the same signal. A video replayed on a screen includes the original person's blinks. A silicone mask with mechanical eyelids defeats blink detection. A deepfake can be rendered with any expression or movement the attacker specifies. Multi-signal liveness detection — combining depth analysis, texture analysis, infrared reflectance, temporal consistency, and challenge-response interaction — raises the attack cost by requiring the artefact to simultaneously satisfy multiple independent detection criteria. No single signal is sufficient; the security derives from the combination.
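The conjunctive character of multi-signal detection can be made concrete with a small sketch. The threshold values and signal names below are illustrative assumptions, not calibrated figures; real thresholds come from ISO/IEC 30107-3-aligned calibration under Requirement 4.7.

```python
# Hypothetical per-signal thresholds -- real values are produced by
# ISO/IEC 30107-3-aligned calibration, not hard-coded like this.
SIGNAL_THRESHOLDS = {"depth": 0.7, "texture": 0.6, "infrared": 0.65}

def fuse_liveness(scores: dict[str, float], min_signals: int = 2) -> bool:
    """Conjunctive fusion: every evaluated signal must independently pass.

    A screen replay that carries the original person's blinks still fails
    depth analysis; a silicone mask that defeats depth sensing may still
    fail texture analysis. Security derives from the combination.
    Assumes every key in `scores` appears in SIGNAL_THRESHOLDS.
    """
    if len(scores) < min_signals:
        return False  # too few independent signals: treat as not live
    return all(scores[name] >= SIGNAL_THRESHOLDS[name] for name in scores)
```

Note the fail-closed default: a session that evaluates fewer than the minimum number of independent signals is rejected rather than accepted on the strength of whichever single signal happened to be available.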
Injection attacks represent a category of threat that bypasses the physical sensor entirely. Rather than presenting an artefact to the camera, the attacker injects a synthetic video stream directly into the capture pipeline using a virtual camera driver, a compromised application, or a man-in-the-middle attack on the data path between the sensor and the processing service. Against injection attacks, sensor-side liveness detection is irrelevant because the sensor is not involved. Defence against injection attacks requires device attestation (verifying that the capture device is a genuine physical sensor), pipeline integrity verification (ensuring the data path has not been intercepted), and session-bound cryptographic signing (binding the captured data to a specific session and device so that replayed or injected data is detectable). Requirement 4.5 addresses this category specifically because it is the fastest-growing attack vector and the one most frequently absent from legacy biometric systems.
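Session-bound cryptographic signing can be sketched with an HMAC over a server-issued nonce plus the captured frame. This is a minimal illustration under stated assumptions: the function names are hypothetical, the device key is assumed to be provisioned into the sensor's secure hardware via attestation, and a real deployment would use asymmetric attestation signatures rather than a shared secret.

```python
import hashlib
import hmac
import os
import time

def issue_capture_session() -> dict:
    """Server issues a one-time nonce the attested sensor must sign over."""
    return {"nonce": os.urandom(16), "issued_at": time.time()}

def sign_capture(device_key: bytes, nonce: bytes, frame: bytes) -> bytes:
    # Performed inside the attested sensor: injected frames that never
    # passed through the physical device cannot produce a valid tag.
    return hmac.new(device_key, nonce + frame, hashlib.sha256).digest()

def verify_capture(device_key: bytes, nonce: bytes, frame: bytes,
                   tag: bytes, issued_at: float, max_age: float = 30.0) -> bool:
    if time.time() - issued_at > max_age:
        return False  # stale session: rejects replayed capture sessions
    expected = hmac.new(device_key, nonce + frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Because the nonce is fresh per session and the tag covers the frame bytes, a virtual-camera injection or a man-in-the-middle substitution on the data path produces a verification failure even when the injected imagery is visually flawless.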
Demographic fairness in liveness verification is a non-negotiable governance requirement. Liveness detection algorithms — particularly those based on texture analysis, skin reflectance, or depth mapping — can exhibit differential performance across demographic groups. Darker skin tones may produce different reflectance patterns that some algorithms misinterpret as non-live. Older individuals may have skin texture characteristics that reduce liveness confidence scores. Head coverings or facial hair may interfere with depth analysis. If liveness verification disproportionately rejects legitimate users from protected groups, the system creates a discriminatory barrier to services even if it successfully blocks presentation attacks. This is not a trade-off to be optimised — it is a constraint to be satisfied: liveness verification must be both effective against attacks and equitable across demographic groups.
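The differential assessment required by Requirement 4.8 reduces to computing the false rejection rate per group over genuine (live) presentations and comparing the gap against the documented justified differential. The sketch below assumes a labelled evaluation set; the function names and tuple shape are hypothetical.

```python
def frr_by_group(outcomes):
    """Per-group false rejection rate for genuine (live) presentations.

    outcomes: iterable of (group, genuinely_live, accepted) tuples from a
    labelled evaluation set. Only genuine presentations count towards FRR;
    attack presentations are measured separately (as APCER).
    """
    outcomes = list(outcomes)
    frr = {}
    for group in {g for g, live, _ in outcomes if live}:
        genuine = [accepted for g, live, accepted in outcomes
                   if g == group and live]
        frr[group] = sum(1 for accepted in genuine if not accepted) / len(genuine)
    return frr

def fairness_differential(frr: dict) -> float:
    """Largest absolute FRR gap between any two evaluated groups."""
    rates = list(frr.values())
    return max(rates) - min(rates)
```

A governance process would then compare `fairness_differential` against the documented threshold and block deployment (or trigger recalibration) when the constraint is violated, rather than trading the differential off against attack-detection performance.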
Liveness verification governance requires coordinated implementation across capture infrastructure, detection algorithms, threshold management, adversarial testing, and demographic fairness monitoring. The core architectural principle is that liveness verification is a mandatory, non-bypassable stage in the biometric processing pipeline — not a feature flag, not an optional enhancement, and not a client-side check that can be circumvented by a modified application.
Recommended patterns:
- Enforce liveness as a mandatory server-side stage in the biometric pipeline, upstream of matching, with no client-controllable bypass.
- Combine at least two independent liveness signals (for example, depth analysis plus texture analysis) so that defeating one signal does not defeat the check.
- Bind every capture session to the physical sensor using device attestation and session-scoped cryptographic signing, so that injected or replayed data is detectable.
- Randomise challenge-response prompts between sessions so that pre-recorded attack sequences cannot replay a fixed response.
- Monitor liveness failure rates in real time and treat sustained spikes as a potential coordinated presentation attack campaign.
Anti-patterns to avoid:
- Single-signal liveness (blink detection alone) that a replayed video or a mechanical mask trivially defeats.
- Client-side liveness checks that a modified application or virtual camera driver can bypass.
- Silent fallback to weaker verification, such as knowledge-based authentication alone, when liveness fails or is unavailable.
- Treating a high face-match score as evidence of liveness; matching and liveness are independent determinations.
- Calibrating thresholds for attack detection without assessing demographic false rejection differentials.
Financial Services. Remote customer onboarding and transaction authentication are primary targets for deepfake and synthetic identity attacks. Financial institutions subject to AML regulations must implement liveness detection as part of their customer due diligence procedures. The European Banking Authority's Guidelines on Remote Customer Onboarding explicitly require mechanisms to ensure physical presence. PSD2 Strong Customer Authentication requirements, when met through biometric factors, implicitly require liveness detection to ensure the biometric factor is genuine. Financial institutions should implement the highest tier of liveness verification — multi-signal detection with injection attack defence — for account opening and high-value transaction authorisation.
Government and Public Sector. Government identity verification — for benefit claims, tax filings, passport issuance, and access to public services — is a high-value target for both individual fraud and organised crime. The EU Digital Identity Wallet will require high-assurance identity proofing, and national identity verification schemes (such as the UK's GOV.UK Verify successor framework) increasingly rely on biometric verification. Government deployments must ensure that liveness verification does not create accessibility barriers for elderly citizens, individuals with disabilities, or populations with limited access to modern capture devices. Fallback procedures must be defined for populations that cannot complete biometric liveness checks.
Safety-Critical and Physical Access. Biometric access control for critical infrastructure — power generation, pharmaceutical manufacturing, data centres, defence facilities — faces physical presentation attack threats (masks, spoofed fingerprints) in addition to digital threats. On-premises deployments should leverage hardware-based liveness signals (depth sensors, infrared cameras, multi-spectral imaging) that are not available in remote scenarios. The physical security of the capture device itself must be considered — an attacker who can physically tamper with the sensor can potentially bypass sensor-side liveness detection.
Embodied and Edge Agents. Robotic platforms and edge-deployed agents that perform biometric verification in the field — delivery robots verifying recipient identity, autonomous vehicles verifying occupant identity, border patrol robots performing identity checks — face unique constraints: limited computational resources, variable lighting and environmental conditions, and physical exposure to adversarial manipulation. Liveness verification for edge deployments must be computationally efficient, robust to environmental variation, and resistant to physical tampering with the capture hardware. Where edge resources are insufficient for full liveness analysis, the architecture should stream capture data to a cloud-based liveness service with session-bound integrity verification.
Basic Implementation — The system performs liveness verification on every biometric sample using at minimum two independent signals. Injection attack defences are implemented. Liveness failures are logged and trigger the documented Liveness Failure Policy. Liveness thresholds are documented and meet ISO/IEC 30107-3 Level 1 requirements. Demographic fairness has been assessed at deployment. This level meets the minimum mandatory requirements.
Intermediate Implementation — All basic capabilities plus: adversarial testing is conducted at least annually with current-generation presentation attack instruments, including deepfake injection and silicone mask attacks. Demographic fairness is re-evaluated annually and after algorithm updates. Adaptive challenge-response sequences are implemented with session-specific randomisation. Real-time monitoring detects liveness failure rate spikes. Liveness thresholds are risk-calibrated across transaction tiers.
Advanced Implementation — All intermediate capabilities plus: continuous liveness verification operates during extended sessions. Adversarial testing is conducted quarterly and incorporates emerging attack methodologies within 90 days of public disclosure. The system can demonstrate through empirical data that its liveness detection remains effective against attacks that defeat peer systems. Demographic fairness differentials are below published best-practice thresholds across all evaluated groups. The liveness verification mechanism is independently audited by a qualified third party.
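The real-time failure-rate monitoring expected at the intermediate tier (and required by 4.12) can be sketched as a rolling-window rate compared against an established baseline. The class name, window size, and alert multiplier below are illustrative assumptions; production alerting would use calibrated baselines per channel and per capture device fleet.

```python
from collections import deque

class LivenessSpikeMonitor:
    """Rolling-window liveness failure-rate monitor (sketch of Requirement 4.12).

    Fires an alert when the failure rate over the most recent window
    exceeds a multiple of the established baseline rate, which may
    indicate a coordinated presentation attack campaign.
    """

    def __init__(self, baseline_rate: float, window: int = 500,
                 alert_multiplier: float = 3.0):
        self.baseline = baseline_rate
        self.window = deque(maxlen=window)
        self.multiplier = alert_multiplier

    def record(self, passed: bool) -> bool:
        """Record one liveness outcome; return True if an alert should fire."""
        self.window.append(0 if passed else 1)
        if len(self.window) < self.window.maxlen:
            return False  # insufficient data to compare against baseline
        rate = sum(self.window) / len(self.window)
        return rate > self.baseline * self.multiplier
```

An alert from a monitor like this would feed the escalation path defined in the Liveness Failure Policy rather than silently relaxing thresholds under load.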
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Universal Liveness Enforcement Verification (Requirement 4.1)
Test 8.2: Multi-Signal Independence Verification (Requirement 4.2)
Test 8.3: Liveness Failure Handling Verification (Requirement 4.3)
Test 8.4: Architectural Bypass Resistance Verification (Requirement 4.4)
Test 8.5: Injection Attack Detection Verification (Requirement 4.5)
Test 8.6: Adversarial Testing Programme Verification (Requirement 4.6)
Test 8.7: Liveness Threshold Documentation Verification (Requirement 4.7)
Test 8.8: Demographic Fairness Verification (Requirement 4.8)
Test 8.9: Liveness Decision Logging Verification (Requirement 4.9)
Test 8.10: Liveness Failure Policy Completeness Verification (Requirement 4.10)
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 6, Annex III(1) (Biometric Identification) | Direct requirement |
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| eIDAS 2.0 | Article 6a (European Digital Identity Wallet) | Supports compliance |
| EBA Guidelines | Remote Customer Onboarding Guidelines | Direct requirement |
| NIST SP 800-63B | Section 5.2.3 (Biometric Authentication) | Direct requirement |
| ISO/IEC 30107 | Parts 1-3 (Biometric Presentation Attack Detection) | Technical standard |
| GDPR | Article 9 (Special Categories — Biometric Data) | Supports compliance |
| UK DIATF | Digital Identity and Attributes Trust Framework | Supports compliance |
The EU AI Act classifies real-time and post remote biometric identification systems used in publicly accessible spaces as high-risk AI systems under Annex III, point 1. High-risk classification triggers the full suite of conformity requirements under Chapter 2, including risk management (Article 9), data governance (Article 10), technical documentation (Article 11), and human oversight (Article 14). Liveness verification is an essential component of the risk management system for any biometric identification AI: without it, the system cannot distinguish genuine subjects from spoofed presentations, creating a fundamental reliability failure. AG-670 operationalises the risk management obligation by requiring presentation attack detection that is robust, tested, and fair across demographic groups, directly supporting conformity assessment for biometric AI systems.
The European Banking Authority's Guidelines on the use of Remote Customer Onboarding Solutions (EBA/GL/2022/15) require that credit and financial institutions verify that the person presenting an identity document during remote onboarding is the legitimate holder of that document and is physically present. The guidelines specifically address the risk of pre-recorded, manipulated, or deepfake imagery being used to circumvent identity verification. Liveness verification is the primary technical mechanism for satisfying this requirement. AG-670's requirements for multi-signal liveness detection, injection attack defence, and adversarial testing against deepfake instruments directly address the EBA's expectations for remote onboarding security.
NIST SP 800-63B specifies that biometric authentication systems at Authenticator Assurance Level 2 (AAL2) and above shall employ presentation attack detection. The standard requires that the biometric system shall demonstrate a presentation attack detection error rate of no more than specified thresholds, tested against a representative set of presentation attack instruments. AG-670's requirement for multi-signal liveness detection, threshold documentation, and adversarial testing aligns directly with NIST's PAD requirements. Organisations implementing NIST-aligned digital identity frameworks should treat AG-670 as the operational governance layer that ensures NIST PAD requirements are met and maintained over time.
ISO/IEC 30107 is the international standard for biometric presentation attack detection, providing terminology (Part 1), data formats (Part 2), and testing and reporting requirements (Part 3). AG-670 references ISO/IEC 30107-3 as the baseline standard for evaluating liveness detection performance and requires that liveness thresholds achieve error rates meeting or exceeding Level 1 requirements. Organisations should use ISO/IEC 30107-3 as the testing methodology for the adversarial testing programme required by Requirement 4.6 and as the reporting framework for the adversarial test reports required in Section 7.
GDPR Article 9 prohibits the processing of biometric data for the purpose of uniquely identifying a natural person except under specified conditions (explicit consent, substantial public interest, etc.). Liveness verification processes biometric data as part of the identity verification pipeline and is therefore subject to Article 9's protections. AG-670 supports GDPR compliance by ensuring that biometric data used for identity decisions is genuine — not spoofed or synthetic — thereby protecting the integrity of the processing purpose and reducing the risk that biometric data processing produces incorrect identity outcomes that harm data subjects.
The UK's Digital Identity and Attributes Trust Framework specifies requirements for identity service providers operating in the UK market, including requirements for biometric verification at higher confidence levels. The framework requires that identity verification processes include presentation attack detection appropriate to the confidence level claimed. AG-670 provides the governance structure for meeting DIATF's PAD requirements, ensuring that liveness verification is not merely implemented but is continuously tested, demographically fair, and architecturally robust.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | System-wide — every identity decision made without effective liveness verification is potentially fraudulent, affecting all users and all transactions processed through the biometric channel |
Consequence chain: Liveness verification is absent, insufficient (single-signal), or bypassable (client-side only, not architecturally enforced). An attacker identifies the weakness — through reconnaissance, through public vulnerability disclosure, or through trial-and-error probing. The attacker creates or obtains presentation attack instruments appropriate to the biometric modality: deepfake video generated from a target's social media photographs, a synthesised voice cloned from publicly available audio, a silicone mask fabricated from a 3D face model, or a lifted fingerprint reproduced on a gelatin mould. The attacker presents the artefact to the biometric capture system. Without effective liveness verification, the system treats the artefact as a genuine biometric sample and passes it to the matching algorithm. The matching algorithm confirms the identity — correctly, from its perspective, because the artefact was designed to match the target's stored template. The system grants access, opens an account, authorises a transaction, or issues a credential. The attacker now possesses authenticated access under a stolen or synthetic identity. The exploitation scales: the same technique is applied to hundreds or thousands of identities using automated tooling, deepfake generation pipelines, and virtual camera injection. For financial institutions, the consequence is mass account fraud, credit losses, and regulatory enforcement for inadequate customer due diligence. For government services, the consequence is benefit fraud, identity document issuance to non-existent persons, and compromise of national identity infrastructure. For physical access control at critical facilities, the consequence is unauthorised entry with potential for theft, sabotage, or harm to human safety. 
The remediation is extensive: every identity decision made during the period of compromised liveness verification must be re-verified, affected accounts must be reviewed, and the biometric infrastructure must be rebuilt with architecturally enforced, multi-signal liveness detection. The cost of remediation invariably exceeds the cost of proper liveness implementation by one to two orders of magnitude, and the reputational and regulatory consequences compound the financial loss.
Cross-references: AG-669 (Biometric Purpose Limitation) constrains the purposes for which biometric data may be processed; AG-670 ensures the biometric data is genuine before it enters the purpose-limited pipeline. AG-673 (Biometric Template Protection) protects stored templates from extraction; AG-670 prevents attackers from using stolen templates by requiring proof of liveness. AG-675 (Spoof-Response Escalation) defines the operational response when a spoof is detected; AG-670 ensures the detection mechanism exists and functions. AG-676 (Face and Voice Similarity Threshold) governs matching thresholds; AG-670 governs liveness thresholds — both must be satisfied for an identity decision. AG-005 (Instruction Integrity Verification) ensures that agent instructions have not been tampered with; AG-670 ensures that biometric inputs have not been fabricated. AG-042 (Encryption & Cryptographic Control) protects biometric data in transit and at rest; AG-670 uses cryptographic session binding to protect biometric data from injection. AG-043 (Access Control & Credential) governs access decisions; AG-670 ensures the biometric factor in those decisions is authentic. AG-210 (Adversarial Input Resilience) addresses adversarial inputs broadly; AG-670 addresses the specific category of adversarial biometric inputs — presentation attacks and injection attacks.