Biometric Purpose Limitation Governance requires that every AI agent processing biometric data — including facial geometry, voiceprints, iris patterns, gait signatures, fingerprints, and behavioural biometric identifiers — restrict that processing to the explicit, documented, and approved purposes for which the biometric data was originally collected, and prevent any secondary use, repurposing, or scope expansion without independent governance review and, where required, fresh consent. Biometric identifiers are uniquely sensitive because they are irrevocable: unlike passwords, tokens, or even government-issued identifiers, a compromised or misused biometric cannot be reissued, rotated, or replaced. This dimension mandates that purpose boundaries are enforced technically — not merely stated in policy — through architectural controls that prevent biometric templates and raw biometric data from being consumed by processing pipelines, model training workflows, or downstream agents for purposes beyond the original authorisation.
Scenario A — Facial Recognition Creep from Access Control to Performance Monitoring: A logistics company deploys facial recognition kiosks at 34 warehouse entry points for employee access control, collecting facial templates from 2,800 workers under a purpose statement limited to "facility access authentication." The system vendor releases a software update that includes a "workforce analytics" module capable of tracking employee movement patterns, dwell times at workstations, and break duration by correlating facial recognition events across 112 interior cameras. The operations director activates the module without updating the purpose statement, privacy impact assessment, or employee notice. Over 14 months, the system processes 9.4 million facial recognition events for movement analytics — a purpose never disclosed to employees. When an employee files a complaint with the state attorney general citing the Illinois Biometric Information Privacy Act (BIPA), the investigation reveals that the company collected biometric data for "access control" but used it for "performance monitoring" — a distinct purpose not covered by the original consent. Under BIPA Section 15(b), each of the 9.4 million scans constitutes a separate violation carrying statutory damages of $5,000 for intentional violations.
What went wrong: The biometric system lacked any architectural control preventing purpose expansion. The vendor's analytics module consumed facial templates from the same database used for access control, with no technical boundary distinguishing authorised from unauthorised purposes. The purpose statement was a policy document, not a technical enforcement mechanism. No governance review gate existed between the vendor's feature release and the operations director's activation decision. Consequence: BIPA class action exposure of $47 billion in statutory damages (reduced to a $45.6 million settlement), reputational harm, employee trust collapse requiring decommissioning of all biometric systems, and $3.2 million in remediation and legal costs.
Scenario B — Voice Biometric Data Repurposed for Emotion Inference in a Call Centre: A financial services firm collects voiceprints from 1.2 million customers for telephone banking authentication, under a consent notice stating: "Your voice will be used to verify your identity when you call." The firm's customer experience division requests access to the voice authentication recordings to train an emotion inference model that detects customer frustration, enabling real-time agent coaching. The data engineering team provisions access to the voice recording pipeline, reasoning that the recordings "already exist" and that emotion inference is a "reasonable extension" of the customer relationship. The emotion model is trained on 4.7 million voice recordings and deployed to 600 call centre agents over 8 months. A GDPR Article 77 complaint triggers a supervisory authority investigation. The authority finds that voice recordings collected for identity verification (GDPR Article 9(2)(a) — explicit consent for biometric processing for identification purposes) were repurposed for emotion inference — a fundamentally different processing purpose that was never disclosed, never consented to, and constitutes processing of special category data under Article 9(1) without a lawful basis.
What went wrong: No technical control prevented the voice authentication pipeline from being consumed by the emotion inference training pipeline. The data engineering team treated access provisioning as a technical request rather than a governance decision. The firm had no purpose-enforcement architecture that restricted biometric data flows to approved purposes. The consent obtained for identity verification did not extend to emotion inference — these are categorically different processing purposes under GDPR. Consequence: Supervisory authority fine of EUR 14.5 million under GDPR Article 83(5)(a) for unlawful processing of special category data, mandatory deletion of the emotion inference model and all derived training data, 18-month remediation programme, and loss of customer trust resulting in a 12% decline in voice biometric enrolment for the legitimate authentication service.
Scenario C — Biometric Templates Shared Across Municipal Agencies Without Purpose Reassessment: A city government collects fingerprint biometrics from 340,000 residents for a public transit fare-payment system. The purpose is documented as "contactless fare payment authentication." The city's public safety department requests access to the fingerprint template database to support criminal investigation matching. The IT department provisions a read-only database replica to the public safety department without conducting a Data Protection Impact Assessment, without updating the public notice, and without obtaining fresh consent. Over 22 months, 14,200 fingerprint comparison queries are executed against the transit database for law enforcement purposes. A civil rights organisation discovers the cross-agency sharing through a freedom of information request and files a legal challenge. The court finds that the city collected biometric data for transit fare payment under one legal basis and repurposed it for law enforcement under a fundamentally different legal basis — violating the purpose limitation principle under both GDPR Article 5(1)(b) and the city's own data protection ordinance.
What went wrong: No architectural boundary prevented the transit biometric database from being accessed by a non-transit agency. The database replica was provisioned as an infrastructure request, bypassing the governance process that would have identified the purpose mismatch. There was no purpose-tagging mechanism on the biometric templates that would have restricted queries to transit-authentication purposes. The IT department lacked authority to evaluate purpose compatibility, but also lacked a governance gate that would have escalated the request to a data protection officer. Consequence: Court-ordered deletion of all law enforcement query results and derived intelligence, injunction against cross-agency biometric sharing, $8.4 million settlement to affected residents, and a 3-year independent monitoring agreement.
Scope: This dimension applies to every AI agent that collects, processes, stores, transmits, or derives biometric data — defined as any measurement of physical, physiological, or behavioural characteristics that can be used to identify, authenticate, or classify a natural person. This includes but is not limited to: facial geometry and templates, voiceprints and speaker embeddings, fingerprint minutiae, iris and retinal patterns, gait and movement signatures, keystroke dynamics, vein patterns, and any other biometric modality. The scope covers both raw biometric data (images, audio recordings, sensor readings) and derived representations (templates, embeddings, feature vectors, hash representations), because derived representations retain the capacity for identification and remain biometric data unless irreversibly transformed into a form that can no longer identify the data subject. The scope extends to biometric data at rest, in transit, and in processing — including temporary copies created during model inference, training data pipelines, and inter-agent communication channels. The scope includes all deployment contexts: customer-facing authentication, employee access control, public safety identification, healthcare patient matching, border control, and any other context where biometric processing occurs.
4.1. A conforming system MUST maintain a Biometric Purpose Register that enumerates every approved purpose for which biometric data may be processed, the legal basis for each purpose, the data subjects covered, the biometric modalities involved, and the date of governance approval. No biometric processing may occur for a purpose not listed in the register.
4.2. A conforming system MUST enforce purpose limitation technically — through access controls, data flow restrictions, API-level purpose parameters, or architectural segmentation — such that biometric data collected for one approved purpose cannot be consumed by a processing pipeline, model training workflow, downstream agent, or human operator for a different purpose without passing through a governance approval gate.
4.3. A conforming system MUST tag or label all biometric data — at the template, record, or dataset level — with the approved purpose(s) for which it was collected, and enforce those tags at every processing stage so that purpose violations are rejected programmatically rather than detected only in retrospective audit.
4.4. A conforming system MUST require an independent governance review — involving at minimum the data protection function and a senior governance authority — before any new biometric processing purpose is added to the Biometric Purpose Register or before any existing purpose is expanded in scope.
4.5. A conforming system MUST generate an immutable audit log entry for every biometric data access event, recording the identity of the requesting system or agent, the stated purpose of the access, the biometric modality accessed, the number of records affected, and the timestamp — sufficient to detect and investigate purpose violations.
4.6. A conforming system MUST reject any biometric data access request that does not include a declared purpose matching an entry in the Biometric Purpose Register, returning an explicit denial with a reason code that identifies the purpose mismatch.
4.7. A conforming system MUST conduct a Data Protection Impact Assessment (or equivalent risk assessment) before any new biometric processing purpose is activated, and retain the completed assessment as evidence.
4.8. A conforming system MUST ensure that biometric data collected under one consent basis or legal authority is not merged, linked, or cross-referenced with biometric data collected under a different consent basis or legal authority, unless the merger is independently approved through the governance review process defined in Requirement 4.4.
4.9. A conforming system SHOULD implement automated anomaly detection on biometric data access patterns, flagging access volumes, access timing, or requesting-agent identities that deviate from the established baseline for each approved purpose.
4.10. A conforming system SHOULD implement purpose-specific retention schedules for biometric data, such that biometric templates collected for a time-limited purpose (e.g., event access) are automatically deleted when the purpose expires, independent of other retention schedules that may apply to biometric data collected for different purposes.
4.11. A conforming system SHOULD provide data subjects with a purpose transparency mechanism — accessible through a privacy dashboard, notice, or on-request disclosure — that identifies every purpose for which their biometric data is currently being processed.
4.12. A conforming system MAY implement cryptographic purpose binding, where biometric templates are encrypted with purpose-specific keys such that only processing pipelines authorised for a given purpose hold the decryption key, making unauthorised purpose access technically infeasible rather than merely policy-prohibited.
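The register-and-gate model in Requirements 4.1, 4.5, and 4.6 can be sketched in code. This is a minimal illustration under stated assumptions, not a reference implementation: the class names, field names, and purpose identifiers (`PurposeEntry`, `PurposeGate`, `AUTH_VOICE_BANKING`, `ANALYTICS_EMOTION`) are introduced here for clarity and are not prescribed by this dimension.

```python
import datetime
from dataclasses import dataclass

@dataclass(frozen=True)
class PurposeEntry:
    """One row of the Biometric Purpose Register (Requirement 4.1)."""
    purpose_id: str          # e.g. "AUTH_VOICE_BANKING" (illustrative)
    legal_basis: str         # e.g. "GDPR Art. 9(2)(a) explicit consent"
    modalities: frozenset    # biometric modalities covered by this purpose
    approved_on: datetime.date

class PurposeGate:
    """Rejects access requests whose declared purpose is not registered
    (Requirement 4.6) and records every decision (Requirement 4.5)."""

    def __init__(self, register):
        self._register = {e.purpose_id: e for e in register}
        self.audit_log = []  # append-only here; immutability is out of scope

    def request_access(self, agent_id, declared_purpose, modality, n_records):
        entry = self._register.get(declared_purpose)
        granted = entry is not None and modality in entry.modalities
        reason = "OK" if granted else "PURPOSE_MISMATCH"
        self.audit_log.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent_id, "purpose": declared_purpose,
            "modality": modality, "records": n_records,
            "granted": granted, "reason": reason,
        })
        return granted, reason
```

A denied request returns an explicit reason code (`PURPOSE_MISMATCH`) rather than failing silently, which is what makes the denial both enforceable at the point of occurrence and investigable from the audit log afterwards.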
Biometric data occupies a uniquely privileged position in data protection law and governance practice because of three properties that distinguish it from all other categories of personal data. First, irrevocability: a compromised biometric cannot be reissued. If a facial template is misused, the data subject cannot change their face. If a voiceprint is repurposed, the data subject cannot change their voice. Every other form of credential — passwords, tokens, government identifiers — can be revoked and reissued. Biometrics cannot. This means that the harm from purpose violation is permanent and compounding: a biometric misused once remains vulnerable to misuse indefinitely. Second, universality: biometric identifiers are inherently tied to the person, not to a relationship or context. A password exists only in the authentication system that issued it. A biometric exists in every camera, microphone, and sensor that can capture it. This means that biometric data collected in one context creates exposure in every other context where the same modality is observable. Third, inferential richness: biometric data — particularly face images, voice recordings, and behavioural patterns — encodes far more information than is needed for the stated purpose. A facial image collected for access authentication also encodes age, gender presentation, ethnic characteristics, emotional state, and health indicators. A voice recording collected for speaker verification also encodes stress, fatigue, cognitive load, and linguistic markers of identity. Purpose limitation is therefore not merely about controlling who accesses the biometric database — it is about controlling what inferences are drawn from biometric data that inherently supports inferences far beyond the original purpose.
The regulatory landscape reflects this sensitivity. GDPR Article 9(1) classifies biometric data processed for identification as special category data, subject to the strictest processing conditions. Article 5(1)(b) requires that personal data be "collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes." The Illinois BIPA — the most litigated biometric privacy statute in the world — requires written informed consent for collection and prohibits use beyond the stated purpose, with statutory damages of $1,000 per negligent violation and $5,000 per intentional violation. The Texas Capture or Use of Biometric Identifier Act (CUBI), the Washington Biometric Identifier statute, and an expanding set of state-level biometric privacy laws impose similar purpose restrictions. The EU AI Act Article 6 and Annex III classify biometric identification systems as high-risk, triggering the full compliance framework including data governance requirements under Article 10 that mandate purpose limitation.
Purpose creep — the gradual expansion of biometric processing beyond the originally authorised scope — is the most common and most damaging biometric governance failure. It follows a predictable pattern: biometric data is collected for a legitimate, well-defined purpose (access control, payment authentication, patient identification); the data is stored in a centralised system; a new use case emerges (workforce analytics, emotion detection, law enforcement matching); the new use case is framed as a "natural extension" or "secondary benefit" of the existing collection; the data is provisioned to the new use case through an infrastructure request rather than a governance decision; no fresh consent is obtained, no DPIA is conducted, and no purpose boundary is updated. By the time the purpose violation is discovered — through complaint, audit, or litigation — the unauthorised processing has been ongoing for months or years, affecting millions of records.
Technical purpose enforcement is the only reliable mitigation. Policy-based purpose limitation — where the purpose restriction exists as a document but is not enforced by the system architecture — fails predictably because it depends on every data access decision being made by someone who knows the purpose restriction, understands its legal significance, and has the authority and incentive to enforce it. The data engineer provisioning a database replica does not evaluate purpose compatibility. The vendor releasing a software update does not assess whether the new feature respects the original purpose statement. The operations manager activating an analytics module does not conduct a DPIA. Only architectural enforcement — where the system rejects purpose-violating access requests programmatically — prevents purpose creep at the point of occurrence rather than discovering it months later.
Biometric purpose limitation governance requires architectural controls that enforce purpose boundaries at the data layer, the API layer, and the processing pipeline layer. The core principle is that purpose enforcement must be technical, not merely procedural: the system must reject unauthorised purpose access, not merely log it for future review.
Recommended patterns:
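One core pattern is purpose-tagged template storage, where the storage layer itself checks a declared purpose against each template's tag set before returning any data (Requirement 4.3). A minimal sketch; the class name, exception name, and purpose tags are illustrative assumptions:

```python
class PurposeViolation(Exception):
    """Raised when a declared purpose is outside a template's tag set."""

class TaggedTemplateStore:
    """Storage layer that enforces per-template purpose tags at read time,
    so purpose violations are rejected programmatically, not merely logged."""

    def __init__(self):
        self._templates = {}  # subject_id -> (template_bytes, purpose_tags)

    def enrol(self, subject_id, template, purposes):
        self._templates[subject_id] = (template, frozenset(purposes))

    def read(self, subject_id, declared_purpose):
        template, tags = self._templates[subject_id]
        if declared_purpose not in tags:
            raise PurposeViolation(
                f"{declared_purpose!r} not in authorised set {sorted(tags)}")
        return template

store = TaggedTemplateStore()
store.enrol("subj-001", b"<voiceprint>", {"AUTH_VOICE_BANKING"})
store.read("subj-001", "AUTH_VOICE_BANKING")    # permitted
try:
    store.read("subj-001", "ANALYTICS_EMOTION")  # rejected at the storage layer
except PurposeViolation as exc:
    print("denied:", exc)
```

Because the check lives in the storage layer, every consumer of the template store inherits the enforcement, including pipelines provisioned long after the original collection.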
Purpose-tagged template storage: each biometric template is stored together with its authorised purpose set, for example AUTH_VOICE_BANKING for a voiceprint enrolled for telephone banking authentication. When the emotion inference pipeline requests access with a declared purpose of ANALYTICS_EMOTION, the storage layer rejects the request because ANALYTICS_EMOTION is not in the template's authorised purpose set. This pattern converts purpose limitation from a policy constraint into an access control constraint.

Anti-patterns to avoid:
Financial Services. Voice biometrics for telephone banking authentication is widespread, with over 350 million voiceprints enrolled globally across financial institutions. The primary purpose creep risk is repurposing voice recordings for fraud pattern analytics, emotion inference, or customer segmentation. Financial regulators — including the FCA and the CFPB — increasingly scrutinise biometric processing in financial services. Firms should ensure that voice biometric systems enforce purpose boundaries between authentication and analytics, and that consent obtained for "voice verification" is not stretched to cover "voice analytics."
Healthcare. Biometric patient matching (palm vein, fingerprint, iris) is used in hospital systems to prevent duplicate medical records and patient misidentification. The purpose creep risk is using patient biometrics for clinical research, insurance verification, or identity verification beyond the healthcare relationship. HIPAA does not specifically classify biometric identifiers as a distinct category, but state-level biometric privacy laws apply regardless of HIPAA compliance. Healthcare organisations operating in BIPA jurisdictions must ensure that patient biometrics are not consumed by analytics pipelines beyond the patient matching purpose.
Public Sector. Government biometric systems present the most acute purpose limitation risks because governments often operate multiple agencies with different legal authorities, and biometric data collected by one agency under one legal basis may be valuable to another agency under a completely different legal authority. Transit biometrics repurposed for law enforcement (Scenario C) is a documented pattern. Public sector biometric governance must enforce inter-agency purpose boundaries, require fresh legal basis assessments for cross-agency access, and provide meaningful public notice of all biometric processing purposes.
Retail and Hospitality. Facial recognition for loss prevention, customer identification, and personalised service is expanding in retail environments. The purpose creep risk is using loss-prevention cameras for customer analytics, marketing profiling, or employee monitoring. Retailers operating in BIPA, CCPA, or similar jurisdictions face significant litigation exposure if facial recognition data collected for security is repurposed for marketing.
Embodied and Edge Agents. Robots, kiosks, and edge devices that capture biometric data present unique purpose limitation challenges because biometric processing may occur locally on the device, beyond the reach of centralised governance controls. Edge biometric systems must enforce purpose limitation locally — through on-device purpose tags, restricted model access, and local audit logging — rather than relying solely on network-level controls that are unavailable when the device operates offline.
Basic Implementation — The organisation maintains a Biometric Purpose Register documenting all approved biometric processing purposes. Biometric data access is logged with purpose attribution. A governance review process exists for adding new purposes. A DPIA is conducted before any new biometric processing purpose is activated. Purpose limitation is enforced through access controls, though not necessarily at the template or API level. This level satisfies mandatory Requirements 4.1, 4.4, 4.5, and 4.7 at a minimum.
Intermediate Implementation — All basic capabilities plus: biometric data is tagged with approved purposes at the storage layer. API-level purpose enforcement rejects requests with unauthorised or missing purpose declarations. Automated anomaly detection monitors access patterns for purpose deviations. Purpose-specific retention schedules automatically delete biometric data when purposes expire. Data subjects have access to a transparency mechanism showing active processing purposes. This level meets all MUST requirements and most SHOULD requirements.
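The automated anomaly detection expected at this level can start very simply, for example by flagging daily access volumes that deviate from the per-purpose baseline. A stdlib-only sketch with illustrative numbers; a real deployment would also model access timing and requesting-agent identity (Requirement 4.9):

```python
import statistics

def flag_anomalous_access(baseline_counts, observed_count, threshold_sd=3.0):
    """Flag an access volume deviating from the per-purpose baseline
    by more than `threshold_sd` population standard deviations."""
    mean = statistics.fmean(baseline_counts)
    sd = statistics.pstdev(baseline_counts)
    if sd == 0:
        return observed_count != mean
    return abs(observed_count - mean) / sd > threshold_sd

# Daily access counts for one approved purpose (illustrative figures)
baseline = [1180, 1210, 1195, 1230, 1205, 1190, 1215]
print(flag_anomalous_access(baseline, 1208))    # typical day -> False
print(flag_anomalous_access(baseline, 54000))   # bulk export  -> True
```

A bulk export such as a training pipeline silently reading the whole template database is exactly the access-pattern shift this check is meant to surface before an audit or complaint does.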
Advanced Implementation — All intermediate capabilities plus: cryptographic purpose binding ensures that biometric templates are technically accessible only to pipelines holding purpose-specific decryption keys. Purpose enforcement is validated through regular red-team testing that attempts to access biometric data for unauthorised purposes. Cross-jurisdictional purpose mapping ensures compliance with all applicable biometric privacy laws simultaneously. Independent external audit validates purpose enforcement effectiveness. The organisation can demonstrate through empirical evidence that no biometric data access outside approved purposes has occurred in the audit period.
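Cryptographic purpose binding (Requirement 4.12) can be illustrated with a deliberately simplified construction: each approved purpose has its own key, and a pipeline that does not hold the key cannot recover the template. The XOR keystream below is a toy for illustration only; a production system would use an authenticated cipher such as AES-GCM with managed, purpose-scoped keys.

```python
import hashlib
import secrets

def _keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. NOT production cryptography."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(template: bytes, purpose_key: bytes) -> bytes:
    """Bind a template to a purpose by encrypting with that purpose's key."""
    ks = _keystream(purpose_key, len(template))
    return bytes(a ^ b for a, b in zip(template, ks))

unseal = seal  # XOR stream ciphers are their own inverse

# Per-purpose keys: only the authorised pipeline holds the matching key.
auth_key = secrets.token_bytes(32)       # held by the authentication pipeline
analytics_key = secrets.token_bytes(32)  # held by the (unauthorised) analytics team

sealed = seal(b"voiceprint-template", auth_key)
assert unseal(sealed, auth_key) == b"voiceprint-template"
assert unseal(sealed, analytics_key) != b"voiceprint-template"  # wrong key fails
```

The design point is that unauthorised purpose access becomes technically infeasible rather than merely policy-prohibited: provisioning a database replica to another team, as in Scenario C, yields only ciphertext unless the governance process also releases the purpose key.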
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Biometric Purpose Register Existence and Completeness
Test 8.2: Technical Purpose Enforcement — Unauthorised Purpose Rejection
Test 8.3: Purpose Tagging at Collection
Test 8.4: Governance Review Gate for Purpose Expansion
Test 8.5: Immutable Audit Logging of Biometric Access
Test 8.6: Purpose-Mismatched Access Request Rejection
Test 8.7: DPIA Completion Before Purpose Activation
Test 8.8: Cross-Consent-Basis Merge Prevention
| Regulation | Provision | Relationship Type |
|---|---|---|
| GDPR | Article 5(1)(b) (Purpose Limitation) | Direct requirement |
| GDPR | Article 9(1) (Special Category — Biometric Data) | Direct requirement |
| GDPR | Article 35 (Data Protection Impact Assessment) | Direct requirement |
| EU AI Act | Article 6 & Annex III (High-Risk Classification — Biometric Identification) | Direct requirement |
| EU AI Act | Article 10 (Data and Data Governance) | Supports compliance |
| Illinois BIPA | Section 15(a)–(e) (Collection, Use, and Storage of Biometric Data) | Direct requirement |
| Texas CUBI | Business & Commerce Code Chapter 503 | Supports compliance |
| NIST AI RMF | GOVERN 1.7 (Data Governance), MAP 2.3 | Supports compliance |
| ISO 42001 | Clause 6.1.2 (AI Risk Assessment), Annex A.8 | Supports compliance |
| CCPA/CPRA | Cal. Civ. Code 1798.140(c) (Biometric Information Definition) | Supports compliance |
Article 5(1)(b) establishes the purpose limitation principle as a foundational pillar of EU data protection law: personal data must be "collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes." For biometric data, Article 9(1) imposes an additional layer of restriction by classifying biometric data processed for identification purposes as special category data, which may be processed only under one of the exhaustive legal bases in Article 9(2) — most commonly explicit consent under Article 9(2)(a). The combination of Articles 5(1)(b) and 9 means that biometric purpose limitation is doubly constrained: the purpose must be specified and the legal basis must be explicit. AG-669 operationalises both requirements by mandating a Biometric Purpose Register (specified purposes), technical enforcement of purpose boundaries (no incompatible further processing), and governance review for purpose expansion (re-assessment of legal basis). The EUR 14.5 million fine in Scenario B reflects the supervisory authority's view that repurposing voice biometric data from authentication to emotion inference is not a "compatible" further purpose under Article 6(4), and that the processing lacked a valid Article 9(2) basis.
BIPA is the most consequential biometric privacy statute in the United States and the most litigated. Section 15(a) requires a written policy establishing a retention schedule and destruction guidelines. Section 15(b) requires written informed consent before collecting biometric identifiers, specifying the purpose for collection and the length of storage. Section 15(d) prohibits disclosure of biometric identifiers without consent. Critically, the Illinois Supreme Court's 2019 Rosenbach v. Six Flags decision held that a plaintiff need not allege actual harm to bring a BIPA claim — the statutory violation itself is sufficient. This means that purpose creep affecting millions of biometric records creates per-violation statutory damages that can reach catastrophic levels. The $47 billion theoretical exposure in Scenario A — 9.4 million violations at $5,000 each — illustrates why BIPA compliance requires technical enforcement, not merely procedural safeguards. AG-669's requirement for programmatic purpose rejection (Requirement 4.6) directly addresses the BIPA risk by preventing purpose-violating access at the point of occurrence.
The EU AI Act classifies biometric identification systems as high-risk under Article 6(2) read with Annex III, Point 1. High-risk systems are subject to the full compliance framework, including data governance requirements under Article 10. Article 10(2) requires that training, validation, and testing datasets are "relevant, sufficiently representative, and to the extent possible, free of errors and complete" — but crucially, Article 10(5) requires that special category data under GDPR Article 9(1) is processed only to the extent strictly necessary. This means that biometric data used to train or validate high-risk AI systems must be purpose-limited to the training purpose, and may not be repurposed for operational inference or analytics without independent legal basis. AG-669's governance review gate (Requirement 4.4) and technical purpose enforcement (Requirement 4.2) ensure that biometric data consumed by AI training pipelines respects the purpose limitations imposed by both GDPR and the AI Act.
GOVERN 1.7 addresses processes for data governance, including data collection, processing, and use limitations. MAP 2.3 addresses the identification of risks related to data, including data that is repurposed beyond its original collection context. AG-669 operationalises both by requiring explicit purpose documentation (the Biometric Purpose Register), governance review for purpose changes, and technical controls that prevent repurposing. Organisations seeking alignment with the NIST AI RMF should demonstrate that their biometric purpose limitation controls address both the governance (GOVERN) and mapping (MAP) functions of the framework.
The California Consumer Privacy Act, as amended by the California Privacy Rights Act, defines biometric information broadly and classifies it as sensitive personal information subject to additional protections under Cal. Civ. Code 1798.121. Consumers have the right to limit the use of sensitive personal information to purposes that are "necessary to perform the services or provide the goods reasonably expected by an average consumer." This "necessity" standard imposes a purpose limitation that is tighter than general-purpose processing: biometric data collected for authentication may not be repurposed for marketing analytics because marketing is not "necessary" to perform the authentication service. AG-669's purpose register and technical enforcement mechanisms support CCPA/CPRA compliance by ensuring that biometric processing is restricted to the purposes that are necessary for the service context.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Population-scale — affects every individual whose biometric data is processed beyond the authorised purpose, potentially millions of data subjects across multiple jurisdictions |
Consequence chain: Biometric purpose limitation fails when a new processing purpose is activated without governance review, when biometric data is provisioned to an unauthorised pipeline, or when a vendor feature expansion introduces processing beyond the original purpose statement. The immediate technical consequence is that biometric data — facial templates, voiceprints, fingerprint minutiae — is consumed by a system or agent that was never authorised to process it. Because biometric data is inherently re-identifiable, every unauthorised access event creates a privacy violation for every affected data subject. The violation accumulates silently: no alert fires, no access is denied, no data subject is notified. The unauthorised processing continues until it is discovered through audit, complaint, or litigation — typically months or years later. By the time of discovery, the scope of the violation is measured in millions of records and thousands of data subjects. The legal exposure is catastrophic in BIPA jurisdictions, where per-violation statutory damages create aggregate exposure that exceeds the organisation's market capitalisation. In GDPR jurisdictions, the fine for unlawful processing of special category data reaches up to EUR 20 million or 4% of global annual turnover, whichever is greater. Beyond financial penalties, the organisation faces mandatory deletion of all data processed in violation — including any models trained on the data, any derived analytics, and any decisions made using the unauthorised processing. If the biometric data was shared with third parties or law enforcement, the organisation faces additional liability for each downstream recipient. The reputational impact compounds the regulatory impact: data subjects who consented to biometric authentication lose trust in all biometric systems operated by the organisation, resulting in opt-out rates that undermine the legitimate biometric services the organisation intended to provide. 
For public sector organisations, the consequence extends to political accountability, legislative backlash, and potential moratoriums on government biometric programmes. The consequence chain terminates in a permanent reduction in the organisation's ability to deploy biometric technology for any purpose — a strategic loss that far exceeds the cost of implementing the purpose limitation controls that would have prevented it.
Cross-references: AG-001 (Operational Boundary Enforcement) defines the boundaries within which agents operate; AG-669 applies that principle specifically to biometric processing purposes. AG-029 (Data Classification Enforcement) classifies biometric data as sensitive; AG-669 ensures that classification drives purpose-specific access controls. AG-033 (Consent Lifecycle Governance) manages consent; AG-669 ensures that consent boundaries are technically enforced for biometric data. AG-036 (Data Retention & Disposal Governance) governs retention; AG-669 adds purpose-specific retention for biometric data. AG-037 (Anonymisation & Pseudonymisation Governance) governs de-identification; AG-669 addresses the fact that biometric templates may resist effective anonymisation. AG-040 (Sensitive Category Data Processing Governance) governs special category processing; AG-669 operationalises that governance for the biometric modality. AG-055 (Audit Trail Immutability & Completeness) ensures audit logs are tamper-evident; AG-669 requires that biometric access logs meet that standard. AG-210 (Multi-Jurisdictional Regulatory Mapping) maps regulatory requirements across jurisdictions; AG-669 requires that purpose limitation controls satisfy all applicable biometric privacy laws simultaneously. AG-670 (Liveness Verification) addresses spoofing at the point of biometric capture; AG-669 addresses what happens after capture — ensuring the captured data is used only for approved purposes. AG-673 (Biometric Template Protection) protects template integrity; AG-669 protects purpose integrity. AG-674 (Cross-Context Biometric Reuse) addresses the specific risk of templates being reused across contexts; AG-669 provides the governance framework within which reuse decisions are assessed. AG-677 (Consent and Notice for Biometrics) ensures data subjects are informed; AG-669 ensures that informational promises are technically enforced.