AG-671

Emotion Inference Restriction Governance

Biometrics, Emotion & Identity Analytics · ~31 min read · AGS v2.1 · April 2026
Tags: EU AI Act · GDPR · NIST · ISO 42001

2. Summary

Emotion Inference Restriction Governance requires organisations to prevent AI agents from inferring, classifying, or acting upon human emotions, moods, affective states, or psychological dispositions unless the specific use case is both lawful in all applicable jurisdictions and demonstrably justified by a documented necessity that cannot be met by less intrusive means. Emotion inference — also referred to as affect recognition, sentiment detection from biometric signals, or emotion AI — encompasses any computational process that takes as input facial expressions, vocal prosody, gait, posture, physiological signals (heart rate, galvanic skin response, pupil dilation), keystroke dynamics, or other behavioural and biometric data, and produces as output a classification, score, or ranking purporting to describe the subject's emotional state, affective disposition, or psychological condition.

This dimension establishes a default-deny posture: emotion inference capabilities are prohibited unless a specific exception has been authorised through a documented governance process that includes legal review, proportionality assessment, scientific validity evaluation, and ongoing monitoring for discriminatory impact. The default-deny posture reflects the convergence of regulatory prohibitions (the EU AI Act's classification of emotion recognition in workplaces and educational institutions as prohibited), scientific criticism (the absence of consensus that facial expressions or vocal patterns map reliably to internal emotional states across populations), and fundamental rights concerns (emotion inference creates asymmetric surveillance power that chills expression and association).

3. Example

Scenario A — Hiring Platform Deploys Affect Analysis in Video Interviews: A multinational employer deploys a Customer-Facing Agent that conducts initial screening of video interviews for graduate recruitment. The agent's vendor integrates an emotion inference module that scores candidates on "enthusiasm," "confidence," and "stress tolerance" by analysing facial micro-expressions, vocal pitch variation, and speech cadence during 15-minute recorded interviews. The scores are combined with CV-parsing results to rank candidates. In the first recruitment cycle, the system processes 14,200 video interviews across 8 countries. An internal review triggered by a discrimination complaint reveals three failures. First, the emotion inference module has never been validated for the demographic composition of the candidate pool — candidates whose first language is not English, candidates with facial paralysis or neurodivergent presentation (autism spectrum conditions affecting facial expression), and candidates using low-bandwidth connections (which degrade video quality and distort micro-expression analysis) receive systematically lower "enthusiasm" and "confidence" scores. Second, the system operates in Germany, France, and the Netherlands, where the EU AI Act Article 5(1)(f) prohibits emotion recognition systems in workplaces — the hiring context falls squarely within the prohibition. Third, no candidate was informed that emotion inference was being performed; the privacy notice referenced only "automated video interview analysis" without disclosing the affective component. The employer faces regulatory investigation in three EU member states, withdraws 14,200 interview decisions for re-evaluation at a cost of £2.8 million, and settles a discrimination claim for £680,000.

What went wrong: The organisation deployed emotion inference without a jurisdictional legality assessment, without scientific validity evaluation for the target population, without disclosure to data subjects, and without any governance process that would have identified the EU AI Act prohibition before deployment. The vendor's marketing materials described the module as "behavioural analytics" rather than emotion recognition, obscuring the regulatory classification. No default-deny posture existed — the emotion inference module was treated as a feature enhancement rather than a prohibited practice requiring explicit authorisation.

Scenario B — Workplace Monitoring Agent Infers Employee Stress Levels: A large logistics company deploys an Enterprise Workflow Agent to monitor warehouse employee productivity. A software update introduces a "wellbeing analytics" module that analyses voice patterns during team radio communications and webcam feeds from break rooms to estimate employee stress levels. The stated purpose is to identify workers at risk of burnout and trigger wellness interventions. The system classifies 340 employees into "low stress," "moderate stress," and "high stress" categories weekly. Over six months, managers begin using the stress classifications informally to make shift assignment decisions — employees classified as "high stress" are removed from overtime rotas and passed over for team leader promotions. An employment tribunal claim by a passed-over employee reveals that the "high stress" classification correlates strongly with female employees returning from parental leave and employees with disclosed mental health conditions. The emotion inference module was never subjected to an equality impact assessment. The company faces a £1.4 million tribunal award for indirect discrimination, a data protection authority enforcement notice for unlawful processing of special category data (health-related inferences), and a £3.2 million remediation programme to remove the module, retrain managers, and review all affected employment decisions.

What went wrong: The emotion inference capability was introduced through a routine software update without triggering any governance review. The "wellbeing" framing disguised what was functionally workplace emotion surveillance. No legality assessment was performed — the EU AI Act Article 5(1)(f) prohibits emotion recognition in the workplace, and the processing of inferred emotional states constitutes special category data under GDPR Article 9. No proportionality assessment evaluated whether less intrusive means (anonymous surveys, occupational health referrals) could achieve the stated wellbeing objective. The downstream use of emotion classifications for employment decisions was neither anticipated nor governed.

Scenario C — Public Safety Agent Infers Aggression from CCTV Feeds: A municipal government deploys a Safety-Critical Agent integrated with public CCTV to identify individuals displaying "aggressive" emotional states in a city-centre pedestrian zone. The system analyses gait patterns, facial expressions, and body posture to assign an "aggression probability" score. Scores above a threshold trigger an alert to police dispatch. In the first 90 days, the system generates 2,847 alerts. A civil liberties organisation obtains performance data through a freedom-of-information request and reveals that 78% of alerts are false positives — individuals flagged as "aggressive" who were subsequently observed or contacted and displayed no aggressive behaviour. Furthermore, the alert rate is 3.4 times higher for Black males aged 18-30 than for white males of the same age group, after controlling for location and time of day. The municipality faces a judicial review for discriminatory surveillance, a data protection authority investigation for unlawful processing of biometric data for law enforcement purposes without adequate legal basis, and sustained public opposition that forces discontinuation of the programme. Total wasted expenditure: £4.6 million in procurement, deployment, and legal costs.

What went wrong: The system inferred emotional states (aggression) from physical appearance and behaviour — a use case with no validated scientific basis and severe discriminatory impact. No governance process evaluated the scientific validity of inferring aggression from gait, posture, and facial expression. No fairness assessment tested for demographic bias prior to deployment. No proportionality assessment considered whether the same public safety objective could be achieved through less intrusive means (increased patrols, environmental design, community engagement). The emotion inference was dressed in public safety language ("threat detection") that obscured its fundamental nature as affect recognition.

4. Requirement Statement

Scope: This dimension applies to any AI agent that processes human biometric, behavioural, or physiological data in a manner capable of producing inferences about emotional states, moods, affective dispositions, psychological conditions, or personality traits. The scope includes agents that perform emotion inference as a primary function (dedicated emotion recognition systems) and agents that perform emotion inference as a secondary or embedded function (agents that incorporate affect analysis within a broader decision pipeline). The scope covers all modalities of emotion inference: facial expression analysis, vocal prosody and speech pattern analysis, physiological signal analysis (heart rate, galvanic skin response, electrodermal activity, pupil dilation), gait and posture analysis, keystroke and interaction pattern analysis, and multimodal fusion of these signals. The scope extends to agents that produce outputs functionally equivalent to emotion inference — such as "engagement scoring," "attentiveness detection," "stress estimation," "sentiment analysis from biometric signals," or "behavioural risk scoring" — regardless of the label applied to the output. Labelling an emotion inference function as "behavioural analytics" or "wellbeing monitoring" does not remove it from scope. The scope excludes text-based sentiment analysis of written communications (which is governed by AG-040 and other applicable dimensions) unless the text analysis is combined with biometric signal processing.
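Because relabelling does not remove a capability from scope, scope screening works best when it keys on input modalities and output semantics rather than on product names. A minimal sketch of such a label-agnostic screen, in Python; the keyword sets are illustrative assumptions that a real deployment would maintain as a governed taxonomy:

```python
# Label-agnostic scope screen: flags components whose inputs and outputs
# indicate emotion inference, regardless of the marketing label applied.
# Keyword sets are illustrative assumptions, not an exhaustive taxonomy.

BIOMETRIC_MODALITIES = {
    "facial_expression", "vocal_prosody", "gait", "posture",
    "heart_rate", "galvanic_skin_response", "pupil_dilation",
    "keystroke_dynamics",
}

AFFECT_OUTPUT_TERMS = {
    "emotion", "mood", "affect", "stress", "engagement",
    "attentiveness", "enthusiasm", "aggression", "wellbeing",
    "sentiment", "frustration",
}

def in_scope(input_modalities: set[str], output_description: str) -> bool:
    """True when a component falls within AG-671 scope."""
    uses_biometrics = bool(input_modalities & BIOMETRIC_MODALITIES)
    produces_affect = any(
        term in output_description.lower() for term in AFFECT_OUTPUT_TERMS
    )
    # Text-only sentiment analysis is out of scope unless it is combined
    # with biometric signal processing, hence the conjunction.
    return uses_biometrics and produces_affect

# A module sold as "behavioural analytics" is still in scope:
assert in_scope({"facial_expression", "vocal_prosody"},
                "Behavioural analytics: engagement and stress scoring")
```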

4.1. A conforming system MUST maintain a default-deny posture for emotion inference: no AI agent shall perform emotion inference unless a specific, documented authorisation has been granted through the governance process defined in this dimension.
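One way the default-deny posture might be enforced at runtime is a gate that consults the authorisation record before any inference executes. The sketch below assumes a hypothetical in-memory record, standing in for the governed Emotion Inference Register of Requirement 4.2:

```python
class EmotionInferenceDenied(Exception):
    """Raised when an unauthorised emotion inference is attempted."""

# Hypothetical in-memory record; a real system would consult the governed
# Emotion Inference Register (Requirement 4.2) instead.
AUTHORISED_USE_CASES: dict[str, str] = {}  # use_case_id -> status

def require_authorisation(use_case_id: str) -> None:
    """Default-deny gate: anything not explicitly authorised is blocked."""
    status = AUTHORISED_USE_CASES.get(use_case_id, "denied")
    if status != "authorised":
        raise EmotionInferenceDenied(
            f"use case {use_case_id!r} has status {status!r}; "
            "emotion inference is denied by default"
        )

def infer_emotion(use_case_id: str, signal: bytes) -> str:
    require_authorisation(use_case_id)  # the gate runs before any model call
    raise NotImplementedError("inference proceeds only past the gate")
```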

4.2. A conforming system MUST maintain an Emotion Inference Register that catalogues every instance where emotion inference is performed or proposed, recording: the specific emotional states or affective dimensions being inferred, the input modalities used, the intended purpose, the legal basis under each applicable jurisdiction, the scientific validity evidence, and the authorisation status (authorised, denied, or pending).
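A possible shape for a register entry, sketched as a Python dataclass; the field names track Requirement 4.2, but the types and enumeration values are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class AuthorisationStatus(Enum):
    AUTHORISED = "authorised"
    DENIED = "denied"
    PENDING = "pending"

@dataclass
class EmotionInferenceRegisterEntry:
    use_case_id: str
    inferred_states: list[str]                   # e.g. ["stress", "engagement"]
    input_modalities: list[str]                  # e.g. ["vocal_prosody"]
    intended_purpose: str
    legal_basis_by_jurisdiction: dict[str, str]  # e.g. {"DE": "prohibited"}
    scientific_validity_evidence: list[str]      # citations or report IDs
    status: AuthorisationStatus = AuthorisationStatus.PENDING
```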

4.3. A conforming system MUST perform a jurisdictional legality assessment before any emotion inference capability is deployed, covering all jurisdictions in which the agent operates or in which data subjects are located, explicitly evaluating whether the intended use falls within a prohibited category under applicable law — including, but not limited to, the EU AI Act Article 5(1)(f) prohibition on emotion recognition in workplaces and educational institutions, and any analogous national or subnational prohibition.
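The Article 5(1)(f) test can be encoded as a hard rule inside the legality assessment. The sketch below is illustrative only: the jurisdiction and context codes are assumptions, the member-state set is deliberately truncated, and a real assessment would consume the maintained regulatory mapping (AG-029) rather than a hard-coded table:

```python
EU_MEMBER_STATES = {"DE", "FR", "NL", "IT", "ES"}   # illustrative subset
PROHIBITED_CONTEXTS_EU = {"workplace", "education"}

def legality_check(jurisdiction: str, deployment_context: str,
                   medical_or_safety_exception: bool = False) -> str:
    """Encode the EU AI Act Article 5(1)(f) prohibition as a hard rule.

    Returns "prohibited" for in-scope EU workplace/education uses; every
    other outcome still requires the full legality, validity,
    proportionality, and fairness assessments, never automatic approval.
    """
    if (jurisdiction in EU_MEMBER_STATES
            and deployment_context in PROHIBITED_CONTEXTS_EU
            and not medical_or_safety_exception):
        return "prohibited"
    return "requires_full_assessment"

assert legality_check("DE", "workplace") == "prohibited"
```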

4.4. A conforming system MUST perform a scientific validity assessment for each emotion inference use case, evaluating whether the claimed relationship between the input modality (facial expression, vocal pattern, physiological signal) and the inferred emotional state is supported by peer-reviewed scientific evidence, is robust across the demographic composition of the target population, and achieves a documented accuracy threshold that is appropriate for the downstream decision context.
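Scoring the evidence is necessarily expert judgment, but the outcome can be recorded in a structure that the authorisation workflow enforces mechanically. A hedged sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class ValidityAssessment:
    use_case_id: str
    peer_reviewed_support: bool   # claimed input-output link has published evidence
    robust_for_population: bool   # validated on the target demographic mix
    measured_accuracy: float      # measured on a representative test set
    required_accuracy: float      # threshold set by the downstream decision context

    def passes(self) -> bool:
        """All three conditions of Requirement 4.4 must hold."""
        return (self.peer_reviewed_support
                and self.robust_for_population
                and self.measured_accuracy >= self.required_accuracy)
```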

4.5. A conforming system MUST perform a proportionality assessment for each emotion inference use case, evaluating whether the stated objective can be achieved through less intrusive means that do not require inferring emotional states, and documenting the specific reasons why emotion inference is necessary rather than merely convenient.

4.6. A conforming system MUST perform a fairness impact assessment for each emotion inference use case before deployment, testing for differential accuracy, differential false positive rates, and differential false negative rates across protected demographic groups — including but not limited to race, ethnicity, sex, age, disability status, neurodivergent conditions, and cultural background — using representative test populations.
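A sketch of the core differential-error computation on a labelled, representative test set; the record format and the use of the largest pairwise gap as a deployment blocker are assumptions, and acceptable tolerances must be set per decision context:

```python
from collections import defaultdict

def differential_error_rates(records):
    """Per-group false positive and false negative rates.

    `records` is an iterable of (group, predicted_positive,
    actually_positive) tuples drawn from a labelled, representative
    test population.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            c["fn"] += 0 if predicted else 1
        else:
            c["neg"] += 1
            c["fp"] += 1 if predicted else 0
    return {
        g: {"fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0}
        for g, c in counts.items()
    }

def max_fpr_gap(rates: dict) -> float:
    """Largest pairwise FPR difference across groups; treat exceeding the
    documented tolerance for the use case as a deployment blocker."""
    fprs = [r["fpr"] for r in rates.values()]
    return max(fprs) - min(fprs)
```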

4.7. A conforming system MUST ensure that no emotion inference output is used as the sole or determinative factor in any decision that materially affects an individual's rights, opportunities, or access to services — including employment decisions, educational assessments, criminal justice decisions, insurance underwriting, credit decisions, or access to public services — without human review by a qualified person who has been informed that the input includes emotion inference data.
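Routing logic along these lines can make the human-review rule mechanical rather than discretionary; the decision-type list and the convention that affective inputs carry an "emotion_" key prefix are assumptions for illustration:

```python
MATERIAL_DECISION_TYPES = {
    "employment", "education", "criminal_justice",
    "insurance", "credit", "public_services",
}

def route_decision(decision_type: str, inputs: dict) -> str:
    """Never let emotion inference be the unreviewed factor in a
    material decision (Requirement 4.7)."""
    # Assumed convention: affective inputs carry an "emotion_" key prefix.
    uses_affect = any(key.startswith("emotion_") for key in inputs)
    if decision_type in MATERIAL_DECISION_TYPES and uses_affect:
        # The reviewer must be told the input includes emotion inference data.
        return "human_review_required_with_affect_disclosure"
    return "automated_processing_permitted"
```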

4.8. A conforming system MUST provide clear, specific, and timely notice to data subjects when emotion inference is performed, disclosing: the fact that emotional states are being inferred, the specific modalities used (facial analysis, voice analysis, physiological monitoring), the purpose of the inference, the downstream decisions that may be influenced, and the mechanism for objecting to or opting out of the inference.

4.9. A conforming system MUST implement technical controls that prevent emotion inference outputs from being persisted, shared, or repurposed beyond the specific authorised use case documented in the Emotion Inference Register, including automated deletion or anonymisation of emotion inference data after the authorised retention period.
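A minimal sketch of an automated retention sweep; the in-memory store and record schema are stand-ins for whatever persistence layer actually holds the outputs, and a real implementation would log each deletion as evidence:

```python
import time

def purge_expired_inferences(store: list[dict],
                             retention_seconds: float) -> list[dict]:
    """Drop emotion inference records older than the authorised
    retention period (Requirement 4.9)."""
    cutoff = time.time() - retention_seconds
    kept = [rec for rec in store if rec["created_at"] >= cutoff]
    # A production implementation would run on a schedule, delete from
    # the real persistence layer, and log each deletion as evidence.
    return kept
```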

4.10. A conforming system MUST conduct ongoing monitoring of deployed emotion inference systems at intervals not exceeding 6 months, re-evaluating jurisdictional legality (accounting for regulatory changes), scientific validity (accounting for new research), fairness impact (using production data rather than test data), and proportionality (accounting for the availability of less intrusive alternatives that may have emerged since initial authorisation).
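A simple cadence check can surface use cases whose re-evaluation is overdue; the 182-day interval approximates the six-month ceiling in Requirement 4.10, and the date handling is simplified:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=182)  # approximates "not exceeding 6 months"

def overdue_reviews(last_reviewed: dict[str, date],
                    today: date | None = None) -> list[str]:
    """Use-case IDs whose legality, validity, fairness, and
    proportionality re-evaluation is overdue (Requirement 4.10)."""
    today = today or date.today()
    return [use_case for use_case, reviewed in last_reviewed.items()
            if today - reviewed > REVIEW_INTERVAL]
```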

4.11. A conforming system SHOULD implement technical architecture controls — such as feature flags, modular pipeline design, or capability-based access control — that enable emotion inference components to be disabled independently without affecting the remainder of the agent's functionality.
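The modular-disable pattern can be as simple as constructing the pipeline behind a flag that defaults to off, so the affect stage can be removed without touching the rest of the agent; the stage names below are purely illustrative:

```python
def build_pipeline(config: dict) -> list[str]:
    """Assemble the agent pipeline. The affect stage is included only
    when its flag is explicitly on (default off), so it can be disabled
    without affecting the rest of the agent (Requirement 4.11)."""
    stages = ["load_input", "transcribe_speech", "classify_intent"]
    if config.get("emotion_inference_enabled", False):
        stages.insert(2, "infer_affect")
    return stages

assert "infer_affect" not in build_pipeline({})  # absent flag means disabled
```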

4.12. A conforming system SHOULD require that the authorisation decision for each emotion inference use case is made by an individual or committee with governance authority independent of the business unit proposing the use case, to prevent commercial incentives from overriding rights and proportionality considerations.

4.13. A conforming system MAY implement real-time confidence thresholds that suppress emotion inference outputs when the system's confidence in the inference falls below a defined minimum, reducing the propagation of low-confidence affective classifications into downstream decisions.
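A sketch of the confidence-threshold suppression described in Requirement 4.13; the default floor is a placeholder to be set per decision context:

```python
def gated_output(label: str, confidence: float,
                 min_confidence: float = 0.9) -> str | None:
    """Suppress low-confidence affective classifications so they never
    reach downstream decisions (Requirement 4.13)."""
    if confidence < min_confidence:
        return None  # suppressed; optionally logged for monitoring
    return label
```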

4.14. A conforming system MAY implement subject-initiated emotion inference challenge mechanisms that allow data subjects to contest a specific emotion inference and request human re-evaluation of the underlying data.

5. Rationale

Emotion inference occupies a uniquely problematic position in the landscape of AI capabilities. Unlike object recognition, speech-to-text transcription, or document classification — where the relationship between input and output is empirically grounded and verifiable — emotion inference rests on contested scientific foundations and produces outputs that are inherently unverifiable by external observation. No third party can confirm whether an individual is "stressed," "enthusiastic," or "aggressive" by examining a facial expression or vocal pattern. The inference is a probabilistic guess projected onto the subject, and the subject's own self-report of their emotional state may contradict the system's classification — yet the system's classification, not the subject's self-report, is what enters the decision pipeline.

The scientific critique is substantial and directly relevant to governance. The theory of basic emotions — the proposition that a small set of universal emotions (happiness, sadness, anger, fear, disgust, surprise) maps reliably to distinct facial expressions — has been challenged by decades of research demonstrating that facial expressions are culturally variable, context-dependent, and an unreliable indicator of internal emotional states. A 2019 review by Lisa Feldman Barrett and colleagues, published in Psychological Science in the Public Interest, concluded that facial movements are neither reliable nor specific indicators of particular emotional states, and that the same facial configuration can correspond to different emotions depending on context, culture, and individual variation. This scientific uncertainty means that emotion inference systems deployed at scale will produce systematic errors that disproportionately affect populations whose emotional expression patterns diverge from the training data — typically Western, neurotypical, young adult populations.

The regulatory trajectory is unambiguously restrictive. The EU AI Act, finalised in 2024, classifies emotion recognition in workplaces and educational institutions as a prohibited practice under Article 5(1)(f). The prohibition reflects the European Parliament's assessment that workplace and educational emotion surveillance creates an unacceptable power asymmetry between the observer (employer, institution) and the observed (employee, student). The prohibition is categorical: apart from the narrow carve-out for systems placed on the market for medical or safety reasons, it admits no exceptions based on consent, proportionality, or claimed benefit. Beyond the EU, the Illinois Biometric Information Privacy Act (BIPA) requires informed consent before collecting biometric identifiers, which courts have interpreted to include facial geometry data used for emotion analysis. Maryland's HB 1202 requires applicant consent before facial recognition services are used in employment interviews, capturing emotion inference from interview video. The regulatory environment is expanding, not contracting.

The fundamental rights dimension extends beyond privacy. Emotion inference creates a surveillance capability that chills freedom of expression, freedom of assembly, and freedom of thought. If individuals know that their emotional states are being monitored and classified — in workplaces, public spaces, educational settings, or commercial interactions — they modify their behaviour to conform to expected emotional norms. This chilling effect is particularly acute for individuals whose natural emotional expression diverges from majority norms: neurodivergent individuals whose facial expressions do not conform to neurotypical patterns, individuals from cultures with different norms for emotional display, individuals with conditions affecting facial musculature or vocal production, and individuals who are simply more emotionally expressive or reserved than the population average. Emotion inference does not merely observe — it normalises, creating pressure to perform the "correct" emotional state rather than express authentic internal experience.

The preventive posture of this dimension — default-deny rather than default-allow with safeguards — reflects the combined weight of these concerns. A default-allow approach would require every organisation to independently evaluate the scientific validity, legal permissibility, and ethical proportionality of each emotion inference use case — an evaluation that most organisations are not equipped to perform. A default-deny approach ensures that emotion inference is deployed only where a governance process has affirmatively determined that the specific use case is lawful, scientifically grounded, proportionate, and fair. This approach aligns with the precautionary principle embedded in EU fundamental rights law and with the regulatory direction across jurisdictions.

6. Implementation Guidance

Emotion Inference Restriction Governance requires both organisational process controls (governance workflows for authorisation, review, and monitoring) and technical controls (architectural mechanisms to enforce the default-deny posture and prevent unauthorised emotion inference). The core implementation challenge is detection — identifying when an agent performs or could perform emotion inference, including cases where the capability is embedded within a vendor-supplied component and not prominently disclosed.

Recommended patterns:

- Screen every agent component, including vendor-supplied modules, for emotion inference capability before deployment, keying on input modalities and output semantics rather than on product labels.
- Enforce the default-deny posture in the architecture itself, using feature flags, modular pipeline design, or capability-based access control so that affect components can be disabled independently (Requirement 4.11).
- Route every proposed use case through the Emotion Inference Register and the legality, validity, proportionality, and fairness assessments before any authorisation decision is made (Requirements 4.2–4.6).
- Require that authorisation decisions are made by a governance function independent of the proposing business unit (Requirement 4.12).

Anti-patterns to avoid:

- Accepting vendor labels such as "behavioural analytics", "engagement scoring", or "wellbeing monitoring" at face value; relabelling does not remove a capability from scope.
- Allowing emotion inference to arrive through routine software updates or vendor feature enhancements without triggering governance review, as in Scenario B.
- Relying on consent as the legal basis in workplaces or educational settings, where power asymmetry undermines freely given consent and the EU AI Act prohibition applies regardless.
- Permitting emotion inference outputs collected for one purpose (such as wellbeing) to migrate into unrelated decisions (such as shift assignment or promotion).

Industry Considerations

Retail and Customer Service. Customer-facing agents in retail environments may be offered vendor modules that infer customer satisfaction, frustration, or purchase intent from facial expressions or voice patterns. These use cases are not categorically prohibited under the EU AI Act (which targets workplaces and educational institutions), but they remain subject to GDPR special category data rules if the inference relates to health or psychological state, and they require explicit consent and proportionality assessment. Organisations should evaluate whether conventional feedback mechanisms (post-interaction surveys, NPS scores, complaint tracking) achieve the same objective without biometric emotion inference.

Education. The EU AI Act categorically prohibits emotion recognition in educational institutions. Agents deployed in educational settings — including online learning platforms, examination proctoring systems, and classroom monitoring tools — must have emotion inference capabilities fully disabled. "Engagement detection" and "attentiveness monitoring" in educational contexts fall within the prohibition when they rely on biometric signals to infer affective states.

Healthcare. Emotion inference in healthcare settings (e.g., monitoring patient distress, assessing mental health conditions) may be lawful under medical device and clinical research exemptions, but requires classification as a medical device, clinical validation, and informed patient consent. Organisations must not deploy healthcare emotion inference under general-purpose AI governance — it requires medical device regulatory compliance.

Law Enforcement and Public Safety. Emotion inference for law enforcement purposes (e.g., assessing deception, predicting aggression) is subject to heightened scrutiny under the EU AI Act, the Law Enforcement Directive, and national police data processing laws. The scientific validity of deception detection through biometric signals is widely disputed. Organisations should apply the strictest interpretation of proportionality and necessity requirements.

Maturity Model

Basic Implementation — The organisation has established a default-deny posture for emotion inference. An Emotion Inference Register exists and is populated with all known instances. Jurisdictional legality assessments have been completed for all deployed use cases. Emotion inference capabilities in prohibited jurisdictions or contexts (EU AI Act workplaces and educational institutions) have been disabled. Data subjects are notified when emotion inference is performed. This level meets the minimum mandatory requirements.

Intermediate Implementation — All basic capabilities plus: scientific validity assessments are conducted by qualified reviewers for each use case. Proportionality assessments are documented using a structured template. Fairness impact assessments are performed before deployment with representative test populations. Technical architecture controls enforce the default-deny posture (feature flags, capability-based access). Ongoing monitoring re-evaluates legality, validity, fairness, and proportionality at 6-month intervals. The Emotion Inference Register is integrated with the broader governance configuration under AG-001.

Advanced Implementation — All intermediate capabilities plus: production fairness monitoring disaggregates emotion inference accuracy by demographic group in real time. An independent scientific advisory function reviews all authorisation decisions. Architecture-level controls are cryptographically enforced for edge and robotic agents. Subject-initiated challenge mechanisms allow data subjects to contest specific emotion inferences. The organisation can demonstrate through empirical data that authorised emotion inference use cases maintain accuracy and fairness across all monitored demographic groups. Authorisation decisions are audited annually by an independent party.

7. Evidence Requirements

Required artefacts:

- The Emotion Inference Register, current and complete (Requirement 4.2).
- Jurisdictional legality assessments for every deployed or proposed use case (Requirement 4.3).
- Scientific validity, proportionality, and fairness impact assessments (Requirements 4.4–4.6).
- Copies of data subject notices and records of objection or opt-out mechanisms (Requirement 4.8).
- Ongoing monitoring reports at the six-month cadence, and records of authorisation decisions including the independence of the decision-maker (Requirements 4.10 and 4.12).

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Default-Deny Posture Verification

Test 8.2: Emotion Inference Register Completeness

Test 8.3: Jurisdictional Legality Assessment — EU AI Act Prohibition

Test 8.4: Scientific Validity Assessment Existence and Quality

Test 8.5: Proportionality Assessment Verification

Test 8.6: Fairness Impact Assessment — Demographic Differential Testing

Test 8.7: Data Subject Notice Verification

Test 8.8: Technical Scope Limitation — Persistence and Repurposing Prevention

Test 8.9: Ongoing Monitoring Cycle Compliance

Test 8.10: Human Review Requirement for Material Decisions

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 5(1)(f) (Prohibited Practices — Emotion Recognition) | Direct requirement
EU AI Act | Articles 6–7, Annex III (High-Risk AI System Classification) | Supports compliance
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU AI Act | Article 14 (Human Oversight) | Supports compliance
GDPR | Article 9 (Processing of Special Categories of Data) | Direct requirement
GDPR | Articles 13–14 (Information to Data Subjects) | Direct requirement
GDPR | Article 22 (Automated Individual Decision-Making) | Supports compliance
GDPR | Article 35 (Data Protection Impact Assessment) | Supports compliance
Illinois BIPA | Section 15 (Informed Consent for Biometric Identifiers) | Supports compliance
NIST AI RMF | MAP 2.3 (Scientific Integrity), MEASURE 2.6 | Supports compliance
ISO 42001 | Clause 6.1.2 (AI Risk Assessment), Annex A | Supports compliance

EU AI Act — Article 5(1)(f) (Prohibited Practices — Emotion Recognition)

Article 5(1)(f) prohibits AI systems that infer emotions of natural persons in the areas of workplace and education, except where the AI system is intended to be placed on the market for medical or safety reasons. This is a categorical prohibition — not a high-risk classification requiring safeguards, but an outright ban. The prohibition reflects the assessment that workplace and educational emotion surveillance creates unacceptable power asymmetries that cannot be mitigated through transparency, consent, or technical safeguards. AG-671 operationalises this prohibition by requiring jurisdictional legality assessments that explicitly test each emotion inference use case against Article 5(1)(f) and by implementing technical controls that enforce the prohibition at the architecture level. Organisations deploying agents in EU member states must verify that no emotion inference component operates in a workplace or educational context, regardless of the component's labelling. A vendor module marketed as "engagement analytics" that infers affect from biometric signals in a workplace context falls within the prohibition. The narrow exceptions for medical and safety purposes require specific legal analysis and cannot be assumed.

GDPR — Article 9 (Special Categories of Data)

Emotion inference from biometric data produces outputs that may constitute special category data under GDPR Article 9 — specifically data concerning health (inferred psychological or emotional conditions) or biometric data processed for the purpose of uniquely identifying a natural person. Processing special category data is prohibited unless an Article 9(2) exception applies. Explicit consent under Article 9(2)(a) is the most commonly invoked exception, but consent in employment and educational contexts is rarely freely given due to the power imbalance between employer/institution and employee/student — a consideration explicitly acknowledged in GDPR Recital 43. AG-671 addresses this by requiring that the legal basis for emotion inference is assessed per jurisdiction, accounting for the practical limitations of consent as a legal basis in asymmetric power relationships.

GDPR — Articles 13–14 (Information to Data Subjects)

Articles 13 and 14 require that data subjects are informed about the processing of their personal data, including the purposes of processing, the categories of data processed, and the existence of automated decision-making under Article 22. Emotion inference triggers all of these requirements: the data subject must be informed that emotional states are being inferred (purpose), that facial, vocal, or physiological data is being processed (categories), and that the inference may influence decisions affecting them (automated decision-making). AG-671's notice requirement (Requirement 4.8) operationalises Articles 13–14 for the specific context of emotion inference, requiring disclosure that is specific enough to be meaningful — "we analyse your facial expressions to infer your emotional state" rather than "we process your data to improve our services."

Illinois BIPA — Section 15

BIPA requires private entities to obtain informed written consent before collecting biometric identifiers or biometric information. Courts have interpreted "biometric identifier" to include facial geometry data, which is the primary input for facial emotion inference systems. BIPA's consent requirement is stricter than GDPR's — it requires written consent, a published retention and destruction policy, and prohibits the sale or profit from biometric data. AG-671's notice and consent requirements support BIPA compliance for agents deployed in Illinois or processing data of Illinois residents, though organisations must verify that their specific implementation meets BIPA's written consent standard.

NIST AI RMF — MAP 2.3 (Scientific Integrity)

MAP 2.3 addresses the scientific integrity of AI systems, including the validity of the claims made about system capabilities and the soundness of the evidence supporting those claims. Emotion inference systems make scientific claims — that facial expressions map to emotional states, that vocal patterns indicate stress, that physiological signals reveal affect. AG-671's scientific validity assessment requirement (Requirement 4.4) directly supports MAP 2.3 by requiring organisations to evaluate whether these claims are supported by peer-reviewed evidence and are generalisable to the target population. This is particularly important given the contested scientific status of emotion inference — an area where vendor marketing claims frequently exceed what the scientific evidence supports.

10. Failure Severity

Field | Value
Severity Rating | Critical
Blast Radius | Cross-population — affects every individual subjected to unauthorised or ungoverned emotion inference, with amplified impact on protected demographic groups

Consequence chain: Ungoverned emotion inference produces a cascade of harms across legal, scientific, ethical, and operational dimensions. The immediate failure is the deployment of an emotion inference capability without legality assessment — the organisation unknowingly operates a prohibited practice in jurisdictions where emotion recognition is banned (EU AI Act workplaces and educational institutions), creating immediate regulatory non-compliance with potential penalties of up to EUR 35 million or 7% of global annual turnover under the EU AI Act's Article 99 penalty framework for prohibited practices.

Concurrently, the absence of scientific validity assessment means the system produces affect classifications based on contested or unfounded input-output relationships, generating systematic errors that are invisible to operators because emotional states are inherently unverifiable by external observation — no one can confirm whether the system's classification of "stressed" or "disengaged" is correct. These unvalidated classifications flow into downstream decisions: hiring decisions that discriminate against neurodivergent candidates whose facial expressions do not match neurotypical norms, employee evaluations that penalise individuals from cultures with different emotional display norms, public safety alerts that disproportionately flag members of specific racial groups. The fairness impact is not marginal — studies of deployed emotion inference systems have documented accuracy differentials of 20-40% across demographic groups, with darker-skinned individuals, women, and older adults experiencing the highest error rates.

Without data subject notice, individuals cannot contest the inferences, creating an information asymmetry where the organisation holds affect classifications about individuals who do not know the classifications exist. Without persistence and repurposing controls, emotion inference data — originally collected for one purpose — migrates to other decision contexts: stress scores collected for "wellbeing" informing promotion decisions, affect classifications from customer interactions informing creditworthiness assessments. Each secondary use compounds the original harm. The remediation cost is characteristically high because emotion inference failures affect large populations processed over extended periods — re-evaluating decisions made on the basis of invalid or discriminatory affect classifications across thousands of individuals and multiple decision contexts.

Cross-references:

- AG-001 (Governance Configuration Control) provides the configuration management framework within which the Emotion Inference Register operates.
- AG-019 (Human Escalation & Override Triggers) ensures that human review is triggered when emotion inference outputs influence material decisions.
- AG-022 (Behavioural Drift Detection) monitors for drift in emotion inference model behaviour that may indicate degrading accuracy or emerging bias.
- AG-029 (Regulatory Compliance Mapping) maintains the jurisdictional mapping that AG-671 consumes for legality assessments.
- AG-033 (Fairness & Non-Discrimination Testing) provides the methodology framework for the fairness impact assessments required by AG-671.
- AG-040 (Sensitive Category Data Processing) governs the broader category of special category data processing within which emotion inference data falls.
- AG-055 (Data Minimisation & Retention) provides the retention and deletion framework that AG-671's persistence controls build upon.
- AG-084 (Model Training Data Governance) governs the training data used to build or fine-tune emotion inference models, ensuring training data representativeness and provenance.
- AG-210 (Prohibited Practice Screening) provides the overarching framework for identifying and preventing prohibited AI practices, of which certain emotion inference uses are a specific instance.
- AG-669 (Biometric Purpose Limitation) constrains biometric data to declared purposes, reinforcing AG-671's requirement that emotion inference outputs not be repurposed beyond the authorised use case.
- AG-677 (Consent and Notice for Biometrics) provides the notice and consent framework that AG-671 specialises for the emotion inference context.
- AG-678 (Biometric Redress) provides the redress mechanisms that data subjects can invoke when emotion inference produces incorrect or harmful classifications.

Cite this protocol
AgentGoverning. (2026). AG-671: Emotion Inference Restriction Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-671