Synthetic Identity Disclosure Governance requires that organisations disclose to affected parties whenever AI-generated personas, voices, avatars, images, video likenesses, or other synthetic content are used in interactions, communications, or published material. As generative AI produces increasingly realistic synthetic identities — voices indistinguishable from human speech, avatars that pass for live video, and text personas that mimic human conversational patterns — the absence of disclosure creates deception risk, erodes informed consent, and violates emerging regulatory mandates across multiple jurisdictions. This dimension mandates that every synthetic identity element be labelled, that the disclosure be timely and comprehensible to the recipient, and that the organisation maintain auditable records of all synthetic identity deployments and their associated disclosures.
Scenario A — Synthetic Voice Used in Debt Collection Without Disclosure: A consumer lending organisation deploys an AI agent to conduct outbound telephone calls to customers with overdue accounts. The agent uses a synthetic voice trained to sound warm, authoritative, and human. The voice is indistinguishable from a live human speaker in independent testing (passing a Turing-style voice evaluation with 94% of listeners believing it is human). Over a 6-month period, the agent makes 145,000 calls, collecting £8.3 million in overdue payments. At no point during any call does the agent disclose that the caller is an AI system using a synthetic voice. A consumer complaint triggers a regulatory investigation. The regulator determines that 145,000 consumers were deceived about the nature of the entity they were speaking with — a material factor because consumers may respond differently to AI collection agents than human collectors. The organisation faces an enforcement action citing unfair practices and deceptive communication.
What went wrong: The organisation treated the synthetic voice as a cosmetic feature rather than a material disclosure obligation. No policy existed requiring synthetic identity disclosure in voice channels. The voice was specifically designed to be indistinguishable from human speech, maximising the deception. The 145,000 consumers had no opportunity to make an informed decision about engaging with an AI system. Consequence: Regulatory enforcement action, £2.4 million fine for unfair collection practices, mandatory re-contact of 145,000 consumers to disclose the AI nature of previous calls, reputational damage, and £680,000 in remediation and compliance programme costs.
Scenario B — Synthetic Avatar in Financial Advisory Sessions Creates Trust Asymmetry: A wealth management firm deploys a customer-facing agent that appears on video calls as a photorealistic avatar — a professional-looking individual who makes eye contact, nods, and displays appropriate facial expressions. The avatar is rendered in real time using the firm's generative AI platform. Clients interact with the avatar believing they are speaking with a human financial adviser. A client relies on the avatar's investment recommendations, investing £340,000 in a high-risk portfolio. When the investment loses 35% (£119,000), the client discovers the "adviser" was an AI avatar. The client files a complaint alleging that the synthetic identity created a false sense of personal relationship and professional accountability that influenced the investment decision. The firm cannot demonstrate that any disclosure of the avatar's synthetic nature was provided.
What went wrong: The photorealistic avatar was designed to build trust and rapport — characteristics that clients associate with human professional relationships. The firm deployed the avatar without any disclosure mechanism, creating a trust asymmetry: the client trusted a synthetic identity as though it were a human professional with personal accountability. The investment decision was influenced by this trust. Consequence: Client complaint upheld, £119,000 in compensation, FCA supervisory review of AI deployment practices, mandatory disclosure retrofitting across all client-facing channels, £410,000 total remediation cost.
Scenario C — Synthetic Persona in Public Sector Benefits Application Creates Rights Concern: A government social services agency deploys an AI agent to assist citizens with benefits applications. The agent uses a text-based persona with a human name ("Sarah"), writes in first person ("I understand your situation"), references personal experiences ("I've helped many families in similar circumstances"), and never discloses its AI nature. A citizen with limited digital literacy interacts with "Sarah" for 3 hours completing a complex disability benefits application. The citizen discloses sensitive medical information believing they are communicating with a human caseworker bound by professional confidentiality obligations. The application is denied based on information the citizen provided to "Sarah" that they would not have disclosed to a known AI system. The citizen challenges the decision on the grounds that consent to process their information was obtained through deception — they consented to share information with a human caseworker, not an AI system.
What went wrong: The persona was designed to feel human — using a human name, first-person language, and empathetic framing — without any synthetic identity disclosure. In a rights-sensitive context where citizens disclose medical information, the absence of disclosure undermined informed consent. The citizen's decision to share sensitive information was based on a false understanding of who (or what) they were communicating with. Consequence: Benefits decision challenged on consent grounds, administrative review of all AI-assisted applications (4,200 cases), mandatory re-consent process for affected applicants, £560,000 in administrative remediation, legislative inquiry into AI use in public services.
Scope: This dimension applies to any AI agent deployment where synthetic identity elements are used in interactions with individuals who are not part of the deploying organisation's AI development or governance teams. Synthetic identity elements include, but are not limited to: synthetic voices (text-to-speech voices designed to sound human), synthetic avatars (visual representations designed to appear as real humans), synthetic personas (text-based identities that present as human through name, language style, or claimed personal attributes), synthetic likenesses (AI-generated images or videos of people who do not exist or who have not consented to their likeness being used), and synthetic content presented as human-authored (text, images, audio, or video generated by AI but presented without attribution to AI). The scope excludes synthetic elements that are obviously non-human (clearly robotic voices, cartoon avatars, chatbots explicitly labelled as AI) and internal use where all participants are aware of the AI nature. The test is: would a reasonable person, encountering this identity element without additional context, believe they are interacting with a human? If yes, this dimension applies in full.
4.1. A conforming system MUST disclose the synthetic nature of any AI-generated persona, voice, avatar, likeness, or content before or at the point of first interaction with the affected individual, using language and placement that a reasonable person would notice and understand.
4.2. A conforming system MUST maintain a registry of all synthetic identity elements deployed across the organisation, recording for each element: the type (voice, avatar, persona, likeness, content), the deployment context (channel, audience, purpose), the disclosure mechanism used, and the date of first deployment.
4.3. A conforming system MUST ensure that synthetic identity disclosures persist throughout the interaction — not only at the initial point of contact — so that individuals who join mid-interaction, return after a break, or forget the initial disclosure are reminded of the synthetic nature.
4.4. A conforming system MUST provide a mechanism for individuals to confirm, at any point during an interaction, whether the identity they are engaging with is synthetic or human, and the response to such a query MUST be truthful and immediate.
4.5. A conforming system MUST prohibit synthetic identity elements that are designed or configured to deny their synthetic nature when directly asked, including evasive responses, deflection, or silence in response to direct questions about AI or synthetic identity.
4.6. A conforming system MUST obtain and document appropriate consent before using a real person's likeness, voice, or identity as the basis for a synthetic identity element, separate from and in addition to any general terms of service.
4.7. A conforming system MUST assess and document the risk that each synthetic identity element creates deception, undue influence, or trust asymmetry, with heightened requirements for contexts involving vulnerable populations, financial decisions, legal rights, or health information.
4.8. A conforming system SHOULD implement technical watermarking or metadata embedding in synthetic voice, image, and video outputs that enables downstream detection of the synthetic nature even when the content is extracted from the original interaction context.
4.9. A conforming system SHOULD adapt disclosure mechanisms to the modality of the synthetic element — visual disclosures for visual avatars, auditory disclosures for synthetic voices, and textual disclosures for text-based personas — rather than relying on a single disclosure modality.
4.10. A conforming system MAY implement graduated disclosure that provides more detailed information about the synthetic identity (the AI model, training data provenance, capability limitations) when requested by the individual, beyond the baseline disclosure of synthetic nature.
The proliferation of synthetic identity elements in AI agent deployments creates a governance challenge that did not exist when AI systems were clearly distinguishable from humans. Early chatbots were obviously artificial — limited vocabulary, formulaic responses, text-only interfaces. Modern generative AI systems produce voices that are indistinguishable from human speech in blind tests, avatars that pass for live video participants, and text personas that exhibit conversational patterns, empathy cues, and personality traits associated with human communication. The gap between synthetic and human identity has narrowed to the point where disclosure is no longer optional — it is a prerequisite for informed interaction.
The regulatory landscape reflects this shift. The EU AI Act (Article 52) explicitly requires that individuals be informed when they are interacting with an AI system, with specific provisions for deepfakes and synthetic content. The FTC in the United States has issued guidance on AI-generated content requiring clear disclosure. China's Deep Synthesis Provisions mandate labelling of all deep synthesis content. The UK's Online Safety Act creates obligations around synthetic content. These regulations converge on a single principle: people have a right to know when they are interacting with or consuming synthetic content, particularly when that content is designed to appear human.
The risk analysis extends beyond regulatory compliance. Synthetic identities create trust asymmetry — the individual invests trust, empathy, and relational expectations in what they believe is a human, while the organisation benefits from that trust without the accountability, professional obligations, and personal stakes that come with human interaction. In financial services, this trust asymmetry can influence investment decisions, risk tolerance assessments, and complaint behaviour. In public services, it can influence what personal information citizens disclose and whether they exercise their rights. In healthcare, it can influence treatment adherence and symptom reporting. The asymmetry is not merely theoretical — research demonstrates that individuals interact differently with entities they believe are human versus entities they know are AI, making different disclosure decisions, accepting different recommendations, and exercising different levels of critical scrutiny.
There is also a compound risk when synthetic identity is combined with persuasion. A synthetic voice trained on patterns associated with authority and trustworthiness, deployed in a debt collection context, leverages human psychological responses to authority figures. A synthetic avatar with professional appearance and empathetic facial expressions, deployed in a financial advisory context, leverages human responses to perceived personal relationships. Without disclosure, these deployments exploit evolved human social responses — responses calibrated for interactions with actual humans — to advance organisational objectives. Disclosure does not prevent the use of persuasive synthetic identities; it ensures that the individual can calibrate their response appropriately.
The connection to AG-011 (Agent Identity Governance) is direct. AG-011 requires that each agent have a governed identity. AG-455 extends this to require that the synthetic nature of that identity be disclosed. An agent may have a well-governed identity under AG-011 but still violate AG-455 if the identity is synthetic and undisclosed. Similarly, AG-454 (AI Interaction Notice Placement Governance) addresses where and when AI interaction notices appear; AG-455 addresses the specific content of those notices when synthetic identity elements are involved — it is not enough to say "you are interacting with AI" if the synthetic voice, avatar, or persona creates an impression that overrides the textual notice.
Synthetic Identity Disclosure Governance requires both policy infrastructure and technical mechanisms. The policy defines what must be disclosed and when; the technical mechanisms ensure that disclosures are delivered reliably across all channels and modalities.
Recommended patterns:
Anti-patterns to avoid:
Financial Services. Financial advisory and wealth management interactions carry heightened disclosure obligations because clients make consequential financial decisions based on trust in their adviser. A synthetic avatar presenting investment recommendations creates a trust relationship that influences risk tolerance and investment sizing. Firms must ensure that clients understand they are receiving advice from an AI system, not a human professional with personal accountability and professional obligations. FCA expectations around treating customers fairly (TCF) extend to ensuring that the medium of advice does not create misleading impressions about the nature of the adviser.
Public Sector and Social Services. Government agencies deploying AI agents in citizen-facing roles must account for the power asymmetry inherent in government-citizen interactions. Citizens interacting with government services may feel obligated to comply with requests from what they perceive as a government official. A synthetic persona presenting as a caseworker inherits the authority associated with the government role. Disclosure is essential to ensure that citizens understand the nature of the entity processing their applications, assessing their eligibility, or requesting their personal information. Accessibility requirements also apply — disclosures must be comprehensible to individuals with varying levels of digital literacy, language proficiency, and cognitive ability.
Healthcare. Synthetic personas in healthcare settings — patient intake agents, symptom checkers, mental health support chatbots — interact with individuals who may be in vulnerable states. Patients may disclose sensitive health information to a synthetic persona that they would withhold from a known AI system. Disclosure is both an ethical obligation and a practical necessity for informed consent to data processing in healthcare contexts.
Embodied and Robotic Systems. Physical robots with synthetic voices or projected faces create a particularly strong human impression because the physical embodiment reinforces the perception of a sentient entity. A humanoid robot with a synthetic face and voice may be perceived as more "human" than a text chatbot even though both are AI systems. Physical embodiment requires enhanced disclosure — visual indicators on the robot itself, auditory disclosures in the voice channel, and environmental signage in the deployment location.
Basic Implementation — The organisation maintains a registry of all synthetic identity elements. Disclosure is provided at the point of first interaction in every channel where synthetic elements are deployed. The agent responds truthfully when directly asked about its synthetic nature. Deception assessments are conducted for new deployments. This level meets the minimum mandatory requirements of AG-455.
Intermediate Implementation — All basic capabilities plus: disclosures are persistent (always-visible indicators, periodic voice reminders). Multi-modal disclosure matches the modality of the synthetic element. The registry includes automated compliance checks that block deployment without confirmed disclosure mechanisms. Technical watermarking is applied to synthetic voice and video outputs. Consent for real-person likeness usage is documented and auditable.
Advanced Implementation — All intermediate capabilities plus: pre-deployment deception assessments are quantitative and benchmarked. User comprehension testing validates that disclosures are noticed and understood by target populations. Graduated disclosure provides detailed AI information on request. Real-time monitoring confirms disclosure delivery across all active sessions. Independent third-party audits verify disclosure compliance annually.
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Initial Disclosure Delivery
Test 8.2: Persistent Disclosure Verification
Test 8.3: Direct Question Truthfulness
Test 8.4: Synthetic Identity Registry Completeness
Test 8.5: Deception Prohibition Enforcement
Test 8.6: Real-Person Likeness Consent Verification
Test 8.7: Cross-Channel Disclosure Consistency
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 52 (Transparency Obligations for Certain AI Systems) | Direct requirement |
| EU AI Act | Article 52(3) (Deep Fake Disclosure) | Direct requirement |
| SOX | Section 302 (Corporate Responsibility for Financial Reports) | Supports compliance |
| FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance |
| FCA SYSC | PRIN 2.1.1R (Treating Customers Fairly) | Direct requirement |
| NIST AI RMF | GOVERN 1.7, MAP 5.1 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Annex B | Supports compliance |
| DORA | Article 11 (Communication) | Supports compliance |
Article 52 is the most directly relevant regulatory provision. Article 52(1) requires that AI systems designed to interact with natural persons are designed and developed in such a way that natural persons are informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use. Article 52(3) extends this to deep fakes — AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, or events and would falsely appear to be authentic — requiring disclosure of the artificial generation or manipulation. AG-455 operationalises both provisions by requiring disclosure of all synthetic identity elements (satisfying Article 52(1)) and technical watermarking of synthetic voice and video (supporting Article 52(3)). The "unless obvious from the circumstances" exception in Article 52(1) aligns with AG-455's scoping — obviously non-human elements (cartoon avatars, clearly robotic voices) are excluded from the full requirements.
The FCA's Treating Customers Fairly principle requires that firms pay due regard to the interests of customers and treat them fairly. Deploying a synthetic avatar that clients believe is a human financial adviser, without disclosure, creates an unfair information asymmetry. The client invests trust based on a false premise — that they are interacting with a professional who has personal accountability, professional qualifications, and fiduciary instincts. The synthetic identity lacks all of these. Disclosure corrects this asymmetry, enabling the client to calibrate their trust appropriately. The FCA's Consumer Duty (PS22/9) reinforces this expectation by requiring firms to act to deliver good outcomes for retail customers, which includes not deceiving them about the nature of the entity providing advice or service.
Where AI agents with synthetic identities are involved in financial reporting or financial communication, the officers certifying financial reports must be confident that material disclosures are accurate. An AI agent presenting financial information through a synthetic persona that is mistaken for a human analyst could create material misrepresentation risk — not in the content, but in the implied authority and accountability behind the content. SOX compliance is supported by ensuring that any synthetic identity involved in financial communication is disclosed, preventing misattribution of financial statements to human professionals.
The NIST AI Risk Management Framework addresses transparency throughout its structure. GOVERN 1.7 emphasises that AI system transparency practices are documented and implemented. MAP 5.1 addresses the characterisation of impacts on individuals and communities — synthetic identities that deceive individuals about the nature of their interaction represent an impact that must be characterised and managed. AG-455's disclosure requirements directly implement the transparency practices called for by the framework.
The Digital Operational Resilience Act requires financial entities to have communication policies and procedures. Where AI agents with synthetic identities communicate with clients, counterparties, or regulators, the synthetic nature of the communication must be disclosed to maintain the integrity of the communication channel. DORA's emphasis on operational resilience includes ensuring that automated communications are identifiable as such, preventing confusion during incident response or crisis communication scenarios.
ISO 42001 requires organisations to address risks and opportunities related to AI system development and deployment. Synthetic identity deception risk falls within the scope of risks that must be identified, assessed, and treated. Annex B provides guidance on AI-specific controls, including transparency controls that align directly with AG-455's disclosure requirements.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | All individuals interacting with undisclosed synthetic identities across all deployment channels — potentially tens of thousands of affected individuals per month for high-volume deployments |
Consequence chain: Synthetic identity elements are deployed without disclosure, causing affected individuals to believe they are interacting with humans. The immediate harm is deception — individuals make decisions (financial, legal, personal, medical) based on a false understanding of the nature of the entity they are engaging with. The trust asymmetry compounds over time: the longer an individual interacts with an undisclosed synthetic identity, the more trust they invest and the greater the harm when the deception is discovered. The regulatory consequence is severe because multiple jurisdictions now mandate synthetic identity disclosure explicitly — the EU AI Act, China's Deep Synthesis Provisions, and emerging US federal and state legislation. Non-compliance is not a grey area; it is a clear regulatory violation. The reputational consequence is amplified by public sensitivity to "deepfake" deception — media coverage of undisclosed synthetic identities triggers outsized public reaction relative to other compliance failures. The remediation cost includes not only technical implementation of disclosure mechanisms but also retrospective notification of previously deceived individuals, which for high-volume deployments (Scenario A: 145,000 calls) can exceed the cost of the original deployment by an order of magnitude. The compound failure scenario is particularly dangerous: an organisation deploys a persuasive synthetic identity without disclosure (violating AG-455), the identity makes external statements about the organisation's obligations (violating AG-456), and those statements are treated as authoritative by recipients who believe they are hearing from a human representative — creating a cascade of deception, unauthorised commitment, and regulatory violation.
Cross-references: AG-454 (AI Interaction Notice Placement Governance), AG-031 (Multi-Modal Input Governance), AG-451 (Plain-Language Duty Governance), AG-456 (External Statement Approval Governance), AG-457 (Marketing Claim Substantiation Governance), AG-049 (Explainability Governance), AG-011 (Agent Identity Governance), AG-035 (Cross-Domain Output Governance).