AI Interaction Disclosure and Mode Transparency Governance requires that every person interacting with an AI agent is clearly and proactively informed that they are interacting with an AI system, and that the mode of AI involvement (fully autonomous, human-supervised, human-in-the-loop, or AI-assisted human) is transparently communicated before substantive interaction begins. This dimension addresses the fundamental right of individuals to know whether they are communicating with a human or a machine, and to understand the degree to which AI is driving the interaction.

The disclosure must be unambiguous, unavoidable, and persistent — not a one-time disclaimer buried in terms of service, but a visible, ongoing indicator that the counterparty can rely on throughout the interaction. AG-172 prevents organisations from gaining advantage by concealing AI involvement, and ensures that individuals can calibrate their trust, expectations, and legal reliance accordingly.
Scenario A — Customer Deceived by Human-Seeming AI Agent: An insurance company deploys a customer-facing AI agent to handle claims inquiries. The agent uses a human name ("Sarah Thompson, Claims Specialist"), a human profile photo, and natural language patterns designed to mimic human conversation (including typing delays, occasional corrections, and personal anecdotes like "I had a similar situation with my own car insurance"). A customer, believing they are speaking with a human claims specialist, discloses sensitive medical information relevant to a life insurance claim. The customer later discovers they were speaking with an AI agent and files a complaint, arguing they would not have disclosed the medical information to a machine.
The Information Commissioner's Office (ICO) investigates and determines that the customer's consent to data processing was not freely given because they were materially deceived about who they were sharing information with. The insurance company's privacy notice stated "your data may be processed by our team" — technically not false, but materially misleading when "our team" includes AI agents masquerading as humans.
What went wrong: The AI agent was deliberately designed to appear human. No disclosure was provided at any point during the interaction. The customer's consent to data processing was obtained through deception about the nature of the processing entity. Consequence: ICO enforcement notice, GBP 500,000 fine for inadequate privacy disclosure, class action risk from other customers who shared sensitive information with the undisclosed AI, reputational damage.
Scenario B — Mode Transition Without Disclosure: A customer support platform uses a hybrid model: AI agents handle initial triage, then escalate complex cases to human agents. The customer sees no indication of the transition. They begin interacting with an AI agent (believing it to be human) for 15 minutes, then are seamlessly transitioned to a human agent (believing it to be the same entity). The customer repeats information they already provided, not realising the "handoff" occurred. More critically, the human agent makes a verbal commitment about a resolution that the customer attributes to the AI agent's earlier statements, creating confusion about accountability when the commitment is not honoured.
What went wrong: Neither the initial AI involvement nor the subsequent AI-to-human transition was disclosed. The customer could not distinguish between AI-generated and human-generated responses. When the commitment dispute arose, the customer could not identify which entity made the commitment, complicating resolution. Consequence: Customer complaint escalation, internal investigation costing GBP 15,000 in staff time, compensatory payment of GBP 5,000 to the customer, Ofcom investigation for consumer-facing service transparency.
Scenario C — AI-Generated Content Presented as Human Expert Opinion: A financial advisory firm uses an AI agent to draft investment research reports. The reports are published under the name of a human analyst ("Dr. James Chen, CFA") with no disclosure of AI involvement. The reports influence investment decisions by the firm's clients. When a recommended stock declines 40%, a client sues the firm, arguing they relied on the expert judgment of a named CFA charterholder. In discovery, the client's legal team establishes that the entire report was AI-generated. The court considers whether the representation of human authorship constitutes a material misrepresentation that affected the client's reliance and, consequently, their investment decision.
What went wrong: AI-generated content was presented as the work product of a named human expert with specific professional qualifications. The client's reliance was based partly on the perceived expertise and accountability of the named author. Concealing AI involvement undermined the basis on which the client made their investment decision. Consequence: GBP 2.1 million in claimed investment losses, professional conduct investigation for the named analyst by the CFA Institute, FCA enforcement for misleading communications under COBS 4.2, reputational damage to the firm's research franchise.
Scope: This dimension applies to all AI agents that interact with humans — whether those humans are external (customers, counterparties, regulators, the public) or internal (employees, contractors, governance reviewers). The scope covers all interaction modalities: text chat, email, voice, video, document generation, and any other medium through which the agent communicates. It also covers AI-generated content that is presented to humans, even if there is no direct real-time interaction (e.g., AI-generated reports, recommendations, or communications published under a human name or without disclosure of AI involvement). The scope excludes purely machine-to-machine interactions where no human is a direct party. The test for inclusion is: will a human receive a communication from, or engage in an interaction with, this agent? If yes, disclosure requirements apply.
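The inclusion test can be made concrete as a predicate evaluated whenever an agent channel is registered. The sketch below is a minimal illustration, not normative: the `Channel` shape and its field names are assumptions introduced here, not vocabulary from this standard.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    # Illustrative fields; not part of the standard's vocabulary.
    modality: str                 # "chat", "email", "voice", "video", "document", ...
    human_is_direct_party: bool   # a human participates in real time
    human_receives_output: bool   # a human consumes generated content later

def requires_disclosure(channel: Channel) -> bool:
    """AG-172 inclusion test: disclosure applies whenever a human receives a
    communication from, or engages in an interaction with, the agent.
    Modality is irrelevant; purely machine-to-machine channels are excluded."""
    return channel.human_is_direct_party or channel.human_receives_output
```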
4.1. A conforming system MUST disclose to every human interacting with an AI agent that they are interacting with an AI system before any substantive interaction begins — where "substantive" means before the exchange of information, the provision of advice, or the discussion of any topic beyond the disclosure itself.
4.2. A conforming system MUST identify the mode of AI involvement clearly: fully autonomous (AI operating without real-time human supervision), human-supervised (AI operating with human monitoring but not per-action approval), human-in-the-loop (AI proposing actions that a human approves), or AI-assisted human (human making decisions with AI-provided information).
4.3. A conforming system MUST disclose mode transitions in real time — when an interaction transitions from AI to human, from human to AI, or between AI modes, the transition must be disclosed before the next substantive exchange (one way to represent modes and gate transitions in code is sketched after this list).
4.4. A conforming system MUST NOT use human names, human likenesses, human profile images, or human-mimicking behavioural patterns (such as simulated typing delays or personal anecdotes) to create the impression that an AI agent is a human.
4.5. A conforming system MUST disclose AI involvement in all content presented to humans, including documents, reports, recommendations, and communications, in a manner that is visible at the point of consumption — not solely in metadata, footnotes, or separate disclosure pages.
4.6. A conforming system MUST ensure that disclosure is persistent throughout the interaction — not a one-time statement at the beginning that can be forgotten or overlooked — using a visible, ongoing indicator (e.g., a persistent badge, header, or watermark).
4.7. A conforming system MUST log every disclosure event with a timestamp, the recipient, the disclosure content, and the delivery mechanism, in a tamper-evident record per AG-006.
4.8. A conforming system SHOULD provide the human counterparty with the option to request a human agent at any point during an AI interaction, and disclose this option as part of the initial disclosure.
4.9. A conforming system SHOULD disclose the specific capabilities and limitations of the AI agent relevant to the interaction context (e.g., "I am an AI agent. I can provide information about your account balance and recent transactions. I cannot authorise refunds or changes to your account — those require a human agent.").
4.10. A conforming system MAY implement a standardised disclosure format (e.g., an industry-agreed icon, badge, or prefix) to provide consistent recognition across different organisations and platforms.
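One way to satisfy 4.2 and 4.3, referenced from 4.3 above, is to represent the mode of involvement as an explicit type and to disclose before any substantive exchange whenever it changes. This is a hedged sketch under assumed names (a `Session` class and a `transport` object exposing `deliver`); the `HUMAN_AGENT` value is an addition of this sketch, used to model AI-to-human handoffs, and is not one of the four 4.2 modes.

```python
from enum import Enum

class InvolvementMode(Enum):
    """The four modes named in requirement 4.2, plus a handoff value."""
    FULLY_AUTONOMOUS = "fully autonomous"      # no real-time human supervision
    HUMAN_SUPERVISED = "human-supervised"      # monitored, no per-action approval
    HUMAN_IN_THE_LOOP = "human-in-the-loop"    # human approves proposed actions
    AI_ASSISTED_HUMAN = "AI-assisted human"    # human decides with AI input
    HUMAN_AGENT = "human agent"                # sketch-only: models handoff to a human

class Session:
    """Tracks the disclosed mode and forces re-disclosure on every transition."""

    def __init__(self, initial_mode: InvolvementMode, transport):
        self.transport = transport   # assumed to expose .deliver(text)
        self.mode = initial_mode
        self._announce(initial_mode)

    def _announce(self, mode: InvolvementMode) -> None:
        # Requirement 4.3: disclose before the next substantive exchange.
        self.transport.deliver(
            f"[Disclosure] You are now interacting in mode: {mode.value}."
        )

    def transition(self, new_mode: InvolvementMode) -> None:
        """Called when handling moves between AI modes or to/from a human agent."""
        if new_mode != self.mode:
            self._announce(new_mode)   # disclose first...
            self.mode = new_mode       # ...then permit substantive exchange
```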
The right to know whether you are interacting with a human or a machine is not a niche privacy concern — it is fundamental to informed consent, meaningful choice, and the integrity of communication. When a person does not know they are talking to an AI, they cannot calibrate their trust appropriately: they may disclose information they would not share with a machine, they may attribute expertise and accountability to an entity that has neither, and they may rely on commitments that have a different legal character when made by AI rather than by a human.
The EU AI Act (Article 52) explicitly imposes transparency obligations on certain AI systems, including the requirement to notify individuals that they are interacting with an AI system. This is not limited to high-risk systems — it applies broadly to AI systems that interact with natural persons. The principle is clear: concealing AI involvement is a form of deception that undermines the individual's autonomy and informed decision-making.
Mode transparency extends the disclosure beyond the binary "AI or human" question. Modern deployments frequently use hybrid models: an AI handles initial triage, a human handles complex cases, an AI drafts a document that a human reviews, a human makes a decision using AI-provided analysis. The individual receiving the output needs to understand the mode of involvement to calibrate their reliance. An AI-generated investment recommendation has a different trust profile from a recommendation reviewed and endorsed by a CFA charterholder, which in turn differs from one drafted by a human and reviewed by AI for errors.
AG-172 requires disclosure that is proactive (provided before the counterparty asks), unambiguous (clear enough that a reasonable person understands), unavoidable (not hidden in terms of service or footer text), and persistent (maintained throughout the interaction, not just at the beginning). These requirements ensure that disclosure is genuinely informative, not merely a compliance formality.
Disclosure and mode transparency must be implemented at the interaction layer — where the AI agent's output meets the human counterparty. The disclosure must be technically enforced, not left to the agent's discretion, because an agent optimising for task completion may deprioritise disclosure if it perceives it as friction.
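A minimal sketch of that interaction-layer enforcement follows, assuming a generic agent exposing `respond(message)` and a transport exposing `deliver(text)`; all names are illustrative. The wrapper, not the agent, owns the disclosure state, so an agent that deprioritises disclosure cannot suppress it. It also badges every outbound message (4.6) and writes each disclosure event to a hash-chained log (4.7).

```python
import hashlib
import json
import time

AI_BADGE = "[AI] "  # persistent per-message indicator (requirement 4.6)

DISCLOSURE = (
    "You are interacting with an AI system operating in mode: {mode}. "
    "You may request a human agent at any time."  # requirements 4.1 and 4.8
)

class DisclosingWrapper:
    """Sits between the agent and the human counterparty; the agent cannot bypass it."""

    def __init__(self, agent, transport, mode, log_path="disclosure_log.jsonl"):
        self.agent = agent            # assumed to expose .respond(message)
        self.transport = transport    # assumed to expose .deliver(text)
        self.mode = mode
        self.log_path = log_path
        self._disclosed = False
        self._prev_hash = "0" * 64    # genesis value for the hash chain

    def handle(self, recipient: str, message: str) -> None:
        # Requirement 4.1: disclosure precedes any substantive exchange.
        if not self._disclosed:
            text = DISCLOSURE.format(mode=self.mode)
            self.transport.deliver(AI_BADGE + text)
            self._log(recipient, text, mechanism="chat")
            self._disclosed = True
        reply = self.agent.respond(message)
        # Requirement 4.6: every message carries the indicator, not just the first.
        self.transport.deliver(AI_BADGE + reply)

    def _log(self, recipient: str, content: str, mechanism: str) -> None:
        # Requirement 4.7: timestamp, recipient, content, and delivery mechanism,
        # hash-chained so any later tampering breaks the chain (per AG-006).
        entry = {
            "ts": time.time(),
            "recipient": recipient,
            "content": content,
            "mechanism": mechanism,
            "prev": self._prev_hash,
        }
        serialised = json.dumps(entry, sort_keys=True)
        self._prev_hash = hashlib.sha256(serialised.encode()).hexdigest()
        with open(self.log_path, "a") as f:
            f.write(serialised + "\n")
```

The design point is that disclosure lives in infrastructure the agent cannot reach: the agent's output is a parameter of the wrapper, never the channel itself.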
Recommended patterns:
- Enforce disclosure in an interaction-layer wrapper that the agent cannot bypass, so disclosure is delivered before the first substantive exchange regardless of agent behaviour (see the sketch above).
- Maintain a persistent indicator (badge, header, watermark, or recurring audio cue for voice) for the full duration of the interaction, not only at the start.
- Disclose mode transitions with an explicit announcement before the next substantive exchange.
- Record every disclosure event in a tamper-evident log (per 4.7 and AG-006) so delivery can be evidenced to regulators.
- Adopt a standardised disclosure format across channels and organisations (per 4.10) so recipients learn to recognise it.
Anti-patterns to avoid:
- Human names, profile photos, or human-mimicking behaviours (simulated typing delays, personal anecdotes) on AI agents (per 4.4).
- Disclosure buried in terms of service, footers, metadata, or separate disclosure pages rather than delivered at the point of interaction.
- A one-time disclosure at the start of a long interaction with no persistent indicator.
- Seamless AI-to-human or human-to-AI handoffs with no disclosure of the transition.
- Leaving disclosure to the agent's discretion, where an agent optimising for task completion may drop it as friction.
Financial Services. FCA COBS 4.2 requires communications to be fair, clear, and not misleading. Concealing AI involvement in a financial advisory interaction is misleading because the client's reliance is based partly on the perceived identity of the adviser. MiFID II Article 24 requires firms to act honestly, fairly, and professionally — concealing AI involvement violates professional dealing standards.
Healthcare. Patients have a right to know whether clinical information, recommendations, or triage decisions are provided by an AI or a human clinician. The GMC's Good Medical Practice guidance requires transparency in the care pathway. AI involvement in clinical decisions must be disclosed to both the patient and the clinician who is ultimately responsible.
Public Sector. Government services must disclose AI involvement to citizens. The UK CDDO Algorithmic Transparency Recording Standard requires public sector organisations to publish information about algorithmic tools used in decision-making. The EU AI Act Article 52 explicitly requires disclosure to individuals interacting with AI systems.
Consumer Protection. The Consumer Protection from Unfair Trading Regulations 2008 prohibit misleading omissions — failing to provide material information that the average consumer needs to make an informed decision. The identity of the party (human or AI) is material information in most consumer contexts.
Basic Implementation — All customer-facing AI agents disclose AI identity at the start of each interaction. AI-generated content is labelled as AI-generated. Human names and photos are not used for AI agents. Disclosure events are logged. Coverage: all external-facing interactions.
Intermediate Implementation — All basic capabilities plus: persistent visual indicators maintain disclosure throughout interactions. Mode transitions are disclosed in real time. The interaction wrapper prevents agents from bypassing disclosure. Voice interactions include audio disclosure. A standardised disclosure format is used across all channels. The option to request a human agent is disclosed. Coverage: all external and internal interactions.
Advanced Implementation — All intermediate capabilities plus: content provenance labelling is embedded in all AI-generated or AI-assisted documents. Capability and limitation disclosures are context-specific. The disclosure framework has been tested with user research to verify that recipients understand the disclosure (comprehension testing, not just delivery confirmation). The organisation can demonstrate to regulators that no interaction occurs without disclosure, and that disclosure is understood by recipients.
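For non-interactive content, requirement 4.5 and the content provenance labelling described at the advanced level mean the label must travel with the document body, visible at the point of consumption. Below is a minimal sketch for plain-text or markdown reports; a real deployment would also embed provenance in format-native metadata (e.g., PDF document properties), but never only there. The banner wording and field names are assumptions of this sketch.

```python
from datetime import datetime, timezone

PROVENANCE_BANNER = (
    "--- AI-GENERATED CONTENT ---\n"
    "This document was produced by an AI system ({agent_id}) in mode: {mode}.\n"
    "Generated: {when}. Human review status: {review}.\n"
    "----------------------------\n\n"
)

def label_document(body: str, agent_id: str, mode: str, review: str) -> str:
    """Prepend a visible provenance banner (requirement 4.5): the label is part
    of the document body itself, not only metadata, a footnote, or a separate
    disclosure page."""
    banner = PROVENANCE_BANNER.format(
        agent_id=agent_id,
        mode=mode,
        when=datetime.now(timezone.utc).isoformat(timespec="seconds"),
        review=review,
    )
    return banner + body
```

A caller would wrap every generated report before publication, e.g. `label_document(draft, "research-agent-7", "human-supervised", "pending human review")`; the identifiers here are hypothetical.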
Required artefacts: disclosure event logs capturing timestamp, recipient, disclosure content, and delivery mechanism (per 4.7); disclosure format specifications for each interaction channel; mode transition disclosure records; content provenance labels for AI-generated documents; and, at the advanced level, comprehension-testing results demonstrating that recipients understand the disclosure.
Retention requirements:
Access requirements:
Test 8.1: Pre-Interaction Disclosure Delivery
Test 8.2: Persistent Indicator Presence
Test 8.3: Mode Transition Disclosure
Test 8.4: Human Impersonation Prevention
Test 8.5: Content Provenance Labelling
Test 8.6: Disclosure Bypass Resistance
Test 8.7: Disclosure Logging Completeness
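Several of these tests lend themselves to automation. Below is a pytest-style sketch for Tests 8.1, 8.2, and 8.7, assuming the `DisclosingWrapper` sketch from earlier in this section is in scope; the harness doubles (`FakeTransport`, `FakeAgent`) are hypothetical stand-ins, not part of the standard.

```python
import json

# Assumes DisclosingWrapper from the earlier sketch is importable in this module.

class FakeTransport:
    """Captures delivered messages so tests can assert ordering and labelling."""
    def __init__(self):
        self.delivered = []

    def deliver(self, text):
        self.delivered.append(text)

class FakeAgent:
    """Stand-in agent that returns a substantive (non-disclosure) reply."""
    def respond(self, message):
        return "Your balance is GBP 100."

def test_disclosure_precedes_substantive_exchange(tmp_path):
    transport = FakeTransport()
    wrapper = DisclosingWrapper(FakeAgent(), transport, "fully autonomous",
                                log_path=str(tmp_path / "log.jsonl"))
    wrapper.handle("customer-123", "What is my balance?")
    # Test 8.1: the disclosure is delivered first, before any agent output.
    assert "interacting with an AI system" in transport.delivered[0]
    # Test 8.2: every outbound message carries the persistent indicator.
    assert all(m.startswith("[AI] ") for m in transport.delivered)

def test_disclosure_event_is_logged(tmp_path):
    log = tmp_path / "log.jsonl"
    transport = FakeTransport()
    wrapper = DisclosingWrapper(FakeAgent(), transport, "fully autonomous",
                                log_path=str(log))
    wrapper.handle("customer-123", "Hello")
    # Test 8.7: the logged event carries timestamp, recipient, content,
    # delivery mechanism, and the chained hash field.
    entry = json.loads(log.read_text().splitlines()[0])
    assert {"ts", "recipient", "content", "mechanism", "prev"} <= entry.keys()
```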
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 52 (Transparency Obligations) | Direct requirement |
| EU AI Act | Article 14 (Human Oversight) | Supports compliance |
| GDPR | Articles 13, 14 (Right to Information) | Supports compliance |
| GDPR | Article 22 (Automated Decision-Making) | Supports compliance |
| FCA COBS | 4.2 (Fair, Clear and Not Misleading Communications) | Direct requirement |
| Consumer Protection from Unfair Trading Regulations 2008 | Regulation 6 (Misleading Omissions) | Direct requirement |
| NIST AI RMF | GOVERN 1.5, MAP 1.5 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks) | Supports compliance |
Article 52(1) requires that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed they are interacting with an AI system, unless this is obvious from the circumstances. AG-172 implements this requirement through proactive, persistent disclosure. The Article 52 requirement is not limited to high-risk systems — it applies to all AI systems that interact with people, making AG-172 broadly applicable.
Articles 13 and 14 require that individuals are informed about the processing of their personal data, including the existence of automated decision-making. When an AI agent processes personal data during an interaction, the individual has the right to know that AI is involved. AG-172's pre-interaction disclosure supports the Article 13/14 information requirements.
Regulation 6 defines a misleading omission as failing to provide material information that the average consumer needs to make an informed transactional decision. The identity of the party the consumer is interacting with (human or AI) is material information. An AI agent that does not disclose its nature is committing a misleading omission under this regulation.
FCA COBS 4.2 requires that communications with clients are fair, clear, and not misleading. Concealing AI involvement in a financial services interaction is misleading because the client's expectations, trust, and reliance are calibrated to the perceived identity of the communicating party.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Customer-facing — affects every individual who interacts with the undisclosed AI agent |
Consequence chain: Without AI interaction disclosure, individuals interact with AI agents believing they are interacting with humans. The immediate harm is to autonomy and informed consent — the individual's choices (what to disclose, what to rely on, what to consent to) are made on a false premise. The legal consequence is that consent obtained through non-disclosure may be vitiated: the GDPR requires consent to be "freely given, specific, informed, and unambiguous", and consent given without knowledge of AI involvement is not informed. The regulatory consequence is enforcement under multiple regimes: EU AI Act Article 52 fines (up to EUR 20 million or 1% of global turnover), GDPR fines (up to EUR 20 million or 4% of global turnover), consumer protection enforcement, and sector-specific regulatory action. The reputational consequence is significant: public discovery that an organisation concealed AI involvement typically generates negative media coverage and erodes customer trust. The business consequence includes customer attrition (estimated at 15-30% of affected customers based on trust survey data), class action risk if non-disclosure was systematic, and the cost of retrofitting disclosure across all channels (estimated at GBP 200,000-500,000 for a mid-size organisation). The failure is amplified by scale: a non-disclosed AI agent interacting with 10,000 customers per day creates 10,000 instances of non-disclosure per day, each a potential regulatory violation and customer complaint.
Cross-references: AG-160 (Anti-Impersonation and Authenticated Sender Governance) for preventing agents from impersonating specific humans; AG-033 (Implied Authority Detection) for detecting AI agents that imply human authority; AG-162 (Accountable Principal Assignment Governance) for identifying the human principal responsible for the AI agent's actions; AG-169 (Legal Commitment and Representation Authority Governance) for the legal implications of commitments made during undisclosed AI interactions; AG-049 (Governance Decision Explainability) for explaining AI involvement in decisions.