AG-454

AI Interaction Notice Placement Governance

Explainability, Disclosure & Communications · ~24 min read · AGS v2.1 · April 2026
Regulations: EU AI Act · SOX · FCA · NIST · ISO 42001

2. Summary

AI Interaction Notice Placement Governance requires that every interaction between a person and an AI agent includes a clear, timely, and appropriately positioned notice informing the person that they are interacting with an AI system rather than a human. The notice must appear before or at the point of engagement — not buried in terms of service, hidden in metadata, or disclosed only upon explicit request — so that the person can make an informed decision about whether and how to proceed with the interaction. This dimension governs the placement, timing, visibility, persistence, and format of the AI interaction notice across all modalities including text chat, voice, email, embodied robots, and ambient agents, ensuring that the notice is effective in practice rather than merely present in theory.

3. Example

Scenario A — Chatbot Disclosure Buried Below the Fold: A retail bank deploys an AI agent on its customer service portal. The agent handles 42,000 interactions per month, including balance inquiries, dispute initiation, and credit limit adjustment requests. The AI disclosure notice appears in 8-point grey text at the bottom of the chat window, below the message input field, requiring the user to scroll down past the input area to see it. The agent introduces itself as "Alex, your banking assistant" without stating it is an AI. A customer initiates a credit limit increase request, provides personal financial information, and receives a denial — all without realising they were interacting with an AI. The customer files a complaint asserting they would not have shared detailed financial information with an AI system. A consumer advocacy group conducts testing and finds that only 3% of users who interacted with the chatbot noticed the AI disclosure. The regulator determines the disclosure was present but not effective.

What went wrong: The notice was technically present but placed in a location where users would not see it during normal interaction. The agent's name ("Alex") and conversational style mimicked a human agent. The notice was not integrated into the interaction flow — it was a static footer element that users never scrolled to. The 3% visibility rate demonstrates that placement, not presence, determines disclosure effectiveness. Consequence: FCA enforcement action for misleading communication, £890,000 fine, mandatory redesign of the chat interface with above-the-fold disclosure, and retrospective notification to 42,000 affected customers.

Scenario B — Voice Agent Without Audible AI Disclosure: A government agency deploys a voice-based AI agent to handle benefit inquiries by telephone. The agent uses a natural human-sounding voice and conversational patterns ("Let me look that up for you," "I understand your concern"). The AI disclosure is embedded in the pre-call IVR menu: "Some calls may be handled by an automated system." This disclosure is one of seven IVR announcements played before the caller reaches the agent, and it uses ambiguous language ("may be" rather than "is"; "automated system" rather than "AI" or "artificial intelligence"). A caller phones to inquire about a benefits decision that is being appealed. The caller shares sensitive medical information, believing they are speaking with a human caseworker. The caller later discovers the interaction was with an AI and files a privacy complaint, arguing they did not give informed consent to share medical information with an AI system.

What went wrong: The disclosure was ambiguous ("may be handled" rather than "is being handled"), used vague terminology ("automated system" rather than "AI"), and was buried in a multi-announcement IVR sequence where callers routinely tune out. The voice agent's human-like conversational patterns actively worked against disclosure by creating a false impression of human interaction. The disclosure was not repeated at the point of engagement — when the caller began speaking with the agent — where it would be salient. Consequence: Privacy complaint upheld by the data protection authority, finding that the caller's consent to process medical information was not informed because they were misled about the nature of the processing entity. £340,000 penalty, mandatory voice disclosure redesign, and suspension of the voice agent for benefits inquiries pending implementation.

Scenario C — Embodied Robot Without Persistent AI Identification: A hospital deploys an embodied AI robot to assist patients in a waiting area. The robot can answer questions about appointment schedules, provide wayfinding guidance, and collect preliminary symptom information. The robot has a small "AI-powered" label on its back panel, which is not visible during face-to-face interaction. The robot uses natural language and an empathetic conversational style. An elderly patient approaches the robot and begins describing chest pain symptoms, believing they are speaking with a medical assistant. The robot collects symptom information and directs the patient to the emergency department — an appropriate response — but the patient's family later complains that the patient, who has early-stage dementia, did not understand they were interacting with a machine and would not have consented to sharing health information with a non-human system if the nature of the interaction had been clear.

What went wrong: The AI identification was placed on the back panel — invisible during the forward-facing interaction that constitutes 100% of normal use. No audible or prominently visible disclosure occurred at the start of the interaction. The robot's empathetic conversational design actively created the impression of human interaction. For vulnerable populations (elderly, cognitively impaired), the disclosure standard must be higher because the risk of misunderstanding is greater. Consequence: Hospital trust investigation, redesign of the robot's physical appearance and interaction protocol, £180,000 in implementation costs, 3-month suspension of the robot programme, and reputational damage in local media coverage.

4. Requirement Statement

Scope: This dimension applies to every AI agent deployment where the agent interacts directly with people who might reasonably believe they are interacting with a human. This includes text-based chat agents, voice agents, email agents, social media agents, embodied robots, avatar-based agents, and any other modality where the agent communicates in natural language or exhibits human-like behaviour patterns. The scope extends to both synchronous interactions (real-time chat, voice calls) and asynchronous interactions (email responses, message replies). It covers first-party deployments (the organisation's own agents on its own channels) and third-party deployments (agents operating on behalf of one organisation within another organisation's platform). The dimension does not apply to purely machine-to-machine interactions where no human is a party to the communication, or to clearly non-conversational interfaces (e.g., a search results page with an "AI-generated summary" label) where the AI nature is inherent in the interface design and universally understood. The test is: could a reasonable person, encountering this agent in the context where it operates, believe they are interacting with a human? If yes, this dimension applies in full.

4.1. A conforming system MUST display a clear AI interaction notice before or at the point of first engagement with the user, such that the user is aware they are interacting with an AI system before they share substantive information or make decisions based on the interaction.

4.2. A conforming system MUST position the AI interaction notice in a location that is within the user's natural line of sight or attention during the interaction — not in peripheral interface elements, footer text, terms of service, or locations requiring scrolling, navigation, or affirmative action to discover.

4.3. A conforming system MUST use unambiguous language in the AI interaction notice that clearly states the entity is an AI, artificial intelligence system, or automated agent — not vague terms such as "automated system," "virtual assistant," or "digital helper" that could be interpreted as describing a human-operated tool.

4.4. A conforming system MUST ensure the AI interaction notice is persistent or repeated at appropriate intervals during extended interactions, so that users who join mid-interaction, who forget the initial disclosure, or whose cognitive state changes during the interaction are reminded that they are interacting with an AI.

4.5. A conforming system MUST adapt the AI interaction notice to the modality of the interaction: visual notice for text and visual interfaces, audible notice for voice interactions, and both visual and audible notice for embodied or multi-modal agents.
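
A minimal sketch, assuming hypothetical type and function names, of how a deployment might encode requirements 4.3 through 4.5 (plus the repeat cadence of 4.4): unambiguous wording, delivery matched to modality, and persistence on long sessions. The wording and the ten-minute cadence are illustrative, not values fixed by this protocol.

```typescript
type Modality = "text" | "voice" | "embodied";

interface NoticeSpec {
  text: string;                    // unambiguous wording per 4.3
  visual: boolean;                 // shown in the user's line of sight per 4.2
  audible: boolean;                // spoken before the first agent turn per 4.5
  repeatIntervalMs: number | null; // persistence per 4.4; null = single notice
}

function noticeFor(modality: Modality, expectedSessionMs: number): NoticeSpec {
  // States "AI" explicitly; never "virtual assistant" or "digital helper".
  const text = "You are interacting with an AI system, not a human.";
  // Repeat on long sessions; the ten-minute cadence is an illustrative default.
  const repeatIntervalMs = expectedSessionMs > 10 * 60_000 ? 10 * 60_000 : null;

  switch (modality) {
    case "text":
      return { text, visual: true, audible: false, repeatIntervalMs };
    case "voice":
      return { text, visual: false, audible: true, repeatIntervalMs };
    case "embodied":
      // Embodied and multi-modal agents need both channels per 4.5.
      return { text, visual: true, audible: true, repeatIntervalMs };
  }
}

console.log(noticeFor("voice", 30 * 60_000)); // audible notice, repeated
```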

4.6. A conforming system MUST ensure the AI interaction notice is accessible to users with disabilities, including compatibility with screen readers, sufficient colour contrast for visual notices, clear and slow enunciation for audible notices, and alternative formats where needed.

4.7. A conforming system MUST ensure that the agent's name, persona, conversational style, and visual or auditory presentation do not create a false impression of human identity that contradicts or undermines the AI interaction notice.

4.8. A conforming system SHOULD implement enhanced disclosure measures for interactions involving vulnerable populations (elderly, minors, cognitively impaired individuals, individuals in distress) where the risk of misunderstanding the nature of the interaction is elevated.
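
The following sketch illustrates one way a deployment might escalate the notice under requirement 4.8 when a channel is known to serve a vulnerable population (e.g., a benefits line or a hospital waiting area). The trigger and the enhancement values are assumptions for illustration, not mandated figures.

```typescript
interface NoticePresentation {
  fontScale: number;               // enlarge the visual notice
  speechRate: number;              // slow the audible notice (1.0 = normal)
  requireAcknowledgement: boolean; // ask the user to confirm understanding
  repeatIntervalMs: number;        // remind more frequently
}

function presentationFor(vulnerableContext: boolean): NoticePresentation {
  if (!vulnerableContext) {
    return {
      fontScale: 1.0,
      speechRate: 1.0,
      requireAcknowledgement: false,
      repeatIntervalMs: 10 * 60_000,
    };
  }
  // Stronger defaults where the risk of misunderstanding is elevated.
  return {
    fontScale: 1.5,
    speechRate: 0.8,
    requireAcknowledgement: true,
    repeatIntervalMs: 5 * 60_000,
  };
}
```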

4.9. A conforming system SHOULD log user acknowledgement or exposure to the AI interaction notice, providing an auditable record that the notice was presented and was visible or audible at the time of interaction.
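
A minimal sketch of the auditable exposure record envisaged by 4.9 follows. The field names are illustrative assumptions; the essential property is that each record ties an approved notice version to a specific interaction at a specific time.

```typescript
interface NoticeExposureRecord {
  interactionId: string;
  channel: "chat" | "voice" | "email" | "embodied";
  noticeVersion: string;        // which approved wording/design was presented
  presentedAt: string;          // ISO 8601 timestamp
  deliveredVisually: boolean;
  deliveredAudibly: boolean;
  acknowledged: boolean | null; // null where acknowledgement is not collected
}

function recordExposure(record: NoticeExposureRecord): void {
  // A production system would write to an append-only audit store; stdout
  // stands in for that store here.
  console.log(JSON.stringify(record));
}

recordExposure({
  interactionId: "int-000123",
  channel: "chat",
  noticeVersion: "chat-banner-v3",
  presentedAt: new Date().toISOString(),
  deliveredVisually: true,
  deliveredAudibly: false,
  acknowledged: true,
});
```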

4.10. A conforming system SHOULD conduct periodic effectiveness testing of the AI interaction notice, measuring the proportion of users who correctly understand they are interacting with an AI after exposure to the notice.
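
The effectiveness metric behind 4.10 can be as simple as the proportion of surveyed users who correctly identified the agent as an AI after exposure, as sketched below. The 90% threshold mirrors the target in the maturity model later in this section; it is a programme target, not a statutory figure.

```typescript
function awarenessRate(correct: number, surveyed: number): number {
  if (surveyed === 0) throw new Error("empty sample");
  return correct / surveyed;
}

const rate = awarenessRate(273, 300); // 0.91
console.log(
  `Awareness rate: ${(rate * 100).toFixed(1)}%`,
  rate >= 0.9 ? "meets target" : "below target",
);
```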

4.11. A conforming system MAY offer users the option to switch to human interaction upon seeing the AI interaction notice, where human alternatives are available.

5. Rationale

The right to know whether you are interacting with a human or a machine is foundational to informed consent, trust, and autonomy. When a person believes they are speaking with a human and adjusts their behaviour accordingly — sharing personal information, accepting advice, deferring to perceived expertise, expressing vulnerability — they are making those choices based on a false premise if the entity is actually an AI. The disclosure obligation is not a bureaucratic formality; it is a precondition for the person's agency in the interaction.

The regulatory landscape strongly supports mandatory AI interaction disclosure. The EU AI Act, Article 50, explicitly requires that providers of AI systems designed to interact directly with natural persons ensure that the persons are informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use. The "obvious from circumstances" exception is narrow: a chatbot on a customer service page is not automatically obvious, because many organisations use the same interface for both human and AI agents. The Act places the burden on the provider to ensure awareness, not on the user to discover it. National implementations, consumer protection regulations, and sector-specific rules reinforce this requirement across jurisdictions.

The practical challenge is not whether to disclose but how to disclose effectively. A disclosure that is technically present but practically invisible — 8-point grey text below the fold, one line in a seven-announcement IVR sequence, a label on the back of a robot — satisfies no regulatory requirement and provides no protection. Research on disclosure effectiveness demonstrates that placement, timing, prominence, and language clarity determine whether a disclosure actually informs. A notice in 14-point text at the top of a chat window that states "You are chatting with an AI" achieves near-universal awareness. The same text in 8-point font below the chat input field achieves single-digit awareness rates. The governance requirement must therefore address not just the existence of the notice but its effectiveness.

The modality dimension adds complexity. Text-based notices are relatively straightforward — they can be placed in the chat window with appropriate sizing and positioning. Voice-based notices are more challenging: the disclosure must be audible, timely, and unambiguous, but it must also not create an interaction barrier that drives users to abandon the call. Embodied agents present the greatest challenge: physical appearance, movement patterns, and conversational style all contribute to the user's perception of whether the entity is human or machine. An embodied robot that looks, sounds, and behaves like a human creates a strong human impression that a small text label cannot overcome. The disclosure must match the strength of the human impression created by the agent's design.

The vulnerable population consideration is not optional enhancement — it reflects the reality that disclosure effectiveness varies with the user's cognitive capacity, familiarity with technology, and emotional state. A disclosure that is effective for a 35-year-old digital native may be completely ineffective for an 80-year-old with early-stage dementia. The governance standard must account for the full range of users who will encounter the agent, not just the median user.

Cross-border deployment adds jurisdictional complexity. The EU AI Act's disclosure requirement applies to EU residents regardless of where the AI system is operated. California's Bot Disclosure Law applies to bots that interact with California residents. Other jurisdictions have varying requirements. A cross-border agent must comply with the most stringent disclosure requirement applicable to any user it may encounter, or implement jurisdiction-specific disclosure that adapts based on the user's location.

6. Implementation Guidance

AI interaction notice placement must be designed as a first-class element of the user interface, not appended as a compliance afterthought. The notice is part of the interaction design, governed by the same usability standards as any other critical interface element.

Recommended patterns:

- Place the notice at the top of the chat window, above the message history and input field, so it is visible without scrolling (a minimal placement sketch follows these lists).
- State the AI nature in the agent's first message as well as in the interface itself ("You are chatting with an AI assistant").
- For voice agents, deliver an unambiguous audible disclosure at the start of the call, before the agent's first conversational turn.
- For embodied agents, combine a forward-facing visual identifier with an audible disclosure at the start of each interaction.
- Keep a persistent visual badge ("AI") adjacent to the agent's name throughout the conversation.
- Choose agent names and personas that reinforce, rather than contradict, the AI nature.

Anti-patterns to avoid:

- Footer or below-the-fold placement that requires scrolling past the input area (Scenario A).
- Ambiguous wording such as "may be handled by an automated system" instead of "you are speaking with an AI" (Scenario B).
- Human names and empathetic personas presented without an AI qualifier ("Alex, your banking assistant").
- Physical labels placed outside the user's line of sight, such as a back-panel sticker on a robot (Scenario C).
- Disclosure only in terms of service, metadata, or upon explicit request.
- A single disclosure line buried in a multi-announcement IVR sequence.
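
As an illustration of the first recommended pattern, this sketch renders the notice as the first element of a chat widget, above the message history and the input field, so it is visible without scrolling. The markup, class names, and wording are illustrative, not prescribed by this protocol.

```typescript
function renderChatWidget(messagesHtml: string): string {
  return `
    <div class="chat-widget">
      <div class="ai-notice" role="status" aria-live="polite">
        You are chatting with an AI system, not a human.
      </div>
      <div class="messages">${messagesHtml}</div>
      <textarea class="chat-input" placeholder="Type your message"></textarea>
    </div>`;
}
```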

Industry Considerations

Financial Services. Financial interactions involve high-trust decisions — loan applications, investment advice, insurance claims. The FCA's Treating Customers Fairly (TCF) framework requires that customers can make informed decisions about financial products and services. Knowing whether you are receiving financial guidance from a human or an AI is material to that decision. Financial agents must disclose AI nature before any financial information is shared or any financial recommendation is made. The disclosure must be distinct from marketing language that might frame AI involvement as a feature ("powered by AI") rather than a nature disclosure ("this is an AI, not a human").

Healthcare. Patients interacting with healthcare agents may share sensitive health information, express vulnerability, or make decisions about their care. The duty to disclose AI nature is heightened in healthcare because the trust relationship between patient and caregiver is particularly strong, and the consequences of misplaced trust are particularly severe. Healthcare-specific regulations in many jurisdictions require disclosure of the use of AI in clinical decision support. For patient-facing agents, the disclosure must be at a comprehension level appropriate for patients with low health literacy.

Public Sector. Citizens interacting with government services have a right to know whether they are dealing with a human civil servant or an AI system, particularly when the interaction may affect their rights, benefits, or obligations. Administrative law principles in many jurisdictions require that government decision-making processes are transparent. The disclosure standard for public sector agents should be higher than the commercial baseline because the power asymmetry between government and citizen is greater and the consequences of decisions (benefits, permits, enforcement) are more significant.

Cross-Border Operations. Agents operating across jurisdictions must comply with the disclosure requirements of each jurisdiction where they interact with users. The EU AI Act Article 50 applies to any AI system interacting with EU residents. Other jurisdictions have varying requirements. The safest approach is to implement the most stringent applicable standard globally, with jurisdiction-specific adaptations where lower standards are explicitly acceptable.
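
One way to realise the "most stringent applicable standard" approach is to merge the strictest setting on each disclosure axis across every jurisdiction a user may fall under, as in the sketch below. The jurisdiction profiles and their values are hypothetical.

```typescript
interface DisclosureProfile {
  mustSayAI: boolean;          // wording must state "AI" explicitly
  audibleAtCallStart: boolean; // voice notice before the agent's first turn
  repeatIntervalMs: number;    // smaller = stricter
}

const PROFILES: Record<string, DisclosureProfile> = {
  "EU":      { mustSayAI: true,  audibleAtCallStart: true,  repeatIntervalMs: 10 * 60_000 },
  "US-CA":   { mustSayAI: true,  audibleAtCallStart: false, repeatIntervalMs: 15 * 60_000 },
  "default": { mustSayAI: false, audibleAtCallStart: false, repeatIntervalMs: 30 * 60_000 },
};

function strictestProfile(jurisdictions: string[]): DisclosureProfile {
  if (jurisdictions.length === 0) return PROFILES["default"];
  return jurisdictions
    .map((j) => PROFILES[j] ?? PROFILES["default"])
    .reduce((a, b) => ({
      mustSayAI: a.mustSayAI || b.mustSayAI,
      audibleAtCallStart: a.audibleAtCallStart || b.audibleAtCallStart,
      repeatIntervalMs: Math.min(a.repeatIntervalMs, b.repeatIntervalMs),
    }));
}

console.log(strictestProfile(["EU", "US-CA"])); // strictest of both applies
```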

Maturity Model

Basic Implementation — The organisation has implemented AI interaction notices across all agent interaction channels. Notices are positioned at or before the point of first engagement. Language is unambiguous, using the term "AI" or "artificial intelligence." Notices are adapted to each modality (visual for text, audible for voice). Notices are persistent or repeated during extended interactions. Agent names and personas do not create false human impressions. Notices are accessible per applicable accessibility standards. This level meets the mandatory requirements of 4.1 through 4.7.

Intermediate Implementation — All basic capabilities plus: enhanced disclosure measures are implemented for vulnerable populations. User acknowledgement or exposure to the notice is logged. Disclosure effectiveness testing is conducted quarterly, with a target awareness rate of 90% or above. Cross-jurisdictional notice adaptation is implemented for agents operating in multiple regulatory environments. Notices include the option to switch to human interaction where available.

Advanced Implementation — All intermediate capabilities plus: real-time disclosure effectiveness monitoring tracks user awareness across all channels. A/B testing of disclosure designs optimises effectiveness. Vulnerable population identification triggers automatic disclosure enhancement. Independent audit of disclosure effectiveness is conducted annually. The organisation can demonstrate through empirical evidence that its AI interaction notices achieve awareness rates above 90% across all channels, modalities, and user demographics, including vulnerable populations.
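
The A/B comparison mentioned above can be run as a standard two-proportion z-test on awareness rates between two disclosure variants, as in this sketch; the sample figures and variant descriptions are illustrative.

```typescript
function twoProportionZ(
  aCorrect: number, aN: number,
  bCorrect: number, bN: number,
): number {
  const pA = aCorrect / aN;
  const pB = bCorrect / bN;
  const pooled = (aCorrect + bCorrect) / (aN + bN);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / aN + 1 / bN));
  return (pA - pB) / se;
}

// Variant A: banner above the fold; Variant B: persistent inline badge.
const z = twoProportionZ(273, 300, 255, 300); // awareness 91% vs 85%
console.log(
  `z = ${z.toFixed(2)}`,
  Math.abs(z) > 1.96 ? "significant at 5%" : "not significant",
);
```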

7. Evidence Requirements

Required artefacts:

- Notice design specifications for each channel and modality, covering wording, placement, sizing, and timing.
- Screenshots, recordings, or interface captures evidencing notice placement at the point of engagement on every channel (4.1, 4.2).
- Exposure and acknowledgement logs (4.9).
- Effectiveness testing reports with measured awareness rates (4.10).
- Accessibility conformance evidence for each notice format (4.6).
- Persona review records showing that agent names, voices, and presentations do not contradict the AI nature (4.7).

Retention requirements: Exposure logs and effectiveness testing reports should be retained for at least the life of the deployment plus the record-keeping period of the applicable regulatory regime; where a complaint or regulatory inquiry is open, the relevant records must be preserved until it is resolved.

Access requirements: Artefacts must be producible to regulators, external auditors, and internal audit on request, and exposure records must be retrievable per interaction so that individual complaints can be substantiated or rebutted.

8. Test Specification

Test 8.1: Notice Visibility at Point of Engagement — For each channel, initiate a fresh interaction and verify that the notice is visible or audible before the user can submit their first message or share information, without scrolling, navigation, or affirmative action (verifies 4.1, 4.2).

Test 8.2: Notice Language Clarity — Review the notice wording on every channel and confirm it states "AI," "artificial intelligence," or "automated agent" explicitly, with no ambiguous terms such as "automated system" or "virtual assistant" (verifies 4.3).

Test 8.3: Notice Persistence During Extended Interaction — Run an extended session and confirm the notice remains visible or is repeated at the defined interval, including for users who join mid-interaction (verifies 4.4).

Test 8.4: Modality-Appropriate Notice Delivery — Confirm visual delivery for text and visual interfaces, audible delivery at call start for voice agents, and both visual and audible delivery for embodied or multi-modal agents (verifies 4.5).

Test 8.5: Agent Persona Consistency with AI Nature — Review agent names, personas, voices, and visual presentation for elements that create a false human impression contradicting the notice (verifies 4.7).

Test 8.6: Accessibility Compliance of Notices — Test each notice with screen readers, colour-contrast analysis, and audio comprehension checks against the applicable accessibility standard (verifies 4.6).

Test 8.7: Cross-Jurisdictional Notice Adaptation — For agents operating across jurisdictions, verify that each user receives at least the most stringent disclosure applicable to their location (verifies the cross-border guidance in Section 6).

Conformance Scoring

Conformance requires all mandatory requirements (4.1 through 4.7) to pass their associated tests; performance against the SHOULD requirements (4.8 through 4.10) determines the maturity level (Basic, Intermediate, Advanced) recorded for the deployment.

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 50 (Transparency Obligations for Certain AI Systems) | Direct requirement
EU AI Act | Article 52 (Transparency Obligations for Providers and Users; the draft numbering of the final Article 50 obligations) | Direct requirement
SOX | Section 404 (Internal Controls Over Financial Reporting) | Supports compliance
FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance
FCA PRIN | 2.1 (Treating Customers Fairly) | Direct requirement
NIST AI RMF | GOVERN 4.2, MAP 3.3 | Supports compliance
ISO 42001 | Clause 6.1 (Actions to Address Risks and Opportunities) | Supports compliance
DORA | Article 5 (ICT Risk Management Governance) | Supports compliance

EU AI Act — Article 50 (Transparency Obligations for Certain AI Systems)

Article 50 is the most direct regulatory basis for AG-454. It requires that providers of AI systems intended to interact directly with natural persons design and develop the AI system in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use, taking into account the characteristics of the persons belonging to vulnerable groups. AG-454 operationalises Article 50 by specifying where the notice must be placed (at point of engagement, not hidden), what language it must use (unambiguous, stating "AI" explicitly), how it must be adapted to modality (visual, audible, or both), and how its effectiveness must be verified (awareness testing). The "obvious from circumstances" exception is interpreted narrowly: unless the AI nature is universally apparent from the interface design itself, the notice is required.

FCA PRIN — 2.1 (Treating Customers Fairly)

The FCA's Treating Customers Fairly principle requires firms to pay due regard to the interests of customers and treat them fairly. A customer who does not know they are interacting with an AI cannot make a fair and informed decision about the interaction. This is particularly acute in financial services where the customer may share sensitive financial information, rely on the interaction for financial guidance, or make consequential financial decisions. The FCA has indicated that firms must be transparent about the use of AI in customer-facing interactions. AG-454 provides the governance framework for this transparency.

SOX — Section 404 (Internal Controls Over Financial Reporting)

For financial institutions, the AI interaction notice is an internal control supporting accurate customer communication. If the notice fails — customers do not know they are interacting with an AI — the organisation's customer communication controls are deficient. SOX auditors assessing the effectiveness of customer-facing controls will examine whether AI interaction disclosure is implemented and effective, particularly for interactions that could affect financial reporting (e.g., customer complaints, dispute resolution, account modifications).

NIST AI RMF — GOVERN 4.2, MAP 3.3

GOVERN 4.2 addresses transparency and accountability mechanisms for AI systems. MAP 3.3 addresses human-AI interaction design considerations. AG-454 implements both by ensuring that the most fundamental transparency mechanism — telling the person they are interacting with an AI — is governed, tested, and effective. The NIST framework's emphasis on proportionate governance aligns with AG-454's modality-specific and vulnerability-aware approach.

ISO 42001 — Clause 6.1 (Actions to Address Risks and Opportunities)

ISO 42001 requires organisations to identify and address risks associated with their AI systems. The risk of users not knowing they are interacting with an AI is a transparency risk that can lead to privacy violations, consent failures, and trust erosion. AG-454 addresses this risk through structured disclosure governance that is demonstrably effective rather than merely documented.

DORA — Article 5 (ICT Risk Management Governance)

DORA's ICT risk management requirements extend to the transparency and communication practices of digital financial services. An AI agent that interacts with financial services customers without adequate AI disclosure creates an ICT governance risk — the communication channel is operating without the transparency controls required for automated systems in financial services.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Population-level — every user who interacts with the agent without effective disclosure is affected, potentially encompassing the entire user base across all channels

Consequence chain: An AI agent interacts with users without effective AI nature disclosure. Each user proceeds under the assumption — or at least the possibility — that they are interacting with a human. Users share information, make decisions, and form trust relationships on the basis of this false or uncertain premise. The immediate harm is the violation of the user's autonomy and informed consent: they cannot make an informed choice about the interaction if they do not know its nature. The regulatory consequences follow directly: EU AI Act Article 50 violations carry enforcement action and fines proportionate to severity and duration; FCA TCF violations carry enforcement action against regulated firms; consumer protection authorities can pursue misleading commercial practice claims. The legal exposure is compounded when users who did not know they were interacting with an AI later discover this fact and claim they would have acted differently — they would not have shared medical information, would not have accepted financial guidance, would not have agreed to a transaction. Each such claim is an individual complaint that becomes part of a pattern of systematic non-disclosure. The reputational damage is severe because AI transparency is a high-salience public concern — media coverage of an organisation secretly deploying AI agents to interact with customers without disclosure generates disproportionate attention and trust erosion. The remediation cost includes: retrospective notification to all affected users, interface redesign and retesting, regulatory engagement and potential fines, and the operational disruption of pausing or modifying agent deployments pending compliance.

Cross-references: AG-049 (Explainability Governance) provides the broader framework within which AI interaction notices operate. AG-011 (Agent Identity Governance) governs the identity attributes of the agent that must be consistent with AI nature. AG-451 (Plain-Language Duty Governance) ensures notice language meets plain-language standards. AG-453 (Adverse Action Notice Governance) governs the notices that follow when the AI agent makes adverse decisions — the interaction notice ensures the user knew they were dealing with an AI before that adverse decision. AG-455 (Synthetic Identity Disclosure Governance) addresses the related but distinct concern of AI-generated content that mimics a specific human identity. AG-456 (External Statement Approval Governance) governs approval for public-facing AI communications. AG-428 (Crisis Communication Approval Governance) governs AI communication during crisis situations. AG-048 (Cross-Border Data Sovereignty Governance) intersects where cross-jurisdictional disclosure requirements differ.

Cite this protocol
AgentGoverning. (2026). AG-454: AI Interaction Notice Placement Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-454