Audience-Specific Explanation Governance requires that organisations produce differentiated explanations of AI agent decisions tailored to the distinct needs, expertise levels, and statutory entitlements of each audience class — regulators, end-users, operators, and auditors. A single, one-size-fits-all explanation cannot simultaneously satisfy a regulator's need for compliance evidence, a consumer's right to understand a decision that affects them, an operator's need for actionable diagnostic detail, and an auditor's need for reproducible reasoning chains. This dimension mandates that explanation pipelines classify every explanation request by audience, apply audience-appropriate content selection and abstraction rules, and verify that each rendered explanation meets the minimum information requirements defined for its audience class — ensuring that no audience receives an explanation that is either incomprehensibly technical or misleadingly simplified.
Scenario A — Regulator Receives Consumer-Grade Explanation: A consumer credit agent denies a mortgage application. The applicant files a complaint with the national financial regulator. The regulator requests the explanation underlying the denial. The organisation produces the same explanation it gave the applicant: "Your application was declined because your overall risk profile did not meet our lending criteria." The regulator responds that this is insufficient — it does not identify which model features contributed to the decision, the relative weight of each feature, the regulatory thresholds applied, or the data sources consulted. The regulator commissions a Section 166 skilled-person review. The organisation cannot produce a regulator-grade explanation within the 14-day deadline because the explanation pipeline was designed to produce only consumer-facing summaries. The skilled-person review costs £285,000 and takes 5 months. The regulator finds that the organisation cannot demonstrate compliance with the obligation to provide meaningful explanations to affected individuals, because its explanation system was never designed to operate at multiple levels of detail.
What went wrong: The explanation pipeline had a single output format — a consumer-friendly summary. No audience classification existed, and no mechanism could generate the feature-level, evidence-linked explanation that a regulator requires. The organisation conflated "we explained it to the customer" with "we can explain it to anyone who asks." Consequence: £285,000 skilled-person review, 5-month remediation programme, regulatory finding for inadequate explainability infrastructure, reputational damage from published enforcement notice.
Scenario B — End-User Receives Technical Explanation: A public-sector benefits eligibility agent determines that a citizen is ineligible for a housing subsidy. The citizen exercises their right to an explanation under administrative law. The agent produces an explanation containing: "SHAP values for features income_band_3 (0.342), postcode_deprivation_index (−0.187), household_composition_vector [2,1,0,3] (0.091); decision boundary threshold 0.65; calibrated probability 0.58 with Platt scaling applied." The citizen does not understand the explanation and believes the decision was arbitrary. They engage a solicitor, who files a judicial review application arguing that the explanation was not meaningful because no reasonable citizen could interpret it. The court agrees — the right to an explanation means an explanation the recipient can understand, not a data dump in technical notation. The local authority incurs £127,000 in legal costs and must reassess 4,200 decisions made by the same agent during the period when only technical explanations were available.
What went wrong: The agent produced a technically accurate explanation, but for the wrong audience. The citizen needed: "Your application was declined because your household income (Band 3, £34,000-£41,000) exceeds the subsidy threshold of £33,500 for your household size." Instead they received model internals. Consequence: £127,000 legal costs, 4,200 decision reassessments, judicial finding that the organisation failed to provide meaningful explanations.
Scenario C — Operator Receives Insufficient Diagnostic Detail: An insurance claims agent begins producing an anomalous denial rate — 78% on a single Wednesday versus a 6-month average of 34%. The operations team requests an explanation of why specific claims are being denied. The explanation system produces consumer-grade summaries: "Your claim was declined because the damage described is not covered under your policy terms." The operator needs: which policy exclusion clause was matched, what confidence score the exclusion classifier returned, which input features drove the match, and whether a recent model update changed exclusion-matching behaviour. Without diagnostic-level explanation, the operator cannot determine whether the agent is functioning correctly or malfunctioning. The elevated denial rate persists for 9 days until a data scientist manually examines model outputs and discovers that a feature encoding error is causing the agent to misclassify covered claims as excluded. During those 9 days, 1,340 claims are incorrectly denied, generating £2.3 million in remediation costs including interest, complaint handling, and regulatory reporting.
What went wrong: The operator received an explanation designed for the claimant — correct in tone but useless for diagnosis. No operator-grade explanation existed that would have shown the feature-level detail needed to identify the encoding error within hours rather than days. Consequence: 9 days of undetected malfunction, 1,340 incorrect denials, £2.3 million remediation.
Scope: This dimension applies to every AI agent deployment where the agent produces decisions, recommendations, or actions that may require explanation to more than one audience class. An audience class is any distinct group with different information needs, expertise levels, or statutory entitlements regarding explanations — including but not limited to: end-users or affected individuals (consumers, citizens, patients, employees), operators (the team responsible for running the agent), regulators and supervisory authorities, auditors (internal and external), data subjects exercising rights under data protection law, and legal counsel in dispute resolution. If the agent's decisions are only ever reviewed by a single audience (e.g., an internal copilot used exclusively by its operator), the audience classification requirement still applies — the operator audience must be formally defined — but the multi-audience rendering requirements are reduced. The scope includes the audience classification mechanism, the content selection rules for each audience, the abstraction and rendering logic, and the verification that each audience-specific explanation meets its defined minimum information requirements.
4.1. A conforming system MUST define a formal audience taxonomy that classifies all explanation recipients into named audience classes, each with documented information needs, expertise assumptions, statutory entitlements, and minimum information requirements.
4.2. A conforming system MUST classify every explanation request by audience class before rendering the explanation, using explicit classification criteria rather than a default or fallback format.
4.3. A conforming system MUST generate explanations that satisfy the minimum information requirements defined for the requesting audience class, verified against a checklist of required information elements per audience class (a minimal sketch of this taxonomy and verification follows this list).
4.4. A conforming system MUST ensure that consumer-facing or citizen-facing explanations are comprehensible to a person without technical expertise in machine learning, statistical modelling, or data science — validated through readability assessment or user comprehension testing, with a target reading level no higher than a secondary-school standard.
4.5. A conforming system MUST ensure that regulator-facing explanations include: the specific decision output, the features or factors that materially contributed to the decision, the data sources consulted, the applicable thresholds or decision boundaries, the regulatory rules or constraints applied, and a traceable link to the underlying decision record per AG-415.
4.6. A conforming system MUST ensure that operator-facing explanations include diagnostic-level detail sufficient for the operator to determine whether the agent is functioning correctly — including feature contributions, confidence scores, model version, and any anomaly indicators.
4.7. A conforming system MUST ensure that auditor-facing explanations include sufficient detail to reproduce or independently verify the decision logic, including the complete reasoning chain, all inputs consumed, and the decision journal reference per AG-415.
4.8. A conforming system MUST prevent information leakage between audience-specific explanations — specifically, consumer-facing explanations MUST NOT expose proprietary model internals, trade-secret features, or security-sensitive system architecture details that are appropriate only for regulator or auditor audiences.
4.9. A conforming system SHOULD implement automated audience detection based on the request channel, requester role, or regulatory context, reducing reliance on manual audience classification.
4.10. A conforming system SHOULD support explanation escalation — when a recipient indicates that an explanation is insufficient, the system should offer a more detailed explanation at the next audience tier (e.g., consumer to consumer-detailed, then to regulator-equivalent if statutorily entitled).
4.11. A conforming system MAY implement explanation preview and review workflows where domain experts validate audience-specific explanations before delivery, particularly for high-stakes decisions or novel decision patterns.
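The taxonomy, classification, and verification requirements above (4.1 through 4.3) compose naturally into a data-driven structure. The following is a minimal Python sketch in which all class names, element keys, channels, and channel-to-audience rules are illustrative assumptions, not prescribed by this dimension; it shows the shape of the mechanism, not a definitive implementation.

```python
# Minimal sketch of requirements 4.1-4.3: a data-driven audience taxonomy,
# explicit request classification, and checklist verification of a rendered
# explanation. All names below are hypothetical.
from dataclasses import dataclass

# 4.1 — named audience classes, each with minimum information requirements
# expressed as required element keys.
AUDIENCE_TAXONOMY = {
    "consumer":  {"required_elements": {"decision_outcome", "principal_reason", "next_steps"}},
    "operator":  {"required_elements": {"decision_outcome", "feature_contributions",
                                        "confidence_score", "model_version", "anomaly_indicators"}},
    "regulator": {"required_elements": {"decision_outcome", "feature_contributions",
                                        "data_sources", "thresholds_applied",
                                        "regulatory_rules", "decision_record_ref"}},
    "auditor":   {"required_elements": {"decision_outcome", "reasoning_chain",
                                        "inputs_consumed", "decision_record_ref"}},
}

@dataclass
class ExplanationRequest:
    channel: str          # e.g. "customer_portal", "supervisory_gateway" (assumed)
    requester_role: str   # e.g. "data_subject", "ops_engineer" (assumed)

# 4.2 — classify every request with explicit criteria; raise rather than
# silently falling back to a default format.
def classify_audience(request: ExplanationRequest) -> str:
    rules = {
        ("customer_portal", "data_subject"):  "consumer",
        ("ops_console", "ops_engineer"):      "operator",
        ("supervisory_gateway", "supervisor"): "regulator",
        ("audit_portal", "auditor"):          "auditor",
    }
    key = (request.channel, request.requester_role)
    if key not in rules:
        raise ValueError(f"No audience classification rule for {key}")
    return rules[key]

# 4.3 — verify a rendered explanation against its audience checklist; an
# empty result means the minimum information requirements are met.
def verify_minimum_information(audience: str, explanation: dict) -> list[str]:
    required = AUDIENCE_TAXONOMY[audience]["required_elements"]
    return sorted(required - explanation.keys())
```

Because the taxonomy is plain data, a new audience class can be introduced by adding an entry with its required elements, without touching the classification or verification logic — the property the advanced maturity level below describes as dynamic audience classes.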
The right to explanation is not a single, uniform entitlement. Different audiences have fundamentally different needs from an explanation, and an explanation that serves one audience well may be useless or harmful to another. This asymmetry is not a convenience consideration — it is a structural feature of how explanation obligations arise in law, regulation, and governance practice.
Consider the range of statutory and regulatory explanation obligations. Under the EU AI Act, Article 13 requires that high-risk AI systems be designed to be "sufficiently transparent to enable deployers to interpret the system's output and use it appropriately" — an operator-facing obligation. Simultaneously, Article 86 gives affected persons "the right to obtain clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken" — a citizen-facing obligation at a fundamentally different abstraction level. Under the UK FCA's Consumer Duty (PS22/9), firms must ensure that consumers can "understand information they are given, make effective and timely decisions, and act on them" — requiring explanations calibrated to consumer comprehension. Under SOX Sections 302 and 404, management must certify that internal controls are effective, requiring explanations sufficient for audit verification — an auditor-facing standard. These are not variations of the same explanation; they are categorically different explanation types serving different functions.
The risk of a single-format explanation pipeline is severe in both directions. An explanation that is too simple for a regulator fails to demonstrate compliance and invites enforcement action (Scenario A). An explanation that is too technical for a consumer fails to satisfy the right to a meaningful explanation and invites legal challenge (Scenario B). An explanation that lacks diagnostic depth for an operator prevents timely detection of malfunction and extends the blast radius of errors (Scenario C). These are not edge cases — they are the predictable consequences of designing an explanation system that does not account for audience diversity.
The audience-specific approach also addresses a subtler risk: information leakage. A consumer-facing explanation that reveals proprietary model features, confidence thresholds, or decision boundaries may expose trade secrets or enable gaming. A regulator-facing explanation sent to a consumer may contain technical detail that confuses rather than informs. Each audience class needs an explanation that is both sufficient for its purposes and appropriately bounded — not too little, not too much, and not the wrong kind. This requires a deliberate content selection mechanism, not simply more or less of the same content.
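One way to make this leakage boundary structural rather than editorial is a default-deny projection: each audience view is produced by filtering the decision record through a per-audience field allowlist, so a field added to the record tomorrow is withheld from consumers until it is deliberately classified. A minimal sketch, with illustrative field names:

```python
# Sketch of a default-deny content gate for requirement 4.8. Anything not
# explicitly permitted for an audience class is withheld from its view.
# Field names and the class set are illustrative assumptions.
FIELD_ALLOWLIST = {
    "consumer":  {"decision_outcome", "principal_reason", "next_steps"},
    "regulator": {"decision_outcome", "feature_contributions", "data_sources",
                  "thresholds_applied", "regulatory_rules", "decision_record_ref"},
}

def project_for_audience(decision_record: dict, audience: str) -> dict:
    """Allowlist projection: new or unclassified fields never leak by default."""
    allowed = FIELD_ALLOWLIST[audience]
    return {k: v for k, v in decision_record.items() if k in allowed}
```

The design choice matters: a blocklist fails open when a new proprietary field is added to the decision record, whereas an allowlist fails closed, which is the safer default for trade-secret and security-sensitive content.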
Furthermore, explanation is not merely a compliance exercise but a trust infrastructure. When a consumer receives an explanation they can actually understand, trust in the system increases and complaint rates decrease. When an operator receives actionable diagnostic detail, mean-time-to-detection for malfunctions decreases dramatically. When a regulator receives structured, evidence-linked explanations, supervisory confidence increases and the frequency and intensity of examinations decreases. Audience-specific explanation governance is therefore both a risk-reduction measure and an operational efficiency investment.
Audience-Specific Explanation Governance requires a layered explanation architecture where a single decision event generates multiple explanation renderings, each tailored to the defined information needs and comprehension level of its target audience class. The core mechanism is: one decision record (per AG-415), multiple explanation views.
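Concretely, the mechanism can be pictured as one immutable decision record feeding several renderer functions, one per audience class. The renderer names and record fields in this sketch are hypothetical; it illustrates only the one-record, many-views shape, not a prescribed interface.

```python
# Sketch of "one decision record, multiple explanation views". Each renderer
# consumes the same decision record (per AG-415) and emits a view at its
# audience's abstraction level. All field names are assumptions.
def render_consumer_view(record: dict) -> str:
    # Plain-language outcome and principal reason only.
    return (f"Your application was {record['decision_outcome']} because "
            f"{record['principal_reason_plain']}.")

def render_operator_view(record: dict) -> dict:
    # Diagnostic detail sufficient to judge whether the agent is healthy.
    return {k: record[k] for k in
            ("decision_outcome", "feature_contributions",
             "confidence_score", "model_version", "anomaly_indicators")}

def render_regulator_view(record: dict) -> dict:
    # Evidence-linked detail traceable back to the decision record.
    view = {k: record[k] for k in
            ("decision_outcome", "feature_contributions", "data_sources",
             "thresholds_applied", "regulatory_rules")}
    view["decision_record_ref"] = record["record_id"]
    return view

RENDERERS = {
    "consumer":  render_consumer_view,
    "operator":  render_operator_view,
    "regulator": render_regulator_view,
}

def explain(record: dict, audience: str):
    # One decision record in, one audience-specific view out.
    return RENDERERS[audience](record)
```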
Recommended patterns: derive every audience view from a single authoritative decision record (per AG-415) rather than maintaining parallel explanation stores; classify the audience before rendering, never by adjusting a default format afterwards; render through audience-specific templates with content gates that enforce each class's minimum information checklist; and treat the audience taxonomy as a governed, versioned artefact that is reviewed when new regulations or use cases introduce new audiences.
Anti-patterns to avoid: a single-format pipeline that serves one rendering to every requester (Scenarios A through C); ad hoc manual editing of explanations per request without governed standards; generating the consumer view by truncating or diluting the technical view, which produces the wrong kind of content rather than the right content at the right abstraction; and blocklist-based redaction instead of deliberate, allowlist-based content selection.
Financial Services. Financial regulators (FCA, SEC, FINMA, BaFin) have specific expectations about explanation content for different audiences. The FCA Consumer Duty requires that consumer explanations enable informed decision-making — not merely that an explanation exists, but that it is comprehensible and actionable. Simultaneously, the FCA expects firms to be able to provide supervisory authorities with feature-level explanations demonstrating that lending decisions are not discriminatory. Firms should define at least four audience classes: consumer, operator, regulator, and auditor, with the regulator class potentially sub-divided by jurisdiction (FCA, PRA, ECB).
Public Sector. Administrative law in most jurisdictions requires that government decisions affecting individuals are accompanied by reasons that the individual can understand and challenge. Judicial review of algorithmic decisions consistently holds that technical explanations do not satisfy this requirement. Public-sector organisations should calibrate consumer-facing explanations to the reading level and language needs of their specific population — benefits claimants, immigration applicants, or licence holders may require explanations at lower reading levels than a general consumer audience.
Healthcare. Patient-facing explanations of clinical decision support outputs must be comprehensible to patients while clinician-facing explanations must include clinical reasoning detail sufficient for the clinician to exercise independent judgement. The patient audience class requires particular sensitivity to health literacy levels, which are significantly lower than general literacy in most populations.
Cross-Border Operations. Agents operating across jurisdictions must map audience classes to jurisdiction-specific explanation rights. A consumer explanation that satisfies GDPR Article 22 may not satisfy the Australian Privacy Act's APP 1.4 transparency requirements. The audience taxonomy must accommodate jurisdictional variation, potentially requiring different minimum information elements for the same audience class in different jurisdictions.
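One way to represent this jurisdictional variation is to key minimum information elements by (audience class, jurisdiction) pair and take the union across all applicable jurisdictions, which yields the most stringent combined requirement described in the advanced maturity level below. A sketch with invented element names and jurisdiction codes:

```python
# Sketch of jurisdiction-aware minimum information elements. The same
# audience class carries different required elements per jurisdiction;
# element names and jurisdiction keys are illustrative assumptions.
JURISDICTIONAL_ELEMENTS = {
    ("consumer", "EU"): {"decision_outcome", "main_elements_of_decision", "role_of_ai_system"},
    ("consumer", "UK"): {"decision_outcome", "principal_reason", "actionable_next_steps"},
    ("consumer", "AU"): {"decision_outcome", "principal_reason", "collection_transparency"},
}

def required_elements(audience: str, jurisdictions: list[str]) -> set[str]:
    """Union across applicable jurisdictions = most stringent combined requirement."""
    combined: set[str] = set()
    for j in jurisdictions:
        combined |= JURISDICTIONAL_ELEMENTS.get((audience, j), set())
    return combined
```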
Basic Implementation — The organisation has defined an audience taxonomy with at least three audience classes (consumer, operator, regulator/auditor). Each audience class has documented minimum information requirements. Explanation requests are classified by audience before rendering. Consumer-facing explanations are reviewed for readability. Regulator-facing explanations include feature-level detail and decision record references. This level meets the minimum mandatory requirements (4.1 through 4.8) and addresses the most damaging failure modes.
Intermediate Implementation — All basic capabilities plus: automated audience detection based on request channel and requester role. Automated readability scoring for consumer-facing explanations with a defined complexity threshold. Explanation escalation is supported — consumers can request additional detail. Template-based rendering with content gates ensures consistent quality across all audience classes. The audience taxonomy is reviewed quarterly and updated when new regulations or use cases introduce new audience needs. Explanation coverage metrics track the percentage of decisions for which audience-appropriate explanations are available.
Advanced Implementation — All intermediate capabilities plus: user comprehension testing validates that consumer-facing explanations are actually understood by representative recipients. Real-time explanation quality monitoring detects drift in readability, completeness, or accuracy across audience classes. The explanation pipeline supports dynamic audience classes — new audience types can be added by defining their information requirements without modifying the core pipeline. Cross-jurisdictional audience mapping ensures that explanations comply with the most stringent applicable explanation right for each audience-jurisdiction combination. Independent audit of explanation quality is conducted annually.
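The automated readability scoring introduced at the intermediate level can be as simple as a Flesch-Kincaid grade gate on the rendered consumer text. The sketch below is self-contained, with a deliberately crude syllable heuristic; a real deployment would use a maintained readability library or, better, the comprehension testing described at the advanced level. The grade threshold of 9 is an assumed stand-in for the secondary-school standard in 4.4.

```python
# Sketch of an automated readability gate for consumer-facing explanations.
# The syllable heuristic is intentionally naive; threshold is illustrative.
import re

def _count_syllables(word: str) -> int:
    # Approximate syllables as vowel groups; every word has at least one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Standard Flesch-Kincaid grade-level formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(_count_syllables(w) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

def gate_consumer_explanation(text: str, max_grade: float = 9.0) -> None:
    # Block delivery of consumer text that exceeds the complexity threshold.
    grade = flesch_kincaid_grade(text)
    if grade > max_grade:
        raise ValueError(f"Consumer explanation fails readability gate: grade {grade:.1f}")
```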
Required artefacts: the audience taxonomy document with per-class minimum information requirements (4.1); the classification criteria mapping request channels and requester roles to audience classes (4.2); the rendering templates and content gates for each class; readability assessment or comprehension-testing results for consumer-facing explanations (4.4); and verification records demonstrating that delivered explanations met their class checklists (4.3).
Retention requirements:
Access requirements:
Test 8.1: Audience Taxonomy Completeness and Classification
Test 8.2: Consumer Explanation Readability Verification
Test 8.3: Regulator Explanation Minimum Information Completeness
Test 8.4: Operator Diagnostic Sufficiency
Test 8.5: Information Leakage Prevention Between Audience Classes (an automated test sketch follows this list)
Test 8.6: Auditor Explanation Reproducibility
Test 8.7: Explanation Escalation Path
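As a sketch of how Tests 8.2 and 8.5 might be automated, assuming the hypothetical project_for_audience and flesch_kincaid_grade helpers sketched earlier in this section (the module name, field names, gated-field set, and grade threshold are all illustrative assumptions):

```python
# Hypothetical automated checks for Tests 8.2 and 8.5, in pytest style.
# Assumes the helpers sketched earlier live in an (invented) module.
from explanation_pipeline import project_for_audience, flesch_kincaid_grade

# Fields that must never appear in a consumer view (illustrative set).
GATED_FIELDS = {"feature_contributions", "thresholds_applied", "model_version"}

def test_consumer_view_leaks_no_gated_fields():
    # Test 8.5: consumer rendering exposes no proprietary model internals.
    record = {"decision_outcome": "declined",
              "principal_reason": "income above threshold",
              "feature_contributions": {"income_band": 0.34},
              "thresholds_applied": [0.65]}
    view = project_for_audience(record, "consumer")
    assert not GATED_FIELDS & view.keys()

def test_consumer_view_meets_reading_level():
    # Test 8.2: consumer text satisfies the assumed readability threshold.
    text = "Your application was declined because your household income is above the limit."
    assert flesch_kincaid_grade(text) <= 9.0
```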
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 13 (Transparency and Provision of Information) | Direct requirement |
| EU AI Act | Article 86 (Right to Explanation of Individual Decision-Making) | Direct requirement |
| SOX | Section 302/404 (Internal Controls and Certification) | Supports compliance |
| FCA SYSC | 2.1.1R (Apportionment of Responsibilities) | Supports compliance |
| FCA Consumer Duty | PS22/9 (Consumer Understanding Outcome) | Direct requirement |
| NIST AI RMF | MAP 5.1, GOVERN 1.5 | Supports compliance |
| ISO 42001 | Clause 9.3 (Management Review) | Supports compliance |
| DORA | Article 11 (Communication) | Supports compliance |
Article 13 requires that high-risk AI systems are "designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately." This is explicitly an operator-facing transparency requirement — the deployer needs technical-operational understanding, not a consumer-friendly summary. Article 86 separately establishes the right of affected persons to "clear and meaningful explanations" — a citizen-facing requirement at a fundamentally different abstraction level. The coexistence of these two articles in the same regulation demonstrates the legislative recognition that different audiences require different explanation types. AG-449 operationalises this dual obligation by requiring distinct audience classes for operators (Article 13) and affected persons (Article 86), each with appropriate content and abstraction levels.
The FCA Consumer Duty requires firms to ensure that "consumers can understand the information they are given, make effective and timely decisions, and act on them." This is not a general transparency requirement — it is a consumer-comprehension requirement. An explanation that is accurate but incomprehensible to the consumer fails the Consumer Duty. The FCA has explicitly stated that firms should test consumer understanding, not merely assert it. AG-449's readability verification requirement (4.4) and the comprehension testing in the advanced maturity model directly address this obligation. Firms that produce only technical explanations — even accurate ones — will fail Consumer Duty assessments.
SOX requires management to certify the effectiveness of internal controls and auditors to attest to that effectiveness. When AI agents make financial reporting decisions, the explanations provided to auditors must be sufficient for the auditor to independently assess control effectiveness. This requires a different explanation than what a consumer or operator needs — the auditor needs a reproducible reasoning chain linked to the decision record. AG-449's auditor audience class (4.7) directly supports SOX compliance by ensuring that audit-grade explanations are available.
MAP 5.1 addresses "the intended and potential purposes, context of use, and expected benefits and costs" of AI systems, which requires stakeholder-appropriate communication. GOVERN 1.5 addresses transparency and documentation practices. Together, these functions recognise that different stakeholders in the AI lifecycle require different types of information about AI system behaviour. AG-449 implements this principle at the explanation level.
DORA Article 11 requires financial entities to have communication plans for ICT-related incidents that distinguish between internal reporting (operational staff), external reporting (regulators, supervisory authorities), and public/customer communication. This audience-differentiated communication requirement directly parallels AG-449's audience-specific explanation requirement — the same event (an AI agent decision or incident) requires different communication for different recipients.
ISO 42001 requires management review of the AI management system, which includes review of AI system performance and effectiveness. The management audience requires a different explanation abstraction than operational staff or external regulators. AG-449's audience taxonomy supports the multi-level reporting that ISO 42001 management review requires.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Organisation-wide — affects every decision the agent explains, across all audiences and all decision types; cascading regulatory exposure across multiple compliance obligations simultaneously |
Consequence chain: The absence of audience-specific explanation governance produces a cascade of failures across different audience relationships simultaneously. The immediate technical failure is that explanations are either undifferentiated (one format for all) or arbitrarily differentiated (ad hoc adjustments without governed standards). The consumer-facing consequence is that affected individuals receive explanations they cannot understand (Scenario B), undermining trust, increasing complaint volumes, and creating legal exposure under administrative law and consumer protection regulation. The regulator-facing consequence is that supervisory authorities receive explanations that lack the feature-level, evidence-linked detail they require for compliance assessment (Scenario A), triggering skilled-person reviews, formal enforcement actions, and increased supervisory intensity. The operator-facing consequence is that the operations team cannot diagnose malfunctions from explanations (Scenario C), extending the mean-time-to-detection for errors and increasing the blast radius of every agent malfunction. The auditor-facing consequence is that audit opinions cannot be issued with confidence, potentially leading to qualified opinions or disclaimers that trigger board-level governance escalations. The compounding effect is particularly severe: a single explanation pipeline failure simultaneously violates the EU AI Act (Articles 13 and 86), the FCA Consumer Duty (consumer understanding outcome), SOX (audit evidence sufficiency), and administrative law (right to reasons) — creating multi-front regulatory exposure from a single root cause.
Cross-references: AG-049 (Explainability Governance), AG-415 (Decision Journal Completeness Governance), AG-450 (Decision Summary Provenance Governance), AG-451 (Plain-Language Duty Governance), AG-452 (Counterfactual Explanation Governance), AG-453 (Adverse Action Notice Governance), AG-458 (Uncertainty Disclosure Threshold Governance), AG-442 (Confidence Calibration Interface Governance), AG-440 (Oversight Ergonomic Design Governance).