AG-276

Policy Explainability Schema Governance

Policy Semantics, Rule Engine & Control Logic · ~15 min read · AGS v2.1 · April 2026
EU AI Act · GDPR · FCA · NIST · ISO 42001

2. Summary

Policy Explainability Schema Governance requires that every policy decision produced by an AI agent can be explained using a structured, consistent logic trace that identifies: which rules were evaluated, which inputs were considered, which rules matched, how conflicts were resolved, and what the final outcome was — all in a machine-readable format that can be rendered for human consumption. This dimension mandates that explanations are not generated after the fact from logs but are produced as a by-product of the decision process itself, ensuring that the explanation is a faithful representation of the actual decision logic. The explanation schema must be consistent across all decisions, enabling automated analysis, comparison, and audit.
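By way of illustration, a stored explanation record conforming to such a schema might look like the following. The field names and values are hypothetical and are not mandated by this dimension:

```python
# Hypothetical explanation record captured at decision time; field names are illustrative.
explanation_record = {
    "decision_id": "dec-2026-04-17-000123",
    "policy_version": "credit-policy@3.4.1",          # pinned per AG-269
    "inputs_considered": {"credit_score": 572, "income_gbp": 31000, "dti_ratio": 0.41},
    "rules_evaluated": [
        {"rule_id": "R-MIN-SCORE", "matched": True,  "value": 572,  "threshold": 580},
        {"rule_id": "R-MAX-DTI",   "matched": False, "value": 0.41, "threshold": 0.45},
    ],
    "conflict_resolution": None,                       # per AG-272; no conflict arose here
    "outcome": "REJECTED",
}
```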

3. Example

Scenario A — Customer Complaint Met with Opaque Rejection: A customer-facing agent rejects a credit application. The customer contacts the organisation and asks why. The support agent queries the decision log and finds a record showing "REJECTED" with a timestamp and an internal rule ID ("R-4471"). The support agent has no way to translate R-4471 into a meaningful explanation. The customer escalates to the regulator, citing the right to an explanation under GDPR Article 22. The organisation cannot provide a meaningful explanation because the decision log does not capture the decision logic — only the outcome.

What went wrong: The system logged the decision outcome but not the decision reasoning. No structured explanation was generated at decision time. Reconstructing the reasoning after the fact required a developer to re-run the decision engine in debug mode — a process that took 3 days and could not be guaranteed to reproduce the original conditions. Consequence: GDPR complaint, regulatory investigation for failure to provide meaningful explanation, reputational damage, estimated cost of £180,000 including legal and remediation.

Scenario B — Inconsistent Explanations Undermine Regulatory Audit: A financial-value agent produces explanations for its risk scoring decisions. However, the explanation format varies: some explanations list the rules that matched, others describe the outcome in natural language, and others provide a numeric score breakdown. When a regulator audits 500 decisions, they find that 120 explanations are in different formats, making systematic analysis impossible. The regulator cannot verify that the same logic was applied consistently because the explanation format does not support comparison.

What went wrong: The system did not enforce a consistent explanation schema. Different code paths, rule engine versions, and edge cases produced explanations in different formats. Consequence: Regulatory finding for inadequate transparency, requirement to re-audit all decisions with a consistent explanation format, estimated compliance cost of £650,000.

Scenario C — Explanation Diverges From Actual Decision Logic: An enterprise workflow agent generates explanations using a post-hoc explanation module that approximates the decision logic. The actual decision is made by evaluating 12 rules with complex precedence. The explanation module simplifies this to "top 3 contributing factors" using a separate algorithm. In 8% of cases, the "top 3 contributing factors" listed in the explanation were not actually the factors that determined the outcome — a different rule with higher precedence was the actual determinant, but the explanation module ranked it lower because it applied to fewer cases overall.

What went wrong: The explanation was generated by a separate module that approximated the decision logic rather than recording the actual decision logic. The approximation was unfaithful in 8% of cases. Consequence: 8% of explanations are misleading, regulatory risk if a customer acts on an incorrect explanation, legal liability if the organisation represents the explanation as accurate.

4. Requirement Statement

Scope: This dimension applies to all AI agents that make decisions subject to explanation requirements — whether regulatory (GDPR Article 22, EU AI Act Article 13, Equal Credit Opportunity Act adverse action notices), contractual (SLA transparency commitments), or operational (internal audit requirements). Any agent whose decisions may need to be explained to a customer, regulator, auditor, or internal stakeholder is within scope. The scope extends to all decision types: approvals, rejections, escalations, risk scores, recommendations, and any other outcome that a party may question.

4.1. A conforming system MUST generate a structured explanation for every decision at decision time, as a by-product of the decision process itself — not as a post-hoc reconstruction.

4.2. A conforming system MUST use a consistent explanation schema across all decisions, with defined fields that are always present: decision identifier, policy version (per AG-269), rules evaluated, rules matched, inputs considered, conflict resolutions applied (per AG-272), and final outcome. (An illustrative schema sketch follows the requirements list.)

4.3. A conforming system MUST ensure that the explanation faithfully represents the actual decision logic — the rules listed as matched must be the rules that actually matched, the inputs listed must be the inputs that were actually considered, and the conflict resolution described must be the resolution that actually occurred.

4.4. A conforming system MUST store explanations alongside decision records, with the same retention period and access requirements as the decision itself.

4.5. A conforming system MUST support rendering explanations in at least two formats: a machine-readable format (e.g. JSON or XML) for automated analysis and audit, and a human-readable format for customer-facing and regulatory purposes.

4.6. A conforming system SHOULD include in the explanation the specific input values that influenced the outcome (e.g., "credit score: 572, threshold: 580 — below threshold"), with appropriate redaction of sensitive data for external-facing explanations.

4.7. A conforming system SHOULD support counterfactual explanations — "what would need to change for the outcome to be different" (e.g., "if your credit score were 580 or above, the application would have been approved").

4.8. A conforming system SHOULD enable automated verification that explanations are consistent with decision outcomes — a verification tool that, given the explanation, can reconstruct the decision and confirm the outcome matches.

4.9. A conforming system MAY implement explanation comparison tooling that identifies decisions with identical inputs but different explanations, flagging potential inconsistencies.
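The sketch below shows one way requirements 4.2, 4.5 and 4.6 might be realised in code. It is illustrative only: the class names, field names and rendering choices are assumptions, not normative.

```python
from dataclasses import dataclass, asdict
from typing import Any, Optional
import json

@dataclass(frozen=True)
class RuleEvaluation:
    rule_id: str
    matched: bool
    value: Any          # input value the rule was evaluated against
    threshold: Any      # threshold it was compared to

@dataclass(frozen=True)
class Explanation:
    decision_id: str
    policy_version: str                       # pinned per AG-269
    inputs_considered: dict[str, Any]
    rules_evaluated: list[RuleEvaluation]     # covers "rules evaluated" and "rules matched"
    conflict_resolution: Optional[str]        # per AG-272
    outcome: str

    def to_machine_readable(self) -> str:
        """Canonical machine-readable rendering for automated analysis and audit (4.5)."""
        return json.dumps(asdict(self), indent=2, default=str)

    def to_human_readable(self, redact: frozenset = frozenset()) -> str:
        """Plain-language rendering, with sensitive inputs redacted for external use (4.5, 4.6)."""
        lines = [f"Decision {self.decision_id}: {self.outcome}",
                 f"Policy version: {self.policy_version}"]
        for ev in self.rules_evaluated:
            if ev.matched:
                lines.append(f"- Rule {ev.rule_id} applied: value {ev.value} vs threshold {ev.threshold}")
        if self.conflict_resolution:
            lines.append(f"Conflict resolution: {self.conflict_resolution}")
        visible = {k: v for k, v in self.inputs_considered.items() if k not in redact}
        lines.append(f"Inputs considered: {visible}")
        return "\n".join(lines)
```

Enforcing a single typed structure that every decision path must populate is what prevents the format drift described in Scenario B.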

5. Rationale

Explainability is not a feature — it is a governance requirement with legal force. GDPR Article 22 grants data subjects the right "not to be subject to a decision based solely on automated processing" and requires "meaningful information about the logic involved." The EU AI Act Article 13 requires transparency for high-risk AI systems, including "an adequate understanding of the system's output." The Equal Credit Opportunity Act (ECOA) requires adverse action notices that explain the specific reasons for a credit denial.

These requirements cannot be met with opaque decision logs that record only the outcome. They require structured explanations that describe the decision logic in terms a reasonable person (or a regulatory auditor) can understand. The structured schema requirement (4.2) ensures that explanations are machine-analysable — enabling an auditor to systematically verify that the same logic was applied to all decisions of a given type, rather than reviewing each explanation individually.
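As a sketch of the kind of automated analysis this enables, and of the comparison tooling contemplated in 4.9, the following assumes stored records shaped like the example in the Summary; the helper names are hypothetical:

```python
from collections import defaultdict
import hashlib
import json

def input_fingerprint(record: dict) -> str:
    """Hash the considered inputs so decisions on identical inputs can be grouped."""
    canonical = json.dumps(record["inputs_considered"], sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_inconsistencies(records: list[dict]) -> list[tuple[str, set]]:
    """Flag input fingerprints that produced more than one distinct outcome (cf. 4.9)."""
    outcomes = defaultdict(set)
    for record in records:
        outcomes[input_fingerprint(record)].add(record["outcome"])
    return [(fp, outs) for fp, outs in outcomes.items() if len(outs) > 1]
```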

The faithfulness requirement (4.3) addresses the most dangerous failure mode in explainability: the unfaithful explanation. When the explanation is generated by a separate module that approximates the decision logic, the explanation may diverge from the actual logic. An unfaithful explanation is worse than no explanation because it actively misleads the recipient — and because the organisation may be held to the explanation it provided. If the explanation says "your application was rejected because your income is below £25,000" but the actual reason was a different rule, the organisation has provided a false explanation that may create legal liability.

The requirement to generate explanations at decision time (4.1) is the mechanism that ensures faithfulness. If the explanation is a by-product of the decision process — the same code path that evaluates rules also records which rules matched — then the explanation cannot diverge from the actual logic because it is the actual logic, recorded as it executes.
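Continuing the schema sketch given after the requirements list, the following is a minimal illustration of decision-time generation and replay verification (requirements 4.1 and 4.8). It assumes a simplified threshold-rule representation; real rule engines will differ:

```python
import operator

OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def evaluate(decision_id: str, policy_version: str, rules: list[dict], inputs: dict) -> Explanation:
    """Evaluate rules in precedence order, recording the trace as it executes (4.1, 4.3)."""
    evaluations, outcome = [], "APPROVED"
    for rule in sorted(rules, key=lambda r: r["precedence"]):
        value = inputs[rule["input"]]
        matched = OPS[rule["op"]](value, rule["threshold"])
        evaluations.append(RuleEvaluation(rule["id"], matched, value, rule["threshold"]))
        if matched:
            outcome = rule["outcome"]
            break   # first match in precedence order decides (cf. AG-272, AG-135)
    return Explanation(decision_id, policy_version, dict(inputs), evaluations, None, outcome)

def verify(explanation: Explanation, pinned_rules: list[dict]) -> bool:
    """Replay the recorded inputs against the pinned policy version and confirm the outcome (4.8)."""
    replay = evaluate(explanation.decision_id, explanation.policy_version,
                      pinned_rules, explanation.inputs_considered)
    return replay.outcome == explanation.outcome

# Illustrative use with hypothetical rules:
rules = [
    {"id": "R-MIN-SCORE", "input": "credit_score", "op": "<", "threshold": 580,
     "precedence": 1, "outcome": "REJECTED"},
    {"id": "R-MAX-DTI", "input": "dti_ratio", "op": ">", "threshold": 0.45,
     "precedence": 2, "outcome": "REJECTED"},
]
exp = evaluate("dec-000123", "credit-policy@3.4.1", rules, {"credit_score": 572, "dti_ratio": 0.41})
assert verify(exp, rules)
```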

6. Implementation Guidance

Recommended patterns:

Anti-patterns to avoid:

Industry Considerations

Financial Services. ECOA adverse action notices require specific reasons for credit denials. The explanation schema must produce outputs that map to the 16 standard adverse action reason codes or their equivalents. The FCA expects firms to explain automated decisions in terms customers can understand. MiFID II suitability assessments must be explained with reference to the customer's stated objectives and financial situation.

Healthcare. Clinical decision explanations must reference the clinical rules and patient data that influenced the recommendation. Explanations directed at clinicians can use medical terminology; explanations directed at patients must use plain language. The explanation must distinguish between the AI recommendation and the clinician's final decision.

Critical Infrastructure. Operational decision explanations must reference the sensor data, parameter values, and safety rules that influenced the action. Explanations are critical for incident investigation: the investigation team must understand exactly why the system took a specific action under specific conditions.

Maturity Model

Basic Implementation — Every decision produces a structured explanation in a defined schema. The explanation includes: rules evaluated, rules matched, inputs considered, and outcome. Explanations are stored alongside decision records. A human-readable format is available. Explanations are generated at decision time by the rule engine.

Intermediate Implementation — The explanation schema is canonical and enforced — no decision can be recorded without a schema-conformant explanation. Conflict resolution is included in explanations. Dual-format rendering produces machine-readable and human-readable versions. Automated verification checks a sample of explanations for faithfulness. Counterfactual explanations are generated for customer-facing rejections.

Advanced Implementation — All intermediate capabilities plus: explanation consistency checking identifies decisions with identical inputs but different explanations. Explanation comparison tooling supports regulatory audit across large decision sets. Faithfulness verification covers 100% of decisions (not just a sample). Explanations are signed alongside the decision and policy version, creating a tamper-evident record. Independent third-party verification of explanation faithfulness for regulatory-critical decision types.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Explanation Completeness

Test 8.2: Explanation Faithfulness

Test 8.3: Conflict Resolution in Explanations

Test 8.4: Schema Consistency Across Decision Types

Test 8.5: Dual-Format Rendering

Test 8.6: Counterfactual Accuracy

Test 8.7: Explanation Retention and Retrieval

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
GDPR | Article 22 (Automated Decision-Making), Recital 71 | Direct requirement
EU AI Act | Article 13 (Transparency) | Direct requirement
ECOA | Regulation B, 12 CFR 1002.9 (Adverse Action Notices) | Direct requirement
FCA CONC | 7.9 (Adequate Explanations) | Direct requirement
NIST AI RMF | MEASURE 2.7, MANAGE 3.1 | Supports compliance
ISO 42001 | Clause 5.2 (AI Policy), Clause 9.1 (Monitoring) | Supports compliance

GDPR — Article 22, Recital 71

Article 22 provides data subjects with the right not to be subject to decisions based solely on automated processing that produce legal or significant effects. Recital 71 specifies the right to "an explanation of the decision reached after such assessment." Policy explainability schema governance implements this right by ensuring that every automated decision produces a structured explanation that can be rendered in a meaningful format for the data subject. The explanation must be specific — not a generic statement about the algorithm, but a description of how the specific decision was reached for the specific individual.

EU AI Act — Article 13 (Transparency)

Article 13 requires providers of high-risk AI systems to ensure that the system is "designed and developed in such a way to ensure that its operation is sufficiently transparent to enable users to interpret the system's output." Policy explainability directly implements this by providing structured logic traces for every decision.

ECOA — Regulation B, 12 CFR 1002.9

ECOA requires creditors to provide adverse action notices to applicants who are denied credit. The notice must include the specific reasons for the denial. Policy explainability generates these reasons directly from the decision logic — "debt-to-income ratio of 47% exceeds the maximum of 45%" rather than vague categories like "income insufficient."
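As a sketch only, specific reasons might be derived from stored explanation records of the shape shown in the Summary; the rule IDs, reason templates and any mapping to Regulation B sample reasons are illustrative and would be organisation-specific:

```python
# Hypothetical mapping from internal rule IDs to adverse action reason templates.
REASON_TEMPLATES = {
    "R-MAX-DTI":   "Debt-to-income ratio of {value:.0%} exceeds the maximum of {threshold:.0%}",
    "R-MIN-SCORE": "Credit score of {value} is below the minimum of {threshold}",
}

def adverse_action_reasons(record: dict) -> list[str]:
    """Derive specific reasons from the rules that actually matched in the stored explanation."""
    return [
        REASON_TEMPLATES[ev["rule_id"]].format(value=ev["value"], threshold=ev["threshold"])
        for ev in record["rules_evaluated"]
        if ev["matched"] and ev["rule_id"] in REASON_TEMPLATES
    ]

# e.g. adverse_action_reasons(explanation_record) -> ["Credit score of 572 is below the minimum of 580"]
```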

FCA CONC — 7.9

FCA CONC 7.9 requires firms to provide adequate explanations of credit decisions to consumers. For AI-governed credit decisions, this means the explanation must reference the specific factors that influenced the decision, in terms the consumer can understand.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | All decisions requiring explanations — potentially every customer-facing decision

Consequence chain: Without structured policy explainability, the organisation cannot meet its regulatory obligations to explain automated decisions. The immediate technical failure is the absence of a meaningful explanation for a specific decision. The operational impact varies by context: for customer-facing decisions, the customer cannot understand why they were rejected; for regulatory audits, the auditor cannot verify that the decision logic was applied consistently; for internal reviews, the reviewer cannot identify policy errors. The regulatory consequence is significant: GDPR violations for failure to provide explanations carry penalties up to 4% of global annual turnover; ECOA adverse action notice violations carry punitive damages of up to $10,000 in individual actions and, in class actions, up to the lesser of $500,000 or 1% of the creditor's net worth; FCA CONC violations result in enforcement action and potential redress requirements. The reputational consequence is amplified in the current environment where algorithmic fairness and transparency are high-profile public concerns. An unfaithful explanation (Scenario C) creates additional legal risk: the organisation may be held to the explanation it provided, even if the actual logic was different, creating a discrepancy that undermines the organisation's credibility.

Cross-references: AG-134 (Machine-Checkable Policy Semantics) provides the formal policy representation that enables automated explanation generation. AG-272 (Exception Precedence Governance) provides the conflict resolution logic that explanations must include. AG-269 (Policy Version Pinning Governance) ensures that explanations reference the specific policy version. AG-270 (Policy Compilation Verification Governance) ensures that the compiled rules generating explanations match the approved policy. AG-271 (Rule-Test Coverage Governance) can use explanation schemas to verify that test coverage includes explanation verification. AG-275 (Policy Simulation Sandbox Governance) can generate comparative explanations between current and proposed policy. AG-277 (Policy Change Provenance Governance) provides provenance for the rules referenced in explanations. AG-135 (Policy Precedence and Conflict Arbitration) provides the precedence framework that explanations must accurately describe. AG-007 (Governance Configuration Control) governs changes to the explanation schema.

Cite this protocol
AgentGoverning. (2026). AG-276: Policy Explainability Schema Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-276