AG-568

Democratic Accountability Reporting Governance

Public Sector, Justice, Border & Law Enforcement · AGS v2.1 · April 2026
Frameworks: EU AI Act · NIST · ISO 42001

Section 2: Summary

This dimension governs the structured, periodic, and event-triggered reporting of AI agent activity by public-sector deploying bodies to elected representatives, parliamentary oversight committees, independent inspectorates, ombudsmen, and other constitutionally or statutorily accountable bodies. It exists because democratic legitimacy requires that state power exercised through automated systems remains visible, contestable, and ultimately subject to popular accountability — conditions that cannot be satisfied by internal audit alone. Failure in this dimension presents as agentic systems operating at scale inside justice, border enforcement, welfare, or policing contexts with no meaningful parliamentary visibility, producing a structural accountability gap in which individual harms compound undetected, legal challenges are frustrated by opacity, and the electorate cannot evaluate or correct the state's use of autonomous power.

Section 3: Example Scenarios

Scenario A — Automated Bail Risk Scoring at Scale Without Legislative Visibility

A national probation service deploys a risk-scoring agent to assist magistrates in bail and remand decisions across 47 court districts. Over an 18-month operational period the agent processes 312,000 individual assessments. Internal performance reviews exist but no structured report is ever submitted to the relevant parliamentary justice committee. A civil-society analysis published in month 19 demonstrates that defendants from three ethnic minority groups receive scores disproportionately elevating remand risk at a rate 2.4 times higher than comparable white defendants with equivalent charge sheets and prior histories. Because no aggregated usage data, demographic breakdown, or calibration audit has ever been tabled before the committee, parliamentarians have had no basis on which to scrutinise the disparity, raise it with the minister, or commission an independent review. The agency's failure to report transforms a technical calibration defect into a constitutional failure: elected representatives have been unable to exercise their oversight function for 18 months during which an estimated 7,400 individuals may have received incorrectly elevated remand recommendations. Judicial review proceedings are obstructed because disclosure requests are met with claims that no structured reporting record exists; the government is ultimately required to commission a £4.2 million independent audit that could have been avoided by mandatory quarterly demographic reporting to committee.

Scenario B — Border Enforcement Agent Operates Across Four Jurisdictions Without Consolidated Cross-Parliamentary Report

A cross-border customs and immigration enforcement agent, jointly operated by four national border agencies under a multilateral data-sharing agreement, makes autonomous recommendations on secondary inspection, detention hold triggers, and visa-refusal escalation flags. In a 24-month period it processes 8.1 million traveller records. Each participating nation's domestic AI governance framework requires some form of internal review, but no provision exists for a consolidated cross-parliamentary accountability report to any of the four legislatures. In month 22, a human rights NGO documents 1,340 cases in which travellers from a specific geographic region received automated detention-hold flags that were subsequently overturned by human officers at a rate of 83% — indicating systematic false-positive generation against a protected characteristic (national origin). Because each parliament has only partial visibility — its own nation's agent transactions — no single legislature possesses the consolidated picture needed to identify the cross-border pattern. The absence of a cross-jurisdictional accountability report means that for 22 months the pattern escapes the oversight that any individual legislature would have triggered had aggregated data been available. Remediation is delayed by a further 14 months of inter-governmental negotiation over what information can be disclosed to which parliamentary body.

Scenario C — Child Protection Services Agent Deployment Without Statutory Oversight Body Notification

A local authority children's services department deploys an agent to triage safeguarding referrals, autonomously categorising incoming contacts as requiring immediate response, standard assessment, or no further action. In a 9-month period the agent processes 41,000 referrals. The local authority's data protection officer receives quarterly summaries, but no report is submitted to the statutory independent children's services inspectorate nor to the council's elected scrutiny committee. In month 8, a serious case review following the death of a child finds that the referral had been categorised as "no further action" by the agent 11 days before the fatal incident, overriding a social worker's expressed concern that was then not escalated. The inspectorate subsequently determines it had no knowledge of the agent's existence, let alone its operational parameters or accuracy rates. The council leader and lead member for children's services are found by the inquiry to have been similarly uninformed, despite being constitutionally accountable for service decisions. The inquiry recommends mandatory notification to the inspectorate and elected scrutiny body within 30 days of any agentic system deployment in safeguarding contexts, and quarterly statistical reporting thereafter — controls that, if already in place, could have surfaced the system's 14.3% incorrect "no further action" rate on high-risk referrals before it contributed to a preventable death.

Section 4: Requirement Statement

4.0 Scope

This dimension applies to any AI agent deployed by a public-sector body — including central government departments, arm's-length agencies, local authorities, law enforcement organisations, border agencies, immigration tribunals, and court-affiliated services — where the agent's outputs materially influence decisions affecting the rights, liberties, entitlements, or welfare of natural persons. It applies to agents operating in single-jurisdiction contexts and to agents operating under cross-border, multi-authority, or joint-operating-agreement structures. It covers both agents that produce autonomous decisions and agents that produce recommendations presented to human decision-makers. The dimension does not apply to purely internal administrative automation (e.g., scheduling, document formatting) that has no decision effect on individuals' rights. Where a deploying organisation falls under multiple national frameworks simultaneously, the most stringent applicable reporting obligation governs.

4.1 Designation of Accountable Body

4.1.1 The deploying authority MUST formally designate, in writing and prior to agent deployment, at least one constitutionally or statutorily accountable oversight body to receive accountability reports under this dimension. Acceptable bodies include, but are not limited to: parliamentary or legislative committees with relevant subject-matter jurisdiction; independent statutory inspectorates; ombudsmen with relevant remit; and elected scrutiny committees of the deploying authority.

4.1.2 Where an agent operates under a cross-border or multi-jurisdictional operating agreement, each participating jurisdiction MUST designate its own accountable body, and the joint operating agreement MUST specify a mechanism for producing a consolidated cross-jurisdictional report that is made available to all designated bodies.

4.1.3 The designated accountable body MUST be informed of the agent's existence, operational purpose, and decision-making scope prior to or concurrent with go-live. Retroactive notification after operational deployment is non-conformant.

4.1.4 Where a deploying authority cannot identify a pre-existing statutory or elected body with relevant jurisdiction, it MUST establish or request the creation of a dedicated oversight mechanism before deployment proceeds. Deployment MUST NOT proceed without a designated recipient for accountability reporting.

4.2 Mandatory Reporting Schedule

4.2.1 The deploying authority MUST produce and submit a structured accountability report to each designated body at an interval no greater than six calendar months from the date of first operational use, and at a minimum every six months thereafter for the duration of the agent's operational life.

4.2.2 For agents classified as High-Risk/Critical under the deploying authority's AI risk framework, the reporting interval MUST be reduced to no greater than three calendar months.

4.2.3 The reporting obligation MUST continue through any period in which the agent is under active review, subject to legal challenge, or temporarily suspended pending investigation. Suspension of operation does not suspend the reporting obligation.

4.2.4 An event-triggered report MUST be submitted within 21 calendar days of any of the following trigger events: (a) a material change to the agent's decision logic, model weights, or scoring thresholds; (b) identification of a bias, accuracy, or safety failure affecting a protected characteristic; (c) any individual harm directly attributable to the agent's output that results in legal proceedings, regulatory investigation, or a formal complaint upheld by an ombudsman; (d) a data breach involving data processed by the agent; (e) the agent being invoked in a serious case review, public inquiry, or coroner's inquest.
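The deadline arithmetic in §4.2 can be made mechanical. A minimal Python sketch, using only the standard library; the function names and the hand-rolled calendar-month addition are illustrative, not part of the requirement:

```python
from datetime import date, timedelta

def add_calendar_months(d: date, months: int) -> date:
    """Advance a date by whole calendar months, clamping to month end
    (e.g. 31 Jan + 1 month -> 28/29 Feb)."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def next_scheduled_report_due(last_report: date, high_risk: bool) -> date:
    """Section 4.2.1/4.2.2: six-month interval, reduced to three calendar
    months for agents classified High-Risk/Critical."""
    return add_calendar_months(last_report, 3 if high_risk else 6)

def event_report_deadline(trigger_date: date) -> date:
    """Section 4.2.4: event-triggered reports are due within 21 calendar
    days of the trigger event."""
    return trigger_date + timedelta(days=21)
```

A compliance-checking script would compare these computed deadlines against the actual submission log (see §8.2).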

4.3 Mandatory Report Content

4.3.1 Every scheduled accountability report MUST include, at minimum, the following elements:

4.3.2 Where demographic data is unavailable or incomplete, the report MUST include an explanation of the gap, the steps being taken to close it, and an interim analysis using available proxy indicators.

4.3.3 Reports MUST be written in plain language accessible to a non-technical elected representative. Technical annexes MAY be included for specialist reviewers but MUST NOT substitute for the plain-language summary.

4.3.4 All reports MUST carry a classification marking and MUST specify whether any portion is withheld from public disclosure, the legal basis for any withholding, and the portions available for publication.
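A deploying authority might enforce §4.3 with a pre-submission completeness check. The field names below are hypothetical placeholders (the authoritative element list in §4.3.1 governs); the gap-explanation rule is taken directly from §4.3.2:

```python
# Illustrative checklist fields; the actual section 4.3.1 element list governs.
REQUIRED_FIELDS = [
    "plain_language_summary",       # section 4.3.3
    "decision_volumes",
    "demographic_disaggregation",
    "classification_marking",       # section 4.3.4
    "withholding_basis",            # section 4.3.4
]

def validate_report(report: dict) -> list[str]:
    """Return the mandatory elements missing from a draft report."""
    missing = [f for f in REQUIRED_FIELDS if not report.get(f)]
    # Section 4.3.2: a demographic data gap is permissible only when
    # accompanied by an explanation and remediation plan.
    if (not report.get("demographic_disaggregation")
            and not report.get("demographic_gap_explanation")):
        missing.append("demographic_gap_explanation")
    return missing
```

Wiring a check like this into the report template (Pattern 2) prevents mandatory elements from being dropped under resource pressure.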

4.4 Publication and Transparency Obligations

4.4.1 Within 30 days of submission to the designated accountable body, the deploying authority MUST publish the non-withheld portions of each accountability report in a format that is publicly accessible, machine-readable, and persistently available at a stable URL.

4.4.2 A register of all AI agents subject to this dimension, listing the agent name or identifier, the deploying authority, the operational domain, the date of first deployment, and the designated accountable body or bodies, MUST be maintained and kept current. This register MUST be publicly accessible.

4.4.3 Where a report is withheld in full from public disclosure, the deploying authority MUST publish at minimum a notice of the report's existence, the reporting period it covers, and the legal basis for full withholding.
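The §4.4.2 register fields lend themselves to a machine-readable entry. A minimal sketch; the JSON key names and example values are illustrative, not mandated:

```python
import json
from datetime import date

def register_entry(agent_id: str, authority: str, domain: str,
                   first_deployed: date, accountable_bodies: list[str]) -> dict:
    """Build a public register entry carrying the fields listed in
    section 4.4.2: identifier, deploying authority, operational domain,
    date of first deployment, and designated accountable body or bodies."""
    return {
        "agent_id": agent_id,
        "deploying_authority": authority,
        "operational_domain": domain,
        "first_deployed": first_deployed.isoformat(),
        "accountable_bodies": list(accountable_bodies),
    }

# Hypothetical example entry.
entry = register_entry("bail-risk-scorer-01", "National Probation Service",
                       "justice/bail-and-remand", date(2026, 4, 1),
                       ["Parliamentary Justice Committee"])
print(json.dumps(entry, indent=2))
```

Publishing the register as JSON at a stable URL satisfies both the machine-readable and persistently-available conditions of §4.4.1.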

4.5 Senior Responsible Officer Accountability

4.5.1 The deploying authority MUST designate a named Senior Responsible Officer (SRO) who is a member of the organisation's executive leadership and who bears personal accountability for the agent's governance, including the accuracy and timeliness of all accountability reports.

4.5.2 The SRO MUST personally sign or formally attest every scheduled and event-triggered accountability report. Delegation of attestation to a subordinate officer is non-conformant with this requirement.

4.5.3 In cross-border and multi-jurisdictional deployments, each participating authority MUST designate its own SRO, and the joint operating agreement MUST identify a lead SRO responsible for the consolidated cross-jurisdictional report.

4.6 Record Keeping and Audit Trail

4.6.1 The deploying authority MUST retain a complete record of every accountability report submitted under this dimension, including the date of submission, the identity of the receiving body, any acknowledgement received, and all correspondence arising from the report, for a minimum period of ten years from the date of submission or for the full duration of any related legal proceeding, whichever is longer.

4.6.2 The underlying data used to generate each report — including the raw audit logs, decision records, and demographic data — MUST be retained in an immutable or tamper-evident format for the same minimum period.

4.6.3 Records MUST be maintained such that they can be produced to a court, tribunal, or inquiry within five working days of a formal request.
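One way to satisfy the tamper-evident retention requirement in §4.6.2 is a hash chain over the decision records. The sketch below, with an assumed record layout, shows the principle; production systems would more likely use a WORM store or a signed append-only log:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash before the first record

def append_record(chain: list[dict], payload: dict) -> dict:
    """Append a record whose hash covers both the payload and the
    previous record's hash, so any retroactive edit is detectable."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    record = {"prev": prev_hash, "payload": payload,
              "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(record)
    return record

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; a modified payload or re-ordered record
    breaks the chain from that point onward."""
    prev_hash = GENESIS
    for record in chain:
        body = json.dumps({"prev": prev_hash, "payload": record["payload"]},
                          sort_keys=True)
        if (record["prev"] != prev_hash
                or record["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = record["hash"]
    return True
```

An auditor can then verify in seconds that the logs underpinning a report were not altered after the fact, which also supports the five-working-day production requirement of §4.6.3.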

4.7 Accountable Body Engagement Rights

4.7.1 The designated accountable body MUST be granted the right to request supplementary information arising from any submitted report, and the deploying authority MUST respond to such requests within 28 calendar days or such shorter period as is required by the body's standing orders.

4.7.2 The designated accountable body SHOULD be granted the right to commission an independent technical audit of the agent, at the deploying authority's expense, at intervals no greater than every three years, or following any event-triggered report.

4.7.3 Where a parliamentary committee or elected scrutiny body formally resolves to examine the agent, the deploying authority MUST make available a suitably qualified and authorised representative to give evidence within the timetable set by the committee.

4.7.4 Designated accountable bodies MAY request access to system documentation, model cards, training data summaries, and bias evaluation records as part of their oversight function, and the deploying authority MUST NOT unreasonably withhold such access unless a national security or third-party intellectual property exemption applies.

4.8 Prohibited Practices

4.8.1 A deploying authority MUST NOT knowingly submit an accountability report containing materially inaccurate data to a designated accountable body.

4.8.2 A deploying authority MUST NOT withhold or delay an event-triggered report for the purpose of managing political or reputational risk. Withholding for legally prescribed exemptions (e.g., live operational security reasons) is permissible only where the exemption is formally invoked in writing and reviewed within 90 days.

4.8.3 A deploying authority MUST NOT suppress, redact, or aggregate demographic disaggregation data in a manner that obscures protected-characteristic disparities that would otherwise be visible in the disaggregated figures.

4.8.4 A deploying authority MUST NOT decommission or materially alter an agent under active accountability review by a designated body without providing prior written notification to that body and receiving a formal acknowledgement.

4.9 Cross-Border and Multi-Jurisdictional Supplementary Requirements

4.9.1 Where an agent operates under a formal inter-governmental or inter-agency agreement spanning two or more national jurisdictions, a joint accountability protocol MUST be established as a schedule to the operating agreement, specifying: the consolidated reporting format; the allocation of reporting responsibilities among participating authorities; the designated accountable body or bodies in each jurisdiction; and the mechanism for reconciling differing national disclosure requirements.

4.9.2 Consolidated cross-jurisdictional reports MUST include jurisdiction-level disaggregation so that each participating nation's designated accountable body can evaluate the agent's operation within its own territory.

4.9.3 Where the legal framework of a participating jurisdiction prevents full disclosure to another jurisdiction's accountable body, the joint accountability protocol MUST provide for parallel national-level reporting at equivalent detail.

Section 5: Rationale

Why Democratic Accountability Reporting Is a Structural Requirement, Not a Behavioural Aspiration

The underlying premise of democratic government is that the exercise of state power requires justification to those on whose behalf it is exercised. This premise does not dissolve when state power is exercised by an algorithmic agent rather than a human official. If anything, agentic systems intensify the accountability imperative: they operate at speeds and scales that outpace individual case review; their decision logic may be opaque to the very officials nominally responsible for their outputs; and their systemic errors compound at a rate that no manual correction mechanism can absorb in real time. Internal audit mechanisms — however well designed — are structurally insufficient for democratic accountability purposes because they report upward within the organisation that deployed the agent, not outward to the principal that the organisation serves. Parliamentary committees, inspectorates, and elected scrutiny bodies are not merely desirable recipients of information; they are the institutional expression of the accountability relationship between state and citizen.

The Compound Risk of Invisible Scale

A human decision-maker making 50 bail decisions per week produces errors that are individually visible, individually appellable, and collectively manageable through normal judicial processes. An agent making 3,000 bail decisions per day produces errors that are individually visible only to the individual affected — who may lack the resources, knowledge, or standing to surface a pattern — but whose aggregate effect is invisible to anyone without access to the full decision record. Democratic accountability reporting is the mechanism that converts an agent's aggregate operation into a form legible to oversight bodies. Without it, the accountability gap is not merely administrative; it is structural. Elected representatives cannot discharge their scrutiny function on information they do not receive.

Why Behavioural Controls Alone Are Insufficient

Requirements to log decisions, conduct bias evaluations, and maintain audit trails are necessary but not sufficient. They produce records held within the deploying organisation. Democratic accountability reporting requires active transmission of structured information to bodies external to the deploying authority, at defined intervals, in a form designed for oversight rather than operational management. This distinction — between records that could be disclosed if demanded and reports that must be produced and delivered — is the difference between passive transparency and active accountability. The latter is required precisely because the former depends on someone outside the organisation knowing enough to ask the right questions, which is itself undermined by the absence of proactive disclosure.

Legitimacy and the Rule of Law

In jurisdictions governed by the rule of law, the use of state power against individuals — whether through detention, benefit denial, visa refusal, or risk classification — requires legal authority, procedural fairness, and the possibility of effective challenge. Agentic systems that make or materially influence such decisions without parliamentary visibility operate in a legitimacy deficit. Courts have increasingly recognised that opacity in public-sector algorithmic decision-making is not merely a technical problem but a due process and constitutional one. Proactive accountability reporting to democratically legitimate bodies is the upstream structural control that prevents this deficit from becoming a rights violation at scale.

Section 6: Implementation Guidance

Pattern 1 — Pre-Deployment Accountability Agreement Before any agent goes live in a rights-affecting public-sector context, the deploying authority should execute a formal accountability agreement with the designated oversight body. This agreement should specify the reporting schedule, the agreed content template, the classification handling arrangements, and the process for requesting supplementary information. Establishing this relationship before deployment ensures that reporting infrastructure is tested before the agent operates at scale and that the oversight body has baseline context against which to evaluate future reports.

Pattern 2 — Standardised Accountability Report Template Deploying authorities should develop and maintain a standardised report template that pre-populates structural elements and draws automatically from audit log data sources. This reduces the risk of report-to-report inconsistency, makes year-on-year trend analysis accessible to oversight bodies, and ensures that demographic disaggregation fields are not omitted due to resource pressure. Templates should be approved by the SRO and reviewed annually.

Pattern 3 — Automated Data Pipeline from Audit Log to Report Where operational scale makes manual compilation unreliable, deploying authorities should establish automated pipelines that extract, aggregate, and format audit log data for accountability reporting. These pipelines should themselves be auditable, version-controlled, and subject to independent testing to ensure they accurately represent the underlying decision record. Human review of automated outputs before submission to the accountable body remains mandatory.
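A minimal sketch of the extraction-and-aggregation step such a pipeline performs, assuming each audit record carries `outcome` and `demographic_group` fields (both names are illustrative):

```python
from collections import Counter

def aggregate_decisions(audit_log: list[dict]) -> dict:
    """Aggregate raw decision records into the summary figures an
    accountability report draws on: total volume, outcome totals,
    and per-demographic-group outcome counts."""
    outcomes = Counter(r["outcome"] for r in audit_log)
    by_group: dict[str, Counter] = {}
    for r in audit_log:
        by_group.setdefault(r["demographic_group"], Counter())[r["outcome"]] += 1
    return {
        "total_decisions": len(audit_log),
        "outcome_totals": dict(outcomes),
        "by_demographic_group": {g: dict(c) for g, c in by_group.items()},
    }
```

Because the pipeline itself must be auditable, a real implementation would version this aggregation code and test it against known fixtures before each reporting cycle, with human review of the output before submission.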

Pattern 4 — Tiered Disclosure with Parallel Confidential and Public Reports Where national security, operational security, or third-party data considerations prevent full public disclosure, deploying authorities should produce two versions of each report: a full version for the designated accountable body under appropriate classification, and a redacted public version that discloses all non-sensitive elements. This approach preserves democratic visibility while managing legitimate confidentiality interests. Legal basis for each redaction should be itemised rather than asserted generally.

Pattern 5 — Joint Accountability Secretariat for Cross-Border Deployments Multi-jurisdictional agent deployments should establish a joint accountability secretariat — a small coordinating function within the operating agreement's governance structure — responsible for producing the consolidated cross-jurisdictional report, maintaining the inter-authority data sharing protocol that underpins it, and managing the calendar of national-level supplementary reports. This function prevents the accountability obligation from falling between participating authorities through diffusion of responsibility.

Pattern 6 — Civil Society and Expert Panel Engagement While formal accountability sits with designated statutory and elected bodies, deploying authorities operating at high societal impact should consider establishing an expert advisory panel — including academic researchers, civil-society representatives, and technical specialists — to review accountability reports and provide independent analysis to both the deploying authority and the designated oversight body. This supplements democratic accountability without substituting for it.

Maturity Model

| Level | Description |
|---|---|
| Level 0 — Non-Compliant | No accountability reporting to any external body. Internal audit only. Oversight body unaware of agent's existence. |
| Level 1 — Basic Notification | Designated oversight body notified of agent's existence. No structured reporting; ad hoc disclosure on request only. |
| Level 2 — Periodic Structured Reporting | Scheduled reports submitted to designated body. Standard content elements present. Manual compilation. Demographic disaggregation absent or partial only. |
| Level 3 — Full Scheduled and Event-Triggered Reporting | All required content elements present. Demographic disaggregation complete. Event-triggered reports submitted within required timescales. SRO attestation in place. |
| Level 4 — Proactive Publication and Engagement | Full scheduled and event-triggered reporting plus proactive public disclosure of non-sensitive elements. Accountable body exercises supplementary information rights routinely. Independent audit commissioned. |
| Level 5 — Cross-Jurisdictional Consolidated Accountability | All Level 4 requirements met. Joint accountability protocol in place for multi-jurisdictional operations. Consolidated reports available to all participating legislatures. Expert advisory panel in place. |

Explicit Anti-Patterns

Anti-Pattern 1 — Substituting Data Protection Officer Reporting for Democratic Accountability Reporting DPO reports and Data Protection Impact Assessments serve a different function — they assess compliance with data protection law and report to a regulatory authority, not to a democratically accountable body. Treating DPIA submission as equivalent to or a substitute for accountability reporting under this dimension is non-conformant. The two obligations are parallel and must both be met.

Anti-Pattern 2 — Aggregate-Only Reporting That Obscures Demographic Disparity Reporting total decision volumes without demographic disaggregation — or using demographic aggregation that combines protected groups in ways that mask disparity (e.g., reporting all "non-white" together) — is structurally inadequate. This pattern appears as a compliance artefact while defeating the substantive purpose of accountability reporting. Reports must disaggregate to the level at which actionable disparities could be detected.
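The disparity that proper disaggregation is meant to surface is, at its simplest, a rate ratio against a reference group. A sketch, in which the group labels and the `adverse` outcome key are hypothetical assumptions:

```python
def adverse_rate(group_counts: dict) -> float:
    """Fraction of a group's decisions with the adverse outcome."""
    total = sum(group_counts.values())
    return group_counts.get("adverse", 0) / total if total else 0.0

def disparity_ratios(by_group: dict, reference: str) -> dict:
    """Ratio of each group's adverse-outcome rate to the reference
    group's rate. A ratio well above 1.0 flags a disparity that
    aggregate-only reporting would hide."""
    ref_rate = adverse_rate(by_group[reference])
    return {
        g: (adverse_rate(counts) / ref_rate if ref_rate else float("inf"))
        for g, counts in by_group.items() if g != reference
    }
```

Run at the level of individual protected groups, not merged categories, a check like this makes the kind of 2.4x disparity described in Scenario A detectable from a single report.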

Anti-Pattern 3 — Designating an Accountable Body With No Real Jurisdiction or Power Nominating a token consultative forum — one with no statutory functions, no power to require information, and no reporting relationship to a legislature — as the "designated accountable body" satisfies the letter of the designation requirement while defeating its purpose. Designated bodies must have meaningful institutional power over the subject matter.

Anti-Pattern 4 — Treating Accountability Reports as Internal Documents With Selective Disclosure Some deploying authorities structure accountability reports as internal governance documents that are disclosed to the oversight body only on request rather than proactively submitted. This reverses the accountability relationship: the burden of disclosure should lie with the deploying authority, not the oversight body.

Anti-Pattern 5 — Decommissioning Without Notification Silently decommissioning an agent while an accountability review is in progress — or without notifying the designated oversight body — creates a gap in the accountability record and may frustrate ongoing inquiries. All decommissioning decisions must be communicated to the designated body with appropriate notice.

Anti-Pattern 6 — SRO Delegation Chain That Obscures Accountability Establishing a reporting chain in which the SRO nominally signs reports but in practice plays no role in their preparation or review creates a formalistic attestation without substantive accountability. The SRO must be genuinely engaged with the report content, capable of answering questions about it before the accountable body, and personally responsible for its accuracy.

Industry Considerations

Law enforcement and border enforcement agencies frequently cite operational sensitivity as grounds for limiting disclosure. This concern is legitimate but must be managed through tiered disclosure mechanisms rather than wholesale opacity.

Justice and court-affiliated services must navigate judicial independence considerations carefully: accountability reporting for tools used in judicial decision support must be structured to provide transparency about the tool without creating a channel for political interference in individual cases.

Child protection services operate under statutory serious case review frameworks that interact with this dimension; reporting obligations under this dimension should be designed to complement rather than duplicate serious case review processes.

Section 7: Evidence Requirements

| Artefact | Description | Retention Period |
|---|---|---|
| Accountable Body Designation Record | Formal written designation of the accountable body or bodies prior to deployment, signed by the SRO | Duration of agent operational life plus 10 years |
| Pre-Deployment Notification Confirmation | Written acknowledgement from the designated accountable body of receipt of pre-deployment notification | Duration of agent operational life plus 10 years |
| Accountability Agreement | Executed accountability agreement specifying reporting schedule, content template, and engagement rights | Duration of agent operational life plus 10 years |
| All Scheduled Accountability Reports | Complete record of every submitted report including submission date, recipient, and acknowledgement | Minimum 10 years from submission date or duration of related legal proceedings, whichever is longer |
| All Event-Triggered Reports | As above, for each event-triggered submission | Minimum 10 years from submission date or duration of related legal proceedings, whichever is longer |
| SRO Attestation Records | Signed attestation for each report by the named SRO | As above |
| Underlying Audit Log Data | Immutable decision logs, demographic data, performance metric records underpinning each report | Minimum 10 years in tamper-evident format |
| Correspondence with Accountable Body | All supplementary information requests, responses, and committee engagement records | Minimum 10 years |
| Public Register Entry | Current entry in the public AI agent register | Continuously maintained; historical versions retained for 10 years |
| Published Public Versions of Reports | Publicly accessible, machine-readable versions of non-withheld report elements at stable URLs | Continuously accessible; archived copies retained for 10 years |
| Independent Audit Reports | Any independent technical audit commissioned by the accountable body | 10 years from completion |
| Joint Accountability Protocol (cross-border deployments) | Executed joint accountability protocol as schedule to the operating agreement | Duration of operating agreement plus 10 years |
| Consolidated Cross-Jurisdictional Reports | All consolidated reports produced under the joint accountability protocol | Minimum 10 years |
| Redaction and Withholding Justification Records | Written legal basis for any withheld or redacted elements of a report, with 90-day review records | 10 years |

Section 8: Test Specification

8.1 Accountable Body Designation Test

Maps to: §4.1.1, §4.1.3, §4.1.4, §4.5.1

Objective: Confirm that a valid, appropriately jurisdictioned accountable body was designated in writing before deployment and was notified prior to or concurrent with go-live.

Method: Request the written designation record and the pre-deployment notification confirmation. Verify the date of the designation and notification against the documented go-live date. Verify that the designated body is a constitutionally or statutorily accountable body with relevant subject-matter jurisdiction. Where no pre-existing body existed, verify that a formal process to establish an oversight mechanism was completed before deployment.

Pass/Fail Criteria:

8.2 Reporting Schedule Compliance Test

Maps to: §4.2.1, §4.2.2, §4.2.3, §4.2.4

Objective: Verify that all required scheduled and event-triggered reports have been submitted within the mandated timescales throughout the agent's operational life.

Method: Obtain the complete submission log of all accountability reports. For each reporting period, verify: (a) the report was submitted within the required interval; (b) for High-Risk/Critical agents, the interval does not exceed three months; (c) all event trigger dates are identified and matched against corresponding event-triggered report submission dates; (d) reports were submitted during any suspension or review period. Cross-reference event trigger identification against audit logs, legal proceedings register, and incident records.

Pass/Fail Criteria:

8.3 Report Content Completeness Test

Maps to: §4.3.1, §4.3.2, §4.3.3, §4.3.4

Objective: Verify that submitted reports contain all mandatory content elements at the required level of detail, including demographic disaggregation.

Method: Select a stratified sample of submitted reports — at minimum the most recent four scheduled reports and all event-triggered reports in the preceding 24 months. For each selected report, apply a structured content checklist covering all elements in §4.3.1(a) through (h). Evaluate demographic disaggregation for completeness and specificity. Assess plain-language accessibility of the summary section. Verify classification marking and disclosure status declaration. Where demographic data gaps are noted, verify the presence of an explanation and remediation plan per §4.3.2.
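The structured content checklist can be sketched as a completeness scan over each sampled report. The element keys below are illustrative placeholders for §4.3.1(a) through (h), not the protocol's actual wording, and the field names are assumed:

```python
# Placeholder keys for the mandatory elements in §4.3.1(a)-(h); the real
# checklist would name each element as the protocol defines it.
MANDATORY_ELEMENTS = [f"element_{c}" for c in "abcdefgh"]

def check_report_completeness(report: dict) -> list[str]:
    """Return the mandatory content elements missing from a submitted report."""
    missing = [e for e in MANDATORY_ELEMENTS if not report.get(e)]
    # §4.3.2: a demographic data gap is acceptable only with a remediation plan.
    if not report.get("demographic_disaggregation") and not report.get("remediation_plan"):
        missing.append("demographic data gap without §4.3.2 remediation plan")
    return missing
```

Plain-language accessibility and classification marking would still need human review; only the presence checks lend themselves to automation.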

Pass/Fail Criteria:

8.4 Senior Responsible Officer Accountability Test

Maps to: §4.5.1, §4.5.2, §4.5.3

Objective: Verify that a named SRO at executive level bears documented personal accountability for the agent's governance and has personally attested all submitted reports.

Method: Obtain the SRO designation record and verify the SRO's position within the organisation's executive leadership structure. Review all submitted reports for SRO attestation. In cross-border deployments, verify that each participating authority has its own SRO and that a lead SRO for the consolidated report is identified in the joint accountability protocol. Interview or obtain a written statement from the SRO confirming their engagement with report preparation and content. Review whether any attestation has been delegated to a subordinate officer.
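The attestation review can be sketched as a roster and delegation check over the submission log. A minimal sketch with assumed field names (`attested_by`, `id`); the interview and written-statement steps remain manual:

```python
def check_sro_attestations(sro_name: str,
                           executive_roster: set[str],
                           reports: list[dict]) -> list[str]:
    """Verify a designated executive-level SRO personally attested every report."""
    findings = []
    # SRO must sit within the executive leadership structure.
    if sro_name not in executive_roster:
        findings.append("SRO is not part of the executive leadership structure")
    for report in reports:
        attester = report.get("attested_by")
        if attester is None:
            findings.append(f"Report {report.get('id')} lacks SRO attestation")
        elif attester != sro_name:
            # Delegated attestation is a finding, per the method's final step.
            findings.append(f"Report {report.get('id')} attested by delegate {attester}")
    return findings
```

For cross-border deployments the same check would run once per participating authority's SRO, plus once for the lead SRO on the consolidated report.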

Pass/Fail Criteria:

8.5 Public Disclosure and Register Test

Maps to: §4.4.1, §4.4.2

Section 9: Regulatory Mapping

| Regulation | Provision | Relationship Type |
| --- | --- | --- |
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |

EU AI Act — Article 9 (Risk Management System)

Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Democratic Accountability Reporting Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-568 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.

NIST AI RMF — GOVERN 1.1, MAP 3.2, MANAGE 2.2

GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-568 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.

ISO 42001 — Clause 6.1, Clause 8.2

Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Democratic Accountability Reporting Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.

Section 10: Failure Severity

| Field | Value |
| --- | --- |
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |

Consequence chain: Without democratic accountability reporting governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation — it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-568, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.

Cite this protocol
AgentGoverning. (2026). AG-568: Democratic Accountability Reporting Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-568