This dimension governs the obligations of AI agents operating within public-sector contexts to correctly classify, retain, disclose, and manage records in accordance with Freedom of Information (FOI) legislation, administrative law, and public-records frameworks across applicable jurisdictions. It matters because AI agents increasingly generate, process, and rely upon records that are subject to statutory disclosure rights, and failures to preserve these records or respond lawfully to access requests can violate fundamental rights, obstruct accountability mechanisms, and expose government bodies to judicial review, administrative penalties, and democratic legitimacy crises. Failure in this dimension takes three principal forms: unlawful destruction or suppression of disclosable records, incorrect application of exemptions that deny legitimate access to public information, and inadequate audit trails that prevent an agency from demonstrating compliance when challenged.
A border enforcement agency deploys an AI-assisted document processing agent to manage incoming asylum case files. The agent is configured with an automated disposal rule that purges working drafts and intermediate summaries after 30 days on the assumption that they are transient artefacts. A journalist submits a request under the Freedom of Information Act 2000 (FOIA 2000) seeking all records relating to a specific policy decision on deportation scheduling. The agent has already destroyed 847 intermediate decision summaries that recorded which criteria were weighted by the AI recommendation engine during case triage, so the agency cannot demonstrate how automated recommendations were generated or whether protected characteristics influenced scoring. The Information Commissioner's Office (ICO) finds a breach of the Code of Practice on records management issued under section 46 of FOIA 2000, serves an enforcement notice, and the agency faces judicial review proceedings. The destruction prevents the claimants from obtaining potentially exculpatory evidence in linked asylum appeals, materially prejudicing the fairness of those proceedings.
An Australian federal law enforcement agency uses an AI case management agent that auto-classifies outgoing FOI responses. The agent is trained on historical exemption decisions and learns to apply the law enforcement exemption under section 37 of the Freedom of Information Act 1982 (Cth) to all records containing the string "investigation." A complainant requests records about a use-of-force incident involving themselves. The agent applies the exemption to 142 pages of internal review documentation, 38 of which contain no operationally sensitive material and concern only administrative oversight findings. The complainant appeals to the Office of the Australian Information Commissioner (OAIC). On review, the OAIC finds the blanket application of the exemption unlawful: the prejudice to law enforcement that section 37 requires was never assessed document by document. The agency is ordered to release the 38 pages and bears the complainant's legal costs. The incident triggers a Senate Estimates hearing and a formal audit of all AI-assisted FOI decisions made over the preceding 24 months, which identifies 1,206 improperly withheld documents across 394 requests.
A cross-border law enforcement cooperation platform, jointly operated by two EU member state agencies under a Europol-facilitated framework, deploys a shared AI agent for joint intelligence analysis. The agencies apply differing national records retention schedules: Agency A (Netherlands) applies a 10-year retention period under the Dutch Archives Act (Archiefwet 1995); Agency B (Hungary) applies a 5-year period under national administrative law. The shared agent automatically purges records according to the shorter schedule upon request from Agency B. Three years later, a criminal defendant in the Netherlands requests access to all records used in their prosecution. The agent's purge has eliminated 6 years of joint analysis records that would have been available under Dutch law and that the Dutch court considers essential to assessing the reliability of the prosecution evidence. The Dutch Hoge Raad finds that the prosecution cannot demonstrate the integrity of its intelligence chain, resulting in a stay of proceedings. A complaint lodged with the Dutch supervisory authority is referred to the European Data Protection Board under the Article 63 GDPR consistency mechanism, and a joint supervisory body investigation is opened. The incident exposes the absence of a harmonised records retention protocol for AI-mediated joint operations and results in a two-year moratorium on automated purging by the shared platform.
This dimension applies to any AI agent that: (a) generates, processes, stores, transmits, or acts upon records held by a public authority as defined under applicable FOI legislation; (b) participates in the automated or semi-automated processing of FOI requests, exemption determinations, or disclosure reviews; (c) performs records management functions including classification, retention scheduling, redaction, or disposal within a public-sector operational context; or (d) produces outputs — including recommendations, scores, summaries, or decisions — that constitute or inform an administrative act subject to public accountability obligations. It applies regardless of whether the agent is deployed as a primary system, a decision-support tool, or an infrastructure layer. Scope is determined by functional role, not by the agent's self-description or contractual classification.
4.1.1 The agent MUST create and capture a complete record of every output it produces that constitutes or materially informs an administrative decision, including intermediate reasoning steps, input data sources, weighting parameters, confidence scores, and final outputs, sufficient to enable retrospective reconstruction of the decision basis.
4.1.2 The agent MUST assign each captured record a unique, persistent, and immutable identifier at the point of creation, which MUST be preserved through all subsequent processing, transfer, redaction, and disposal operations.
4.1.3 The agent SHOULD capture metadata including the version of the model or ruleset active at the time of record creation, the identity of any human operator who confirmed or modified the output, and the timestamp of each modification event.
4.1.4 The agent MAY implement tiered capture mechanisms that apply more granular logging to records classified as high-sensitivity, provided the minimum capture requirements of 4.1.1 and 4.1.2 are met for all records without exception.
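As an illustration of the capture obligations in 4.1.1 through 4.1.3, a record can be modelled as an immutable object whose identifier is minted exactly once at creation and can never be reassigned. This is a minimal sketch; the names `DecisionRecord` and `capture`, and the specific fields, are hypothetical and not prescribed by this dimension:

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One captured decision output (illustrative schema for 4.1.1-4.1.3).

    frozen=True makes every field read-only after creation, so the
    identifier assigned at creation (4.1.2) cannot be altered later.
    """
    final_output: str
    input_sources: tuple   # data sources consulted (4.1.1)
    model_version: str     # model/ruleset version active at creation (4.1.3)
    confidence: float
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def capture(output, sources, model_version, confidence):
    # The identifier and timestamp are generated here, at the point of
    # creation, never retrofitted from downstream logs.
    return DecisionRecord(output, tuple(sources), model_version, confidence)
```

A real deployment would persist each `DecisionRecord` to an append-only store before the output is acted upon, per the capture architecture described later in this section.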
4.2.1 The agent MUST classify every record it creates or processes against the applicable records classification scheme for the jurisdiction(s) in which it operates, including at minimum a sensitivity designation, a subject-matter category, and a retention trigger event.
4.2.2 The agent MUST NOT apply a default classification that results in automatic exemption from FOI disclosure without human review confirming that the exemption conditions are satisfied on the facts of the specific record.
4.2.3 Where the agent operates across multiple jurisdictions with differing classification frameworks, the agent MUST apply the most protective retention obligation and the least restrictive disclosure obligation, unless a competent authority has formally authorised a derogation in writing.
4.2.4 The agent SHOULD flag classification decisions made under conditions of ambiguity — including records that could fall under two or more classification categories — for mandatory human review before the classification is applied.
4.3.1 The agent MUST implement retention schedules that are derived from and verifiably traceable to the authoritative records authority schedule for the operating jurisdiction, and MUST NOT purge any record before the scheduled retention period has elapsed.
4.3.2 The agent MUST maintain a disposal certificate for every record it destroys, recording the record identifier, classification, retention basis, scheduled disposal date, actual disposal date, and the identity or system reference of the disposal authorisation.
4.3.3 The agent MUST suspend all scheduled disposal operations for any record that is the subject of a live FOI request, internal review, appeal, litigation hold, or regulatory investigation, and MUST resume disposal only after receiving an explicit release instruction from an authorised human officer.
4.3.4 The agent MUST NOT apply disposal rules based on string-matching, keyword filtering, or pattern recognition alone; disposal eligibility MUST be confirmed against the full classification metadata of each record, not against surface features of its content.
4.3.5 The agent SHOULD provide a 30-day pre-disposal notification to the responsible records manager for any record categorised as permanent, long-term, or subject to contested classification before disposal operations execute.
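The disposal gate that 4.3.1 through 4.3.4 describe can be sketched as a single eligibility check that consults only classification metadata and the hold register, never the record's content. Field names below are illustrative assumptions, not a prescribed schema:

```python
from datetime import date

def disposal_eligible(meta, active_holds, today):
    """Disposal gate per 4.3.1-4.3.4 (illustrative field names).

    Eligibility is decided from classification metadata and holds,
    never from keyword or pattern matches on content (4.3.4).
    Returns (eligible, reason).
    """
    if meta["record_id"] in active_holds:
        # 4.3.3: suspend until an authorised human issues a release.
        return False, "hold active: await explicit human release"
    if meta.get("classification") is None:
        # 4.2.2 analogue: unclassified records never auto-dispose.
        return False, "unclassified: route to human review"
    if today < meta["scheduled_disposal_date"]:
        # 4.3.1: the retention period has not elapsed.
        return False, "retention period not elapsed"
    return True, "eligible: issue disposal certificate (4.3.2)"
```

Note that the function can only refuse or defer disposal; actually executing it still requires the disposal certificate and authorisation trail of 4.3.2.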
4.4.1 The agent MUST, upon receiving or initiating processing of an FOI request, conduct a comprehensive scope search across all record repositories within its operational boundary, including archived, inactive, and backup stores, and MUST NOT limit the search to only live production systems.
4.4.2 The agent MUST log every search query executed in response to an FOI request, including the query parameters, the repositories searched, the number of records retrieved, and the number of records subsequently withheld or redacted, with each entry linked to the FOI request identifier.
4.4.3 The agent MUST NOT apply an exemption to any record without generating an exemption justification record that identifies: the specific exemption provision claimed, the factual basis for the claim, whether a public interest test was conducted, and the outcome of that test.
4.4.4 Where an exemption is applied, the agent MUST produce a redacted version of the record that preserves all non-exempt content in a form that is accessible and intelligible to the requestor, unless the exempt and non-exempt content is so intermingled as to be inseparable, in which case that finding MUST itself be recorded.
4.4.5 The agent SHOULD flag for mandatory senior human review, before the response is issued, any FOI request response in which more than 50% of retrieved records are proposed for withholding.
4.4.6 The agent MAY use automated tools to assist in redaction and exemption identification, provided that every automated exemption determination is subject to human review before the response is finalised.
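The exemption justification record that 4.4.3 requires can be sketched as a constructor that refuses to emit an incomplete justification, with human confirmation (4.4.6) recorded as a separate, initially unset field. The structure and field names are illustrative assumptions:

```python
def exemption_justification(record_id, provision, factual_basis,
                            public_interest_conducted,
                            public_interest_outcome):
    """Build the per-record exemption justification required by 4.4.3.

    Raises instead of producing a justification that is missing its
    statutory provision, factual basis, or public interest outcome.
    """
    if not (provision and factual_basis):
        raise ValueError(
            "exemption requires a specific provision and a factual basis")
    if public_interest_conducted and public_interest_outcome is None:
        raise ValueError(
            "public interest test conducted but outcome not recorded")
    return {
        "record_id": record_id,
        "exemption_provision": provision,
        "factual_basis": factual_basis,
        "public_interest_test_conducted": public_interest_conducted,
        "public_interest_outcome": public_interest_outcome,
        "human_confirmed": False,  # set only by the human reviewer (4.4.6)
    }
```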
4.5.1 The agent MUST maintain a current, version-controlled exemption register that maps each applicable FOI exemption provision to its statutory source, scope conditions, public interest considerations, and mandatory review frequency.
4.5.2 The agent MUST apply exemptions at the record level, not at the category or collection level, and MUST generate a separate, itemised justification for each record or discrete portion of a record to which an exemption is applied.
4.5.3 The agent MUST NOT apply an exemption that has been identified as incorrectly applied in a prior appeal, tribunal, or regulatory decision to records of the same type without first obtaining a formal legal opinion confirming that the application is legally sustainable.
4.5.4 The agent SHOULD audit exemption application patterns at intervals not exceeding 90 days to identify systematic over-application or under-application of specific exemption categories, and MUST report the findings of each audit to the designated records manager and data protection officer.
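The pattern audit in 4.5.4 amounts to aggregating exemption applications per provision and flagging provisions whose review history suggests systematic over-application. A minimal sketch, assuming each audited decision carries a review outcome and using an illustrative flag threshold:

```python
from collections import Counter

def exemption_audit(decisions, flag_threshold=0.5):
    """90-day exemption pattern audit sketch per 4.5.4.

    `decisions` is a list of (provision, upheld_on_review) pairs;
    the 0.5 threshold is illustrative, not mandated by this dimension.
    """
    applied = Counter(p for p, _ in decisions)
    overturned = Counter(p for p, upheld in decisions if not upheld)
    return {
        p: {"applied": applied[p],
            "overturned": overturned[p],
            "overturn_rate": overturned[p] / applied[p],
            "flag": overturned[p] / applied[p] > flag_threshold}
        for p in applied
    }
```

The resulting report is what 4.5.4 requires to be delivered to the records manager and data protection officer after each audit cycle.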
4.6.1 Where the governing jurisdiction requires or authorises a publication scheme or proactive disclosure framework, the agent MUST identify records that meet proactive publication criteria and MUST flag them for inclusion in the publication scheme within 30 days of their creation or receipt.
4.6.2 The agent MUST maintain a log of all records identified for proactive publication, including the date of identification, the date of actual publication, any reason for delay, and any exemptions claimed against proactive publication.
4.6.3 The agent SHOULD cross-reference proactive publication logs against FOI request logs to identify patterns where frequently requested information types are not currently included in the publication scheme, and MUST report such patterns to the responsible publication officer.
4.7.1 Where the agent operates in a cross-border or multi-agency environment, the agent MUST apply the jurisdiction-specific records obligations for each record according to its origin jurisdiction, unless a formal data-sharing agreement specifies an alternative governing framework, in which case the terms of that agreement MUST be recorded in the record's metadata.
4.7.2 The agent MUST NOT accept a request from a partner agency, system, or jurisdiction to purge, reclassify, or withhold records in a manner that would be unlawful under the originating jurisdiction's FOI or records legislation, and MUST log and escalate any such request to the responsible legal officer.
4.7.3 The agent SHOULD maintain a jurisdiction mapping table that records, for each data-sharing relationship, the governing records framework, the applicable retention periods, the disclosure obligations, and the designated competent authority for dispute resolution.
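The conflict rule in 4.2.3 and the metadata recording duty in 4.7.1 combine into a small, auditable resolution step: the most protective (longest) retention period wins unless a written derogation names another framework, and the basis of the outcome is returned so it can be written into the record's metadata. A sketch with illustrative jurisdiction codes:

```python
def resolve_retention(periods_by_jurisdiction, derogation=None):
    """Resolve conflicting retention periods per 4.2.3 / 4.7.1.

    `periods_by_jurisdiction` maps jurisdiction code -> retention years.
    `derogation`, if given, names the jurisdiction formally authorised
    in writing to govern instead. Returns (years, basis) so the
    resolution outcome is recordable in the record's metadata.
    """
    if derogation:
        return periods_by_jurisdiction[derogation], f"derogation:{derogation}"
    j = max(periods_by_jurisdiction, key=periods_by_jurisdiction.get)
    return periods_by_jurisdiction[j], f"most-protective:{j}"
```

Applied to the joint-platform scenario above, a conflict between a 10-year Dutch schedule and a 5-year Hungarian schedule resolves to 10 years absent a formal derogation.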
4.8.1 The agent MUST correctly distinguish between FOI requests (which confer access rights on any person) and subject access requests under data protection legislation (which confer rights only on the data subject) and MUST apply the legally correct framework to each request type.
4.8.2 The agent MUST NOT use FOI exemptions to deny a data subject access to their own personal data where that access would be lawful under the applicable data protection framework, and MUST NOT use data protection provisions to limit third-party FOI access beyond what the law permits.
4.8.3 The agent SHOULD provide requestors with a written explanation of which access regime has been applied to their request and why, within the applicable statutory response period.
4.9.1 The agent MUST maintain a complete, tamper-evident audit log of all records management operations it performs, including creation, classification, reclassification, access, redaction, transfer, and disposal events, with each entry timestamped and attributed to an agent instance identifier or operator identity.
4.9.2 The agent MUST make its audit logs available to designated supervisory officers within 24 hours of a written request, and MUST be capable of producing a structured export of audit log data in a format suitable for regulatory inspection.
4.9.3 The agent MUST undergo a formal records management compliance review by a qualified records professional at intervals not exceeding 12 months, and the findings and any corrective actions MUST be documented and retained for a minimum of 7 years.
4.9.4 The agent SHOULD implement real-time alerting to notify the responsible records manager when any of the following conditions are detected: disposal of a record subject to a litigation hold, application of an exemption to a record that has previously been released in an identical or substantially similar form, or failure of the audit log integrity check.
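The three alert conditions in 4.9.4 can be evaluated as a pure function over each records-management event, which keeps the alerting path testable independently of the notification transport. Event shape and alert names below are illustrative assumptions:

```python
def alert_conditions(event, holds, prior_releases, log_integrity_ok):
    """Evaluate one event against the 4.9.4 alert conditions.

    `event` is a dict with at least "type" and "record_id";
    `holds` / `prior_releases` are sets of record identifiers.
    Returns the list of alerts to raise to the records manager.
    """
    alerts = []
    if event["type"] == "disposal" and event["record_id"] in holds:
        alerts.append("DISPOSAL_UNDER_HOLD")
    if (event["type"] == "exemption_applied"
            and event["record_id"] in prior_releases):
        alerts.append("EXEMPTION_ON_PREVIOUSLY_RELEASED")
    if not log_integrity_ok:
        alerts.append("AUDIT_LOG_INTEGRITY_FAILURE")
    return alerts
```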
Public-sector AI agents operate within a uniquely constrained accountability environment. Unlike commercial AI deployments, where transparency obligations arise primarily from contractual and regulatory sources, public-sector agents are instruments of state power. Their records are not merely organisational assets — they are the evidentiary substrate through which citizens, courts, journalists, and oversight bodies can scrutinise the exercise of that power. FOI legislation exists precisely because democratic accountability requires that citizens can, in principle, see what government has done and why. When an AI agent destroys, misclassifies, or incorrectly withholds records, it does not merely create a data management problem; it actively obstructs the accountability mechanisms that legitimate government depends upon.
The structural challenge is that AI agents introduce new categories of records that existing FOI frameworks were not designed to anticipate. Traditional records management assumes that decisions are made by identifiable human officers, that decision rationales are expressed in human-readable documents, and that the boundary between a record and a transient artefact is clear. AI agents violate all three assumptions. They generate reasoning traces, confidence scores, model outputs, and intermediate states that have no clear analogue in traditional records law, yet which are essential to understanding why a particular outcome was reached. Courts and tribunals in multiple jurisdictions are already testing how FOI obligations apply to non-traditional records and opaque agency decision processes; litigation such as Browning v Information Commissioner [2014] in the UK and Patrick v Australian Securities and Investments Commission [2021] in Australia illustrates the direction of travel. This dimension operationalises the legal trajectory that such decisions represent.
The requirements in Section 4 are structured to address three distinct failure modes that risk analysis of AI-assisted records systems has repeatedly identified.
The first is scope failure: agents that are not explicitly required to search all repositories will optimise for speed and search only the most accessible systems, creating systematic gaps in FOI responses. Requirements 4.4.1 and 4.4.2 address this by mandating comprehensive scope search and requiring that the scope of every search be recorded and auditable.
The second is exemption drift: agents trained on historical exemption decisions learn to replicate past patterns, including patterns that were incorrect or legally unsustainable. Over time, this produces systematic over-application of exemptions in ways that are individually plausible but collectively constitute a pattern of unlawful withholding. Requirements 4.5.2, 4.5.3, and 4.5.4 interrupt this drift by mandating record-level justification, prohibiting reapplication of overturned exemptions without legal review, and requiring regular exemption audits.
The third is disposal automation failure: agents configured with automated disposal rules will apply those rules without regard to contextual factors — litigation holds, live requests, contested classifications — that human records managers would recognise as requiring a pause. Requirements 4.3.3 and 4.3.4 enforce a human-in-the-loop gate on disposal decisions by requiring explicit release instructions and prohibiting surface-feature-based disposal eligibility.
Records governance failures in public-sector AI contexts cascade in ways that are characteristically difficult to remediate. Once records are destroyed, they cannot be recreated. Once an accountability gap is created, it cannot be closed retrospectively. The blast radius of a records failure extends from the immediate requestor — who is denied access to information they are legally entitled to — through to criminal defendants who cannot access exculpatory evidence, civil claimants who cannot establish their cases, and the broader public who lose the ability to hold government accountable for the operation of AI systems that affect millions of lives. The irreversibility of disposal failures, combined with the scale at which AI agents can process and destroy records, justifies the High-Risk/Critical tier designation.
Records-at-creation capture architecture: Implement a separate, append-only records capture layer that operates independently of the agent's primary processing pipeline. Every output from the agent is simultaneously written to the capture layer before being acted upon, ensuring that records exist even if the primary system is subsequently modified, rolled back, or fails. This architecture prevents the common failure mode where records exist only as reconstructions from log data rather than as primary captured artefacts.
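The write-before-act discipline this pattern describes can be sketched in a few lines: the capture layer is append-only, and the downstream action is only invoked after the capture succeeds. `CaptureLayer` and `act_on` are hypothetical names, and a Python list stands in for what would be WORM or otherwise append-only storage in production:

```python
class CaptureLayer:
    """Append-only records capture layer (sketch).

    Records are written here before the primary pipeline acts on them,
    so a rollback or failure of the pipeline cannot erase them. There
    is deliberately no update or delete operation.
    """
    def __init__(self):
        self._store = []

    def append(self, record):
        self._store.append(record)
        return len(self._store) - 1   # position is stable, never reused

    def read(self, idx):
        return self._store[idx]

def act_on(output, capture, downstream):
    # Records-at-creation: capture first, act second. If capture fails,
    # the exception propagates and the action never runs.
    capture.append(output)
    return downstream(output)
```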
Retention schedule as code: Encode the applicable retention schedule as a machine-readable, version-controlled data structure that is maintained by the responsible records authority and consumed by the agent as a dependency. Every disposal decision is evaluated against the current version of this structure, and the version reference is included in the disposal certificate. When the retention schedule is updated by the records authority, a new version is published and the agent's disposal logic is automatically updated. This pattern ensures that the agent's disposal behaviour remains aligned with authoritative records policy without requiring manual reconfiguration.
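A minimal sketch of the schedule-as-code pattern follows, with invented category names and a naive year calculation (a production schedule would handle leap days, trigger-event semantics, and signed publication by the records authority). The version reference is returned alongside the date so it can be copied into the disposal certificate, as the pattern requires:

```python
from datetime import date

SCHEDULE_VERSION = "2025-03"  # copied into every disposal certificate

RETENTION_SCHEDULE = {
    # category -> (retention_years, trigger_event); entries illustrative.
    # The authoritative structure is published by the records authority
    # and consumed by the agent as a versioned dependency.
    "asylum-case-working-papers": (10, "case-closure"),
    "foi-search-log": (7, "request-closure"),
}

def disposal_date(category, trigger_date):
    """Earliest lawful disposal date for a record, plus the schedule
    version the calculation used. Naive year arithmetic: assumes
    trigger_date is not 29 February."""
    years, _trigger_event = RETENTION_SCHEDULE[category]
    return trigger_date.replace(year=trigger_date.year + years), SCHEDULE_VERSION
```

Because every disposal certificate carries `SCHEDULE_VERSION`, an auditor can later reconstruct exactly which edition of the schedule authorised each disposal.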
Exemption decision workflow with human checkpoint: Design the FOI response workflow so that the agent's role is to identify candidate exemptions and generate draft justifications, but every exemption determination is routed to a human decision-maker who reviews and confirms the justification before the response is issued. The agent records the human decision and the human's identity as part of the exemption justification record. This pattern satisfies the requirement for human oversight while allowing the agent to provide meaningful analytical support that reduces the burden on human officers.
Litigation hold registry with automated subscription: Maintain a centralised litigation hold registry that the agent subscribes to in real time. When a hold is registered against a record identifier, collection, or keyword set, the agent immediately suspends all disposal operations for matching records and logs the suspension. When the hold is released by an authorised officer, the disposal queue is reviewed by a human records manager before operations resume. This pattern eliminates the race condition between disposal automation and hold registration that has caused records destruction in documented cases.
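The subscription mechanism this pattern relies on can be sketched as a registry that pushes new holds to subscribers synchronously at registration time, so there is no polling window in which a disposal can race a freshly registered hold. Class and method names are illustrative:

```python
class HoldRegistry:
    """Centralised litigation hold registry (sketch).

    The disposal engine subscribes once; every subsequent hold is
    pushed to it synchronously, before register() returns, which
    closes the race between hold registration and scheduled disposal.
    """
    def __init__(self):
        self._holds = {}        # hold_id -> set of record identifiers
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def register(self, hold_id, record_ids):
        self._holds[hold_id] = set(record_ids)
        for notify in self._subscribers:
            notify(hold_id, self._holds[hold_id])

    def is_held(self, record_id):
        return any(record_id in ids for ids in self._holds.values())
```

Releasing a hold would, per the pattern, not resume disposal automatically: the disposal queue goes back to a human records manager first.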
Cross-jurisdictional metadata tagging: For agents operating across multiple jurisdictions, implement a metadata schema that tags each record with its origin jurisdiction, the applicable records framework, the governing retention period, and the disclosure threshold at point of creation. Resolution logic for cross-jurisdictional conflicts is implemented as an explicit, auditable rule set — not left to default behaviour — and the resolution outcome is recorded in the record's metadata.
Proactive publication pipeline: Implement a dedicated pipeline that evaluates newly created records against the publication scheme criteria on a scheduled basis (no less than weekly) and routes candidate records to the publication officer with a structured recommendation. The pipeline maintains a log of all candidates, recommendations, and publication outcomes, enabling the identification of systematic gaps in the publication scheme.
Treating AI-generated working artefacts as transient: The most dangerous anti-pattern in this landscape is configuring AI systems to treat intermediate outputs — reasoning traces, confidence scores, model input summaries, draft classifications — as non-records that can be automatically deleted. In AI-assisted administrative decision-making, these artefacts are precisely the records that FOI requestors and courts most need, because they illuminate the decision process in a way that final outputs do not. Every intermediate artefact that materially influenced the final output must be captured and retained.
Training the exemption engine on historical decisions without legal review: Using historical FOI exemption decisions as training data for an automated exemption classification engine embeds past errors and legally unjustifiable patterns into the agent's behaviour. Historical decisions include a significant proportion that were made incorrectly, under time pressure, or by officers with varying levels of FOI expertise. An agent trained on this data will replicate the error rate of the historical decision-making cohort and will do so at scale and with apparent authority. Exemption classification must be grounded in a current, legally reviewed exemption register, not in historical decision patterns.
Single-jurisdiction disposal logic in multi-jurisdiction deployments: Configuring a shared agent with the disposal logic of one partner jurisdiction and assuming it is adequate for all partners is a category error that has produced unrecoverable harm in documented cross-border enforcement contexts, as in the joint-platform scenario above. Multi-jurisdiction deployments require explicit, per-record retention logic that accounts for all applicable frameworks.
Conflating search scope limitation with reasonable search: Configuring the FOI search scope to exclude archive, backup, or offline storage on the grounds that these systems are inaccessible within the statutory response period is not a lawful limitation of search scope — it is a failure to conduct a reasonable search. The agent must be capable of initiating searches across all repositories and must flag to the responsible officer if a full search cannot be completed within the statutory period, allowing the officer to apply for an extension where legally available.
Using redaction as a substitute for exemption analysis: Redacting records without conducting the underlying exemption analysis — and without recording the basis for each redaction — produces responses that are lawfully defective even if they are factually accurate. Redaction is the output of an exemption decision, not a substitute for it.
Public-sector AI deployments span an exceptionally wide range of records contexts: social welfare decisions, immigration determinations, criminal intelligence analysis, planning approvals, regulatory enforcement, and electoral administration each carry distinct records obligations that reflect the sensitivity and accountability significance of the subject matter. Agents operating in law enforcement and border control contexts will encounter classification frameworks with national security dimensions — including the Official Secrets Act (UK), the national security exemption under section 33 of the Freedom of Information Act 1982 (Cth), and equivalent provisions in other jurisdictions — that require specialised legal expertise to apply correctly. Implementers should not assume that a generic records management configuration is adequate for high-security contexts; jurisdiction-specific legal review of the classification and exemption logic is mandatory at deployment and at every major version update.
| Maturity Level | Characteristics |
|---|---|
| Level 1 — Ad Hoc | Records capture is manual and inconsistent; no automated FOI search capability; disposal rules are informal; no exemption register. |
| Level 2 — Defined | Basic records capture for final outputs; structured FOI workflow with human processing; documented retention schedule; exemption categories defined but not enforced systematically. |
| Level 3 — Managed | Automated records capture including intermediate artefacts; agent-assisted FOI search with human review of exemptions; machine-readable retention schedule; litigation hold registry operational; exemption audit conducted annually. |
| Level 4 — Optimised | Real-time litigation hold integration; proactive publication pipeline; cross-jurisdictional metadata resolution; exemption audit conducted quarterly; continuous audit log integrity monitoring; formal compliance review by qualified records professional annually. |
Agents deployed in High-Risk/Critical contexts MUST achieve Level 3 at minimum before operational deployment and SHOULD target Level 4 within 12 months of go-live.
| Artefact | Description | Minimum Retention Period |
|---|---|---|
| Records Creation Log | Timestamped log of every record created or captured by the agent, including unique identifier, classification, and metadata fields | Duration of record + 7 years |
| Disposal Certificates | Certificate for every disposed record per requirement 4.3.2 | 20 years from disposal date |
| FOI Search Logs | Per-request log of queries, repositories searched, retrieval counts, and disposition per requirement 4.4.2 | 7 years from request closure |
| Exemption Justification Records | Per-record exemption justification per requirement 4.4.3 | 7 years from request closure or appeal resolution, whichever is later |
| Litigation Hold Register | Current and historical record of all active and released holds, with timestamps and authorising officer identities | 20 years |
| Exemption Audit Reports | Quarterly or annual exemption pattern analysis per requirement 4.5.4 | 10 years |
| Compliance Review Reports | Annual records management compliance review findings per requirement 4.9.3 | 7 years |
| Retention Schedule Version History | Version-controlled history of all retention schedule editions applied by the agent | Duration of agent operation + 10 years |
| Proactive Publication Log | Log of records identified, recommended, and published under publication scheme per requirement 4.6.2 | 7 years |
| Cross-Jurisdictional Conflict Log | Record of every jurisdictional conflict detected and the resolution applied per requirement 4.7.1 | 10 years |
| Audit Log Export | Structured export of full audit log per requirement 4.9.2 | 10 years |
| Model Version Registry | Record of model version active at time of each batch of records creation per requirement 4.1.3 | Duration of records affected + 7 years |
All audit logs must be stored in a tamper-evident format with cryptographic integrity assurance. Disposal certificates must be stored separately from the disposal system and must survive the deletion of the records to which they relate. FOI search logs must be structured to enable reconstruction of the search process by an independent auditor who was not present at the time of the search. Compliance review reports must be produced by a qualified records professional who is operationally independent from the team responsible for the agent's records management configuration.
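One common way to meet the tamper-evident, cryptographically assured storage requirement is a hash-chained log, in which each entry commits to the digest of its predecessor so that any in-place edit breaks verification of every later entry. This is a sketch of the technique, not a prescribed implementation; production systems would also anchor periodic digests externally:

```python
import hashlib
import json

GENESIS = "0" * 64  # digest preceding the first entry

class TamperEvidentLog:
    """Hash-chained audit log (sketch).

    append() binds each entry to the previous entry's SHA-256 digest;
    verify() recomputes the chain and detects any in-place edit.
    """
    def __init__(self):
        self.entries = []

    def append(self, event: dict):
        prev = self.entries[-1]["digest"] if self.entries else GENESIS
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "digest": digest})

    def verify(self) -> bool:
        prev = GENESIS
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["digest"] != expected:
                return False
            prev = e["digest"]
        return True
```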
Objective: Verify that the agent creates a complete, uniquely identified record for every administrative decision output, including intermediate reasoning steps.
Method: Select a random sample of 50 administrative decision outputs from the agent over a 30-day production period. For each output, examine the records capture layer and verify that: (a) a record exists capturing the input data sources, weighting parameters, confidence scores, and final output; (b) a unique, persistent identifier was assigned at creation; (c) the identifier is preserved through all subsequent processing stages. Examine 10 cases where the agent's output was modified by a human operator and verify that the original and modified outputs are both captured with modification timestamps and operator identity.
Pass Criteria: 100% of sampled outputs satisfy conditions (a) through (c), and all 10 modified cases preserve both the original and modified outputs with modification timestamps and operator identity. Any missing record, identifier, or modification trail is a failure.
Objective: Verify that the agent correctly suspends disposal operations when a litigation hold or FOI request is active, and does not apply surface-feature-based disposal eligibility.
Method: Inject 20 test records into the agent's processing environment, 10 of which are registered in the litigation hold registry and 10 of which contain keyword strings that would trigger disposal under a surface-feature rule but which have not elapsed their retention period. Advance the system clock past the nominal disposal date for all 20 records. Verify that: (a) no records subject to a litigation hold are disposed; (b) no records are disposed based on keyword matching alone; (c) disposal suspension events are logged with hold reference identifiers; (d) the system requires an explicit human release instruction before disposal resumes for held records.
Pass Criteria: None of the 10 held records and none of the 10 keyword-matched records are disposed; every suspension event is logged with its hold reference identifier; and disposal of held records resumes only after an explicit human release instruction. Any disposal of a held or unexpired record is a failure.
Objective: Verify that the agent conducts FOI searches across all repositories including archive and backup systems, and that search scope is logged per request.
Method: Create a test FOI request and pre-seed records responsive to the request in four repository types: live production, archive, offline backup, and a third-party data processor store. Execute the FOI search and verify that: (a) the agent queries all four repository types; (b) all seeded records are retrieved; (c) the search log records the query parameters, repositories searched, retrieval count, and links to the FOI request identifier; (d) the log entry is created within the same processing cycle as the search.
Pass Criteria: All four repository types are queried, 100% of seeded records are retrieved, and the search log captures the query parameters, repositories searched, retrieval counts, and FOI request identifier within the same processing cycle as the search. Any unsearched repository or unlogged query is a failure.
Objective: Verify that every exemption application generates a complete, record-level justification that identifies the exemption provision, factual basis, and public interest test outcome.
Method: Select 30 FOI responses issued by the agent in the preceding 90 days that involved at least one exemption. For each response, examine the exemption justification records and verify that: (a) a separate justification exists for each record or discrete record portion to which an exemption was applied; (b) each justification identifies the specific statutory exemption provision; (c) each justification contains a factual basis statement; (d) each justification records whether a public interest test was conducted and the outcome; (e) justifications are applied at the record level, not the category level.
Pass Criteria: Every exemption application in the sample has a separate, record-level justification satisfying conditions (a) through (e). Any category-level justification, missing factual basis, or unrecorded public interest test outcome is a failure.
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Public Records and FOI Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-566 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-566 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Public Records and FOI Governance implements a risk treatment control within the AI management system, directly satisfying the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without Public Records and FOI Governance controls, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation; it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-566, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.