This dimension governs the obligations of AI-assisted and AI-automated systems operating within public-sector, justice, border, and law-enforcement contexts to preserve the procedural rights of individuals subject to consequential decisions — specifically the rights to notice, meaningful hearing, independent review, and effective appeal. It matters because state power amplified by automated decision-making can extinguish constitutional and statutory due-process protections faster than human-administered processes, producing irreversible harm — detention, deportation, benefit termination, criminal record inscription — before any corrective mechanism can engage. Failure presents as: decisions executed at machine speed with no human-legible justification delivered to the subject; appeal pathways that exist on paper but are structurally inaccessible because the underlying model logic is opaque; and review bodies unable to evaluate the decision because the reasoning artefact is either absent, post-hoc, or protected as proprietary.
A county jail deploys a pretrial risk-assessment instrument that produces a numeric score (0–10) and a categorical recommendation (Release / Supervised Release / Detain) by ingesting 47 variables including prior arrest history, residential stability, and employment status. In 2022, the instrument processes 8,400 pretrial hearings. In 83% of cases, the presiding magistrate's disposition matches the instrument's categorical recommendation without documented independent analysis. Defendants are not informed that a score exists, are not provided the score value, are not told which variables drove the recommendation, and are not given an opportunity to contest the input data before the hearing. One defendant — a warehouse worker whose address record was conflated with that of a half-sibling who had prior failures to appear — receives a score of 8 (Detain) based substantially on the residential-instability proxy. She is detained for 31 days before a public defender discovers the shared-address misclassification and obtains manual correction. She loses her employment and her apartment during the detention period. No notice obligation was coded into the system. No hearing mechanism existed to surface the data error pre-detention. The consequence chain: unnoticed automated input error → no pre-deprivation notice → no pre-deprivation hearing → 31-day liberty deprivation → collateral socioeconomic destruction → post-hoc correction with no restorative remedy.
A national border agency integrates a biometric facial-recognition system that cross-references arriving passengers against a consolidated terrorism and outstanding-warrant watchlist containing 2.1 million records. The system is configured to trigger a "secondary referral" flag autonomously when the facial-similarity score exceeds 0.87. In a six-month operational period, the system generates 1,140 secondary referrals. Of these, 214 involve individuals later confirmed as false positives — an 18.8% false-positive rate. In 61 of those 214 false-positive cases, the subject is detained in a secondary holding area for between 4 and 19 hours while officers complete unrelated processing queues; no supervisor is required to review the match confidence metadata before detention begins. Subjects are not told the basis of referral, are not shown the watchlist entry triggering the match, and are not given a means to present exculpatory identity documentation before being escorted to secondary. One subject — a dual national academic — misses an international connecting flight, loses a conference keynote engagement worth $12,000 in contractual obligations, and is prevented from re-entering the primary terminal for 11 hours without explanation. The structural failure: the system's referral mechanism has no mandated human-review gate between algorithmic flag and physical custody commencement; notice is suppressed on a blanket security-rationale override that the operating procedures do not require to be individuated or time-limited.
A state welfare agency deploys a fraud-detection algorithm that analyses claim patterns, IP address geolocation, device fingerprinting, and benefit usage timing to generate a "suspected duplicate claim" score. When the score exceeds a configurable threshold (set at 0.72 in this deployment), the system automatically issues a termination notice and suspends payment with a 72-hour delay before the termination takes effect. The notice generated is a system template that states only: "Your benefits have been suspended pending fraud review. You may appeal by calling [telephone number]." The notice does not identify: the score value; which signals triggered the flag; the evidentiary basis; or the time limit for appeal submission. Over 18 months, 4,300 termination notices are generated. The telephone appeal line averages a 47-minute hold time and is staffed only Monday–Friday, 09:00–17:00. Of 4,300 recipients, 1,890 — 43.9% — do not submit any appeal. Post-audit reveals that 31% of the non-appealing cases were false positives. One claimant — a single parent of three — has her housing benefit and food-assistance payments terminated simultaneously based on an IP-address flag caused by her use of a library computer shared with another claimant. She does not understand the notice, cannot reach the appeal line during working hours (she works shifts), and loses housing within 6 weeks. Independent audit later establishes the threshold of 0.72 was set to optimise agency cost reduction, not accuracy. The failure chain: opaque automated flag → inadequate notice content → structurally inaccessible appeal channel → 43.9% effective disenfranchisement of appeal rights → 31% false-positive harm materialisation at full severity.
This dimension applies to any AI system — whether fully autonomous, semi-autonomous, or human-assisted — that produces, contributes to, or influences a consequential administrative or enforcement decision affecting a natural person's liberty, residency, benefit entitlement, licensing status, employment eligibility, or access to public services, where that decision is made by or on behalf of a public-sector entity or is made under delegated statutory authority. "Consequential decision" is defined as any decision that creates, modifies, suspends, terminates, or restricts a legal right or entitlement, or that initiates a deprivation of liberty in any form. The dimension applies regardless of whether the AI system is the sole decision-maker or one input among several, and regardless of whether the final decision is formally made by a human official if that official's decision is structurally dependent on AI output without independent assessment capacity. The dimension applies across all jurisdictions in which a covered organisation operates and, where jurisdictions impose conflicting due-process standards, the more protective standard governs.
4.1.1 The system MUST generate and deliver to the affected individual, prior to or contemporaneously with any adverse consequential decision, a notice that: (a) identifies the fact that an automated or AI-assisted process contributed to the decision; (b) describes the general category of information and signals used; (c) states the outcome of the decision in plain language; and (d) states the deadline and mechanism for challenge or appeal.
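As implementation guidance, the notice elements mandated by 4.1.1 can be modelled as a single structured record whose completeness is machine-checkable before dispatch, so an incomplete notice can never be delivered. The sketch below is illustrative only; the class and field names are assumptions, not part of this standard.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch: field names are hypothetical, mapped to 4.1.1(a)-(d).
@dataclass
class AdverseDecisionNotice:
    subject_id: str
    automated_process_disclosed: bool      # 4.1.1(a)
    signal_categories: list                # 4.1.1(b), e.g. ["residential stability"]
    decision_outcome_plain_language: str   # 4.1.1(c)
    appeal_deadline: datetime              # 4.1.1(d)
    appeal_mechanism: str                  # 4.1.1(d)

    def is_complete(self) -> bool:
        """A notice missing any mandated element fails 4.1.1 and must not be sent."""
        return (self.automated_process_disclosed
                and bool(self.signal_categories)
                and bool(self.decision_outcome_plain_language)
                and bool(self.appeal_mechanism)
                and self.appeal_deadline is not None)
```

A dispatch pipeline would call `is_complete()` as a hard gate, refusing to queue any notice that returns `False`.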
4.1.2 Where a decision results in immediate deprivation of liberty, the system MUST generate a notice that is delivered to the affected individual within 2 hours of the commencement of deprivation and no later than at the point of first formal custodial processing.
4.1.3 Notice content MUST NOT be limited to a generic template that fails to reference the individualised basis for the specific decision affecting the subject. Batch-generated template notices that contain no individualised signal reference do not satisfy this requirement.
4.1.4 The system MUST retain a durable record of every notice generated, including timestamp, delivery method, delivery confirmation status, and the exact content delivered to each specific individual, for a minimum of 7 years or the applicable statutory limitation period for challenge, whichever is longer.
4.1.5 Where a notice cannot be delivered due to address or contact failure, the system MUST log the delivery failure, flag the case for human review within 24 hours, and suspend execution of any irreversible consequential action until delivery is confirmed or a substitute notice mechanism is activated.
4.1.6 Notice MUST be provided in a language accessible to the subject. Where the subject's primary language is identified in available records and differs from the default system language, the system MUST either generate the notice in the subject's language or flag for human-translated notice delivery before the adverse action takes effect.
4.2.1 For any consequential decision that is not time-critical emergency action, the system MUST enforce a mandatory human-review gate before the decision takes irrevocable effect, during which the subject MUST have an opportunity to present responsive information or contest the factual basis of the automated assessment.
4.2.2 The system MUST make available to the subject, prior to any hearing or review opportunity, a summary of the specific inputs, flags, or risk indicators that drove the adverse output, expressed at sufficient granularity for the subject to identify and respond to factual errors.
4.2.3 The system MUST provide a structured mechanism — distinct from a generic complaint channel — through which the subject can submit a formal objection to specific input data before the irrevocable adverse action takes effect.
4.2.4 Where a time-critical exception to pre-decision hearing is invoked (e.g., immediate border security detention, emergency protective custody), the system MUST: (a) log the specific justification for the exception; (b) require authorisation from a named responsible official; (c) enforce a mandatory post-deprivation hearing within a defined and system-enforced time window not exceeding 48 hours; and (d) generate a notice to the subject that the exception was invoked and that a post-deprivation hearing is scheduled.
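The exception workflow in 4.2.4 lends itself to an atomic invocation record in which the 48-hour post-deprivation hearing deadline is computed at the moment the exception is logged, making the window system-enforced rather than manually tracked. A minimal sketch, with hypothetical function and field names:

```python
from datetime import datetime, timedelta

def invoke_time_critical_exception(subject_id, justification, authorising_official,
                                   now=None):
    """4.2.4 sketch: an exception cannot be logged without a specific
    justification (a) and a named official (b); logging it schedules a
    hearing deadline within the 48-hour window (c) and records that the
    subject notice was generated (d)."""
    if not justification or not authorising_official:
        raise ValueError("exception requires a specific justification and a named official")
    now = now or datetime.utcnow()
    return {
        "subject_id": subject_id,
        "justification": justification,
        "authorised_by": authorising_official,
        "invoked_at": now,
        "hearing_deadline": now + timedelta(hours=48),  # system-enforced window
        "subject_notice_generated": True,
    }
```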
4.2.5 The hearing mechanism MUST be structurally accessible — operating hours, format, language, and channel — to the population of affected subjects. A hearing mechanism that is technically available but operationally inaccessible to a predictable proportion of the subject population does not satisfy this requirement.
4.3.1 The system MUST be capable of generating, on demand and in real time at the point of decision, a human-legible explanation of the decision rationale that: (a) identifies the top contributing factors to the adverse output; (b) quantifies or qualitatively describes the relative weight or influence of each identified factor; and (c) identifies any input data that was flagged as uncertain, estimated, or derived rather than directly observed.
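One way to make the 4.3.1 obligations concrete is to treat the explanation as a structured artefact ranked by factor influence, with non-observed inputs flagged explicitly. The sketch below assumes per-factor weights are available from the model or its explainability layer; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ContributingFactor:
    name: str
    weight: float       # relative influence on the output, 4.3.1(b)
    data_quality: str   # "observed" | "estimated" | "derived", 4.3.1(c)

def build_explanation(factors, top_n=5):
    """4.3.1 sketch: rank factors by absolute influence and surface the
    top contributors (a)/(b), flagging any input not directly observed (c)."""
    ranked = sorted(factors, key=lambda f: abs(f.weight), reverse=True)[:top_n]
    return {
        "top_factors": [(f.name, round(f.weight, 3)) for f in ranked],
        "uncertain_inputs": [f.name for f in ranked if f.data_quality != "observed"],
    }
```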
4.3.2 Explanations generated for due-process purposes MUST NOT be post-hoc rationalisations generated by a separate interpretability layer that is disconnected from the actual decision logic. Where the operational model is not natively explainable, the system MUST either: (a) substitute an audited proxy model that is explainable and materially equivalent in outcomes; or (b) require full human independent assessment before the adverse action takes effect.
4.3.3 The system MUST retain the complete decision record — including model version, input vector, output score or recommendation, explanation artefact, and the identity of any human official who approved or confirmed the decision — in tamper-evident audit storage for the minimum retention period specified in 4.1.4.
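Tamper evidence for the 4.3.3 decision record can be achieved in several ways; one common approach is a hash chain, in which each stored entry commits to the hash of its predecessor so that any retrospective edit invalidates every subsequent link. A minimal SHA-256 sketch, illustrative only and not a substitute for a hardened audit store:

```python
import hashlib
import json

def append_record(chain, record):
    """Append a decision record to a hash-chained log (tamper-evident sketch)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain):
    """Recompute every link; any edited record breaks the chain from that point on."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        recomputed = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```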
4.3.4 Explanation artefacts MUST be disclosed to the subject upon request within a period not exceeding the organisation's applicable subject-access response deadline, or 30 days from request, whichever is shorter.
4.4.1 The system MUST provide, and prominently disclose to affected subjects, a formal appeal pathway that enables independent review of the adverse decision by a person or body with: (a) access to the complete decision record; (b) authority to overturn, modify, or stay the adverse action; and (c) no structural conflict of interest with the initial decision.
4.4.2 The appeal pathway MUST be technically operable — the system MUST NOT enforce any technical barrier (account requirement, fee payment, form submission complexity) that is not required by law and that is not uniformly applied without discriminatory effect.
4.4.3 Where an appeal is lodged, the system MUST automatically generate a case reference, log the appeal receipt with timestamp, and — where the adverse action is capable of being stayed — MUST flag the case for human review of whether a stay should be applied pending appeal outcome.
4.4.4 The system MUST track and report appeal outcomes in aggregate, including: total appeals lodged; appeals upheld (full and partial); appeals dismissed; appeals abandoned; and mean time to resolution. This data MUST be reviewed by a designated responsible official at intervals not exceeding 90 days and MUST be used to evaluate the threshold calibration and input data quality of the underlying model.
4.4.5 Where an appeal results in a decision in favour of the subject, the system MUST: (a) record the correction in the subject's case record; (b) propagate the correction to any downstream system or agency that received the adverse determination; (c) trigger an automatic review of other cases processed under the same model version and parameter configuration that may have been similarly affected; and (d) flag the matter for root-cause analysis.
4.5.1 Any fully automated adverse decision affecting liberty, residency, or termination of primary subsistence benefits MUST require affirmative confirmation by a responsible human official before taking irrevocable effect. A passive review — where the human is presented with the AI recommendation and must act only to override rather than to confirm — does not satisfy this requirement.
4.5.2 The confirming human official MUST have access to, and MUST attest to having reviewed, the complete decision record and explanation artefact before confirming the decision.
4.5.3 The system MUST maintain records of all human confirmations including the identity of the confirming official, timestamp, and whether the official's decision aligned with or deviated from the AI recommendation. Deviation rates MUST be analysed as an ongoing quality indicator.
4.5.4 Where a human official's confirmation rate of AI recommendations exceeds 95% over any rolling 90-day period, the system MUST generate an alert for supervisory review to assess whether meaningful independent review is occurring or whether automation bias is producing nominal rather than substantive oversight.
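The 4.5.4 trigger reduces to a rolling-window rate computation over the confirmation log maintained under 4.5.3. A sketch, assuming the log is available as (date, followed-AI-recommendation) pairs; the function name and parameters are illustrative:

```python
from datetime import date, timedelta

def confirmation_rate_alert(confirmations, as_of, window_days=90, threshold=0.95):
    """4.5.4 sketch: alert when an official's rolling 90-day rate of confirming
    AI recommendations exceeds 95%. `confirmations` is a list of
    (decision_date, followed_ai_recommendation) pairs for one official."""
    cutoff = as_of - timedelta(days=window_days)
    window = [followed for d, followed in confirmations if cutoff < d <= as_of]
    if not window:
        return False
    return sum(window) / len(window) > threshold
```

In practice the alert would run per official on each reporting cycle and route breaches to supervisory review rather than acting automatically.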
4.6.1 Where a consequential decision is based on data received from another jurisdiction or agency, the system MUST verify that the source data was collected and processed in accordance with the legal standards of the jurisdiction in which the adverse action is to be taken.
4.6.2 The system MUST disclose to the subject the identity of any third-party data source that materially contributed to the adverse decision, to the extent permitted by applicable law. Where disclosure is restricted by law, the system MUST log the restriction invocation with specific legal authority cited.
4.6.3 Where a decision has legal effect in multiple jurisdictions, the system MUST apply the most protective due-process standard applicable across those jurisdictions.
4.6.4 Data-sharing agreements with other agencies or jurisdictions MUST include explicit provisions governing: (a) the due-process obligations of each party when using shared data for consequential decisions; and (b) correction propagation obligations when a shared data record is found to be erroneous following an appeal or review.
4.7.1 The organisation MUST designate a named Due-Process Accountability Officer (DPAO) with documented authority to suspend or modify AI system operation where due-process obligations are not being met.
4.7.2 The system MUST produce a monthly operational report covering: total consequential decisions; notice delivery success rate; pre-decision hearing utilisation rate; appeal rate; appeal upheld rate; false-positive rate (where calculable); and any exceptions invoked under 4.2.4.
4.7.3 The DPAO MUST review the monthly operational report and MUST escalate to executive governance within 5 business days where: appeal upheld rate exceeds 15%; notice delivery failure rate exceeds 5%; or false-positive rate exceeds 10%.
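The 4.7.3 escalation test is a straightforward threshold check over the monthly report metrics defined in 4.7.2. A sketch; the metric key names are assumptions, not mandated:

```python
def requires_escalation(report):
    """4.7.3 sketch: return the list of breached escalation thresholds for a
    monthly operational report. An empty list means no escalation is required."""
    triggers = []
    if report.get("appeal_upheld_rate", 0) > 0.15:
        triggers.append("appeal_upheld_rate > 15%")
    if report.get("notice_delivery_failure_rate", 0) > 0.05:
        triggers.append("notice_delivery_failure_rate > 5%")
    fp = report.get("false_positive_rate")   # only calculable in some deployments
    if fp is not None and fp > 0.10:
        triggers.append("false_positive_rate > 10%")
    return triggers
```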
4.7.4 Annual independent audits of due-process compliance MUST be conducted by a body with no operational dependency on the AI system under review. Audit reports MUST be retained for a minimum of 10 years and MUST be made available to supervisory regulators upon request.
4.8.1 The system MUST analyse notice delivery rates, hearing utilisation rates, and appeal rates disaggregated by protected characteristics (where data is legally collectible) and by proxies for vulnerability such as primary language, benefit dependency, and detention status.
4.8.2 Where statistically significant disparities are identified in any due-process access metric across demographic groups, the organisation MUST conduct a root-cause analysis within 30 days and MUST implement corrective measures within 90 days.
4.8.3 The system MUST NOT use cost-reduction objectives as the primary basis for setting decision thresholds that affect the rate of adverse decisions subject to due-process obligations. Threshold-setting decisions MUST be documented with a primary accuracy and fairness justification.
4.9.1 The organisation MUST maintain a documented due-process incident-response procedure that is activated when: a systemic failure in notice delivery is identified; an appeal upheld rate triggers the escalation threshold in 4.7.3; a judicial or regulatory finding is made that due-process obligations were breached; or a model error is identified that may have produced materially incorrect adverse decisions at scale.
4.9.2 Upon activation of the incident-response procedure, the system MUST: (a) suspend further irrevocable adverse decisions under the affected model configuration pending review; (b) notify all individuals who received an adverse decision under the affected configuration within 30 days; (c) establish an expedited appeal pathway for affected individuals; and (d) report the incident to the relevant supervisory authority within the legally required notification period.
4.9.3 Remediation actions taken following a due-process incident MUST be documented, including the scope of affected individuals, the nature of the failure, corrective measures implemented, and compensatory or restorative actions offered to affected individuals.
Due-process protections are among the oldest recognised constraints on state power, enshrined across constitutional, statutory, and treaty frameworks precisely because the historical pattern of unconstrained administrative authority demonstrates that process failures — not merely substantive errors — produce the most durable and the most socially corrosive harms. When a person is wrongly detained, deported, or stripped of subsistence support, the harm is rarely confined to the duration of the error: it generates cascading secondary losses — employment, housing, family integrity, immigration status — that persist long after the primary error is corrected. AI systems deployed in public-sector enforcement contexts amplify this dynamic in two structural ways that are qualitatively different from human-administered process failures.
First, AI systems operate at volume and speed that outpace any human corrective reflex. A fraud-detection algorithm that generates 4,300 adverse decisions in 18 months creates a volume of individual harm that no ex post complaints process can meaningfully address at comparable speed. The aggregate social damage accumulates faster than oversight institutions can track it. This is the velocity asymmetry problem: automated harm accrues faster than institutional remediation operates, meaning that prevention is not merely preferable to cure — it is structurally the only viable protection for a significant proportion of affected individuals.
Second, AI systems introduce a new category of opacity into administrative decision-making. Human administrative decisions, however poor in quality, are at minimum legible: a human decision-maker can be asked why they decided as they did and can give an answer that an appeal body can evaluate. A complex model producing a risk score cannot be interrogated in this way unless explicit explainability infrastructure is built and maintained. The absence of that infrastructure does not merely inconvenience the appeals process — it structurally nullifies it. An appeal reviewer who cannot access the reasoning underlying the decision cannot exercise independent judgment; they can only ratify or reject the outcome, which is not review. This is why AG-560 treats explainability not as a supplementary quality feature (as it might be in a commercial context) but as a procedural prerequisite for due-process compliance.
Purely behavioural controls — training officials to apply independent judgment, issuing guidance on when to override AI recommendations, requiring sign-off from senior personnel — are necessary but structurally insufficient in high-volume enforcement environments. The evidence from automation bias research consistently shows that where humans are required to act to deviate from a machine recommendation rather than act to confirm it, and where the volume of cases is high, meaningful independent human review effectively ceases within weeks of deployment. The Requirement Statement therefore imposes affirmative-confirmation gates (4.5.1), deviation-rate monitoring (4.5.3), and automation-bias detection thresholds (4.5.4) precisely because behavioural guidance without structural enforcement creates the appearance of human oversight while extinguishing its substance.
The Critical-tier classification is justified by the intersection of three factors: (1) the state-power context, in which the organisation exercising authority has coercive legal capacity that private entities lack; (2) the irreversibility characteristic of primary adverse outcomes (detention, deportation, benefit loss causing housing failure); and (3) the vulnerable-population concentration — the individuals most likely to receive adverse decisions in justice, border, and welfare contexts are disproportionately those with the least capacity to self-advocate through complex administrative processes. The combination of coercive power, irreversible harm, and concentrated vulnerability in a population with limited corrective capacity is the definition of a Critical-tier risk profile.
Pattern 1 — Pre-Decision Notice Generation at Inference Time. Integrate notice generation directly into the model inference pipeline so that the notice document is produced as a simultaneous artefact of the decision, not a downstream administrative step. The notice content — specifically the individualised signal summary required by 4.1.3 — should be populated from the same explanation artefact generated for due-process purposes under 4.3.1. This ensures consistency between what the decision-maker saw and what the subject receives, and eliminates the window in which an adverse action can be executed before notice is generated.
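A minimal sketch of Pattern 1, in which inference, explanation, and notice rendering form a single atomic step so no adverse action can be executed before its notice exists. `model`, `explain`, and `render_notice` are hypothetical callables standing in for the deployment's own components:

```python
def decide_with_notice(model, explain, render_notice, input_vector):
    """Pattern 1 sketch: the notice is a simultaneous artefact of inference,
    populated from the same explanation artefact used for 4.3.1 purposes."""
    output = model(input_vector)
    explanation = explain(model, input_vector, output)   # due-process artefact
    notice = render_notice(output, explanation)          # individualised per 4.1.3
    # Decision, explanation, and notice are returned atomically: downstream
    # enforcement code never sees an output without its notice.
    return {"output": output, "explanation": explanation, "notice": notice}
```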
Pattern 2 — Affirmative-Confirmation Workflow Design. Design confirmation workflows so that the default state is non-action: the system presents the decision and supporting record to the responsible official, and the adverse action does not proceed unless the official actively selects a "Confirm" action. The "Confirm" action should be coupled with a mandatory checklist attestation covering: review of the explanation artefact; review of any subject-submitted response; and assessment of whether a pre-deprivation hearing is required. Passive timeout or inaction should never be treated as confirmation.
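Pattern 2 can be enforced in code by making "held" the only reachable state without both an explicit confirm action and a complete checklist attestation. The sketch below is illustrative; the attestation names are assumptions drawn from the checklist described above:

```python
# Hypothetical attestation checklist items, per Pattern 2.
REQUIRED_ATTESTATIONS = {
    "reviewed_explanation_artefact",
    "reviewed_subject_response",
    "assessed_hearing_requirement",
}

def confirm_adverse_action(official_id, attestations, confirm_clicked):
    """Pattern 2 sketch: the default state is non-action. Timeout or inaction
    never progresses the adverse action, and confirmation requires the full
    checklist (4.5.1, 4.5.2)."""
    if not confirm_clicked:
        return {"status": "held"}                      # inaction == no action
    missing = REQUIRED_ATTESTATIONS - set(attestations)
    if missing:
        return {"status": "held", "missing": sorted(missing)}
    return {"status": "confirmed", "official": official_id}
```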
Pattern 3 — Structured Appeal Case Management. Implement a dedicated appeal case-management module that is logically and technically separate from the primary enforcement system. This separation ensures that appeal reviewers are not viewing a system interface that subtly re-presents the original adverse recommendation as the default framing. The appeal module should surface the full decision record, the subject's submitted objection, and any corrections to input data identified since the original decision, without pre-loading any AI-generated disposition recommendation.
Pattern 4 — Tiered Hearing Accessibility. Design hearing mechanisms with multiple access channels: written submission (by post and electronically), telephone callback (not hold queue), and in-person options where applicable. For populations with identified language barriers, integrate translation services at the point of hearing request, not as a separate subsequent step. Conduct annual accessibility audits measuring actual utilisation rates by channel and demographic, not merely the formal availability of channels.
Pattern 5 — Cascade Correction Automation. When an appeal or review results in a finding that input data was erroneous, implement an automated cascade-correction workflow that: identifies all other active cases in which the same erroneous data record was used; flags those cases for priority human review; and generates outbound notifications to affected individuals. This pattern converts what is otherwise a one-by-one corrective process into a systemic remediation mechanism proportionate to the volume at which errors are generated.
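The cascade step in Pattern 5 is essentially a fan-out query over active cases keyed by the erroneous record, followed by per-case review flags and notifications. A sketch, in which `notify` and `flag_for_review` are hypothetical callbacks into the case-management system:

```python
def cascade_correction(cases, erroneous_record_id, notify, flag_for_review):
    """Pattern 5 sketch: propagate a data-error finding to every other active
    case that consumed the same input record (see also 4.4.5(c))."""
    affected = [c for c in cases
                if erroneous_record_id in c["input_record_ids"] and c["active"]]
    for case in affected:
        flag_for_review(case["case_id"])   # priority human review
        notify(case["subject_id"])         # outbound notification to the subject
    return [c["case_id"] for c in affected]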
Pattern 6 — Threshold Governance Register. Maintain a formal threshold governance register for all decision thresholds that affect the rate of adverse decisions. Each threshold entry should document: the original justification; the fairness and accuracy analysis conducted at threshold-setting; the official who approved the threshold; the date of last review; and any changes made with documented rationale. Threshold changes MUST NOT be made by system configuration staff without review by the DPAO.
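The register entry described in Pattern 6 maps naturally onto a versioned record in which every change appends to an immutable history and is gated on DPAO review. The sketch below is illustrative; names and the `PermissionError` convention are assumptions:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ThresholdEntry:
    """Pattern 6 sketch: one governance-register entry per decision threshold."""
    name: str
    value: float
    justification: str            # original accuracy/fairness rationale
    fairness_analysis_ref: str    # reference to the documented analysis
    approved_by: str
    last_reviewed: date
    history: list = field(default_factory=list)

def change_threshold(entry, new_value, rationale, dpao_approved):
    """Threshold changes require DPAO review before taking effect; the prior
    value and rationale are preserved in the entry's change history."""
    if not dpao_approved:
        raise PermissionError("DPAO review required before threshold change")
    entry.history.append((entry.value, entry.last_reviewed, rationale))
    entry.value = new_value
    entry.last_reviewed = date.today()
    return entry
```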
Anti-Pattern 1 — The Proprietary Model Shield. Refusing to produce explanation artefacts for due-process purposes on grounds of commercial confidentiality or proprietary model protection. The due-process obligation is owed to the individual subject to state power; it cannot be extinguished by a contractual relationship between the procuring agency and a technology vendor. Procurement contracts for AI systems used in consequential public-sector decisions MUST include explicit provisions requiring the vendor to produce explanation artefacts on demand and to support audit and appeal processes. Where a vendor refuses these terms, the system MUST NOT be deployed in consequential decision contexts.
Anti-Pattern 2 — Generic Batch Notices. Generating thousands of identically worded notices that name the outcome but provide no individualised information about why that outcome was generated. Generic notices do not satisfy the notice requirement and have been found by multiple administrative tribunals and courts to breach procedural fairness obligations. They are also functionally useless as a due-process protection: a notice that tells a person "your benefits are suspended" without telling them why does not enable them to identify whether there is an error to contest.
Anti-Pattern 3 — The Single High-Friction Channel. Providing a single appeal mechanism (typically a telephone number with high hold times, restricted operating hours, and no alternative) and treating its existence as satisfaction of the appeal-pathway requirement. A pathway that is technically available but practically inaccessible to a predictable proportion of the affected population is not a functional due-process protection. The 43.9% effective disenfranchisement rate in Example 3 is the measurable consequence of this anti-pattern.
Anti-Pattern 4 — Passive Human Review (Rubber-Stamping). Designing confirmation workflows in which the human reviewer must actively override the AI recommendation rather than actively confirm it, and in which the system default progresses the adverse action on inaction. This design, combined with high decision volumes and time pressure, produces nominal human oversight while eliminating its substance. The automation-bias literature documents consistent convergence toward >95% alignment rates under these conditions, which is the trigger for supervisory review under 4.5.4.
Anti-Pattern 5 — Post-Hoc Explanation Fabrication. Using a separate post-decision interpretability tool that produces an explanation for a decision after the fact, where that explanation is not derived from the actual decision pathway. Post-hoc explanations produced by tools that approximate the model's likely reasoning using perturbation methods are not reliable records of why a specific decision was made; they are approximations of what the model might have been doing. Using them as due-process artefacts creates a systematic risk that appeal reviews are evaluating an explanation that does not correspond to the actual basis of the decision.
Anti-Pattern 6 — Threshold Optimisation for Cost Without Fairness Analysis. Setting decision thresholds based primarily or exclusively on agency resource-efficiency metrics — reducing caseload, minimising benefit expenditure, maximising processing throughput — without conducting and documenting a fairness and accuracy impact assessment. This practice, documented in the Example 3 scenario, produces systematically elevated false-positive rates that fall disproportionately on vulnerable subpopulations and constitutes a structural violation of the equity monitoring requirements in Section 4.8.
| Maturity Level | Characteristics |
|---|---|
| Level 1 — Reactive | Notice generated manually post-decision; no structured appeal pathway; explanation available only under compelled disclosure; no DPAO designated |
| Level 2 — Compliant | Automated notice generation with basic individualisation; formal appeal pathway exists; explanation artefacts produced on request; DPAO designated; monthly reporting operational |
| Level 3 — Managed | Pre-decision hearing mechanism operational; affirmative-confirmation workflow enforced; disaggregated equity monitoring active; cascade-correction automation implemented; threshold governance register maintained |
| Level 4 — Optimised | Real-time due-process performance dashboards; predictive identification of appeal risk cases for proactive outreach; multi-channel accessible hearing design verified by accessibility audit; independent annual due-process audit with public reporting; appeal outcome feedback loop integrated into model retraining governance |
For every consequential decision, the organisation MUST retain: the complete text of the notice delivered (or attempted); the delivery method; the timestamp of delivery or delivery attempt; the confirmation of delivery (read receipt, postal confirmation, or equivalent); and, where delivery failed, the failure log and the subsequent action taken. Retention period: 7 years minimum, or the applicable statutory limitation period for challenge, whichever is longer.
For every consequential decision, the organisation MUST retain: the model version and configuration at time of decision; the complete input vector supplied to the model (or a hash-verified reference to the source record); the model output (score, probability, categorical recommendation); the explanation artefact generated for due-process purposes; the identity and attestation record of any confirming human official; and any subject-submitted responsive information. Retention period: 7 years minimum, or the applicable statutory limitation period, whichever is longer. All decision records MUST be stored in tamper-evident storage with integrity-verification capability.
The appeal case-management system MUST retain: the full record of each appeal lodged, including the subject's submission; the case reference and timestamp of receipt; the identity of the reviewer; the reviewer's determination and reasoning; any stay applied; the notification of outcome to the subject; and, where the appeal was upheld, the correction and cascade-correction records. Retention period: 10 years.
Monthly operational reports (4.7.2) MUST be retained for 5 years. Annual independent audit reports (4.7.4) MUST be retained for 10 years. Threshold governance register entries MUST be retained for the operational life of the model plus 10 years. Automation-bias alert logs (4.5.4) MUST be retained for 5 years.
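Retention periods scattered across requirements are easy to misapply; holding them in one configuration table lets disposal tooling enforce them from a single source. A sketch under the schedule above (artifact keys are illustrative; the "operational life plus 10 years" rule for the threshold register is modelled separately because it has no fixed start-relative duration):

```python
# Minimum retention in years per artifact class, per the schedule above.
RETENTION_YEARS = {
    "monthly_operational_report": 5,        # 4.7.2
    "annual_independent_audit_report": 10,  # 4.7.4
    "automation_bias_alert_log": 5,         # 4.5.4
}

# Threshold governance register: operational life of the model plus this.
THRESHOLD_REGISTER_POST_LIFE_YEARS = 10

def retention_expired(artifact: str, age_years: float) -> bool:
    """True once an artifact has passed its minimum retention period
    and becomes eligible (not obligated) for disposal review."""
    return age_years >= RETENTION_YEARS[artifact]
```

Note that expiry here marks eligibility for disposal review, not automatic deletion — litigation holds or longer statutory limitation periods still override the minimum.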
All due-process incident-response activations MUST be documented in an incident record covering: the trigger event; the scope of affected individuals; the suspension action taken; notifications issued; the expedited appeal pathway established; the regulator notification submitted; and the final remediation outcome. Retention period: 10 years.
Disaggregated due-process access analysis reports (4.8.1) MUST be retained for 7 years. Root-cause analysis and corrective-measure documentation (4.8.2) MUST be retained for 10 years.
Data-sharing agreements (4.6.4) and vendor contracts for AI systems used in consequential decisions (Anti-Pattern 1 guidance) MUST be retained for the operational life of the agreement plus 10 years and MUST be available to supervisory regulators upon request.
| Score | Meaning |
|---|---|
| 0 | Requirement not met; no evidence; critical gap |
| 1 | Partial compliance; material gaps present; remediation required |
| 2 | Substantially compliant; minor gaps or documentation deficiencies |
| 3 | Fully compliant; complete evidence; no material gaps |
Objective: Verify that notice generated for consequential decisions is individualised, timely, delivered, and language-accessible.
Procedure:
Pass Criteria: Score 3: ≥95% of sample cases fully compliant. Score 2: 80–94% compliant. Score 1: 60–79% compliant. Score 0: <60% compliant or audit storage unavailable.
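The pass-criteria bands above map directly to a scoring function, which removes ambiguity at the band edges (95% scores 3; 80% scores 2; 60% scores 1). A sketch — the empty-sample handling reflects the "audit storage unavailable" clause and is an assumption about how that case is detected:

```python
def audit_score(compliant: int, sampled: int) -> int:
    """Map a sample compliance rate to the 0-3 audit score bands:
    >=95% -> 3, 80-94% -> 2, 60-79% -> 1, <60% -> 0."""
    if sampled == 0:
        return 0  # no auditable sample: treat as audit storage unavailable
    rate = 100 * compliant / sampled
    if rate >= 95:
        return 3
    if rate >= 80:
        return 2
    if rate >= 60:
        return 1
    return 0
```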
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Due-Process Preservation Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-560 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-560 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Due-Process Preservation Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without due-process preservation governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation but the outright absence of a control, permitting unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-560, with potential cascade into dependent dimensions and downstream systems. Operational impacts include regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, on timelines measured in weeks to months.