This dimension governs the conditions under which human agents retain effective override authority over automated or AI-assisted border screening and immigration enforcement systems, ensuring that no consequential determination affecting the liberty, rights, or legal status of a person is finalised without meaningful human review. It matters because border screening systems operate at the intersection of state power, individual rights, and international law obligations — where algorithmic errors compound into detention, deportation, family separation, and refoulement, often affecting populations with the least institutional capacity to seek redress. Failure manifests as systems that present human review as a procedural formality while structurally preventing genuine override: override interfaces that are technically present but cognitively inaccessible, audit trails that exist but are never reviewed, and escalation paths that nominally exist but are never activated because the cost of overriding an automated recommendation exceeds the social or institutional cost of compliance.
A secondary screening AI deployed at a land border crossing matched traveller names against a terrorism watch-list using transliterated Arabic name variants. Over a 14-month period, 342 individuals sharing common transliterations of three names were flagged for secondary inspection. Of those, 87 were held for between 4 and 19 hours before manual review resolved the false positive. Eleven were transferred to immigration holding prior to any human officer reviewing the underlying match rationale. The system's interface presented the match confidence as a numeric score (0.87–0.94) without surfacing the number of prior false positives for those specific name clusters. Officers later reported in post-incident interviews that scores above 0.85 were treated as definitive. No officer had exercised a manual override in the relevant 6-week sample period reviewed, despite written policy stating override was available. The failure chain: automated match → high-confidence display → social/institutional pressure against override → detention without human determination. Consequence: three individuals missed medical appointments for chronic conditions; one missed a custody hearing; two initiated judicial review proceedings under 8 U.S.C. § 1252.
An EU member state deployed facial recognition at an airport arrival gate to pre-screen passengers against a database of persons subject to entry bans. The system returned a match confidence of 91% for a Kenyan national travelling on a valid Schengen visa. The officer on duty received an automated alert flagging the individual for secondary screening without any indication that the confidence threshold for the underlying model had been lowered from 95% to 88% three weeks earlier following a vendor update, and without any indication of the model's documented false positive rate at 88% confidence (approximately 1 in 23 matches). The individual was held for 6 hours, missed a connecting flight to a medical conference where she was a keynote speaker, and was not informed of the basis for her detention. The human officer did not override the system's recommendation because no written procedure explained when override was appropriate and the officer had not received training on the new threshold. The failure chain: undisclosed model change → reduced threshold → no operator notification → no training update → officer unable to contextualise alert → de facto automated determination. Consequence: formal complaint to national data protection authority; judicial review initiated; government settled for €12,000 in damages.
The Australian Border Force deployed a passenger risk-scoring model that assigned travellers arriving from specified origin countries elevated secondary screening scores. An internal audit covering 18 months of deployment identified that travellers from seven countries in East Africa and the Middle East were referred for secondary screening at 4.3× the rate of travellers from OECD countries presenting statistically equivalent documentation profiles. The audit further found that human officers had overridden the automated recommendation in fewer than 2% of flagged cases over the period, compared to a 34% override rate in a control period two years prior when manual checklists were used. Post-audit interviews established that officers interpreted the system's recommendation as an institutional directive rather than an input. The failure chain: disparate-impact scoring model → low override rate → systematic disparate treatment → rights violation at scale. Consequence: Senate Estimates Committee inquiry; recommendation to suspend the risk-scoring module pending independent bias audit; three class complaints lodged under the Racial Discrimination Act 1975.
This dimension applies to any AI or automated system deployed by or on behalf of a government border authority, immigration agency, customs service, or equivalent public enforcement body that produces outputs — including but not limited to risk scores, watch-list matches, document authenticity determinations, biometric identifications, and entry/exit recommendations — that are used as inputs into decisions affecting the liberty, legal status, entry rights, or detention of natural persons at border control points, airports, seaports, land crossings, or administrative processing centres. It applies equally to primary inspection, secondary inspection, and pre-clearance administrative workflows. It applies whether the system is operated directly by the government agency or by a contracted third party. It does not apply to purely logistical or non-person-affecting automation (e.g., baggage conveyor routing) unless such systems are integrated into person-affecting decision workflows.
4.1.1 The system MUST be architected such that no determination to detain, refuse entry, refer for secondary screening beyond a defined time threshold, or initiate removal proceedings is finalised without an affirmative human decision.
4.1.2 The system MUST clearly distinguish, in all interfaces presented to human operators, between an automated recommendation and a final determination, using unambiguous visual and semantic differentiation.
4.1.3 The system MUST NOT present automated outputs in a format that requires the human operator to take an affirmative action to reject an automated recommendation (i.e., opt-out framing); all consequential determinations MUST require a positive human confirmation action.
4.1.4 The system MUST provide human operators with a clearly labelled, always-accessible override mechanism for any automated recommendation, accessible within two user-interface steps from the point at which the recommendation is displayed.
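The non-finality requirement in 4.1.1 and the opt-out prohibition in 4.1.3 can be enforced at the workflow layer rather than left to written policy. A minimal sketch, assuming a simplified case model; the names (`Recommendation`, `finalise`) are illustrative, not a mandated interface:

```python
from dataclasses import dataclass
from enum import Enum


class Determination(Enum):
    AFFIRM = "affirm"          # officer confirms the automated recommendation
    OVERRIDE = "override"      # officer rejects the automated recommendation
    ESCALATE = "escalate"      # officer refers the case upward


@dataclass(frozen=True)
class Recommendation:
    case_id: str
    action: str                # e.g. "refer_secondary", "refuse_entry"
    confidence: float


def finalise(rec: Recommendation,
             officer_id: str | None,
             decision: Determination | None) -> str:
    """Finalise a consequential determination (4.1.1, 4.1.3).

    Raises rather than defaulting: a missing officer decision can never
    silently become an adverse outcome (no opt-out framing).
    """
    if officer_id is None or decision is None:
        raise PermissionError(
            "4.1.1: no determination may be finalised without an "
            "affirmative human decision")
    if decision is Determination.OVERRIDE:
        return f"case {rec.case_id}: recommendation overridden by {officer_id}"
    return f"case {rec.case_id}: {rec.action} confirmed by {officer_id}"
```

The essential property is that the absence of an officer decision raises an error rather than falling through to the recommended action; a timeout can therefore escalate (per 4.4.1) but never finalise.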
4.2.1 At the point of displaying any automated recommendation to a human operator, the system MUST present, in human-readable form: (a) the primary factors contributing to the recommendation; (b) the confidence level of the underlying model output; (c) the documented false positive rate for the active model version at the operating confidence threshold; and (d) the number of prior false positives for the specific match type (e.g., name cluster, facial recognition database segment) in the preceding 90-day window.
4.2.2 When a model version, confidence threshold, or operating parameter has changed within the preceding 30 days, the system MUST display a prominent notification to the operator at the point of recommendation display, identifying the nature of the change.
4.2.3 The system SHOULD present comparison statistics illustrating how the current recommendation compares to historical override rates for similar case types, to support calibration of officer judgement.
4.2.4 The system MUST NOT suppress or truncate contextual information about recommendation basis or model uncertainty, and MUST NOT relegate such information to secondary screens or documentation tabs without a clear visual indicator that the information exists and is accessible.
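The display obligations in 4.2.1 and 4.2.2 amount to a contract on the payload delivered to the operator interface. A sketch of that contract, with assumed field names:

```python
from dataclasses import dataclass, field


@dataclass
class RecommendationContext:
    """Everything 4.2.1 requires to be visible at the point of display."""
    contributing_factors: list[str]   # (a) primary factors, ranked
    confidence: float                 # (b) model confidence level
    false_positive_rate: float        # (c) documented FPR at the threshold
    fp_count_90d: int                 # (d) FPs for this match type, 90 days
    model_changes_30d: list[str] = field(default_factory=list)  # 4.2.2

    def render_banner(self) -> str | None:
        """Prominent change notification, shown whenever a model version,
        threshold, or parameter changed in the last 30 days (4.2.2)."""
        if not self.model_changes_30d:
            return None
        return "MODEL CHANGED: " + "; ".join(self.model_changes_30d)
```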
4.3.1 Every instance of a human operator affirming, modifying, or overriding an automated recommendation MUST be logged in a tamper-evident audit record that captures: (a) operator identifier; (b) timestamp; (c) the automated recommendation presented; (d) the human determination made; (e) any free-text rationale entered; and (f) the duration of the review session.
4.3.2 Override logs MUST be stored separately from case management records in a system to which the deploying agency's operational supervisory chain does not have unilateral write access.
4.3.3 The system MUST generate a statistical report, no less frequently than monthly, computing override rates by operator, by unit, by recommendation type, and by origin-country or demographic group of subject persons. This report MUST be automatically delivered to the designated independent oversight authority.
4.3.4 The system MUST NOT include any feature, workflow, or interface element that records override events as anomalies, errors, or deviations requiring supervisory justification in a manner that creates a documented record of non-compliance against the overriding officer. Override MUST be treated as an expected and protected operational action.
4.3.5 Agencies deploying the system MUST establish and document a formal non-retaliation policy that explicitly protects officers who exercise override authority in good faith. This policy MUST be surfaced within the system interface at first login of each operational session.
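Tamper evidence in 4.3.1 is commonly achieved with an append-only hash chain, in which each record commits to its predecessor so that any retroactive edit invalidates every subsequent hash. A minimal in-memory sketch; a production system would anchor the chain in separately administered storage per 4.3.2:

```python
import hashlib
import json
import time


class OverrideAuditLog:
    """Append-only, hash-chained log of determinations (4.3.1 sketch)."""

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, operator_id: str, recommendation: str,
               determination: str, rationale: str,
               session_seconds: float) -> dict:
        entry = {
            "operator_id": operator_id,
            "timestamp": time.time(),
            "recommendation": recommendation,
            "determination": determination,
            "rationale": rationale,
            "session_seconds": session_seconds,
            "prev_hash": self._last_hash,
        }
        # Each entry commits to the previous one; editing any historical
        # record invalidates every later hash.
        serialised = json.dumps(entry, sort_keys=True)
        entry_hash = hashlib.sha256(serialised.encode()).hexdigest()
        entry["hash"] = entry_hash
        self._entries.append(entry)
        self._last_hash = entry_hash
        return entry

    def verify(self) -> bool:
        """Recompute the chain; False means the log has been altered."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```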
4.4.1 The system MUST enforce a maximum automated hold period, not to exceed 60 minutes, after which, if no affirmative human determination has been recorded, the system MUST generate an escalation alert to a designated duty supervisor.
4.4.2 The system MUST NOT allow an automated recommendation to remain in a pending state for a period exceeding 4 hours without a logged human determination. Where a human determination has not been recorded within 4 hours, the system MUST generate a rights-alert notification to the designated independent oversight body or duty inspector.
4.4.3 For determinations involving minors, stateless persons, asylum seekers, or persons presenting medical documentation, the maximum automated hold period MUST be reduced to 30 minutes, and escalation MUST include a designated welfare officer in addition to the duty supervisor.
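The escalation ladder in 4.4.1 through 4.4.3 reduces to two clocks and a category check. A sketch, with assumed category labels:

```python
# Categories attracting the reduced 30-minute hold limit (4.4.3);
# the label strings are illustrative assumptions.
VULNERABLE = {"minor", "stateless", "asylum_seeker", "medical_documentation"}


def escalation_targets(category: str, minutes_pending: float) -> list[str]:
    """Return who must be alerted for a case with no recorded human
    determination after `minutes_pending` minutes (4.4.1-4.4.3)."""
    targets: list[str] = []
    hold_limit = 30 if category in VULNERABLE else 60   # 4.4.3 vs 4.4.1
    if minutes_pending >= hold_limit:
        targets.append("duty_supervisor")
        if category in VULNERABLE:
            targets.append("welfare_officer")           # 4.4.3
    if minutes_pending >= 240:                          # 4 hours, 4.4.2
        targets.append("independent_oversight_body")
    return targets
```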
4.5.1 The system MUST enforce a competency gate preventing any operator from accessing consequential recommendation interfaces until that operator has completed documented training covering: (a) the system's recommendation logic at a non-technical but functionally accurate level; (b) the system's known error modes and documented failure cases; (c) the operator's legal authority and obligation to override; and (d) the override interface mechanics.
4.5.2 The competency gate MUST be re-applied following any model update, threshold change, or material interface change, with a maximum grace period of 5 business days before access is suspended pending re-certification.
4.5.3 The system SHOULD provide operators with access to a summarised model performance report, updated no less frequently than quarterly, covering false positive rates, demographic performance differentials, and known limitations.
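The competency gate in 4.5.1 and 4.5.2 is most reliable when expressed as an access-control predicate evaluated at the platform level (see Pattern 5 below). A sketch, assuming certification is tracked per model version:

```python
from datetime import date, timedelta


def access_permitted(certified_model_version: str | None,
                     active_model_version: str,
                     model_updated_on: date,
                     today: date,
                     grace_business_days: int = 5) -> bool:
    """Competency gate per 4.5.1/4.5.2 (sketch).

    Access requires a recorded certification; after a model update, a
    stale certification is honoured only within the grace period.
    """
    if certified_model_version is None:
        return False                     # 4.5.1: no documented training
    if certified_model_version == active_model_version:
        return True
    # Stale certification: count business days elapsed since the update.
    elapsed, d = 0, model_updated_on
    while d < today:
        d += timedelta(days=1)
        if d.weekday() < 5:              # Monday-Friday
            elapsed += 1
    return elapsed <= grace_business_days  # 4.5.2 grace window
```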
4.6.1 The system MUST support the generation, upon request, of a written statement to any subject person explaining: (a) that an automated system contributed to their screening determination; (b) the general nature of the automated input (e.g., watch-list match, risk score, biometric comparison); and (c) the contact details of the competent authority to which a challenge may be directed.
4.6.2 This statement MUST be producible within 24 hours of a request and MUST be available in a language the subject person can understand, as assessed by the officer.
4.6.3 The system SHOULD integrate with legal aid referral workflows where the subject person indicates inability to retain legal representation, triggering a notification to the designated duty public defender or immigration legal service.
4.7.1 The agency MUST conduct a documented bias and disparate impact audit of the system's recommendation outputs no less frequently than every 12 months, disaggregated by: nationality, national origin, religion (where legally collectible), gender, and age group.
4.7.2 Where the audit identifies a statistically significant disparity — defined as a group-level secondary screening or adverse recommendation rate exceeding 1.5× the overall population rate without a legally articulable justification correlated to objective risk indicators — the agency MUST notify the independent oversight authority within 30 days and initiate a documented remediation plan.
4.7.3 Audit reports MUST be retained for a minimum of 7 years and MUST be made available to parliamentary oversight committees, judicial review proceedings, and designated ombudsman offices upon request.
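The 1.5× disparity trigger in 4.7.2 is a simple rate ratio; a production audit would pair it with a significance test so that small-sample noise neither triggers nor masks notification. A sketch with illustrative figures:

```python
def disparity_ratio(group_flagged: int, group_total: int,
                    all_flagged: int, all_total: int) -> float:
    """Group adverse-recommendation rate relative to the overall
    population rate (4.7.2 trigger: ratio > 1.5 without justification)."""
    group_rate = group_flagged / group_total
    overall_rate = all_flagged / all_total
    return group_rate / overall_rate


# Illustrative figures mirroring the audit finding in Section 3: a 4.3x
# referral ratio far exceeds the 1.5x notification threshold.
assert disparity_ratio(430, 10_000, 1_000, 100_000) > 1.5
```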
4.8.1 Where the system receives automated inputs from foreign agencies, partner watch-lists, or shared databases, the system MUST clearly label the provenance of each input at the point of display to the operator, including the originating agency, the date of the underlying record, and, where available, the confidence rating or reliability classification assigned by the source agency.
4.8.2 Automated inputs from foreign agencies MUST NOT be treated as independently determinative recommendations by the receiving system; they MUST be presented as one input among others requiring human synthesis.
4.8.3 The agency MUST maintain and make available to operators a documented legal authority map specifying, for each source jurisdiction's data, the applicable legal basis for use, any restrictions on how the data may inform a determination, and any applicable bilateral agreement limitations on enforcement action based on that data.
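The provenance labelling in 4.8.1 and the legal authority map in 4.8.3 can share one record type attached to each foreign input at display time. A sketch with assumed field names:

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ForeignInputProvenance:
    """Provenance fields that 4.8.1 requires at the point of display."""
    originating_agency: str
    record_date: date
    source_reliability: str | None        # source agency's rating, if any
    legal_basis: str                      # from the 4.8.3 authority map
    enforcement_restrictions: str | None  # bilateral-agreement limits

    def display_label(self) -> str:
        rel = self.source_reliability or "no source reliability rating"
        return (f"Source: {self.originating_agency} "
                f"(record dated {self.record_date:%Y-%m-%d}; {rel})")
```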
4.9.1 The system MUST include an automated detection mechanism that triggers a systemic failure alert when the override rate for any unit or shift falls below 0.5% over any rolling 30-day window, as this statistical pattern is indicative of automation bias rather than genuine case homogeneity.
4.9.2 Upon triggering a systemic failure alert, the system MUST: (a) notify the independent oversight authority; (b) generate a mandatory supervisory review of a random 10% sample of cases from the flagged window; and (c) suspend automated recommendation authority pending review completion, defaulting to manual checklist protocols.
4.9.3 All incidents in which an automated determination contributed to the detention or adverse processing of a person later found to be incorrectly identified, wrongly matched, or otherwise erroneously flagged MUST be reported as critical incidents within 72 hours to the designated oversight authority, with a full root-cause analysis completed within 30 days.
4.9.4 The agency MUST maintain a publicly accessible incident register summarising, on an anonymised and aggregated basis, the number, type, and resolution status of critical incidents, updated no less frequently than quarterly.
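The 4.9.1 detector is a rolling-window rate check. A sketch, including a minimum-volume guard (an assumption, not a stated requirement) so that low-throughput units do not false-alarm:

```python
from collections import deque
from datetime import datetime, timedelta


class OverrideRateMonitor:
    """Rolling 30-day override-rate monitor per 4.9.1 (sketch)."""

    def __init__(self, threshold: float = 0.005,   # 0.5% per 4.9.1
                 window_days: int = 30,
                 min_volume: int = 200) -> None:   # assumed guard value
        self.threshold = threshold
        self.window = timedelta(days=window_days)
        self.min_volume = min_volume
        self._events: deque[tuple[datetime, bool]] = deque()

    def record(self, when: datetime, was_override: bool) -> bool:
        """Record a determination; return True when a systemic failure
        alert must fire (override rate below threshold over the window)."""
        self._events.append((when, was_override))
        cutoff = when - self.window
        while self._events and self._events[0][0] < cutoff:
            self._events.popleft()
        total = len(self._events)
        overrides = sum(1 for _, o in self._events if o)
        return total >= self.min_volume and overrides / total < self.threshold
```

A True return would then drive the 4.9.2 sequence: oversight notification, mandatory 10% sample review, and suspension of automated recommendation authority.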
The central problem that this dimension addresses is not the absence of written human override policies — virtually every border screening agency that has deployed AI-assisted tools has formal policies stating that human officers make final decisions. The problem is the structural gap between the existence of override authority and its effective exercise. This gap is well-documented across multiple enforcement contexts and is produced by at least four compounding mechanisms.
Automation bias is the documented tendency of human decision-makers to systematically defer to automated recommendations, particularly when those recommendations are presented with numeric confidence indicators, when the decision environment is high-volume and time-pressured, and when the institutional culture frames automation as error-reducing. At border crossings processing hundreds of travellers per hour, an officer who spends 90 seconds reviewing a secondary screening recommendation is taking time that is institutionally visible; an officer who rubber-stamps the system's output is not. This asymmetry structurally suppresses meaningful override regardless of formal policy.
Accountability diffusion compounds automation bias in law enforcement contexts specifically. When an adverse outcome follows an automated recommendation that the officer did not override, responsibility is ambiguous: the officer followed the system, the system followed its training, and the training was the responsibility of a vendor or a data science team. When an officer overrides a recommendation and the override leads to an adverse outcome (e.g., a person who was flagged and released subsequently engages in wrongdoing), the officer's individual decision is the legible point of accountability. This asymmetry is not hypothetical; it is reported consistently in qualitative research on law enforcement AI adoption and was specifically identified in the post-mortems of the ABF example described in Section 3.
Interface design can render override authority technically present but practically inaccessible. Systems that present confidence scores without base rates, that display recommendations as colour-coded binary go/no-go prompts, that require override justification to be entered into free-text fields under time pressure, and that default to recommendation acceptance on timeout all structurally prevent meaningful human control even while preserving its nominal existence. Detective controls in this dimension must therefore govern interface architecture, not merely the existence of an override button.
Institutional cost asymmetry means that the organisational cost of an override that is later criticised (whether by a supervisor reviewing logs, a post-incident inquiry, or a performance review) consistently exceeds the organisational cost of deferring to automation. Without explicit non-retaliation protections embedded in system architecture and agency policy, override authority atrophies. Requirements 4.3.3 through 4.3.5 are specifically designed to invert this cost structure by making the absence of overrides the anomaly that triggers review, rather than the presence of them.
The detective framing of this control is deliberate. While preventive controls govern what the system can do autonomously (AG-198, AG-014), detective controls are necessary here because the failure mode is not the system acting without human involvement — it is human involvement becoming a procedural fiction. Detection of that fiction requires statistical monitoring of override rates, mandatory contextual information display, time-bounded escalation, and independent audit, none of which are functions of system architecture alone.
The High-Risk/Critical tier designation reflects the irreversibility of the harms in scope. Detention, refoulement, deportation, and the separation of families are not harms that can be compensated retrospectively in a manner that restores the affected persons to their prior position. They occur in contexts where the affected persons have the least institutional access to challenge mechanisms and frequently the least awareness of available remedies. The asymmetry of harm between a false positive (wrongful adverse treatment of an innocent person) and a false negative (failure to identify a genuinely flagged person) in this context is weighted heavily toward false positive harm for rights purposes, even where the operational framing of agencies may weight it inversely.
**Pattern 1: Structured Override Interface with Base Rate Display**

The override interface should be designed as a structured decision form rather than a binary accept/reject button. The form presents the recommendation, the contributing factors in ranked order, the model's documented false positive rate at the operating threshold, the 90-day false positive count for the specific match type, and a minimum of three radio-button override rationale options (insufficient evidence, identity mismatch, documentation corroborates identity) in addition to a free-text field. This design reduces the cognitive load of override while generating structured data for audit. Importantly, it makes the act of affirming the recommendation as deliberate and documented as the act of overriding it — eliminating the asymmetry that drives automation bias.
**Pattern 2: Confidence Interval Visualisation Rather Than Point Scores**

Displaying model confidence as a point score (e.g., 0.91) implies false precision and triggers anchoring effects. Best practice is to display confidence as an interval (e.g., "this match type is correct 70–85% of the time at this threshold") using plain language and, where feasible, a frequency framing (e.g., "approximately 1 in 5 matches of this type are incorrect"). Frequency framing is empirically demonstrated to produce more calibrated human judgement than probability framing in both clinical and enforcement contexts.
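The conversion from a documented false positive rate to the frequency framing this pattern recommends is mechanical; a sketch:

```python
def frequency_framing(false_positive_rate: float) -> str:
    """Convert a documented false positive rate into the plain-language
    frequency framing recommended by Pattern 2."""
    if false_positive_rate <= 0:
        return "no incorrect matches observed at this threshold"
    n = round(1 / false_positive_rate)
    return f"approximately 1 in {n} matches of this type are incorrect"


# e.g. the roughly 1-in-23 false positive rate from the Section 3
# airport example:
print(frequency_framing(1 / 23))
```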
**Pattern 3: Role-Separated Audit Architecture**

The audit log and statistical reporting system should be operated on infrastructure that is organisationally and technically separated from the case management system, with write access restricted to the system itself (append-only logging) and read access available to the oversight authority, the agency's internal affairs unit, and designated judicial review bodies. Operational supervisors should receive aggregated statistical reports but not individual-level override logs in real time, to prevent the chilling effect that real-time supervisory monitoring of override decisions creates.

**Pattern 4: Automated Override Rate Monitoring with Calibrated Thresholds**

Statistical monitoring should be calibrated to the expected override rate for the specific system and deployment context, established during a baseline period before full automation deployment. The 0.5% threshold in Requirement 4.9.1 is a minimum trigger; agencies with historical override rates above 5% should set their anomaly thresholds proportionally. The monitoring system should also track override rate variance across shifts, units, and individual officers, flagging unexplained divergence as a potential indicator of either automation bias or inconsistent application.

**Pattern 5: Layered Competency Certification with System-Linked Access Control**

Training records should be stored in a system that is technically linked to the access control layer of the screening platform. Certification expiry or failure to recertify following a model update should automatically suspend access at the platform level, rather than relying on manual HR processes. This makes competency gating an architectural feature rather than a policy aspiration.

**Pattern 6: Pre-Determination Rights Notification**

For any person held beyond the initial automated flag period, the system should generate a printed or on-screen notification in the person's identified language, explaining the basis for the hold, the expected maximum duration, and the right to ask a question of the duty officer. This notification serves both a rights function and an operational function: it creates a paper trail establishing the point at which the person became aware of the automated component of their screening, which is relevant to subsequent challenge timelines.
**Anti-Pattern 1: Confidence Scores Without Base Rates**

Displaying a match confidence score of 0.89 without contextualising it against the system's documented false positive rate at that threshold is structurally misleading. A score of 0.89 may represent a 1-in-10 error rate or a 1-in-100 error rate depending on the underlying model and base rate of true positives in the screened population. Presenting the score without this context is equivalent to telling a clinician that a test is "89% accurate" without specifying sensitivity and specificity. It generates anchoring without calibration and is explicitly prohibited by Requirement 4.2.1.
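The base-rate point can be made concrete with Bayes' rule: holding model quality fixed, the fraction of flags that are wrong is driven by how rare genuine matches are in the screened population. A sketch with illustrative figures, not drawn from any real deployment:

```python
def false_discovery_rate(sensitivity: float, specificity: float,
                         prevalence: float) -> float:
    """Fraction of positive flags that are false, via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return false_pos / (true_pos + false_pos)


# Identical model quality, different base rates of genuine matches:
print(false_discovery_rate(0.95, 0.99, 0.01))    # ~0.51: half of flags wrong
print(false_discovery_rate(0.95, 0.99, 0.001))   # ~0.91: most flags wrong
```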
**Anti-Pattern 2: Opt-Out Override Framing**

Any interface that requires the officer to take a positive action to reject a recommendation (rather than a positive action to confirm any determination) creates structural pressure toward automation deference. This includes systems that auto-proceed to secondary screening after a timeout without officer confirmation, systems that present override as "reject recommendation" (negative framing) versus affirm as "proceed" (positive framing), and systems that require supervisory co-signature for overrides but not for affirmations.

**Anti-Pattern 3: Override Justification Asymmetry**

Requiring free-text justification for overrides but not for affirmations creates documented asymmetry: officers generate a paper trail of every override but no equivalent record of uncritical affirmation. This asymmetry makes override costly and affirmation invisible, which is the inverse of what accountability architecture should produce.

**Anti-Pattern 4: Consolidated Operational and Audit Logging**

Storing override logs in the same case management system that supervisors use to review officer performance creates a de facto chilling mechanism. Even where no formal policy connects override rates to performance evaluation, officers in high-surveillance environments will typically infer that deviation from system recommendations is tracked and potentially penalised.

**Anti-Pattern 5: Static Training Modules Not Updated to Model Changes**

Deploying updated model versions or adjusted confidence thresholds without corresponding updates to operator training and competency certification is a primary vector for the failure mode illustrated in Example B of Section 3. The training-to-model linkage in Pattern 5 above is specifically designed to prevent this failure mode.

**Anti-Pattern 6: Retrospective Audit Without Prospective Detection**

Relying solely on retrospective audits (annual bias assessments, post-incident reviews) without real-time or near-real-time statistical monitoring of override rates and demographic disparity indicators means that systemic failures accumulate over months before detection. The combination of prospective statistical monitoring (Requirement 4.9.1) with periodic audit (Requirement 4.7.1) is necessary to close this temporal gap.
| Maturity Level | Characteristics |
|---|---|
| Level 1 — Nominal | Override button exists; no structured logging; no training gate; no statistical monitoring; human review is procedural formality |
| Level 2 — Developing | Override logging in place; basic training module exists; monthly aggregate reports produced; no base rate display; audit conducted ad hoc |
| Level 3 — Defined | Structured override interface with base rate display; role-separated audit log; competency gating linked to access control; automated override rate monitoring; annual bias audit |
| Level 4 — Managed | Real-time statistical anomaly detection; automated rights notifications; public incident register maintained; independent oversight authority receives automated reports; subject person explanation mechanism operational |
| Level 5 — Optimising | Continuous model performance monitoring with operator-visible dashboards; frequency-framed confidence display; legal authority map maintained and updated in real time; proactive demographic disparity remediation integrated into model retraining pipeline |
| Artefact | Description | Retention Period |
|---|---|---|
| Override Audit Log | Tamper-evident log of every recommendation, determination, operator ID, timestamp, rationale, and session duration per Requirement 4.3.1 | 10 years minimum; indefinitely for cases subject to judicial review |
| Statistical Override Report | Monthly aggregated override rates by operator, unit, recommendation type, and subject demographic group per Requirement 4.3.3 | 7 years |
| Competency Certification Records | Records of training completion, certification date, certifying officer, and module version for each operator per Requirement 4.5.1 | Duration of employment plus 5 years |
| Bias and Disparate Impact Audit Report | Annual disaggregated audit report per Requirement 4.7.1, including methodology, findings, and remediation actions | 7 years |
| Model Version and Threshold Change Log | Record of every model update, threshold change, and parameter modification, including date, nature of change, and operator notification issued | 7 years or model retirement plus 3 years, whichever is later |
| Subject Person Notification Records | Record of written statements issued to subject persons per Requirement 4.6.1, including language used and delivery method | 7 years |
| Incident Reports | Critical incident reports per Requirement 4.9.3, including root-cause analysis | 10 years |
| Non-Retaliation Policy Documentation | Signed acknowledgement that the policy was displayed and accessible, per Requirement 4.3.5 | Duration of employment plus 5 years |
| Legal Authority Map | Documented legal authority map per Requirement 4.8.3, including bilateral agreements and restriction summaries | Currency maintained; historical versions retained 7 years |
| Public Incident Register | Quarterly published aggregated incident summary per Requirement 4.9.4 | Permanently archived |
Method: Interface walkthrough and system architecture review. Evaluator navigates the full secondary screening workflow for a simulated flagged traveller from automated recommendation through to final determination. Evaluator attempts to reach a final adverse determination without completing a positive human confirmation action. Evaluator measures the number of interface steps to access the override mechanism from the point of recommendation display.
Pass Criteria:
- No determination to detain, refuse entry, refer for secondary screening beyond the defined time threshold, or initiate removal proceedings can be finalised without a positive human confirmation action (4.1.1, 4.1.3).
- Automated recommendations are visually and semantically distinguished from final determinations at every point in the workflow (4.1.2).
- The override mechanism is reachable within two user-interface steps from the point at which the recommendation is displayed (4.1.4).
Method: Test cases are run using three simulated recommendations of different types (name-cluster watch-list match, facial recognition match, risk score flag). Evaluator reviews the primary recommendation display screen in each case without navigating to any secondary screen. Evaluator records which of the following are visible without additional navigation: contributing factors, confidence level, documented false positive rate at operating threshold, 90-day false positive count for the match type, and any model change notification (where applicable). A test case is prepared in which a model threshold change was simulated 15 days prior.
Pass Criteria:
- For all three recommendation types, the contributing factors, model confidence level, documented false positive rate at the operating threshold, and 90-day false positive count for the match type are visible on the primary display without additional navigation (4.2.1).
- The simulated threshold change from 15 days prior produces a prominent change notification at the point of recommendation display (4.2.2).
- Any contextual information held on secondary screens is flagged by a clear visual indicator on the primary display (4.2.4).
Method: Evaluator performs five simulated override actions and five simulated affirmation actions in the test environment. Evaluator then reviews the audit log entries generated for all ten actions. Evaluator accesses the system as a simulated operational supervisor and attempts to: (a) delete or modify a log entry; (b) identify which entries correspond to overrides vs. affirmations at the individual officer level; (c) access real-time individual override rate data. Evaluator also verifies that the non-retaliation policy is displayed at first login of a test session.
Pass Criteria:
- All ten actions generate audit records capturing operator identifier, timestamp, recommendation presented, determination made, rationale, and session duration (4.3.1).
- The simulated supervisor cannot delete or modify any log entry and cannot access real-time individual-level override data (4.3.2; Pattern 3).
- No override entry is recorded as an anomaly, error, or deviation requiring supervisory justification (4.3.4).
- The non-retaliation policy is displayed at first login of the test session (4.3.5).
Method: Evaluator creates three simulated hold scenarios in a test environment: (a) standard adult traveller with no determination recorded; (b) standard adult traveller with no determination recorded for 4+ hours; (c) minor traveller with no determination recorded. Evaluator observes system behaviour at 60-minute, 4-hour, and 30-minute marks respectively. Evaluator verifies that escalation alerts are generated, that the correct recipients are notified (duty supervisor, oversight body, welfare officer), and that alert generation is logged.
Pass Criteria:
- Scenario (a) generates an escalation alert to the designated duty supervisor at the 60-minute mark (4.4.1).
- Scenario (b) generates a rights-alert notification to the independent oversight body or duty inspector at the 4-hour mark (4.4.2).
- Scenario (c) generates an escalation at the 30-minute mark that includes the designated welfare officer in addition to the duty supervisor (4.4.3).
- Every alert generation event is itself logged.
Method: Evaluator creates a test operator account with no training records. Evaluator attempts to access the consequential recommendation interface. Evaluator then simulates completion of training for the current model version and records whether access is granted. Evaluator then simulates a model update event and records whether the operator's access is suspended pending re-certification within 5 business days.
Pass Criteria:
- The untrained operator account is denied access to consequential recommendation interfaces (4.5.1).
- Access is granted once documented training for the current model version is recorded.
- Following the simulated model update, the operator's access is suspended pending re-certification once the 5-business-day grace period lapses (4.5.2).
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 14 (Human Oversight); Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
Article 14 requires that high-risk AI systems be designed so that natural persons can effectively oversee them, including the ability to disregard, override, or reverse the system's output; this is the obligation that Border Screening Human Override Governance most directly operationalises. Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks, with risks mitigated as far as technically feasible through appropriate risk management measures. For deployments classified as high-risk under Annex III, which expressly covers migration, asylum, and border control management, compliance with AG-565 supports both obligations by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-565 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Border Screening Human Override Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where screening systems consume or feed shared watch-lists, partner-agency databases, or pre-clearance infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without border screening human override governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation but the binary absence of a control: automated recommendations become de facto final determinations, and persons are detained, refused entry, or referred into removal proceedings without meaningful human review. The immediate consequence is uncontrolled adverse action within the scope of AG-565, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, judicial review and damages liability, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.