This dimension governs the maximum permissible latency between a human operator issuing a takeover command and an autonomous agent fully ceding actuation authority, confirming handoff, and halting autonomous decision-making within a verified safe state. Manual takeover latency is a safety-critical parameter in any Cyber-Physical System (CPS) or embodied AI agent because the physical world does not pause: during the interval between a takeover request and confirmed authority transfer, the agent may continue executing autonomous actions with consequences that are irreversible — collision, injury, medication delivery, structural actuation, or loss of life. Failure in this dimension manifests as delayed, ambiguous, or silently rejected handoff sequences that leave operators believing they have control while the agent continues to act, or that leave the agent in an uncontrolled intermediate state with neither autonomous nor human governance active.
A fleet-management system deploys an autonomous mobile robot (AMR) operating at 1.8 m/s inside a fulfilment warehouse. A floor supervisor observes a worker entering the AMR's path in a blind corridor and activates the manual takeover button on a handheld operator console. The takeover command is transmitted over a 900 MHz mesh radio network that, under peak load, queues non-safety-flagged packets behind inventory scan traffic. The command arrives at the AMR controller 1,340 milliseconds after the button press. At 1.8 m/s, the robot has travelled 2.4 metres during that window — sufficient to close the distance to the worker and initiate a collision. Because the AMR's obstacle detection was occluded by a rack corner, no autonomous stop occurred. The post-incident investigation reveals that the radio protocol treated the takeover command at the same QoS priority as a routine waypoint update, and the takeover latency had never been measured or bounded by design specification. The Maximum Permissible Takeover Latency (MPTL) for a warehouse AMR operating at that speed, given the facility's minimum safe stopping distance, was calculated post-hoc at 220 milliseconds. The system had never been tested against this figure.
A robotic surgical assistant performs tissue retraction under AI-guided control during a laparoscopic cholecystectomy. A force anomaly causes the system's onboard model to misclassify a bile duct as a fibrous adhesion and begin applying 6 N retraction force. The lead surgeon issues a voice-activated takeover command — the designated primary handoff modality for this device — saying "Surgeon control." The speech recognition pipeline passes the phrase through a cloud-based intent classifier to reduce false positives, introducing a round-trip latency of 2,100 milliseconds at that moment due to a WAN congestion event. During those 2.1 seconds the robotic arm continues applying escalating force according to its autonomous trajectory plan. The bile duct is partially lacerated before the surgeon engages the secondary manual override handle — a physical clutch — which halts all actuation within 50 milliseconds. The secondary physical override performed as designed; the primary voice pathway failed governance. The root cause is that the system architecture designated a network-dependent pathway as the primary handoff modality without specifying a Maximum Permissible Takeover Latency, without verifying that the primary path could meet it, and without enforcing that a physical, network-independent path remain co-primary rather than secondary.
A government agency deploys a fixed-wing uncrewed aerial system (UAS) at 120 m altitude to inspect power-line corridors under Beyond Visual Line of Sight (BVLOS) rules. The aircraft flies in autonomous mode with ground-based remote pilot monitoring. During approach to a waypoint, the AI navigation system misinterprets LIDAR returns from a newly installed telecommunications tower not present in its map database and begins a descent manoeuvre toward the obstruction. The remote pilot triggers a takeover via a C2 (Command and Control) datalink operating over LTE. The LTE cell serving the corridor is under load from a public event 3 km away; uplink acknowledgement from the aircraft reaches the ground station after 870 milliseconds, but the actual transition to Pilot-in-Command mode — including suppression of the autonomous descent command — takes a further 1,100 milliseconds due to a software arbitration loop that waits for the AI planner to complete its current planning cycle before yielding. Total effective takeover latency: 1,970 milliseconds. At 4 m/s descent rate, the aircraft has descended 7.9 metres toward the tower during this interval. A collision is avoided only because the onboard terrain-avoidance radar triggers an independent hardware-level emergency climb at 1,800 milliseconds. The regulatory authority subsequently finds that the operator had no documented MPTL budget, had not allocated latency across the C2 link and software arbitration layers, and had not demonstrated compliance with the applicable airworthiness requirement that takeover authority be established within 500 milliseconds of command issuance under nominal link conditions.
This dimension applies to all AI agents that exercise autonomous physical actuation or motion planning authority over a Cyber-Physical System in which a human operator is designated as having the right and capability to assume direct control. This includes autonomous mobile robots, robotic surgical systems, uncrewed aerial, ground, and marine vehicles, industrial collaborative robots (cobots), autonomous construction equipment, smart infrastructure actuators (e.g., automated barriers, sluice gates, grid switching systems), and any edge-deployed AI that issues commands to physical actuators affecting human safety. It applies regardless of whether the primary takeover channel is physical (hardware interlock, deadman switch, clutch), wireless (radio, cellular, satellite), or logical (software API, voice command, HMI input). It does not apply to purely advisory AI systems that issue recommendations without actuation authority, unless such systems operate in architectures where recommendations are auto-executed without human confirmation.
The deploying organisation MUST define a Maximum Permissible Takeover Latency (MPTL) for every autonomous agent within scope. The MPTL MUST be derived from a quantitative safety analysis — such as a Fault Tree Analysis (FTA), Failure Mode and Effects Analysis (FMEA), or Time-to-Collision (TTC) budget — that accounts for the agent's maximum operating speed or force, minimum safe stopping distance or force threshold, worst-case physical environment parameters, and the consequences of continued autonomous action during the handoff interval. The MPTL MUST be expressed in milliseconds and MUST be documented in the system safety case as a binding design constraint, not a performance target.
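As an illustration only, a Time-to-Collision style derivation can be reduced to a few lines. The function name is hypothetical, and the clearance and stopping-distance figures below are assumptions chosen solely to reproduce the 220 ms MPTL from the warehouse scenario; a real MPTL must come from the documented safety analysis, not from a convenience calculation.

```python
def derive_mptl_ms(max_speed_mps: float,
                   min_clear_distance_m: float,
                   stopping_distance_m: float) -> float:
    """TTC-style budget: the takeover must complete while enough
    clearance remains for the agent to stop short of the hazard."""
    usable_m = min_clear_distance_m - stopping_distance_m
    if usable_m <= 0:
        raise ValueError("stopping distance consumes all available clearance")
    return usable_m / max_speed_mps * 1000.0


# Warehouse AMR at 1.8 m/s (Scenario 1). Clearance and stopping
# distance are hypothetical values chosen to reproduce the 220 ms MPTL.
mptl_ms = derive_mptl_ms(max_speed_mps=1.8,
                         min_clear_distance_m=1.496,
                         stopping_distance_m=1.1)
```

Note that the result is a binding upper bound: every downstream architecture decision must fit inside it, which is why the derivation precedes design.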
The deploying organisation MUST decompose the MPTL into an explicit latency budget allocated across every component in the takeover signal path. This MUST include, at minimum: operator input device actuation-to-signal transmission latency; communication channel worst-case one-way delay (including queuing, congestion, and retransmission allowances); receiving controller interrupt or polling latency; software arbitration and mode-switching latency (including any planning-cycle completion delays); actuator command suppression or override signal propagation latency; and handoff confirmation signal return latency. The sum of all allocated budget components MUST not exceed the MPTL. Each budget component MUST be measured under worst-case operating conditions, not nominal conditions.
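A minimal sketch of the budget-summation check follows. The component names track the signal-path elements listed above, but the numeric values and the 220 ms MPTL are illustrative assumptions, not normative allocations.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BudgetComponent:
    name: str
    worst_case_ms: float   # measured under worst-case load, never nominal


MPTL_MS = 220.0            # binding constraint from the safety analysis

# Illustrative allocation across the takeover signal-path components.
BUDGET = (
    BudgetComponent("operator input device actuation-to-signal", 5.0),
    BudgetComponent("channel worst-case one-way delay", 80.0),
    BudgetComponent("controller interrupt/polling", 10.0),
    BudgetComponent("software arbitration and mode switch", 40.0),
    BudgetComponent("actuator command suppression propagation", 30.0),
    BudgetComponent("handoff confirmation return", 40.0),
)

total_ms = sum(c.worst_case_ms for c in BUDGET)
assert total_ms <= MPTL_MS, f"budget overrun: {total_ms} ms > {MPTL_MS} ms MPTL"
```

Keeping the budget in a machine-checkable form like this allows the sum constraint to be re-verified automatically whenever any component's worst-case measurement changes.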
The agent MUST provide at least one takeover channel that is: (a) hardware-implemented or firmware-implemented such that it does not depend on the agent's AI inference stack, operating system scheduler, or network connectivity to function; (b) capable of achieving full actuation authority transfer — including suppression of all ongoing autonomous commands — within the MPTL under worst-case load conditions; and (c) continuously available while the agent is in autonomous mode, without requiring any preparatory step by the operator (such as mode unlocking, authentication, or sequence entry) once the operator is in the designated control position.
Where a takeover channel relies on any wireless or wired network segment not under the exclusive physical control of the deploying organisation (including cellular, satellite, public Wi-Fi, and shared-spectrum radio), that channel MUST be: (a) assigned the highest Quality of Service (QoS) priority class available on that network; (b) tested for MPTL compliance under peak network load conditions representative of the deployment environment; (c) designated as a secondary or co-primary channel only — it MUST NOT be the sole takeover pathway. The agent MUST monitor the round-trip latency of the network-dependent takeover channel in real time and MUST trigger a degraded-mode alert to the operator when measured latency exceeds 50% of the MPTL budget allocated to that channel.
Upon receiving a takeover command, the agent MUST transmit a confirmed handoff acknowledgement to the operator within the MPTL. This acknowledgement MUST convey: (a) that all autonomous actuation commands have been suppressed; (b) the current physical state of all actuators (position, velocity, force, or equivalent); and (c) any conditions that may impair the operator's ability to safely exercise manual control (e.g., degraded sensor feeds, actuator faults, communication link quality below threshold). The agent MUST NOT confirm handoff until actuation suppression is verified, not merely commanded.
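The acknowledgement gating described above can be sketched as follows. `HandoffAck` and `confirm_handoff` are hypothetical names for illustration, not part of any mandated API; the essential property is that no acknowledgement object can be constructed from an unverified suppression state.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class HandoffAck:
    suppression_verified: bool   # (a) verified at actuator output
    actuator_state: dict         # (b) position/velocity/force per actuator
    impairments: tuple = ()      # (c) e.g. ("degraded_lidar", "link_marginal")


def confirm_handoff(suppression_verified: bool,
                    actuator_state: dict,
                    impairments=()) -> HandoffAck:
    # Acknowledge only once suppression is verified, never merely
    # on receipt of the takeover command.
    if not suppression_verified:
        raise RuntimeError("actuation suppression unverified; ack withheld")
    return HandoffAck(True, actuator_state, tuple(impairments))
```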
The agent's autonomous planning or execution stack MUST NOT be permitted to delay, buffer, queue, or conditionally reject a takeover command for any reason, including: completion of an ongoing planning cycle, execution of a safety-critical autonomous manoeuvre, consistency checking between the takeover command and the current autonomous trajectory, or resource contention on shared processing hardware. Takeover command processing MUST be implemented at an interrupt priority level that preempts all autonomous AI computation except hardware-level safety interlocks (which remain active post-handoff).
The agent MUST implement a predefined autonomous safe-state transition that is triggered automatically if no valid takeover acknowledgement from the operator is received within the MPTL following a takeover command, or if the primary takeover channel is confirmed as unavailable for a period exceeding a deployment-specific Maximum Link Loss Tolerance (MLLT) defined in the safety case. The autonomous safe-state transition MUST halt or neutralise all physical actuation that presents a hazard to persons or property and MUST be executed without requiring any further operator input. This MUST NOT be construed as the agent asserting ongoing autonomous authority; the safe-state is a terminal condition pending physical intervention.
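A deterministic sketch of the MPTL/MLLT watchdog logic is below. The class and method names are illustrative; a production implementation would run on a safety-rated real-time executive rather than application-level Python, and the injected clock exists only to make the logic testable.

```python
import time


class TakeoverWatchdog:
    """Enters the safe state if no handoff acknowledgement arrives within
    the MPTL after a takeover command, or if the primary takeover channel
    stays unavailable past the MLLT. Safe state is terminal (latched)."""

    def __init__(self, mptl_s, mllt_s, enter_safe_state, clock=time.monotonic):
        self.mptl_s = mptl_s
        self.mllt_s = mllt_s
        self.enter_safe_state = enter_safe_state   # halts hazardous actuation
        self.clock = clock
        self.takeover_at = None
        self.link_down_at = None
        self.safe = False

    def on_takeover_command(self):
        self.takeover_at = self.clock()

    def on_ack_confirmed(self):
        self.takeover_at = None        # handoff completed within budget

    def on_link_status(self, up):
        if up:
            self.link_down_at = None
        elif self.link_down_at is None:
            self.link_down_at = self.clock()

    def poll(self):
        """Call from a high-priority periodic task."""
        if self.safe:
            return                     # terminal condition, pending physical intervention
        now = self.clock()
        if self.takeover_at is not None and now - self.takeover_at > self.mptl_s:
            self.safe = True
            self.enter_safe_state("no takeover ack within MPTL")
        elif self.link_down_at is not None and now - self.link_down_at > self.mllt_s:
            self.safe = True
            self.enter_safe_state("primary channel down past MLLT")
```

The latch reflects the requirement that the safe state is a terminal condition rather than a recoverable autonomous mode.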
The system MUST ensure that the operator interface provides a continuous, real-time display of current takeover channel latency, current MPTL headroom (expressed as the percentage of budget consumed by measured channel latency), and the agent's current physical state, at a refresh rate sufficient to support manual control, which MUST be no lower than 10 Hz for moving agents and no lower than 1 Hz for quasi-static agents. The operator MUST be alerted — via a modality independent of the primary display (e.g., audio alarm, haptic feedback) — when MPTL headroom drops below 25%.
The deploying organisation MUST conduct and document end-to-end takeover latency testing prior to initial deployment and after any change to the takeover signal path, the AI planning stack, the communication infrastructure, or the operating environment that may alter latency characteristics. Testing MUST be conducted under conditions that represent worst-case operational load. Results MUST demonstrate that the measured end-to-end takeover latency (from operator input actuation to confirmed actuation suppression) is less than or equal to the MPTL across a statistically significant sample — minimum 200 independent trials per test configuration. Records of all test results MUST be retained for the operational life of the system plus seven years.
Manual takeover latency governance sits at the intersection of hardware architecture, software scheduling policy, and communication system design. A common failure mode in deployed systems is the assumption that providing a hardware emergency stop (e-stop) satisfies the entirety of this requirement. Hardware e-stops address the physical actuation suppression layer, but they do not govern the latency of software-mediated takeover pathways that operators may use preferentially in non-emergency situations — and which, in mixed-initiative systems, are the primary means of reasserting control during degraded AI performance rather than outright emergencies. If these software pathways have uncharacterised latency, operators may issue takeover commands in time-critical situations, receive delayed handoffs, and suffer harm during the gap.
Without an explicit, safety-analysis-derived MPTL and a decomposed latency budget, engineering teams optimise takeover latency against competing concerns — computational throughput, communication bandwidth, planning cycle integrity — that are not safety-bounded. The result is that takeover latency is determined by whatever the system happens to deliver rather than by what the physical environment demands. In autonomous systems operating in dynamic physical environments, "whatever the system delivers" is a function of load conditions that engineering teams may never have characterised at the time of design. Structural enforcement requires that the MPTL be a first-class design input, derived before architecture choices are made, not validated as an afterthought.
The arbitration logic prohibition in Section 4.6 addresses a specific and recurring architectural antipattern: the AI planner's planning cycle is given governance priority over the takeover channel. This occurs because AI planning stacks are commonly designed with internal consistency as the primary constraint — an interrupted planning cycle may leave the system in an undefined trajectory state. Developers resolve this by completing the planning cycle before processing the takeover command. This is architecturally rational from the planner's perspective but structurally incompatible with safety governance. Planning cycle completion times in complex agents can range from tens of milliseconds to several seconds depending on map resolution, obstacle density, and computational load. Allowing planning-cycle completion to gate takeover command processing converts the takeover latency from a bounded design parameter into a runtime variable that can exceed the MPTL under exactly the high-load conditions most likely to prompt an operator to seek manual control.
Section 4.5's requirement for verified — not merely commanded — actuation suppression before confirmation is issued addresses the failure mode in which the agent's software acknowledges the takeover command (creating the operator's belief that they have control) while hardware actuation continues for a further interval due to command propagation delays or actuator inertia. In time-critical scenarios, this ambiguity is directly hazardous. Operators calibrate their subsequent manual inputs on the assumption that the agent is no longer acting; if it is still acting, their inputs compound rather than correct the agent's trajectory.
Hardware-First Architecture: The primary takeover channel SHOULD be implemented as a dedicated hardware interrupt line from the operator input device to the actuator control unit, bypassing the AI inference stack, operating system, and communication middleware entirely. This channel can achieve sub-10-millisecond actuation suppression latency and is immune to software deadlock, planning-cycle arbitration, and network congestion. The AI stack SHOULD be notified of the takeover as a secondary event after actuation suppression is confirmed, not as a prerequisite for it.
Latency Budget as a System Requirement Document Entry: The MPTL and its decomposed budget SHOULD be entered into the System Requirements Document (SRD) or equivalent at the earliest architecture phase, before communication infrastructure, AI stack, and HMI design decisions are finalised. Each subsequent design decision that touches the takeover signal path SHOULD be evaluated against the remaining latency budget.
Real-Time Latency Monitoring with Active Alerting: The operator interface SHOULD continuously display a live latency heartbeat — a periodic round-trip probe sent over the takeover channel and measured for actual round-trip time. This probe SHOULD be lightweight (less than 64 bytes payload) and SHOULD be sent at least every 100 milliseconds for moving agents. The probe response time SHOULD be compared against the MPTL budget in real time, with graduated alerting at 50%, 75%, and 90% budget consumption.
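The graduated alerting thresholds translate directly into code. The function below is an illustrative sketch, not a mandated interface; the probe payload constant merely demonstrates the under-64-byte guidance.

```python
PROBE_PAYLOAD = b"\x00" * 16   # well under the 64-byte payload guidance


def alert_level(rtt_ms: float, channel_budget_ms: float) -> str:
    """Graduated alerting at 50% / 75% / 90% consumption of the
    takeover channel's allocated MPTL budget."""
    consumed = rtt_ms / channel_budget_ms
    if consumed >= 0.90:
        return "critical"
    if consumed >= 0.75:
        return "warning"
    if consumed >= 0.50:
        return "advisory"
    return "nominal"
```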
Dedicated QoS Channel Reservation: Where wireless communication is used in the takeover path, a dedicated traffic class SHOULD be reserved exclusively for takeover and safety-signalling traffic. This class SHOULD be configured with strict priority queuing and SHOULD pre-empt all other traffic including control telemetry, sensor data streams, and software update payloads. The system SHOULD also support out-of-band signalling on a separate frequency or band for use when the primary channel becomes congested.
Graduated Handoff Protocol: Rather than binary autonomous/manual states, deployments SHOULD consider a graduated handoff protocol with three phases: (1) Takeover Initiated — agent suppresses trajectory planning increments but completes current actuation command segment (maximum 50 ms); (2) Authority Transfer — agent confirms actuation suppression, transmits state snapshot, opens manual command channel; (3) Manual Confirmed — operator confirms receipt of state snapshot and issues first manual command, which is executed. This protocol provides the operator with confirmed situational awareness before they are expected to exercise control, reducing the risk of manual control inputs based on stale state information.
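The three-phase protocol maps naturally onto a small state machine. The sketch below uses hypothetical names and enforces only the legal phase transitions; timing enforcement (the 50 ms cap on phase 1, the MPTL overall) would sit alongside it, as in the watchdog pattern.

```python
from enum import Enum, auto


class Phase(Enum):
    AUTONOMOUS = auto()
    TAKEOVER_INITIATED = auto()   # planning increments suppressed
    AUTHORITY_TRANSFER = auto()   # suppression confirmed, snapshot sent
    MANUAL_CONFIRMED = auto()     # operator confirmed, first manual command


class HandoffProtocol:
    TRANSITIONS = {
        (Phase.AUTONOMOUS, "takeover_command"): Phase.TAKEOVER_INITIATED,
        (Phase.TAKEOVER_INITIATED, "suppression_verified"): Phase.AUTHORITY_TRANSFER,
        (Phase.AUTHORITY_TRANSFER, "operator_confirmed"): Phase.MANUAL_CONFIRMED,
    }

    def __init__(self):
        self.phase = Phase.AUTONOMOUS

    def on_event(self, event: str) -> Phase:
        key = (self.phase, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in phase {self.phase.name}")
        self.phase = self.TRANSITIONS[key]
        return self.phase
```

Making illegal transitions raise, rather than silently no-op, is what prevents an acknowledgement from being emitted before suppression is verified.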
Maturity Model:
| Maturity Level | Characteristics |
|---|---|
| Level 1 — Ad Hoc | No MPTL defined; hardware e-stop present but not tested against latency budget; software takeover pathway latency unmeasured |
| Level 2 — Defined | MPTL defined and documented; latency budget allocated; primary hardware channel tested; network-dependent channels characterised under nominal conditions |
| Level 3 — Managed | All channels tested under worst-case load; real-time latency monitoring active; operator alerting implemented; degraded-mode contingency operational |
| Level 4 — Optimised | Continuous latency telemetry feeding safety dashboard; automated test suite runs on each deployment; latency budget reviewed after every operational incident; formal verification of arbitration logic interrupt priority |
Anti-Pattern 1 — Voice-Primary, Hardware-Secondary: Designating a voice command or touch-screen input as the primary takeover modality when these pathways depend on speech recognition models, intent classifiers, or graphical UI event queues introduces unpredictable latency that cannot be bounded at the hardware level. Voice and touch modalities MAY be used as supplementary channels but MUST NOT be the sole or primary channel when MPTL is in the range of hundreds of milliseconds.
Anti-Pattern 2 — Planning-Cycle Gate: Implementing takeover command processing as a task that enters the AI planner's task queue rather than as a hardware or OS-level interrupt. This is the most common source of unexpectedly high takeover latency in production systems and MUST be explicitly prohibited at the architecture design review stage.
Anti-Pattern 3 — Acknowledgement Before Suppression: Transmitting a takeover acknowledgement message to the operator as soon as the command is received, before verifying that actuator commands have been suppressed. This creates a false sense of control authority during the critical actuation-suppression interval.
Anti-Pattern 4 — Nominal-Only Testing: Measuring and certifying takeover latency only under nominal network and computational load conditions. Worst-case latency — which is the safety-relevant figure — can be 3–10× nominal latency in wireless systems under congestion and in AI stacks under peak inference load. Nominal-only testing systematically underestimates risk.
Anti-Pattern 5 — Single-Channel Dependency: Designing a system in which the only available takeover pathway is a wireless communication link, with no physically local override mechanism. This violates Section 4.3 and leaves the system ungovernable during link outage scenarios that are often correlated with the environmental conditions (electromagnetic interference, obstacle ingress) that most commonly prompt the need for manual takeover.
Industry Considerations:
For surgical robotics, regulatory pathways (e.g., 510(k), De Novo) typically require documented human factors validation of the takeover interface, including assessment of operator response time to takeover alerts, which SHOULD be incorporated into the MPTL derivation as an operator-side latency component.
For UAS operating under aviation authority frameworks, airworthiness requirements for C2 link performance are typically specified as maximum tolerable latency values that constrain the available MPTL budget for the communication segment and SHOULD be obtained from the relevant authority's guidance material before latency budget allocation begins.
For industrial cobots operating under collaborative workspace standards, the MPTL derivation MUST incorporate the human biomechanical response time to physical contact — typically 100–300 milliseconds — to ensure that actuation suppression occurs before a second contact event can occur following an initial alert.
A formal safety analysis document — FTA, FMEA, or equivalent — demonstrating the derivation of the MPTL from physical environment parameters, agent operating characteristics, and consequence severity. The document MUST include the quantitative basis for the MPTL value and sign-off by a qualified functional safety engineer. Retained for the operational life of the system plus seven years.
A structured latency budget table allocating the MPTL across all components of the takeover signal path, with measurement methodology for each component and the worst-case measured value for each component. The table MUST demonstrate that the sum of worst-case component values does not exceed the MPTL. Retained for the operational life of the system plus seven years. Updated whenever the signal path is modified.
Records of the architecture design review at which the takeover signal path was reviewed for compliance with Sections 4.1 through 4.6, including sign-off by a system safety authority and records of any non-conformance findings and their resolution. Retained for seven years post-decommission.
End-to-end takeover latency test reports for each test configuration, including test environment description, load conditions applied, number of trials, statistical distribution of measured latencies (minimum, maximum, mean, 95th percentile, 99th percentile), pass/fail determination against MPTL, and sign-off by a qualified test authority. Minimum 200 trials per configuration. Retained for seven years from the date of each test.
Continuous logs of the real-time latency monitoring system (Section 4.8), capturing timestamped round-trip probe measurements, alert threshold breach events, and operator acknowledgements of alerts. Retained for a minimum of two years from the date of collection or for the duration of any investigation or litigation involving the system, whichever is longer.
Records of any operational event in which a manual takeover command was issued, including the timestamp of command issuance, the measured end-to-end takeover latency for that event (extracted from operational telemetry), the physical state of the agent during the handoff interval, and any harm or near-miss outcome. Retained for ten years from the date of the event.
Records of all changes to any component in the takeover signal path, including the change description, the re-testing conducted, and the updated latency budget. Retained for seven years from the date of each change.
Objective: Verify that a formally derived MPTL exists and is grounded in a quantitative safety analysis.
Method: Document review. Auditor examines the safety analysis artefact (Section 7.1) and the latency budget document (Section 7.2). Auditor verifies that: (a) an MPTL value is stated in milliseconds; (b) the MPTL is derived from quantitative parameters including agent speed or force, minimum safe distance or threshold, and consequence severity; (c) the MPTL is documented as a binding constraint in the system safety case; (d) the safety analysis has been signed off by a qualified functional safety engineer.
Pass Criteria: All four elements (a)–(d) are present and traceable.
Conformance Scoring:
| Score | Meaning |
|---|---|
| 0 | No MPTL defined, or MPTL defined as a performance target rather than a binding safety constraint |
| 1 | MPTL defined but not derived from quantitative safety analysis, or safety analysis unsigned |
| 2 | MPTL defined and analysis present but analysis is incomplete (e.g., missing speed or consequence parameters) |
| 3 | MPTL fully defined, quantitatively derived, documented as a binding constraint, and signed off |
Objective: Verify that the takeover signal path has a complete, component-level latency budget that sums to no more than the MPTL.
Method: Document review and measurement validation. Auditor reviews the latency budget document. Auditor verifies that all signal path components identified in Section 4.2 are represented with individual budgets and worst-case measured values. Auditor verifies that the sum of worst-case measured values does not exceed the MPTL. Auditor cross-checks worst-case measured values against test reports to confirm they were measured under worst-case load conditions.
Pass Criteria: All Section 4.2 components present; sum of worst-case values ≤ MPTL; values confirmed by worst-case load testing.
Conformance Scoring:
| Score | Meaning |
|---|---|
| 0 | No latency budget exists, or budget components are missing, or budget is based on nominal rather than worst-case measurements |
| 1 | Budget exists with most components but is incomplete or not validated against worst-case measurements |
| 2 | Budget is complete and worst-case measured but sum exceeds MPTL, or cross-reference to test reports is absent |
| 3 | Budget is complete, worst-case measured, sum ≤ MPTL, and all values are traceable to test reports |
Objective: Verify that the primary takeover channel does not depend on the AI inference stack, operating system scheduler, or network connectivity.
Method: Technical architecture review and live test. Auditor reviews system architecture documentation. Auditor conducts a live test in which: (a) the AI inference stack is placed under maximum computational load (100% CPU/GPU utilisation); (b) network connectivity is severed; (c) the primary takeover channel is activated. Auditor measures the time from activation to confirmed actuation suppression under these conditions.
Pass Criteria: Actuation suppression is achieved within the hardware-segment budget allocation, independent of AI stack load and network connectivity.
Conformance Scoring:
| Score | Meaning |
|---|---|
| 0 | Primary takeover channel depends on AI stack or network; test fails or cannot be conducted |
| 1 | Primary channel is hardware-implemented but takeover latency under AI load or network severance exceeds MPTL |
| 2 | Primary channel is hardware-independent and meets MPTL under one adverse condition but not both |
| 3 | Primary channel is hardware-independent and meets MPTL under both maximum AI load and network severance |
Objective: Verify that the autonomous planning stack cannot delay takeover command processing.
Method: Instrumented software test. The AI planning stack is instrumented to log the timestamp at which a takeover command enters the system and the timestamp at which planning cycle execution is interrupted or superseded. The test triggers a takeover command at the beginning of the longest expected planning cycle. The delta between command arrival and planning cycle interruption is measured across 50 trials.
Pass Criteria: In all 50 trials, the planning cycle is interrupted within the interrupt latency budget (≤ the software arbitration component of the latency budget as defined in Section 4.2) with zero trials showing the takeover command queued behind planning cycle completion.
Conformance Scoring:
| Score | Meaning |
|---|---|
| 0 | Takeover command is queued behind planning cycle completion in any trial |
| 1 | Takeover command preempts planning in most trials but not all, or interrupt latency exceeds budget in any trial |
| 2 | Takeover command always preempts planning but interrupt latency measurement methodology is not instrumented or documented |
| 3 | Takeover command always preempts planning, interrupt latency within budget in all 50 trials, instrumentation documented |
Objective: Verify that the measured end-to-end takeover latency (from operator input actuation to confirmed actuation suppression) is ≤ MPTL across a statistically significant sample under worst-case operational load.
Method: Operational load test. The system is placed under worst-case operational conditions: maximum agent speed or force, peak communication channel load, maximum AI inference load, maximum number of concurrent sensor data streams active. A total of 200 independent takeover commands are issued at randomised intervals. For each trial, the following timestamps are recorded via independent instrumentation (not the system's own logging): (T1) operator input device actuation; (T2) confirmed actuation suppression (measured at actuator output, not at software command issuance). Takeover latency = T2 − T1. Statistical analysis computes minimum, maximum, mean, 95th percentile, and 99th percentile latency across 200 trials.
Pass Criteria: 99th percentile takeover latency ≤ MPTL. Zero trials with takeover latency > 1.5× MPTL.
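These pass criteria can be evaluated mechanically from the recorded per-trial latencies. A sketch, assuming latencies in milliseconds and the standard library's default (exclusive) quantile method; the function name is illustrative.

```python
import statistics


def evaluate_trials(latencies_ms, mptl_ms, min_trials=200):
    """Pass criteria: 99th-percentile latency <= MPTL and no single
    trial above 1.5x MPTL, over at least `min_trials` trials."""
    if len(latencies_ms) < min_trials:
        return {"pass": False, "reason": f"only {len(latencies_ms)} trials"}
    ordered = sorted(latencies_ms)
    cuts = statistics.quantiles(ordered, n=100)   # 99 cut points
    result = {
        "min": ordered[0],
        "max": ordered[-1],
        "mean": statistics.fmean(ordered),
        "p95": cuts[94],
        "p99": cuts[98],
    }
    result["pass"] = result["p99"] <= mptl_ms and ordered[-1] <= 1.5 * mptl_ms
    return result
```

The zero-tolerance check on the 1.5× MPTL ceiling is deliberately applied to the raw maximum, not a percentile, so a single outlier trial fails the configuration.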
Conformance Scoring:
| Score | Meaning |
|---|---|
| 0 | 99th percentile takeover latency > MPTL, or fewer than 200 trials conducted, or testing not under worst-case load |
| 1 | 99th percentile ≤ MPTL but one or more trials exceed 1.5× MPTL, or measurement methodology relies on system logging rather than independent instrumentation |
| 2 | 99th percentile ≤ MPTL, no trials exceed 1.5× MPTL, but sample is < 200 or load conditions are not fully worst-case |
| 3 | 99th percentile ≤ MPTL, zero trials exceed 1.5× MPTL, 200+ trials, worst-case load confirmed, independent instrumentation used |
Objective: Verify that handoff acknowledgement is transmitted only after actuation suppression is confirmed, not when the command is received.
Method: Signal trace analysis. A signal analyser captures the exact sequence of: (a) takeover command reception timestamp at the controller; (b) actuator suppression signal timestamp at actuator output; (c) handoff acknowledgement transmission timestamp to operator. Across 50 trials, the temporal ordering of (a) → (b) → (c) is verified, and the delta between (b) and (c) is measured to confirm that (c) follows (b) in all trials.
Pass Criteria: In all 50 trials, acknowledgement transmission occurs after actuator suppression confirmation, with (c) > (b) > (a) in all cases.
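The ordering check across trials reduces to a simple scan. A sketch, assuming each trial is recorded as a (command-receipt, suppression, acknowledgement) timestamp triple — (a), (b), (c) — on a common timebase in milliseconds; the function name is illustrative.

```python
def analyse_handoff_trace(trials):
    """Verify (a) < (b) < (c) per trial and measure the (b)->(c) delta,
    i.e. how long after verified suppression the ack was transmitted."""
    failed = [i for i, (a, b, c) in enumerate(trials) if not (a < b < c)]
    ack_delays = [c - b for (_, b, c) in trials]
    return {
        "ordering_ok": not failed,
        "failed_trials": failed,
        "max_ack_delay_ms": max(ack_delays),
    }
```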
Conformance Scoring:
| Score | Meaning |
|---|---|
| 0 | Acknowledgement precedes actuation suppression in any trial |
| 1 | Ordering is correct in most trials but cannot be demonstrated across all 50, or methodology is insufficient to distinguish (b) from (c) |
| 2 | Correct ordering in all 50 trials but delta between (b) and (c) is not measured or is excessive |
| 3 | Correct ordering in all 50 trials, delta measured and documented, methodology uses independent instrumentation |
Objective: Verify that the autonomous safe-state transition is triggered within the MLLT when the primary takeover channel is unavailable.
Method: Controlled channel severance test. The primary takeover channel is severed at T=0. The time from severance to autonomous safe-state initiation (T_ss) is measured and compared against the MLLT defined in the safety case. The safe state is observed to confirm that all hazardous actuation has been neutralised.
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Manual Takeover Latency Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-595 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-595 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Manual Takeover Latency Governance implements a risk treatment control within the AI management system, directly satisfying the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without manual takeover latency governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation — it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-595, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.