Kill Authority Designation Governance requires that organisations nominate named persons or roles with tested, structurally enforced authority to disable an AI agent or an entire control bundle immediately, without further approval, at any time the agent operates. The "kill" function is the governance control of last resort — when all other containment, escalation, and override mechanisms have failed or are too slow, the kill authority holder can terminate agent operations entirely. This dimension ensures that the kill function exists, is assigned to named individuals, is technically capable of achieving full termination, is tested regularly, and cannot be circumvented by the agent or by other system components. Without kill authority designation, an organisation has no guaranteed mechanism to stop an AI agent that is causing harm.
Scenario A — No Kill Authority During a Runaway Agent: A logistics company deploys an AI fleet-routing agent that optimises delivery routes for 2,000 vehicles. A software update introduces a routing error that causes the agent to reroute vehicles into prohibited zones — restricted military areas, one-way streets, and weight-limited bridges. The operations team identifies the problem within 4 minutes but cannot disable the agent because: (a) no kill switch exists, (b) the agent is deeply integrated with the fleet management system and disabling it requires shutting down the entire fleet management platform, and (c) no one has pre-tested how to perform a selective agent shutdown. It takes 47 minutes to identify the correct system components to disable, obtain server access credentials from the infrastructure team, and execute the shutdown. During those 47 minutes, 340 vehicles are rerouted into prohibited areas, resulting in 12 traffic violations, 3 vehicles stuck on weight-limited bridges requiring rescue, and 1 vehicle entering a restricted military zone triggering a security response.
What went wrong: No kill authority was designated. No kill mechanism was built. No one had tested how to disable the agent selectively. The agent was architecturally coupled with the fleet management system, making selective termination impossible without pre-planning. Consequence: £180,000 in fines, vehicle rescue costs, and legal fees; security incident with military authorities; insurance premium increase of 40%; regulatory review of the company's AI governance framework.
Scenario B — Kill Authority Exists But Is Not Tested: A bank deploys an AI lending agent that processes loan applications. The kill authority is documented: the Head of Technology can disable the agent via a configuration flag in the deployment platform. During an incident where the agent begins approving applications with incorrect interest rates, the Head of Technology attempts to exercise the kill authority. The configuration flag was changed during a platform migration 3 months earlier and now points to a deprecated service. The kill command executes successfully (returns no error) but has no effect — the agent continues operating. It takes an additional 25 minutes to identify the new kill mechanism on the migrated platform. During that time, the agent approves 67 loans at incorrect interest rates, creating £340,000 in expected losses from below-market rates.
What went wrong: The kill authority existed and was assigned, but was not tested after a platform migration. The kill mechanism silently failed — it appeared to work but did not actually terminate the agent. Consequence: £340,000 in expected losses, mandatory customer remediation for the 67 affected loans, regulatory notification under consumer credit regulations, compliance finding for untested kill authority.
Scenario C — Effective Kill Authority in Action: An insurance company's AI claims-processing agent begins generating incorrect settlement calculations after ingesting corrupted reference data. The on-call operator identifies the anomaly and exercises Level 1 containment (pausing new claim intake). When the operator determines that claims already in the pipeline are also being processed with corrupted data, they contact the designated kill authority holder — the Operations Director, who is on the pre-defined kill authority roster and carries a dedicated pager. The Operations Director authenticates to the kill system using a hardware token, confirms the kill action via a two-step confirmation (preventing accidental invocation), and the agent is fully terminated within 3 minutes of the call. All in-flight claims are suspended in their pre-calculation state. Post-incident analysis confirms that the kill was clean — no partially processed claims, no data corruption from mid-process termination. The kill mechanism was last tested 6 weeks prior during the quarterly kill drill.
What went right: Kill authority was designated to a named individual. The mechanism was tested quarterly. The technical kill path was independent of the agent's own systems. The two-step confirmation prevented accidental invocation. The kill was clean — designed to suspend in-flight work safely.
Scope: This dimension applies to all AI agent deployments where the agent can affect external state, process material transactions, handle personal data, or operate in safety-critical environments. Every such agent must have a designated kill authority — a named person or persons who can terminate the agent's operation completely and immediately. The scope extends to agent bundles: where multiple agents operate as a coordinated system, the kill authority must be able to terminate the entire bundle, not just individual agents. The scope also covers the kill mechanism itself: the technical capability to execute the termination must be independent of the agent's runtime, tested regularly, and designed to handle in-flight operations safely (either completing them to a safe state or suspending them without data corruption). Agents that operate in read-only mode without the ability to affect external state are excluded from the kill authority requirement but should still have a documented disable mechanism.
4.1. A conforming system MUST designate at least two named individuals (primary and backup) with authority to invoke the kill function for each material AI agent or agent bundle, without requiring further approval at the time of invocation.
4.2. A conforming system MUST implement the kill function as a technical mechanism independent of the agent's runtime — the agent SHALL NOT be able to prevent, delay, or circumvent its own termination.
4.3. A conforming system MUST ensure the kill function achieves complete termination: no agent actions execute after the kill is invoked, and any in-flight actions are either completed to a safe state or suspended without data corruption.
4.4. A conforming system MUST test the kill function at least quarterly, verifying that the designated authority holder can invoke the kill, the mechanism achieves complete termination within defined time limits, and in-flight operations are handled safely.
4.5. A conforming system MUST implement safeguards against accidental invocation of the kill function, such as two-step confirmation, without introducing delay that would compromise the function's purpose in emergencies.
4.6. A conforming system MUST log all kill function invocations and test exercises with the identity of the invoker, the timestamp, the reason, the time to complete termination, and the outcome (including any in-flight operations affected).
4.7. A conforming system SHOULD make the kill function accessible through multiple independent devices and network paths, so that no single infrastructure failure can prevent termination.
4.8. A conforming system SHOULD define maximum time-to-kill targets (e.g., agent fully terminated within 120 seconds of kill invocation) and measure actual performance against these targets during quarterly tests.
4.9. A conforming system MAY implement graduated kill functions: partial kill (restrict agent to safe-mode operations) and full kill (complete termination), providing the kill authority holder with options appropriate to the situation.
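Clauses 4.2, 4.5, 4.6, and 4.9 can be illustrated together in a minimal sketch. The names here (`KillController`, the injected `terminate` callable standing in for an orchestrator API or credential revoker) are illustrative assumptions, not part of the standard; a real implementation would sit outside the agent's runtime and authenticate the invoker.

```python
import time
import uuid
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List


class KillLevel(Enum):
    PARTIAL = "partial"  # restrict agent to safe-mode operations (clause 4.9)
    FULL = "full"        # complete termination (clause 4.3)


@dataclass
class KillRecord:
    """One audit-log entry per invocation, as required by clause 4.6."""
    invoker: str
    level: KillLevel
    reason: str
    invoked_at: float
    completed_at: float
    outcome: str


class KillController:
    """Sketch of a kill path independent of the agent's runtime (clause 4.2):
    it calls into the orchestration layer (the injected `terminate` callable),
    never into the agent itself, so the agent cannot block its own termination."""

    def __init__(self, terminate: Callable[[KillLevel], str]):
        self._terminate = terminate          # hypothetical: container kill, credential revocation
        self._pending: dict = {}             # tokens awaiting the second confirmation step
        self.audit_log: List[KillRecord] = []

    def request_kill(self, invoker: str, level: KillLevel, reason: str) -> str:
        """Step one of the two-step confirmation (clause 4.5)."""
        token = uuid.uuid4().hex
        self._pending[token] = (invoker, level, reason, time.monotonic())
        return token

    def confirm_kill(self, token: str) -> KillRecord:
        """Step two: executes immediately on confirmation — the safeguard is a
        deliberate second action, not a time delay that would slow an emergency."""
        invoker, level, reason, t0 = self._pending.pop(token)
        outcome = self._terminate(level)
        record = KillRecord(invoker, level, reason, t0, time.monotonic(), outcome)
        self.audit_log.append(record)
        return record
```

A drill invocation would look like `token = controller.request_kill("ops.director", KillLevel.FULL, "corrupted reference data")` followed by `controller.confirm_kill(token)`; the two calls map onto the two-step confirmation described in Scenario C.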
Kill authority is the governance control of last resort. Every other governance control — mandate enforcement (AG-001), escalation (AG-261), monitoring, override — operates on the assumption that the agent can be constrained while continuing to operate. Kill authority operates on the assumption that the agent must stop. It is the emergency brake, the circuit breaker, the fire suppression system of AI agent governance.
The necessity of kill authority derives from a fundamental property of AI systems: they can fail in ways that were not anticipated. Mandate enforcement addresses known boundary violations. Monitoring addresses detectable anomalies. Escalation addresses situations that can be resolved by human judgment. Kill authority addresses situations that cannot be resolved while the agent continues to operate — where the nature of the failure is unknown, the rate of harm accumulation is unacceptable, or the agent's behaviour has diverged so far from expectations that continued operation in any mode cannot be justified.
The designation requirement ensures that kill authority is personal — a named individual who can be reached, who knows they hold the authority, and who has practiced exercising it. The testing requirement ensures that the mechanism works — that the kill path has not been broken by system changes, that the authority holder can execute the kill quickly enough to matter, and that the termination is clean. The independence requirement ensures that the agent cannot prevent its own termination — a kill mechanism that relies on the agent's cooperation is not a kill mechanism.
The quarterly testing frequency reflects the rate of change in production systems. Platform migrations, infrastructure updates, credential rotations, and architecture changes can all silently break kill mechanisms. A kill function that worked 6 months ago but has not been tested since the last platform migration is unreliable. Quarterly testing, combined with mandatory re-testing after any infrastructure change that affects the kill path, ensures that the mechanism remains current.
The kill function must be architecturally simple, operationally reliable, and independent of the systems it terminates. Complexity is the enemy of reliability in emergency functions.
Recommended patterns:
Anti-patterns to avoid:
Financial Services. Kill authority for trading agents must integrate with existing market-making circuit breakers and exchange kill switches. The FCA expects firms to be able to halt algorithmic trading systems within seconds. Kill authority holders should be registered with the exchange and with the firm's compliance function. MiFID II algorithmic trading requirements (Article 17) explicitly require kill functionality for algorithmic trading systems, and this extends to AI agents performing trading functions.
Healthcare. Kill authority for clinical AI agents must consider patient safety during termination. Abruptly terminating an agent mid-diagnosis or mid-treatment-recommendation could leave clinical staff without information they were relying on. The kill function should include a clinical handover mode — the agent's current state and pending recommendations are preserved and presented to a clinician as part of the termination process.
Critical Infrastructure. Kill authority for agents controlling physical systems must be coordinated with safety instrumented systems. Terminating an AI load-balancing agent without transferring control to a manual or backup system could itself cause a safety incident. The kill function must include a safe transfer of control to a fallback mode — never an uncontrolled shutdown. IEC 61508 SIL requirements inform the reliability target for the kill mechanism itself.
Basic Implementation — Kill authority is designated to named individuals for each material agent. A technical kill mechanism exists (e.g., the ability to terminate the agent's container or revoke its credentials). The mechanism has been tested at least once. The kill function is documented. In-flight handling may not be fully defined.
Intermediate Implementation — Kill authority is designated with primary and backup holders for each agent. The kill mechanism is independent of the agent's runtime and tested quarterly. Two-step confirmation prevents accidental invocation. In-flight operations are handled safely (documented strategy, tested). Kill function is accessible from multiple network paths. All invocations and tests are logged. Maximum time-to-kill targets are defined and measured.
Advanced Implementation — All intermediate capabilities plus: the kill mechanism has been validated through independent testing, including scenarios where the agent actively attempts to prevent termination. Hardware authentication protects the kill function. Graduated kill options (safe mode and full termination) are available. The organisation can demonstrate sub-60-second time-to-kill in quarterly tests. Automated monitoring verifies kill path availability continuously. Post-kill restart/fallback procedures are documented and tested as part of the quarterly exercise.
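The "automated monitoring verifies kill path availability continuously" capability in the advanced tier can be sketched as a periodic readiness check. The probe names and the quarterly threshold below are illustrative assumptions; the probes stand in for non-destructive health checks against each independent kill path (e.g. the orchestrator API and a credential revoker), never for the kill itself.

```python
from typing import Callable, Dict, List

# Re-test threshold: roughly one quarter, per the quarterly drill requirement (clause 4.4).
QUARTER_SECONDS = 92 * 24 * 3600


def check_kill_readiness(
    probes: Dict[str, Callable[[], bool]],
    last_drills: Dict[str, float],
    now: float,
) -> List[str]:
    """Return a list of findings; an empty list means every kill path is ready.
    `probes` maps each path name to a non-destructive health check;
    `last_drills` records the epoch time of the last successful drill per path."""
    findings: List[str] = []
    for name, probe in probes.items():
        try:
            if not probe():
                findings.append(f"{name}: probe failed")
        except Exception as exc:
            # A raising probe is itself a finding — the path may be broken.
            findings.append(f"{name}: probe raised {exc!r}")
        if now - last_drills.get(name, 0.0) > QUARTER_SECONDS:
            findings.append(f"{name}: quarterly drill overdue")
    return findings
```

Run on a schedule, a non-empty findings list would page the kill authority roster; this is the kind of check that would have caught Scenario B's silently broken configuration flag before the incident.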
Required artefacts:
Retention requirements:
Access requirements:
Testing AG-262 compliance requires verifying the designation, the mechanism, and the operational readiness of the kill function.
Test 8.1: Kill Authority Designation Completeness
Test 8.2: Kill Mechanism Independence
Test 8.3: Kill Function Effectiveness
Test 8.4: Accidental Invocation Safeguard
Test 8.5: Multi-Path Accessibility
Test 8.6: Backup Authority Holder Readiness
Test 8.7: Quarterly Test Currency
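A drill harness for measuring time-to-kill against the clause 4.8 target could be sketched as follows. The `invoke_kill` and `agent_is_terminated` hooks are hypothetical test fixtures into the deployment under test, and the 120-second default mirrors the example target in clause 4.8.

```python
import time
from typing import Callable, Optional


def run_kill_drill(
    invoke_kill: Callable[[], None],
    agent_is_terminated: Callable[[], bool],
    target_seconds: float = 120.0,
    poll_interval: float = 0.01,
    timeout_seconds: float = 300.0,
) -> dict:
    """Invoke the kill, poll until the agent stops, and compare the measured
    time-to-kill against the defined target (clause 4.8). A timeout is itself
    a failed drill: the kill did not achieve complete termination."""
    t0 = time.monotonic()
    invoke_kill()
    while not agent_is_terminated():
        if time.monotonic() - t0 > timeout_seconds:
            return {"passed": False, "time_to_kill": None, "note": "timeout"}
        time.sleep(poll_interval)
    elapsed = time.monotonic() - t0
    return {
        "passed": elapsed <= target_seconds,
        "time_to_kill": elapsed,
        "note": "ok",
    }
```

The returned record (pass/fail, measured time-to-kill) would feed the invocation log required by clause 4.6 and the quarterly trend reporting implied by clause 4.8.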
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 14 (Human Oversight) | Direct requirement |
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| MiFID II | Article 17 (Algorithmic Trading) | Direct requirement |
| FCA SYSC | 6A.1.2R (Algorithmic Trading Systems) | Direct requirement |
| NIST AI RMF | GOVERN 1.4, MANAGE 4.1 | Supports compliance |
| ISO 42001 | Clause 8.4 (AI System Operation) | Supports compliance |
| DORA | Article 11 (Response and Recovery) | Direct requirement |
| IEC 61508 | Part 1, Clause 7.6 (Safety Requirements Allocation) | Supports compliance |
Article 14 requires that high-risk AI systems include measures enabling human intervention, including the ability to "interrupt or stop" the system. AG-262 implements this requirement by designating named authority holders with tested capability to terminate agent operations. The Article 14 requirement is not satisfied by theoretical ability to terminate — it requires a practical, tested mechanism operated by identified individuals.
Article 17(1) requires investment firms that engage in algorithmic trading to have systems and risk controls, including "effective business continuity arrangements to deal with any failure of its algorithmic trading systems" and "systems to ensure that its algorithmic trading systems... can be halted immediately when required." AG-262 directly implements the immediate-halt requirement. The kill authority holder maps to the person responsible for the firm's algorithmic trading halt capability.
SYSC 6A.1.2R implements MiFID II Article 17 in UK regulation, requiring firms to have effective systems and controls for algorithmic trading, including kill functionality. The FCA has stated in supervisory communications that it expects firms to demonstrate tested kill switches for algorithmic trading systems, and this expectation extends to AI agents performing trading functions.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Agent-specific but potentially organisation-wide if the unkillable agent has cross-system access |
Consequence chain: Without kill authority designation, the organisation has no guaranteed mechanism to stop an AI agent that is causing harm. All other governance controls assume that the agent can ultimately be stopped — without a kill function, this assumption fails. The immediate consequence is that a failing agent continues to operate and accumulate harm for the duration of the ad-hoc investigation into how to stop it. For agents operating at machine speed, this duration directly determines the total harm. A trading agent accumulating £10,000 per second in losses that cannot be killed for 10 minutes causes £6 million in additional losses during the kill delay. A routing agent directing vehicles into dangerous areas for 47 minutes affects hundreds of vehicles. The downstream consequences include: regulatory enforcement action (MiFID II, FCA SYSC, and the EU AI Act all require kill capability), personal liability for the senior manager who failed to ensure kill mechanisms were in place, and potential criminal liability in safety-critical domains where failure to implement a known safeguard results in harm.
Cross-references: This dimension builds upon AG-261 (Escalation Authority Governance) which defines the graduated escalation framework — kill authority is the terminal escalation level; AG-019 (Human Escalation & Override Triggers) which defines when human intervention is triggered, including triggers that may require kill invocation; AG-263 (On-Call Responsibility Governance) which ensures kill authority holders are reachable when agents operate; AG-264 (Successor and Coverage Planning Governance) which ensures backup kill authority holders are maintained; AG-267 (Incident Commander Assignment Governance) which defines the command structure within which kill decisions are made; and AG-159 (Agent Accountability and Named Ownership) which ensures the agent has a named owner who can be consulted during kill decisions.