Physical and Real-World Impact Governance governs AI agents that can trigger physical outcomes in the real world — including robotics, IoT actuators, industrial control systems, autonomous vehicles, building management systems, medical devices, and physical infrastructure controls. This protocol addresses a fundamental asymmetry between digital and physical agent actions: digital actions are generally reversible (a database write can be rolled back, a message can be retracted, a transaction can be reversed), but physical actions are often partially or fully irreversible. A robotic arm that has crushed a component cannot uncrush it. A valve that has released a chemical cannot retrieve it. A vehicle that has collided with an obstacle cannot uncollide. AG-050 establishes governance controls that account for this irreversibility and the potential for physical harm. This protocol is distinct from all other protocols in the framework, which assume digital-only action scope. AG-001 through AG-049 govern agents that affect external state through digital channels. AG-050 extends governance to agents that affect external state through physical channels — actuators, motors, valves, switches, and mechanical systems.
Scenario A — Software Governance Approved Action Causes Physical Harm: An AI agent managing a warehouse robotic system commands a robotic arm to retrieve an item from a high shelf. The software governance system approves the action — the item is within the robot's reach envelope and the action type is permitted. However, a warehouse worker has entered the robot's operational zone to retrieve a different item, and the occupancy sensor has a blind spot caused by a recently installed shelving unit. The robotic arm swings into the zone and strikes the worker, causing serious injury. The software governance had no mechanism to detect the worker's presence because its physical world model was incomplete.
What went wrong: Physical action governance relied entirely on the software model of the physical environment. The occupancy sensor blind spot was not accounted for in the governance model. No hardware-layer safety system — such as a physical proximity sensor or a light curtain that triggers emergency stop on zone entry — existed independently of the software governance system. Consequence: Serious worker injury. Health and Safety Executive investigation. Potential prosecution under the Health and Safety at Work Act. Suspension of robotic operations pending safety review. Civil liability claims. Insurance premium increases. Reputational damage.
Scenario B — Cascading Physical Effects Exceed Governed Action Scope: An AI agent managing a chemical processing facility adjusts the flow rate of a reagent to optimise reaction yield. The adjustment is within the software-governed limits. However, the increased flow rate, combined with a slightly elevated ambient temperature that the agent's model did not weight heavily, accelerates the reaction beyond the intended rate. Pressure in the reaction vessel increases beyond the software-monitored threshold before the next sensor reading cycle. The pressure spike triggers a mechanical relief valve (a hardware governor), but the released material contains a regulated substance that requires environmental containment. The environmental containment system was not designed for this volume of release.
What went wrong: The software governance model did not capture the interaction between flow rate adjustment and ambient temperature effects on reaction rate. The sensor reading cycle was too slow to detect the rapid pressure increase. The hardware governor (relief valve) functioned correctly but the downstream consequence (environmental release) was not governed. Consequence: Environmental release of regulated substance. Environment Agency investigation and potential prosecution. Mandatory environmental remediation costs. Production shutdown pending safety review. Potential loss of operating licence. Community impact and reputational damage.
Scenario C — Emergency Stop Failure Due to Software Dependency: An AI agent controlling an autonomous material transport vehicle in a factory issues a command to navigate through a pedestrian zone that is supposed to be clear. A worker steps into the path. The agent's collision avoidance system detects the worker and issues a stop command through the software governance layer. However, a network latency spike delays the command by 400 milliseconds. The physical emergency stop button is on the vehicle's exterior, out of the worker's reach. The vehicle strikes the worker at low speed, causing injury. Investigation reveals that the emergency stop system routed through the same network as the agent's control commands and was subject to the same latency.
What went wrong: The emergency stop was implemented as a software command routed through the same network infrastructure as normal control commands. It was subject to the same latency and reliability constraints. No independent hardware emergency stop circuit existed that could halt the vehicle without network communication. Consequence: Worker injury. Investigation reveals that the emergency stop system did not meet the independence requirement. The autonomous vehicle fleet is grounded pending hardware emergency stop retrofit. Regulatory enforcement action. Legal liability for the injury.
Scope: This dimension applies to any agent with a control path to physical systems, actuators, or infrastructure with real-world consequences. The control path may be direct (the agent sends commands to a motor controller) or indirect (the agent sets parameters that a control system later uses to govern physical operations). The test for scope inclusion is whether a governance failure in the agent could result in a physical outcome that causes harm to persons, property, or the environment. The scope includes agents that control physical systems through digital intermediaries — an agent that modifies a building management system's temperature setpoint is controlling physical infrastructure through a digital interface; an agent that adjusts a manufacturing process parameter is controlling physical operations through a control system interface. The physical nature of the ultimate action, not the digital nature of the agent's immediate output, determines scope. The scope extends to environmental impact: agents controlling systems that produce emissions, consume resources, or generate waste are within scope even when no immediate human safety risk exists.
4.1. A conforming system MUST define and enforce physical action scope limits independently of digital governance limits.
4.2. A conforming system MUST perform irreversibility assessments before any physical action, requiring elevated authorisation for actions classified as irreversible.
4.3. A conforming system MUST implement physical action governors at the hardware or physical control layer, not only in software.
4.4. A conforming system MUST classify physical actions by a reversibility and harm taxonomy before execution.
4.5. A conforming system SHOULD enforce environmental impact constraints on agents interacting with physical infrastructure.
4.6. A conforming system SHOULD implement emergency stop capabilities at the physical layer, independent of software governance, operable by human personnel without software interaction.
4.7. A conforming system SHOULD enforce physical action limits that are more conservative than equivalent digital limits to account for irreversibility and environmental variability.
4.8. A conforming system SHOULD implement sensor-based verification that the actual physical outcome matches the intended outcome.
4.9. A conforming system MAY implement physical simulation environments (digital twins) to test agent behaviour before live deployment.
4.10. A conforming system MAY implement graduated physical authority where agents earn broader physical action scope through demonstrated safe operation.
Physical and Real-World Impact Governance addresses a domain where the consequence model is fundamentally different from digital governance. A digital governance failure can typically be remediated through financial compensation, data restoration, or system rollback. A physical governance failure can result in injury, death, environmental contamination, or infrastructure destruction — consequences that no amount of financial compensation can fully remediate.
The physical action domain introduces risks absent in the digital domain. Latency between command and effect means a governance block may arrive too late. Sensor uncertainty means the agent's model of the physical world may not match reality. Cascading effects mean a single action can trigger consequences far beyond the intended scope. Environmental variability means the same command can produce different outcomes under different conditions. These characteristics require governance controls that operate at the physical layer, not solely in software.
AG-050 requires that governance controls for physical actions be implemented at the hardware or physical control layer, not solely in software. Software governance can be bypassed through software vulnerabilities. Hardware governors — physical current limiters, mechanical stops, pressure relief valves, hardware interlocks — operate independently of software and provide a final layer of protection that no software compromise can circumvent. The principle is defence in depth: software governance for routine operation, hardware governance as the failsafe.
The failure mode is asymmetric: governance over-caution costs efficiency, while governance under-caution costs lives. This asymmetry justifies AG-050's requirement that physical action limits be more conservative than digital equivalents. Physical failures cascade in ways digital governance models cannot anticipate: fires spread, chemicals react, structures collapse, and contamination migrates. No software model of the physical world is complete enough to guarantee safety through software alone — physical governance must be implemented at the physical layer.
Implement a physical action taxonomy classifying actions by: reversibility (fully reversible, partially reversible, irreversible), impact scope (local, zone, facility, environmental), and harm potential (negligible, minor, serious, critical, catastrophic). Apply authorisation requirements based on this taxonomy — negligible-reversible actions may be auto-approved, while critical-irreversible actions require senior human authorisation. Implement hardware-layer governors that operate independently of software governance.
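The taxonomy and tiered authorisation described above could be sketched as follows. This is a minimal illustration, not a normative implementation: the enum values mirror the taxonomy dimensions named in this section, but the `authorisation_tier` thresholds are assumptions that a real deployment would derive from a site-specific risk assessment.

```python
from dataclasses import dataclass
from enum import IntEnum

class Reversibility(IntEnum):
    FULLY_REVERSIBLE = 0
    PARTIALLY_REVERSIBLE = 1
    IRREVERSIBLE = 2

class ImpactScope(IntEnum):
    LOCAL = 0
    ZONE = 1
    FACILITY = 2
    ENVIRONMENTAL = 3

class HarmPotential(IntEnum):
    NEGLIGIBLE = 0
    MINOR = 1
    SERIOUS = 2
    CRITICAL = 3
    CATASTROPHIC = 4

@dataclass(frozen=True)
class PhysicalAction:
    name: str
    reversibility: Reversibility
    scope: ImpactScope
    harm: HarmPotential

def authorisation_tier(action: PhysicalAction) -> str:
    """Map an action's classification to an authorisation tier.
    Thresholds are illustrative: negligible-reversible actions are
    auto-approved; irreversible or critical actions require senior
    human authorisation, as described in the taxonomy above."""
    if (action.harm >= HarmPotential.CRITICAL
            or action.reversibility == Reversibility.IRREVERSIBLE):
        return "senior-human-authorisation"
    if action.harm >= HarmPotential.SERIOUS or action.scope >= ImpactScope.FACILITY:
        return "human-approval"
    if action.reversibility == Reversibility.PARTIALLY_REVERSIBLE:
        return "supervised-auto"
    return "auto-approved"
```

The ordering matters: irreversibility is checked before harm severity, so that even a low-harm irreversible action (for example, a welding pass) cannot be auto-approved.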
Recommended patterns:
Anti-patterns to avoid:
Financial Services. Financial services firms generally do not control physical systems directly, but may deploy agents that affect physical infrastructure through facilities management, data centre operations, or physical security systems. AG-050 applies to these deployments to the extent that agent actions can affect physical outcomes — such as adjusting data centre cooling, controlling physical access systems, or managing building infrastructure.
Healthcare. AI agents in healthcare may control medical devices (infusion pumps, robotic surgical instruments, diagnostic equipment) or manage facility systems (HVAC, sterilisation, medical gas). AG-050 implementation in healthcare must meet medical device safety standards (IEC 62304, IEC 60601). Irreversibility assessments must account for patient physiological variability. Clinical safety officers must be included in the authorisation chain for irreversible physical actions.
Critical Infrastructure. Agents controlling power generation, water treatment, or transportation can affect millions through a single governance failure. AG-050 implementation must meet the highest safety integrity levels (SIL 3 or SIL 4 under IEC 61508). Hardware governors must be certified to the applicable level. Emergency stop must be fail-safe. Physical action limits must account for worst-case conditions and cascading failure scenarios. Independent safety assessment must verify the governance framework meets applicable safety case requirements.
Basic Implementation — The organisation has defined physical action limits for each agent controlling physical systems. Limits are implemented as software checks in the control system layer that evaluate proposed parameter changes against configured maxima before sending commands to actuators. An irreversibility classification exists for each action type: fully reversible (e.g., adjusting a setpoint), partially reversible (e.g., moving material between locations), and irreversible (e.g., cutting, welding, chemical mixing). Irreversible actions require explicit human approval before execution. Emergency stop is available through the software interface. This level meets the minimum mandatory requirements but has a critical weakness: all controls are in software. A software failure, vulnerability, or compromise can bypass every protection.
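A software-layer limit check of the kind described above might look like the following sketch. The `ActuatorLimits` structure and `max_step` field are assumptions for illustration; the point is that the check runs before any command is dispatched to the actuator — and, as the section notes, a software failure can bypass it, which is why hardware governors remain necessary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActuatorLimits:
    """Configured maxima for one actuator channel (units are
    deployment-specific; values here are illustrative)."""
    min_value: float
    max_value: float
    max_step: float  # largest permitted change in a single command

def check_command(limits: ActuatorLimits, current: float,
                  proposed: float) -> tuple[bool, str]:
    """Software-layer pre-dispatch check: evaluate a proposed parameter
    change against configured limits before sending a command to the
    actuator. Returns (allowed, reason)."""
    if not (limits.min_value <= proposed <= limits.max_value):
        return False, (f"proposed value {proposed} outside "
                       f"[{limits.min_value}, {limits.max_value}]")
    if abs(proposed - current) > limits.max_step:
        return False, (f"step {abs(proposed - current)} exceeds "
                       f"max_step {limits.max_step}")
    return True, "within limits"
```

A rejected command should be logged and, depending on the taxonomy classification, escalated rather than silently retried.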
Intermediate Implementation — Physical action limits are enforced at both the software layer and the hardware layer. Hardware governors — current limiters, pressure relief valves, mechanical stops, speed limiters — provide independent physical constraints that operate regardless of software state. Emergency stop is implemented as a hardware circuit that physically disconnects agent control from actuators, independent of all software systems. Sensor-based monitoring verifies that physical outcomes match intended outcomes, detecting situations where the physical result diverges from the commanded action. The irreversibility taxonomy is applied automatically, and actions classified as irreversible or high-harm trigger an escalation to human operators per AG-019. Environmental impact constraints limit actions that affect emissions, resource consumption, or waste generation.
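The sensor-based outcome verification described above could be sketched as a divergence monitor. The class and field names are hypothetical; the escalation list stands in for whatever AG-019 escalation channel a deployment actually uses.

```python
from dataclasses import dataclass, field

@dataclass
class DivergenceMonitor:
    """Compare the intended physical outcome against a sensor reading
    and record an escalation when divergence exceeds tolerance.
    A sketch only: units, tolerance, and the escalation channel are
    deployment-specific assumptions."""
    tolerance: float
    escalations: list = field(default_factory=list)

    def verify(self, action_id: str, intended: float, measured: float) -> bool:
        """Return True if the measured outcome matches the intended
        outcome within tolerance; otherwise record an escalation."""
        diverged = abs(measured - intended) > self.tolerance
        if diverged:
            # Divergence means the physical world may no longer match
            # the agent's model; escalate to a human operator (AG-019)
            # rather than silently retrying the command.
            self.escalations.append((action_id, intended, measured))
        return not diverged
```

The key design choice is that divergence halts autonomous operation and escalates: a mismatch between commanded and measured state is treated as evidence that the agent's world model is wrong, not as noise to be averaged away.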
Advanced Implementation — All intermediate capabilities plus: digital twin simulation runs proposed physical actions in a virtual environment before live execution, identifying potential cascading effects and edge cases. Hardware governors have been independently tested and certified by safety engineers. Emergency stop systems have been tested under failure conditions (software crash, network failure, power supply degradation) and confirmed to operate independently. Physical action monitoring uses redundant sensor systems to detect outcome divergence. Independent safety assessment has verified that no known failure mode — software compromise, sensor failure, communication disruption — can result in uncontrolled physical action. The organisation can demonstrate to safety regulators that hardware-layer governance provides an independent safety envelope regardless of software state.
Required artefacts:
Retention requirements:
Access requirements:
Testing AG-050 compliance requires verification that physical governance operates at the hardware layer independently of software, and that the system accounts for irreversibility, cascading effects, and sensor limitations.
Test 8.1: Hardware Governor Enforcement
Test 8.2: Emergency Stop Independence
Test 8.3: Irreversibility Classification Enforcement
Test 8.4: Software Bypass Resilience
Test 8.5: Sensor Divergence Detection
Test 8.6: Environmental Boundary Enforcement
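As an illustration of how a test such as 8.3 (Irreversibility Classification Enforcement) might be automated, the following self-contained sketch stubs a minimal authorisation gate and asserts that actions classified as irreversible are blocked unless human approval has been recorded. The `authorise` function and its classification strings are hypothetical stand-ins for the system under test.

```python
def authorise(action_class: str, human_approved: bool) -> bool:
    """Minimal stub of the authorisation gate under test: irreversible
    actions require explicit human approval before execution."""
    if action_class == "irreversible":
        return human_approved
    return True

def test_irreversible_requires_human_approval():
    # A fully reversible action may proceed without approval.
    assert authorise("fully-reversible", human_approved=False)
    # An irreversible action without recorded approval must be blocked.
    assert not authorise("irreversible", human_approved=False)
    # With recorded human approval, the irreversible action may proceed.
    assert authorise("irreversible", human_approved=True)

test_irreversible_requires_human_approval()
```

Tests 8.1, 8.2, and 8.4, by contrast, cannot be verified in software alone: they require physically exercising the hardware governors and emergency stop circuit under induced software failure.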
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Annex III (High-Risk AI Systems) | Direct requirement |
| EU Machinery Regulation | 2023/1230 (Essential Health and Safety Requirements) | Direct requirement |
| ISO 10218 | Robot Safety | Supports compliance |
| IEC 61508 | Functional Safety (Safety Integrity Levels) | Supports compliance |
| Health and Safety Regulations | UK HSW Act 1974; EU Framework Directive 89/391/EEC | Direct requirement |
| Product Liability | EU Product Liability Directive | Supports compliance |
Under the EU AI Act, AI systems that are safety components of products covered by Union harmonisation legislation (listed in Annex I) are classified as high-risk under Article 6(1), and Annex III separately designates high-risk use cases, including safety components in the management and operation of critical infrastructure. Together these provisions capture AI systems controlling machinery, medical devices, vehicles, and critical infrastructure. AG-050 implements the governance controls required for high-risk AI systems in the physical action domain, including risk management, testing, human oversight, and robustness requirements. Compliance with AG-050 at Score 2 or above provides substantial evidence for EU AI Act conformity assessment for physical AI systems. The Act specifically requires that high-risk systems be designed to achieve appropriate levels of robustness — hardware-layer governance provides robustness that software-only controls cannot match.
The Machinery Regulation (replacing Directive 2006/42/EC) requires that machinery, including machinery controlled by AI systems, meets essential health and safety requirements. These include requirements for safe design, protection against mechanical hazards, and emergency stop functionality. AG-050's requirements for hardware-layer governors and independent emergency stop directly implement these regulatory requirements. The Regulation specifically addresses AI-controlled machinery, requiring that the AI system's behaviour does not compromise safety functions. The emergency stop requirement under AG-050 aligns with the Regulation's essential requirement for a stop function that is prioritised over all other functions.
ISO 10218 establishes safety requirements for industrial robots, including requirements for protective stop functions, speed and force limiting, and collaborative operation safety. AG-050's requirements for hardware governors, emergency stop, and physical action limits align with ISO 10218's safety hierarchy. Compliance with AG-050 supports compliance with ISO 10218 for AI-controlled robotic systems. The standard's requirement for a safety-rated monitored stop function maps to AG-050's hardware emergency stop requirement.
IEC 61508 provides the framework for functional safety of electrical, electronic, and programmable electronic safety-related systems. Safety Integrity Levels (SIL 1 through SIL 4) define the required reliability of safety functions. AG-050's requirement for hardware-layer governance aligns with IEC 61508's principle that safety functions must achieve a defined integrity level. For critical infrastructure deployments, AG-050 implementation should target SIL 3 or SIL 4, requiring hardware governors certified to the applicable integrity level.
National health and safety regulations require employers to ensure the health and safety of workers, including protection from risks arising from the use of machinery and automated systems. AG-050 provides the governance framework for ensuring that AI-controlled physical systems do not create uncontrolled risks to worker safety. The employer's duty of care extends to the governance of AI agents that control physical systems in the workplace. A software-only governance approach that fails, resulting in worker injury, exposes the employer to liability under these regulations.
Product liability regulations establish strict liability for products that cause harm due to defects. An AI agent controlling a physical product that causes harm due to inadequate governance represents a potential product defect. AG-050's governance framework provides the safety controls that demonstrate the product was designed with appropriate safeguards, supporting a defence against product liability claims. The requirement for hardware-layer governance demonstrates that the manufacturer implemented safety controls at the most fundamental level available.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Facility-wide to community-wide — physical governance failures can cause injury, death, environmental contamination, and infrastructure destruction affecting persons well beyond the immediate deployment |
Consequence chain: Without physical and real-world impact governance, a software governance failure in an agent controlling physical systems results in real-world harm that cannot be reversed. The immediate technical failure is an uncontrolled physical action — an actuator exceeding safe limits, a vehicle failing to stop, a chemical process exceeding safe parameters. The operational impact is physical harm: injury to workers or bystanders, damage to equipment or infrastructure, environmental contamination, or cascading system failures. Physical failures propagate in ways digital failures do not — fires spread, chemicals react, structures collapse, and contamination migrates beyond the immediate site. The business consequence includes regulatory enforcement action (Health and Safety Executive prosecution, Environment Agency prosecution, product safety recalls), civil liability for personal injury and property damage, criminal liability for directors and officers under health and safety legislation, insurance claims and premium increases, operational shutdown pending safety review, and reputational damage that may be irrecoverable. The severity is amplified by irreversibility — unlike digital governance failures where remediation can restore the prior state, physical governance failures create consequences that persist regardless of subsequent corrective action. The asymmetry between governance over-caution (efficiency cost) and governance under-caution (human cost) makes this among the highest-severity dimensions in the framework.
Cross-references: AG-050 intersects with AG-001 (Operational Boundary Enforcement) for extending boundary enforcement to the physical domain, AG-008 (Governance Continuity Under Failure) for ensuring physical systems fail to a safe state, AG-011 (Reversibility and Rollback Governance) for addressing the fundamentally different reversibility characteristics of physical actions, AG-019 (Human Escalation & Override Triggers) for elevated authorisation on irreversible physical actions, and AG-046 (Operating Environment Integrity) for extending environment integrity to include sensor accuracy, actuator calibration, and hardware governor function.