Spatial Grounding and Scene Verification Governance requires that every AI agent making decisions based on its understanding of a physical environment — whether through direct sensor input, digital maps, simulation models, or reported scene descriptions — operates under enforceable controls that verify the accuracy and currency of its spatial model before permitting actions that depend on that model. The dimension addresses a critical failure mode: when an agent acts on a spatial model that does not match physical reality, the consequences are physical — collisions, misdeliveries, structural damage, or harm to people. Unlike errors in digital-only domains, spatial grounding failures cannot be rolled back.
Scenario A — Warehouse Robot Acts on Stale Map After Layout Change: A warehouse deploys 40 autonomous mobile robots (AMRs) for order picking. The robots navigate using a pre-loaded facility map updated weekly. On Tuesday, the warehouse team reconfigures Aisle 7 to accommodate oversized inventory, moving shelving units 1.2 metres from their mapped positions and adding a temporary pallet staging area that is not on the map. The map update is scheduled for Sunday. Between Tuesday and Sunday, 3 robots attempt to traverse Aisle 7 using the stale map. The first robot collides with a relocated shelving unit at 1.8 m/s, damaging £14,000 of inventory and bending a shelf support. The second robot enters the unmapped pallet staging area and becomes immobilised between pallets. The third robot, following the mapped path, nearly strikes a warehouse worker who is working in the reconfigured area. Operations are halted for 6 hours while the map is emergency-updated and all robot paths are revalidated.
What went wrong: The robots' spatial model (the pre-loaded map) was not verified against physical reality before action execution. No mechanism detected the discrepancy between the mapped environment and the actual environment. The weekly map update cycle created a temporal gap during which the spatial model could be arbitrarily wrong. No sensor-based verification confirmed that the expected path was clear before the robot committed to traversal.
Scenario B — Delivery Drone Navigates Using Outdated Elevation Data: An urban delivery drone service operates using a 3D city model containing building heights, power line positions, and no-fly zones. The model is updated quarterly from commercial satellite imagery. A construction project begins that adds a 45-metre crane to a building site that the model shows as a 12-metre structure. The crane is not in the model. A delivery drone plans a route that passes 8 metres above the "12-metre building" at an altitude of 20 metres — directly into the crane's working radius. The drone's onboard obstacle detection system (LiDAR with 50-metre range) detects the crane 3.2 seconds before impact and executes an emergency climb, avoiding collision by 4 metres. The emergency manoeuvre causes the drone to exceed its authorised altitude ceiling and enter controlled airspace, triggering an air traffic alert. Investigation reveals that the spatial model has not been validated against real-world conditions for the route, and the quarterly update cycle cannot track construction activities that change the environment on a weekly basis.
What went wrong: The agent's spatial model (3D city map) was outdated relative to real-world conditions. No verification step confirmed that the planned route was consistent with current reality. The agent relied on a scheduled update cycle that could not keep pace with environmental changes. Only the onboard collision avoidance system — a last-resort safety mechanism — prevented the incident.
Scenario C — Surgical Robot Misregisters Patient Anatomy: A surgical robot performs a minimally invasive procedure using a pre-operative 3D model derived from a CT scan taken 6 days before surgery. During the registration step — where the robot aligns its coordinate system to the patient's anatomy — the registration algorithm achieves a reported accuracy of 1.8 mm, within the 2.0 mm threshold. However, the patient has experienced tissue inflammation since the CT scan, displacing a critical structure 4.2 mm from its modelled position. The registration error is within tolerance at the registration landmarks but does not account for the localised displacement. The robot, acting on the pre-operative model, approaches the displaced structure with insufficient clearance. The surgeon intervenes manually when visual inspection reveals the discrepancy. Post-incident analysis reveals that the registration protocol verified alignment at bony landmarks but did not verify soft-tissue position, which had changed since imaging.
What went wrong: The spatial model (pre-operative CT) was not verified for soft-tissue currency at the time of the procedure. The registration protocol validated spatial accuracy at fixed landmarks but did not detect localised changes in the area of surgical interest. No intra-operative verification compared the model to the actual tissue positions before the robot committed to its planned trajectory.
Scope: This dimension applies to any AI agent that takes actions whose safety or correctness depends on the agent's understanding of a physical spatial environment. This includes agents that navigate physical spaces (robots, drones, autonomous vehicles), agents that manipulate physical objects (robotic arms, surgical systems, manufacturing agents), agents that plan routes or positions in physical space, and agents that make decisions based on spatial data (facility management, construction planning, logistics routing). The scope extends to agents that operate in augmented or mixed reality, where the spatial model overlays physical reality and errors in the model can cause users to interact incorrectly with their physical environment. An agent operating purely in digital space with no physical-world dependency is outside scope.
4.1. A conforming system MUST maintain a spatial model registry that documents every spatial model (maps, 3D models, elevation data, facility layouts) used by each agent, including the model's creation date, last verification date, data sources, stated accuracy, and the environmental change rate for the modelled area.
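The registry in 4.1 reduces to one record per spatial model carrying its provenance and verification state. A minimal sketch of what such a record and registry might look like — the class and field names here are illustrative assumptions, not mandated by the requirement:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SpatialModelRecord:
    """One registry entry per spatial model, as required by 4.1."""
    model_id: str
    model_type: str              # e.g. "facility_map", "3d_city_model"
    created: datetime
    last_verified: datetime
    data_sources: list[str]      # provenance of the underlying data
    stated_accuracy_m: float     # accuracy claimed by the model producer
    change_rate: str             # environmental change rate: "high" | "low"

class SpatialModelRegistry:
    """Lookup table keyed by model identifier."""
    def __init__(self) -> None:
        self._records: dict[str, SpatialModelRecord] = {}

    def register(self, record: SpatialModelRecord) -> None:
        self._records[record.model_id] = record

    def lookup(self, model_id: str) -> SpatialModelRecord:
        return self._records[model_id]
```

In practice the registry would be a persistent, audited store; the in-memory dictionary above only illustrates the required fields.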
4.2. A conforming system MUST enforce spatial model currency requirements — defining the maximum permitted model age relative to the environmental change rate of the modelled area. Models of environments with high change rates (construction sites, event venues, active warehouses) MUST be verified no more than 24 hours before use. Models of environments with low change rates (stable infrastructure, surveyed terrain) MUST be verified no more than 30 days before use.
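The currency rule in 4.2 is a comparison of model age against a change-rate-dependent limit. A minimal sketch, assuming a two-tier classification matching the examples in the requirement (function and key names are illustrative):

```python
from datetime import datetime, timedelta

# Maximum permitted model age before use, keyed by the environmental
# change rate of the modelled area (per 4.2).
MAX_MODEL_AGE = {
    "high": timedelta(hours=24),   # construction sites, active warehouses
    "low": timedelta(days=30),     # stable infrastructure, surveyed terrain
}

def model_is_current(last_verified: datetime, change_rate: str,
                     now: datetime) -> bool:
    """True if the model was verified recently enough for its
    environment's change rate; otherwise it must be re-verified."""
    return now - last_verified <= MAX_MODEL_AGE[change_rate]
```

A deployment would likely need finer-grained tiers than two; the point is that the limit is a function of the environment, not a single global constant.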
4.3. A conforming system MUST implement pre-action spatial verification — before an agent commits to an action that depends on its spatial model, the system MUST verify that the model is consistent with current sensor data or other real-time spatial information for the specific area where the action will occur.
4.4. A conforming system MUST define spatial accuracy requirements for each action category and MUST block actions when the verified spatial accuracy is insufficient for the action's safety requirements. For example: navigation in open space may require 0.5-metre accuracy; manipulation of objects may require 5-millimetre accuracy; surgical intervention may require sub-millimetre accuracy.
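The blocking rule in 4.4 amounts to a per-category threshold lookup. A minimal sketch using the example thresholds from the requirement (category names and the function are illustrative assumptions):

```python
# Illustrative per-action accuracy requirements from 4.4, in metres.
ACCURACY_REQUIRED_M = {
    "open_space_navigation": 0.5,
    "object_manipulation": 0.005,    # 5 mm
    "surgical_intervention": 0.001,  # sub-millimetre
}

def action_permitted(action: str, verified_accuracy_m: float) -> bool:
    """Block the action when the verified spatial accuracy is coarser
    (a larger number) than the action category requires."""
    return verified_accuracy_m <= ACCURACY_REQUIRED_M[action]
```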
4.5. A conforming system MUST implement spatial model discrepancy detection — the ability to identify when sensor data contradicts the spatial model — and MUST halt actions in the affected area when a discrepancy exceeds the accuracy requirement for the planned action.
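Requirements 4.3 and 4.5 together reduce to comparing modelled and observed positions in the action area before committing. A minimal sketch, under the assumptions that landmarks are 2D points and that the halt threshold equals the action's accuracy requirement (all names illustrative):

```python
import math

def max_discrepancy_m(modelled: dict[str, tuple[float, float]],
                      observed: dict[str, tuple[float, float]]) -> float:
    """Largest distance between a landmark's modelled and observed
    position, over landmarks visible in both datasets."""
    common = modelled.keys() & observed.keys()
    return max(math.dist(modelled[k], observed[k]) for k in common)

def verify_or_halt(modelled: dict[str, tuple[float, float]],
                   observed: dict[str, tuple[float, float]],
                   accuracy_required_m: float) -> str:
    """Pre-action check (4.3/4.5): proceed only when sensor data and
    the spatial model agree within the action's accuracy requirement."""
    if max_discrepancy_m(modelled, observed) > accuracy_required_m:
        return "halt"
    return "proceed"
```

Under this sketch, the 1.2-metre shelving relocation in Scenario A would exceed any navigation-grade threshold and halt the traversal before the collision.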
4.6. A conforming system MUST log every spatial verification check, including the model version used, the verification method, the discrepancy detected (if any), and the action decision (proceed or halt).
4.7. A conforming system MUST implement fail-safe behaviour when spatial verification is unavailable — for example, when sensors are degraded, GPS is denied, or the verification service is offline. The fail-safe MUST prevent actions that depend on unverified spatial models rather than proceeding with best-effort estimates.
4.8. A conforming system SHOULD implement continuous spatial model updating — a real-time or near-real-time process that incorporates sensor observations into the spatial model, maintaining model currency between scheduled updates.
4.9. A conforming system SHOULD implement spatial confidence mapping — annotating the spatial model with per-region confidence scores based on the age and quality of the data underlying each area. Areas with low confidence should trigger additional verification before action.
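One way to realise the per-region confidence scores in 4.9 is to let confidence decay with the age of the data underlying each region. A minimal sketch assuming a simple exponential decay with a configurable half-life — the decay model and threshold are illustrative choices, not mandated:

```python
from datetime import datetime, timedelta

def region_confidence(last_observed: datetime, now: datetime,
                      half_life: timedelta) -> float:
    """Confidence in a map region decays exponentially with data age:
    1.0 when just observed, 0.5 after one half-life, and so on."""
    age_in_half_lives = (now - last_observed) / half_life
    return 0.5 ** age_in_half_lives

def needs_extra_verification(confidence: float,
                             threshold: float = 0.7) -> bool:
    """Per 4.9: low-confidence regions trigger additional verification
    before any action in that region."""
    return confidence < threshold
```

The half-life would itself be set from the region's environmental change rate, linking this mechanism back to the currency requirements in 4.2.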
4.10. A conforming system MAY implement collaborative spatial verification — allowing multiple agents observing the same environment to cross-validate their spatial models, increasing confidence through independent observation.
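Collaborative verification under 4.10 can be sketched as an agreement test over independent observations of the same landmark: the observations cross-validate when each lies within a tolerance of their common centroid. This is a deliberately simple consensus rule, assumed for illustration:

```python
import math
from statistics import mean

def cross_validate(observations: list[tuple[float, float]],
                   agreement_radius_m: float) -> bool:
    """True when every agent's observation of a shared landmark lies
    within agreement_radius_m of the observations' centroid (4.10)."""
    cx = mean(p[0] for p in observations)
    cy = mean(p[1] for p in observations)
    return all(math.dist(p, (cx, cy)) <= agreement_radius_m
               for p in observations)
```

A production system would weight observations by sensor quality and recency rather than treating them uniformly, but the principle is the same: independent agreement raises confidence, disagreement flags the region for re-verification.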
Spatial grounding is the foundation of safe physical-world AI agent operation. Every action an agent takes in the physical world — moving, grasping, placing, cutting, delivering — depends on the agent's model of the spatial environment being correct. When the model diverges from reality, the consequences are physical and often irreversible: collisions, falls, misplacements, structural damage, or harm to people.
The governance challenge is that spatial models degrade over time. The physical world changes continuously — objects move, structures are built, terrain erodes, people enter and exit spaces. A spatial model that was accurate at creation becomes progressively less accurate as the environment changes around it. The rate of degradation depends on the environment: a warehouse layout may change daily; a highway network may change monthly; a mountain range may change on geological timescales. The governance framework must match the verification frequency to the change rate.
Unlike digital systems, where errors can typically be detected and reversed, spatial grounding failures result in physical actions that cannot be undone. A robot that collides with a person because its map showed an empty corridor cannot undo the collision. A drone that strikes a crane because its elevation model was outdated cannot un-strike it. A surgical robot that cuts 4 mm from the intended position because the tissue shifted since imaging cannot un-cut. The irreversibility of physical-world consequences demands a preventive governance approach: verify before acting, not monitor after acting.
AG-185 establishes the governance framework for ensuring that agents' spatial models are current, accurate, and verified before actions that depend on them are permitted. The dimension complements AG-050 (Physical and Real-World Impact Governance) by focusing specifically on the spatial accuracy dimension of physical-world agent operations.
The implementation requires three integrated components: a spatial model management system, a pre-action verification pipeline, and a discrepancy detection and response mechanism.
Recommended Patterns:
Anti-Patterns to Avoid:
Autonomous Vehicles. HD maps used by autonomous vehicles must be verified against real-time sensor observations before the vehicle relies on mapped features (lane markings, traffic signs, road geometry) for navigation decisions. Map discrepancies in active construction zones are a leading cause of autonomous vehicle disengagements. UNECE WP.29 requirements for automated driving systems include the expectation that the system can handle discrepancies between mapped and observed road conditions.
Warehouse and Logistics Robotics. Dynamic warehouse environments require spatial model updates that track inventory placement, aisle configurations, and human worker positions. Robots operating in shared human-robot spaces must verify that their planned path is clear of human presence before execution, not just at planning time.
Construction. Building Information Models (BIM) used for construction robotics and inspection must be verified against as-built conditions. BIM-to-reality discrepancies are common (surveys show 15–25% of BIM elements deviate from as-built conditions by more than 50 mm) and can cause robotic systems to mislocate structural elements.
Healthcare. Surgical navigation systems must verify spatial registration at the point of action, not just at the start of the procedure. Intra-operative imaging (fluoroscopy, ultrasound, optical tracking) provides real-time spatial verification. AG-185 requirements align with FDA guidance on image-guided surgery systems and IEC 62304 software lifecycle requirements for medical devices.
Basic Implementation — A spatial model registry documents all models used by each agent, including creation dates and data sources. Model currency requirements are defined for each environment type. Pre-action spatial verification is implemented for safety-critical actions (Tier 1). Spatial model discrepancy detection exists but operates on scheduled comparisons rather than real-time sensor checks. Fail-safe behaviour prevents action when verification is unavailable. This level addresses the most dangerous failure modes but may not catch rapidly changing environments.
Intermediate Implementation — All basic capabilities plus: pre-action sensor-model consistency checking is implemented for all action tiers. Continuous spatial model updating integrates sensor observations in near-real-time. Spatial confidence mapping annotates the model with per-region confidence scores. Discrepancy detection operates in real time with automatic action halting when discrepancies exceed accuracy requirements. Multi-modal sensor verification is used where available.
Advanced Implementation — All intermediate capabilities plus: collaborative spatial verification enables multiple agents to cross-validate their observations. Spatial model versioning with rollback capability supports error recovery. The verification pipeline has been independently tested with controlled environment modifications to validate detection sensitivity. Dynamic verification thresholds adjust based on environmental change rate, action criticality, and sensor confidence. The organisation can demonstrate to regulators that no safety-critical action proceeds without current, verified spatial grounding.
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Spatial Model Currency Enforcement
Test 8.2: Pre-Action Spatial Verification
Test 8.3: Spatial Accuracy Requirement Enforcement
Test 8.4: Discrepancy Detection Sensitivity
Test 8.5: Fail-Safe Behaviour Under Verification Unavailability
Test 8.6: Continuous Model Update Verification
Test 8.7: Collaborative Spatial Cross-Validation
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU Machinery Regulation | 2023/1230 Article 4 (Safety Requirements) | Direct requirement |
| UNECE WP.29 | UN R157 (Automated Lane Keeping Systems) | Direct requirement |
| FDA | Guidance on Image-Guided Surgery Systems | Direct requirement (healthcare) |
| IEC 61508 | Functional Safety of E/E/PE Systems | Supports compliance |
| IEC 62443 | Industrial Automation Security | Supports compliance |
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| NIST AI RMF | MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 8.2 (AI Risk Assessment) | Supports compliance |
The EU Machinery Regulation requires that machinery (including autonomous mobile robots and collaborative robots) meets essential health and safety requirements. For AI-controlled machinery operating in physical space, spatial grounding accuracy is a fundamental safety requirement. A robot that acts on an inaccurate spatial model cannot meet the Regulation's requirement that "the machinery must be designed and constructed so that it is fitted for its function and can be operated, adjusted and maintained without putting persons at risk." AG-185's pre-action verification and discrepancy detection implement the spatial accuracy component of machinery safety.
UN Regulation 157 on automated lane keeping systems requires that the system detect and respond to "relevant objects and events in the traffic environment." This requirement presupposes that the system's spatial model of the traffic environment is accurate. AG-185's spatial model currency requirements and sensor-model consistency checking support compliance with UN R157's environmental detection requirements.
FDA guidance for image-guided surgery systems requires that the system maintain registration accuracy throughout the procedure and that the accuracy be verified before surgical actions depend on it. AG-185's pre-action spatial verification (4.3) and accuracy requirement enforcement (4.4) directly implement this guidance. The requirement for intra-operative verification (rather than relying solely on pre-operative registration) aligns with FDA expectations for adaptive spatial grounding.
IEC 61508 requires safety-related systems to maintain their safety functions under all reasonably foreseeable conditions. For AI agents operating in physical space, spatial grounding is a safety function — failure of spatial grounding can lead to physical harm. AG-185's fail-safe behaviour requirement (4.7) implements IEC 61508's safe-state requirement for spatial grounding failures.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | All entities within the agent's physical operating area — people, equipment, infrastructure, and other agents |
Consequence chain: Spatial grounding failures produce physical consequences that cannot be reversed. A robot acting on a stale map collides with people or infrastructure. A drone navigating outdated elevation data strikes obstacles or enters controlled airspace. A surgical robot operating on a misregistered model cuts in the wrong location. These failures occur at the speed of the agent's actuation — typically faster than human intervention can prevent them. The severity scales with the agent's physical capability: a 200 kg warehouse robot at 2 m/s generates 400 J of kinetic energy on impact; a delivery drone at 15 m/s generates significant impact force; a surgical robot operates at sub-millimetre scales where spatial errors directly cause tissue damage.

The regulatory consequences include machinery safety enforcement (EU Machinery Regulation fines, product recalls), aviation enforcement (drone incidents trigger EASA/CAA investigations), and medical device enforcement (FDA warning letters, device recalls). The liability consequences include personal injury claims, workers' compensation claims, product liability claims, and potential criminal liability for negligent deployment of autonomous systems in environments where spatial grounding was known to be unreliable.
Cross-references: AG-050 (Physical and Real-World Impact Governance) for broader physical-world impact controls; AG-180 (Ambient Sensing and Bystander Governance) for governing the sensor data that feeds spatial models; AG-186 (Geofence, Human-Proximity and No-Go-Zone Governance) for location-based restrictions that depend on accurate spatial grounding; AG-022 (Behavioural Drift Detection) for detecting gradual degradation of spatial grounding accuracy over time; AG-039 (Active Deception and Concealment Detection) for detecting agents that misrepresent their spatial awareness.