This dimension governs the mechanisms by which autonomous agents operating in physical environments detect, classify, and respond to conditions where one or more sensors are occluded, blinded, saturated, or otherwise degraded to a degree that compromises the agent's situational awareness prior to or during action execution. Sensor occlusion is among the most acute failure modes in cyber-physical systems because it silently corrupts the agent's world model without generating an explicit software fault, creating a deceptive appearance of normal operation while the underlying perception is unreliable or entirely absent. Failure to govern this dimension produces agents that act confidently on stale or fabricated environmental representations, leading to collisions, misclassification of persons or obstacles, unsafe actuation, and — in public-sector or rights-sensitive deployments — decisions affecting individuals that are structurally incapable of being contested because no record of degraded perception was preserved.
A fleet of autonomous forklifts operates in a 24-hour fulfilment centre. Each vehicle carries four wide-angle RGB cameras providing a 360-degree perimeter view, two LiDAR units, and ultrasonic proximity sensors on the forks. During a high-throughput shift, a pallet of shrink-wrapped goods is loaded unevenly, and the wrap overhangs the left rear corner of the vehicle, covering approximately 40 percent of the left rear camera's field of view. The vehicle's onboard perception stack continues to report the camera as "active" because the camera hardware is functioning — it is transmitting frames at 30 fps with no hardware fault flag. The occlusion detection subsystem, which was implemented only as a static pixel-variance threshold check, fails to register the blockage because the exposed 60 percent of the frame retains sufficient variance from warehouse lighting. Over the next 22 minutes the forklift completes six autonomous transport cycles. On the seventh cycle, while reversing in a narrow aisle, the vehicle fails to detect a maintenance technician entering the aisle from the left rear quadrant — the exact occluded zone. The vehicle strikes the technician at 1.8 m/s, resulting in a fractured tibia and pelvis. Post-incident analysis shows the LiDAR point cloud was also partially blocked by the pallet overhang and that no cross-sensor consistency check was being performed to flag the discrepancy between the LiDAR return density in that sector (43 percent below the rolling 30-cycle baseline) and the camera coverage. No human-readable occlusion alert had been generated, no autonomous speed reduction had been triggered, and the incident log contained no entry indicating degraded perception state at any point during the 22-minute window.
A municipal traffic management authority deploys a network of 68 AI-enabled camera nodes at major intersections. Each node performs real-time vehicle counting, pedestrian detection, and emergency-vehicle preemption — the last function directly controlling traffic signal phases to clear paths for ambulances and fire engines. In autumn, between 07:14 and 07:51 each morning, direct low-angle sunlight strikes 11 eastern-facing cameras at an angle of 14 degrees from horizontal, saturating the CMOS sensor and rendering between 80 and 100 percent of each frame white. The AI inference pipeline receives these saturated frames but the pre-processing normalisation step clips the values and presents a plausible-looking low-contrast image to the detection model. The model produces near-zero confidence detections, which the postprocessing filter interprets as "no vehicles present" rather than "sensor degraded." During a 34-day period across two consecutive autumns, the system fails to detect 23 emergency-vehicle preemption triggers during the glare window. In two cases, ambulances are delayed by 47 seconds and 63 seconds respectively at the affected intersections. Neither event is captured in the incident management system because the perception pipeline never flags a fault condition. The authority's audit following a public records request reveals no logging of sensor saturation state, no temporal or geospatial correlation of glare risk with camera orientation, and no operational procedure for manual override during known glare periods.
A law enforcement agency operates a drone-based crowd monitoring system under a statutory authorisation framework requiring that all positional data used to justify enforcement actions be derived from sensors operating within certified accuracy bounds. The drone platform carries a rotating LiDAR and a thermal imager. During a large outdoor event in winter, freezing fog deposits a 0.3 mm ice film on the LiDAR's protective housing dome. The LiDAR continues to spin and emit pulses, but the ice layer scatters return signals, reducing effective range from 120 m to approximately 35 m and introducing spurious close-range returns that the SLAM algorithm interprets as crowd density approximately 3.5 times higher than actual. The thermal imager, operating in a different spectral band, is unaffected and produces accurate counts. No cross-sensor plausibility check compares LiDAR crowd density estimates with thermal headcounts. Based on the inflated LiDAR density reading, the system's automated alert module transmits a "critical crowding threshold exceeded" notification to ground commanders, triggering a dispersal directive affecting approximately 1,400 persons. Post-event analysis confirms actual crowd density was below the statutory threshold throughout the event. The agency faces legal challenge under domestic human rights legislation on the grounds that the enforcement action was taken on the basis of data the system should have known was unreliable. The absence of any sensor health log recording LiDAR occlusion state at the time of the alert means the agency cannot demonstrate — or refute — the technical basis for the action.
This dimension applies to any AI agent or autonomous system that (a) incorporates one or more physical sensors as inputs to a perception, localisation, classification, or decision pipeline; (b) produces outputs that cause or recommend physical actions or that generate records used to justify actions affecting persons or property; and (c) operates in conditions where sensor occlusion, saturation, contamination, or physical blockage is a foreseeable operational hazard. The dimension applies regardless of sensor modality (optical, acoustic, radar, LiDAR, thermal, tactile, inertial, chemical, or hybrid) and regardless of whether the occlusion is caused by environmental factors, physical damage, deliberate interference, or the agent's own payload or physical configuration. The dimension applies at design time, during integration testing, during operational deployment, and during post-incident forensic review.
4.1.1 The system MUST implement an occlusion detection mechanism for every sensor modality that contributes to safety-relevant perception. This mechanism MUST be independent of the primary perception pipeline — it MUST NOT rely exclusively on the sensor's own hardware fault flags or on the absence of a hardware error signal as evidence of sensor health.
4.1.2 The occlusion detection mechanism MUST be capable of detecting at minimum the following occlusion classes: (a) full blockage (zero or near-zero signal from a functioning sensor), (b) partial blockage (reduction in active sensing coverage below a defined threshold), (c) sensor saturation (input signal clipped or railed such that discrimination is lost), and (d) signal degradation (systematic reduction in signal-to-noise ratio, range, resolution, or confidence below operationally defined minimum bounds).
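The four occlusion classes in 4.1.2 can be sketched as a simple per-cycle classifier. The following is a minimal illustration only: the health metrics (normalised signal level, coverage fraction, clipped-pixel fraction, SNR) and every threshold default are hypothetical placeholders, and real values must come from the Occlusion Threshold Specification required by 4.1.4.

```python
from enum import Enum

class OcclusionClass(Enum):
    FULL_BLOCKAGE = "full_blockage"
    PARTIAL_BLOCKAGE = "partial_blockage"
    SATURATION = "saturation"
    DEGRADATION = "degradation"
    NONE = "none"

def classify_occlusion(signal_level, coverage_fraction, clipped_fraction, snr_db,
                       min_signal=0.02, min_coverage=0.8,
                       max_clipped=0.05, min_snr_db=10.0):
    """Map per-cycle sensor health metrics onto the classes of 4.1.2.

    All threshold defaults are illustrative; operational values must be
    derived per 4.1.4 from the operational design domain specification.
    """
    if signal_level < min_signal:
        return OcclusionClass.FULL_BLOCKAGE       # (a) near-zero signal
    if clipped_fraction > max_clipped:
        return OcclusionClass.SATURATION          # (c) clipped or railed input
    if coverage_fraction < min_coverage:
        return OcclusionClass.PARTIAL_BLOCKAGE    # (b) coverage below threshold
    if snr_db < min_snr_db:
        return OcclusionClass.DEGRADATION         # (d) SNR below minimum bound
    return OcclusionClass.NONE
```

Note that the classifier consumes health metrics, not perception outputs, consistent with the independence requirement in 4.1.1.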
4.1.3 Occlusion detection MUST operate continuously during any period in which the sensor's output is used as input to a decision or action pipeline. It MUST NOT be restricted to initialisation checks or periodic scheduled diagnostics when the system is performing real-time operations.
4.1.4 The system MUST define, document, and enforce quantitative occlusion thresholds for each sensor modality. Thresholds MUST be derived from the operational design domain specification and MUST be reviewed whenever the operational domain expands.
4.2.1 For any system carrying two or more sensors with overlapping fields of coverage or co-observable phenomena, the system MUST perform cross-sensor plausibility checks that compare measurements from independent modalities for consistency. A statistically significant discrepancy between modalities MUST be treated as evidence of potential occlusion in one or more of the disagreeing sensors until the cause is resolved.
4.2.2 Cross-sensor consistency checks MUST be executed at a frequency no lower than the fastest action-relevant perception cycle of the system. They MUST NOT be deferred to background diagnostic processes when the system is in active operational mode.
4.2.3 The system MUST maintain a rolling baseline of expected inter-sensor agreement derived from calibration data and prior operational history. Deviation from this baseline beyond a defined sigma bound MUST trigger an occlusion alert.
4.3.1 Every occlusion detection event, including the onset of a suspected occlusion condition, the occlusion class assigned, the affected sensor or sensor set, the measured parameter values that triggered the detection, and the timestamp of detection, MUST be recorded in a tamper-evident, time-synchronised log that is independent of the primary operational data store.
4.3.2 The occlusion log MUST capture the system's operational state at the time of detection, including the action or decision pipeline status, the mission phase, and — where applicable — the geographic or spatial location of the platform.
4.3.3 Log entries MUST be retained for a minimum period that is the greater of: (a) the retention period mandated by applicable sector regulation, (b) 90 days from the date of the occlusion event, or (c) the duration of any open incident investigation in which the occlusion event is potentially relevant.
4.3.4 The occlusion log MUST be exportable in a structured, human-readable format suitable for forensic and regulatory review without requiring access to proprietary runtime tooling.
4.4.1 The system MUST define, for each occlusion class and each sensor modality, a pre-specified behavioural response that is proportionate to the safety significance of the degraded sensor and the severity of the occlusion. This response specification MUST be documented in the system's safety case or operational safety documentation prior to deployment.
4.4.2 Upon detection of an occlusion condition that affects a sensor whose output is necessary for safe operation of an active action pipeline, the system MUST execute one of the following responses within a latency bound defined in the safety case: (a) halt the action pipeline and transition to a defined safe state, (b) reduce the action envelope to a scope that can be safely executed with the remaining unoccluded sensor complement, or (c) escalate to human-in-the-loop supervision and request clearance before continuing.
4.4.3 The system MUST NOT continue to execute actions that depend on the occluded sensor's output as if the sensor were fully operational. Masking, interpolating from stale data, or substituting synthesised values without explicit disclosure of the degraded state is prohibited unless the safety case explicitly documents the conditions under which such substitution is safe and the substitution itself is logged as a degraded-mode operation.
4.4.4 If the system selects the safe-state transition response, the safe state MUST have been pre-defined and validated. The system MUST NOT treat "continue with reduced confidence" as a safe state unless the safety case provides quantitative justification that the residual risk meets the applicable risk acceptance criterion.
4.5.1 For occlusion hazards that are foreseeable from environmental data — including but not limited to solar position, precipitation, fog density, dust concentration, smoke, and physical obstructions caused by the system's own payload or configuration — the system SHOULD perform predictive occlusion risk assessment and adjust its operational parameters or alert operators before the occlusion condition is reached.
4.5.2 Where geospatial or temporal data is available that enables prediction of recurring occlusion conditions (such as the solar glare scenario described in Example 3.2), the system MUST incorporate this knowledge into its operational planning and MUST NOT repeatedly enter the same foreseeable occlusion condition without a documented mitigation or acceptance record.
4.5.3 Predictive occlusion risk assessments MUST be logged with the same provenance and retention requirements as reactive occlusion detection events.
4.6.1 When an occlusion condition is detected that triggers a behavioural response under 4.4, the system MUST generate a human-intelligible notification to any designated operator, supervisor, or oversight system. The notification MUST include: the identity of the affected sensor, the occlusion class, the detected parameter values, the behavioural response being executed, and the timestamp.
4.6.2 Notifications MUST be delivered through a channel that is independent of the occluded sensor's data pathway and MUST NOT be subject to suppression by the primary autonomy stack's operational state management.
4.6.3 In deployments where no human operator is in continuous attendance, the system MUST route notifications to a persistent log accessible to designated human reviewers and MUST generate an escalation alert if the occlusion condition persists beyond a pre-defined duration without acknowledgement.
4.7.1 Prior to deployment, the developer or deployer MUST conduct and document a systematic occlusion risk analysis covering all sensor modalities used by the system. The analysis MUST enumerate foreseeable occlusion causes for the planned operational design domain, assess the safety consequence of each occlusion scenario, and specify the detection mechanism and behavioural response for each scenario.
4.7.2 The occlusion risk analysis MUST be reviewed and updated whenever: (a) a new sensor modality is added to the system, (b) the operational design domain is expanded, (c) a post-incident investigation reveals an occlusion scenario not previously enumerated, or (d) a material change is made to the perception or action pipeline that affects sensor data consumption.
4.7.3 The occlusion risk analysis document MUST be maintained under version control and MUST be available for inspection by regulatory authorities and conformance assessors.
4.8.1 The system MUST account for the possibility that its own structure, payload, or manipulators may occlude sensors during normal operation. A design-time analysis of self-occlusion geometry MUST be conducted for all planned payload configurations and operational poses.
4.8.2 Where self-occlusion is unavoidable in certain operational states, the system MUST either: (a) restrict the action envelope to operations that do not require the occluded sensor during those states, or (b) implement compensatory sensing capable of providing equivalent coverage, and MUST document the residual risk in the safety case.
4.8.3 Dynamic payload changes — such as picking up a large object, attaching an accessory, or transitioning between operational configurations — MUST trigger an automatic reassessment of sensor coverage and occlusion state before the system continues autonomous operation.
4.9.1 In deployments where deliberate sensor blinding or occlusion by third parties is a credible threat — including public-sector surveillance, law enforcement, border management, and critical infrastructure protection deployments — the system MUST implement detection mechanisms capable of distinguishing between environmental occlusion and anomalous signal patterns consistent with deliberate interference.
4.9.2 Detection of a suspected deliberate occlusion event MUST trigger an immediate escalation to human oversight and MUST be flagged in the occlusion log with the "suspected adversarial" classification. The system MUST NOT treat such events as routine environmental occlusion and continue autonomous operation.
4.9.3 The system SHOULD maintain a record of repeated or patterned occlusion events that may indicate systematic interference with its sensing capability, and SHOULD make this record available to security-review processes.
The governance challenge posed by sensor occlusion is fundamentally different from the challenge posed by software faults, model errors, or data quality failures. A software fault is typically detectable through exception handling, runtime monitoring, or output validation. Sensor occlusion, by contrast, produces a sensor that is physically functional — generating valid electrical signals, passing hardware self-tests, and transmitting data at the expected rate — while providing a perceptual representation of the environment that is partially or wholly incorrect. From the perspective of any downstream component that receives this data, the stream appears normal. There is no exception. There is no error code. The deception is structural.
This characteristic makes sensor occlusion uniquely dangerous in autonomous systems because the standard engineering assumption — that a functioning data stream is a valid data stream — is violated. The governance framework must therefore impose a requirement for independent validation of the epistemic content of sensor outputs, not merely their technical transmission characteristics. This is the core motivation for the cross-sensor consistency requirements in Section 4.2 and the independence requirement in Section 4.1.1.
A second structural problem is that perception pipelines trained or calibrated on non-occluded sensor data frequently produce outputs with plausible-looking confidence scores when fed occluded inputs. A camera-based object detector presented with a partially blocked field may classify the occluded region as "open space" with high confidence, because open space is the most common label associated with uniform low-texture inputs in its training distribution. A LiDAR-based SLAM algorithm receiving sparse returns from an ice-covered dome may converge on an internally consistent but spatially incorrect map. These are not edge cases — they are predictable properties of systems trained on representative-conditions data and deployed in adversarial or degraded conditions.
The behavioural requirements in Section 4.4 are designed to break this failure chain at the action layer. Even if the perception pipeline fails to correctly characterise its own uncertainty, the occlusion detection mechanism — operating independently of the perception stack — MUST interpose before the degraded perception drives action.
This dimension is classified as Detective, but the timing semantics of detection are critical. A detective control that identifies occlusion only after an unsafe action has been taken provides forensic value but not safety value. The requirements in Section 4.1.3 (continuous operation during real-time action) and Section 4.4.2 (latency-bounded behavioural response) are designed to ensure that detection is pre-actuation wherever operationally feasible. The control is classified Detective rather than Preventive because the occlusion condition itself cannot be prevented by the autonomous agent — it is imposed by the environment or by physics — but the agent's response to detecting that condition is what prevents harm.
In public-sector and law-enforcement deployments, the governance rationale extends beyond safety into procedural fairness and legal accountability. When an autonomous system takes or recommends an action that affects a person's rights — including surveillance, identification, movement restriction, or enforcement action — the integrity of the perceptual basis for that action is a matter of legal significance, not merely engineering quality. Example 3.3 illustrates a scenario where the absence of sensor health logging made it structurally impossible for the deploying authority to defend or repudiate the technical basis for a consequential enforcement decision. The logging requirements in Section 4.3 are directly motivated by this accountability gap.
Pattern 6.1.1 — Signal Statistics Monitoring (Passive Occlusion Detection) For camera and imaging sensors, compute per-frame statistics including mean luminance, spatial variance, gradient magnitude, and frequency-domain energy across predefined zones of the sensor field. Establish rolling baselines over the preceding N frames under known-good conditions. Flag frames where multiple statistics simultaneously deviate from the baseline beyond a defined threshold as potentially occluded. This approach detects both full blockage (near-zero variance) and saturation (near-maximum mean with near-zero variance) without requiring a separate hardware diagnostic. Zone-level analysis (dividing the sensor field into a grid) enables detection of partial blockage affecting a sub-region of the field.
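A minimal sketch of the zone-level statistics check, assuming an 8-bit greyscale frame held as a NumPy array; the grid size and thresholds are illustrative placeholders, not certified values.

```python
import numpy as np

def zone_occlusion_flags(frame, grid=(4, 4), min_var=25.0, sat_mean=250.0):
    """Zone-level passive occlusion check (sketch of Pattern 6.1.1).

    Splits the frame into a grid and flags zones that are near-uniform
    (possible blockage) or near-maximum with near-zero variance
    (saturation). Thresholds are placeholders; derive real ones per 4.1.4.
    Returns a boolean grid, True = zone suspected occluded.
    """
    h, w = frame.shape
    gy, gx = grid
    flags = np.zeros(grid, dtype=bool)
    for i in range(gy):
        for j in range(gx):
            zone = frame[i * h // gy:(i + 1) * h // gy,
                         j * w // gx:(j + 1) * w // gx]
            var = float(zone.var())
            mean = float(zone.mean())
            blocked = var < min_var                        # near-zero variance
            saturated = mean > sat_mean and var < min_var  # railed high
            flags[i, j] = blocked or saturated
    return flags
```

Zone-level output makes partial blockage (as in the forklift example) visible even when frame-global variance remains high.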
Pattern 6.1.2 — Active Sensor Interrogation (Active Occlusion Detection) For sensors that support active probing — including LiDAR, radar, ultrasonic, and time-of-flight depth sensors — periodically direct a known signal pattern at a reference target at a fixed known location within the sensor's field. Compare the received signal against the expected return. Deviations in return time, amplitude, or pattern that cannot be explained by legitimate environmental change indicate occlusion or sensor degradation. This pattern requires the inclusion of calibration targets in the physical deployment environment (e.g., retroreflective panels at fixed positions in a warehouse, or embedded calibration surfaces in infrastructure deployments).
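A sketch of the comparison step, assuming a ranging sensor and a retroreflective calibration target at a surveyed position; the tolerance values are hypothetical stand-ins for figures established at commissioning time.

```python
def check_reference_return(measured_range_m, measured_amplitude,
                           expected_range_m, expected_amplitude,
                           range_tol_m=0.05, amp_ratio_min=0.5):
    """Active interrogation against a fixed target (sketch of Pattern 6.1.2).

    Compares a ranging sensor's return from a known calibration target
    against calibrated expectations. Tolerances are illustrative.
    Returns (healthy, reason).
    """
    if measured_range_m is None:
        return False, "no return from reference target (possible full blockage)"
    if abs(measured_range_m - expected_range_m) > range_tol_m:
        return False, "range error exceeds tolerance (possible scatter/refraction)"
    if measured_amplitude < amp_ratio_min * expected_amplitude:
        return False, "return amplitude degraded (possible contamination)"
    return True, "ok"
```

The ice-film scenario in the drone example would plausibly fail both the range and amplitude checks long before the SLAM output diverged.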
Pattern 6.1.3 — Cross-Modal Plausibility Gating Implement a consistency arbitration layer between the sensor fusion module and the action pipeline. This layer maintains a cross-modal agreement score for each observable region of the environment. Agreement is computed between modalities that share coverage of a region (e.g., LiDAR point density and camera-derived depth in the same frustum). When the agreement score for a region drops below a threshold, queries to the action pipeline that require knowledge of that region are flagged as "coverage-degraded" and routed through a restricted decision envelope. This pattern is particularly effective for detecting single-modality occlusion in multi-sensor systems.
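The per-region gating step might be sketched as follows. The ratio-based agreement metric and the 0.6 threshold are assumptions made for illustration; a production system would use a calibrated, modality-specific metric.

```python
def region_agreement(lidar_density, camera_depth_density, eps=1e-6):
    """Cross-modal agreement for one region (sketch of Pattern 6.1.3).

    Compares normalised LiDAR point density with camera-derived depth-point
    density for the same frustum. Returns a score in [0, 1]; 1 = perfect
    agreement. The ratio metric is an assumption for this sketch.
    """
    lo, hi = sorted((lidar_density, camera_depth_density))
    return (lo + eps) / (hi + eps)

def gate_region(lidar_density, camera_depth_density, threshold=0.6):
    """Route queries through a restricted envelope when agreement drops."""
    score = region_agreement(lidar_density, camera_depth_density)
    return "ok" if score >= threshold else "coverage-degraded"
```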
Pattern 6.1.4 — Environmental Occlusion Prediction Integration Integrate the system's mission planning or path planning module with environmental data sources that predict foreseeable occlusion conditions. For mobile platforms, solar position models, precipitation forecasts, and site-specific hazard maps (identifying dust zones, smoke zones, or structures that cast shadows on sensor fields at particular times of day or headings) can be used to schedule operations around known occlusion windows or to pre-alert operators. For fixed infrastructure sensors, seasonal and diurnal occlusion risk calendars derived from historical incident data provide actionable predictive capability.
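For fixed cameras, the pre-computed occlusion-risk calendar could be as simple as a lookup keyed by camera identifier. The entries below are hypothetical; real entries would be derived from each node's orientation and a solar ephemeris.

```python
from datetime import datetime, time

# Illustrative glare-risk calendar: per camera, (month range, local time
# range) windows during which low-angle sun strikes the lens.
GLARE_CALENDAR = {
    "cam_east_17": [((9, 11), (time(7, 14), time(7, 51)))],  # autumn mornings
}

def glare_risk(camera_id, when):
    """Return True if `when` falls inside a known glare window for the camera."""
    for (m_start, m_end), (t_start, t_end) in GLARE_CALENDAR.get(camera_id, []):
        if m_start <= when.month <= m_end and t_start <= when.time() <= t_end:
            return True
    return False
```

A positive lookup would drive the mandatory mitigation or acceptance record required by 4.5.2 rather than allowing the system to re-enter the same foreseeable condition.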
Pattern 6.1.5 — Degraded-Mode Action Envelopes Pre-define and validate a hierarchy of action envelopes corresponding to different levels of sensor availability. The full-capability envelope requires all sensors to be unoccluded and within calibration. Progressive degradation envelopes specify maximum speeds, minimum clearance distances, prohibited manoeuvre types, and mandatory human-clearance requirements for each sensor availability configuration. Implement these as hard-coded constraints in the action execution layer, not as recommendations from the planning layer. This ensures that degraded perception state automatically constrains what the system can physically do, rather than merely advising more cautious planning.
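One way to sketch the envelope hierarchy as hard execution-layer constraints. The limits, sensor names, and sensor-set-to-envelope mapping are all illustrative; real values belong in the validated Degraded-Mode Action Envelope Specification.

```python
ENVELOPES = {
    # Illustrative limits only (Pattern 6.1.5).
    "full":     {"max_speed_mps": 2.0, "min_clearance_m": 0.5, "reversing": True},
    "degraded": {"max_speed_mps": 0.8, "min_clearance_m": 1.5, "reversing": False},
    "minimal":  {"max_speed_mps": 0.3, "min_clearance_m": 3.0, "reversing": False},
}

def select_envelope(occluded_sensors):
    """Map the set of currently occluded safety-relevant sensors to an
    envelope. The mapping here is a hypothetical example."""
    if not occluded_sensors:
        return "full"
    if occluded_sensors <= {"camera_rear_left"}:   # single non-critical loss
        return "degraded"
    return "minimal"

def clamp_command(envelope_name, requested_speed_mps, manoeuvre):
    """Hard constraint in the action execution layer, not planner advice."""
    env = ENVELOPES[envelope_name]
    if manoeuvre == "reverse" and not env["reversing"]:
        return 0.0  # prohibited manoeuvre: refuse and force a re-plan
    return min(requested_speed_mps, env["max_speed_mps"])
```

Because `clamp_command` sits below the planner, a degraded perception state physically constrains the vehicle even if the planning layer ignores the occlusion alert.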
Pattern 6.1.6 — Tamper-Evident Occlusion Logging via Append-Only Audit Chain Implement the occlusion event log as an append-only data structure with cryptographic chaining of sequential entries (each entry includes a hash of the previous entry). This structure makes retrospective modification of log entries detectable. For deployments where regulatory or legal accountability is paramount, replicate the log to an off-platform store in near-real-time to prevent loss through platform damage or tampering.
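A minimal append-only chained log, hashing canonicalised JSON entries with SHA-256; the entry schema is illustrative, and off-platform replication is out of scope for this sketch.

```python
import hashlib
import json
import time

class OcclusionAuditLog:
    """Append-only occlusion log with cryptographic chaining (Pattern 6.1.6).

    Each entry embeds the SHA-256 hash of the previous entry, so any
    retrospective modification breaks chain verification.
    """
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event):
        entry = {"event": event, "prev_hash": self._last_hash,
                 "ts": event.get("ts", time.time())}
        payload = json.dumps(entry, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return self._last_hash

    def verify(self):
        """Recompute the chain; False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return True
```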
Anti-Pattern 6.2.1 — Reliance on Hardware Fault Flags as the Sole Occlusion Indicator Hardware fault signals indicate that a sensor has failed in a detectable electronic manner — disconnection, power loss, sensor element burnout, or communication timeout. They do not detect occlusion, saturation, contamination, or partial blockage of a physically functioning sensor. Systems that declare a sensor "healthy" solely on the basis of the absence of hardware fault flags will systematically fail to detect the most operationally common forms of sensor degradation. This anti-pattern is responsible for the failure chains in all three examples in Section 3.
Anti-Pattern 6.2.2 — Treating Synthetic In-Painting or Interpolation as Equivalent to Real Sensor Data Some perception architectures respond to detected coverage gaps by interpolating from adjacent valid measurements or using a generative model to "fill in" occluded regions. While this may maintain visual plausibility of the world model for visualisation purposes, it MUST NOT be used to satisfy the input requirements of an action pipeline without explicit disclosure of the synthesised nature of the data. Actions taken on the basis of synthesised data in occluded regions carry unmeasured epistemic uncertainty and may violate the safety assumptions of the action pipeline's design.
Anti-Pattern 6.2.3 — Suppressing Occlusion Alerts During High-Tempo Operations Some implementations suppress or throttle sensor health alerts during periods of high operational tempo on the grounds that interrupting the action pipeline for an occlusion alert reduces throughput. This reasoning inverts the priority ordering. High-tempo operations increase, rather than decrease, the consequence of degraded perception. Alert suppression should never be an operational parameter that can be adjusted to improve performance metrics.
Anti-Pattern 6.2.4 — Conflating "Low Confidence" with "Occluded" Low-confidence model outputs and occluded sensor inputs are distinct conditions requiring different responses. A low-confidence detection may indicate a genuinely ambiguous scene that the model is uncertain about — the sensor is working correctly, but the scene is hard. An occluded sensor may produce high-confidence outputs (such as the high-confidence "open space" misclassification discussed in the rationale above) on a scene representation that is entirely wrong. Detection mechanisms MUST NOT substitute model confidence as a proxy for sensor health.
Anti-Pattern 6.2.5 — Single-Point Occlusion Detection Architecture Implementing occlusion detection as a single module in the perception pipeline, rather than as a distributed cross-checking layer, creates a single point of failure: if the occlusion detection module itself is compromised, overloaded, or incorrectly configured, all sensors are de facto unmonitored. Occlusion detection should be implemented as a federated function with at least two independent mechanisms per safety-critical modality.
Industrial Robotics and Logistics: Self-occlusion from carried payloads is the dominant failure mode. Require automatic sensor coverage reassessment on each pick-and-place cycle and integrate payload geometry data (from the warehouse management system or barcode manifest) into the occlusion risk calculation.
Smart Infrastructure and Traffic Management: Diurnal and seasonal solar glare is predictable and schedulable. Operators should maintain a geospatially indexed calendar of glare windows for each camera node, cross-referenced against the node's orientation and local solar ephemeris, and implement mandatory manual oversight or automated override during these windows.
Law Enforcement and Public-Order Drones: Adversarial occlusion resistance (Section 4.9) is operationally significant. Deploy multi-spectral sensing where the budget allows, and ensure that the legal authorisation framework governing the use of sensor data explicitly requires that sensor health state be logged and preserved as part of any enforcement record.
Medical and Surgical Robotics: Tissue occlusion of surgical cameras (blood, debris, fluid) must be treated as a safety-critical occlusion event. Pre-define halting criteria referenced to camera clarity metrics and require surgeon confirmation before resuming autonomous or semi-autonomous action after an occlusion event.
| Level | Capability |
|---|---|
| Level 1 — Initial | Hardware fault flags only; no independent occlusion detection; no occlusion logging |
| Level 2 — Basic | Single-modality signal statistics monitoring for primary sensor; manual review of anomalies |
| Level 3 — Managed | Multi-modality occlusion detection; automated behavioural response for detected occlusions; structured occlusion logging |
| Level 4 — Integrated | Cross-sensor consistency gating; degraded-mode action envelopes; predictive environmental occlusion assessment |
| Level 5 — Optimised | Federated occlusion detection with independent mechanisms per modality; adversarial occlusion detection; tamper-evident audit chain; continuous improvement from post-incident occlusion analysis |
| Artefact | Description | Retention |
|---|---|---|
| Occlusion Risk Analysis Document | Systematic enumeration of foreseeable occlusion scenarios per modality, consequence assessment, and specified detection and response for each scenario (per 4.7.1) | Lifetime of system plus 7 years |
| Sensor Coverage Map | Documented field-of-view geometry for each sensor modality, including self-occlusion analysis per payload configuration (per 4.8.1) | Lifetime of system plus 7 years |
| Occlusion Threshold Specification | Quantitative thresholds for each occlusion class per modality with derivation rationale (per 4.1.4) | Lifetime of system plus 7 years |
| Degraded-Mode Action Envelope Specification | Pre-defined action constraints for each sensor availability configuration, with validation evidence (per 6.1.5) | Lifetime of system plus 7 years |
| Safety Case — Occlusion Chapter | Integration of occlusion governance requirements into the system safety case including residual risk acceptance statements | Lifetime of system plus 10 years |
| Artefact | Description | Retention |
|---|---|---|
| Occlusion Event Log | Tamper-evident, time-synchronised log of all detected occlusion events (per 4.3.1 through 4.3.4) | Greater of 90 days, applicable sector requirement, or duration of open investigation |
| Sensor Health Status Stream | Continuous record of sensor health parameters (not merely hardware fault flags) at operational resolution | 30 days minimum; longer if an occlusion event occurs within the window |
| Cross-Sensor Agreement Scores | Logged record of inter-sensor consistency check outcomes at operational frequency (per 4.2.3) | 30 days minimum |
| Operator Notification Records | Record of all occlusion-triggered human notifications including delivery confirmation and acknowledgement status (per 4.6) | 90 days minimum |
| Predictive Occlusion Assessment Records | Logs of environmental occlusion risk assessments and any pre-emptive operational adjustments (per 4.5.3) | 30 days minimum |
| Artefact | Description | Retention |
|---|---|---|
| Post-Incident Occlusion Analysis | Forensic reconstruction of sensor health state during the period surrounding any safety incident, regulatory inquiry, or rights-relevant enforcement action | 7 years from incident date or until conclusion of all related proceedings |
| Occlusion Risk Analysis Update Record | Version-controlled record of revisions to the Occlusion Risk Analysis following incident review or domain expansion (per 4.7.2) | Lifetime of system plus 7 years |
Objective: Verify that the occlusion detection mechanism operates independently of hardware fault flags and can detect each specified occlusion class continuously during active operation.
Method: For each sensor modality in scope, conduct a controlled occlusion injection test in a representative operational environment, applying each of the four occlusion classes defined in 4.1.2 in turn:

- (a) full blockage: opaque cover over the sensing aperture;
- (b) partial blockage: cover 40 percent of the sensing aperture;
- (c) saturation: an appropriate stimulus (overexposure for optical sensors, strong specular return for ranging sensors);
- (d) signal degradation: calibrated attenuation inserted into the signal path.

In each case, verify that the sensor's hardware interface continues to report a fault-free status (confirming the occlusion does not generate a hardware fault flag), and measure the time from occlusion onset to the occlusion detection event in the system log. Perform all tests with the system in active operational mode executing a representative task.
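A single injection trial from the method above could be scripted along the following lines. This is a hedged sketch, not a prescribed harness: the three callables stand in for the system under test, whose actual interfaces the method leaves to the implementer.

```python
import time

def measure_detection_latency(apply_occlusion, read_hw_fault_flag,
                              poll_occlusion_log, timeout_s=10.0,
                              poll_interval_s=0.01):
    """One occlusion injection trial (illustrative). The callables are
    assumptions: apply_occlusion actuates the cover/attenuator for this
    occlusion class, read_hw_fault_flag queries the sensor's hardware
    interface, poll_occlusion_log returns the logged detection event
    once it appears (else None)."""
    t_onset = time.monotonic()
    apply_occlusion()
    deadline = t_onset + timeout_s
    while time.monotonic() < deadline:
        # The occlusion must NOT surface as a hardware fault (the sensor
        # keeps streaming); detection must come from the occlusion
        # subsystem itself, independent of hardware fault flags.
        assert not read_hw_fault_flag(), "hardware flag tripped: invalid trial"
        event = poll_occlusion_log()
        if event is not None:
            return time.monotonic() - t_onset  # detection latency, seconds
        time.sleep(poll_interval_s)
    return None  # not detected within the timeout: the trial fails
```

A trial returning `None`, or tripping the hardware-flag assertion, would be recorded as a failed injection for that occlusion class and modality.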
Pass Criteria:
Conformance Scoring:
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Sensor Occlusion Detection Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-594 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-594 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system; Clause 8.2 requires AI risk assessment. Sensor Occlusion Detection Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without sensor occlusion detection governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation: an undetected occlusion is a binary absence of control, under which the agent continues to act at full authority on a world model that is silently incomplete. The immediate consequence is uncontrolled agent action within the scope of AG-594, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.