AG-545

Near-Miss Telemetry Governance

Transport, Logistics & Autonomous Mobility · AGS v2.1 · April 2026

Frameworks: EU AI Act · NIST · ISO 42001

Section 2: Summary

This dimension governs the systematic capture, classification, transmission, storage, and structured learning from near-miss events and anomalous manoeuvres generated by autonomous and semi-autonomous transport agents operating in physical environments. Near-miss telemetry occupies a uniquely critical position in transport safety governance: unlike actual collisions, near misses are high-frequency leading indicators of latent system failures that, if unanalysed, accumulate silently until a catastrophic threshold is crossed. Failure in this dimension manifests as silent data loss during edge-to-cloud transmission, inconsistent classification thresholds that prevent cross-fleet pattern recognition, absence of closed-loop learning pipelines connecting field events to model retraining or policy update cycles, and regulatory non-compliance when near-miss records cannot be produced in the format and retention window required by national or supranational transport authorities.

Section 3: Examples

Example 3.1 — Undetected Sensor Occlusion Pattern Across Urban Delivery Fleet

Units in a fleet of 340 last-mile autonomous delivery robots operating across a single metropolitan area log a standard forward-clearance warning whenever a pedestrian steps into a crosswalk at a distance of 1.4 metres from the robot's projected path. The event is classified locally as a routine proximity alert (severity tier 2 of 5) and transmitted to the fleet management platform over a congested 4G uplink. Due to packet fragmentation under uplink saturation conditions, 38% of telemetry payloads for this event class are silently dropped at the edge gateway without raising any acknowledgment failure, meaning the cloud analytics layer never receives them. Over 47 operating days, 2,104 near-miss events of this type go unrecorded. A safety engineer reviewing monthly aggregated incident rates sees no anomalous pedestrian-interaction signal. On day 51, a unit encounters the same proximate-pedestrian scenario at night under reduced ambient lighting, fails to decelerate in time, and causes a minor collision resulting in a fractured wrist. Post-incident forensic recovery of on-device logs reveals the 47-day accumulation of the suppressed event class. Regulatory investigators find that the operator cannot produce a complete near-miss record and assess a penalty of €420,000 under national transport safety regulations, with a further requirement to suspend fleet operations for audit. The entire failure chain originates from the absence of acknowledged-delivery confirmation on near-miss telemetry packets.

Example 3.2 — Cross-Border Classification Mismatch Concealing Highway Lane-Change Anomalies

An autonomous freight carrier operates a corridor of 18 heavy goods vehicles across three EU member states. Each state's national regulatory framework defines a "near-miss" differently: State A uses a time-to-collision (TTC) threshold of less than 1.5 seconds; State B uses a lateral clearance threshold of less than 0.8 metres; State C uses a combined TTC-and-deceleration metric requiring both TTC < 2.0 seconds and deceleration > 0.4 g. The fleet operator implements a single firmware-level classification filter calibrated to State B's standard because that is the jurisdiction of the operator's registered headquarters. On the State A and State C highway segments, 613 events over six months meet those states' local near-miss definitions but do not meet State B's lateral-clearance criterion, and are therefore classified as non-events and discarded. A mandatory joint inspection by the three states' transport authorities — triggered by a separate unrelated incident — surfaces the classification gap. The authorities determine that a recurring lane-change instability pattern, present in all 613 suppressed events, correlates with a known firmware defect. That defect had already been patched for a separate vehicle class six weeks earlier; had the telemetry been correctly classified and shared, the patch would have been applied to the freight corridor firmware within two weeks of the defect's identification, eliminating all 613 subsequent near-miss conditions. The operator faces mandatory cross-border incident reporting obligations under Article 73 of the EU AI Act (reporting of serious incidents by providers of high-risk systems) and loses its operating licence in State C for 90 days pending a conformance audit.

Example 3.3 — Learning Pipeline Disconnection Following Organisational Restructuring

A regional autonomous rail system operates 22 light-rail units equipped with an AI-based obstacle detection and emergency braking system. The system is governed by a safety engineering team that maintains a near-miss review board meeting fortnightly, ingesting classified telemetry events and issuing model retraining tickets when event frequency for any class exceeds a defined threshold. Following a corporate merger, the safety engineering team is reorganised and the near-miss review board is suspended pending a new governance charter. The suspension lasts 19 weeks. During this period, 87 near-miss events are recorded and correctly transmitted to the central telemetry store, but no review board convenes to evaluate them. Within those 87 events, 14 belong to a previously unseen class: obstacle detection failures occurring at platform edges during simultaneous rain and low-angle sunlight conditions — a sensor fusion edge case not present in the training distribution. In week 22 post-merger, a unit fails to detect a maintenance worker at a platform edge under identical conditions and applies emergency braking only 0.9 seconds before projected contact, stopping with 2.1 metres of clearance. A near-miss review of that specific event is performed reactively. The review reveals the 14 preceding events of the same class, all of which had been sitting unread in the telemetry store for up to 18 weeks. Had the learning pipeline remained connected, the novel event class would have been escalated after the third occurrence (week 4 of the suspension) and a model patch would have been issued within the standard 10-business-day retraining cycle. The 0.9-second stop margin represents an unacceptable safety residual that, under slightly different conditions (higher approach speed, wet rail), would have resulted in a fatality. The regulator issues a formal improvement notice requiring governance continuity controls for near-miss learning pipelines that survive organisational change events.

Section 4: Requirement Statement

4.0 Scope

This dimension applies to any autonomous or semi-autonomous agent operating in a transport or logistics environment where the agent's physical movement, trajectory planning, or actuation decisions create the possibility of collision, obstruction, or hazardous interaction with persons, other vehicles, infrastructure, or cargo. Scope includes road vehicles (passenger and freight), rail units, maritime autonomous surface vessels operating in confined waters, aerial delivery agents operating below regulated airspace separation minima, and warehouse or depot robotic systems operating in shared human-robot environments. Scope extends to the full near-miss telemetry lifecycle: event detection at the agent level, local classification, transmission to aggregation infrastructure, storage, cross-fleet analytics, regulatory reporting, and structured feedback into safety-relevant model or policy update cycles. Out of scope: purely administrative or software-only agents with no physical actuation capability; telemetry relating to non-safety performance metrics (energy consumption, route efficiency) unless those metrics are co-located in the same data pipeline as safety telemetry and their processing could interfere with safety record integrity.

4.1 Near-Miss Event Detection

4.1.1 The agent MUST implement one or more continuously active detection mechanisms capable of identifying events that meet the near-miss definition applicable in the jurisdiction of operation, including time-to-collision thresholds, lateral clearance minima, anomalous deceleration profiles, unplanned emergency stop activations, unexpected trajectory deviations exceeding configured bounds, and sensor-reported environmental anomalies that triggered a safety-relevant decision.

4.1.2 Detection mechanisms MUST operate independently of the agent's primary planning and control stack such that a failure in the planning stack does not suppress near-miss event detection.

4.1.3 The agent MUST maintain a locally persistent near-miss event buffer with a minimum retention capacity sufficient to store 72 hours of events at the 99th percentile event rate observed across the fleet, protected against power loss using non-volatile storage.

4.1.4 Where an agent operates across multiple jurisdictions with differing near-miss thresholds, the agent MUST apply the most conservative (lowest threshold) applicable classification at point of capture rather than filtering to a single jurisdiction's standard.

4.1.5 Detection thresholds MUST be versioned and the active version MUST be included in each telemetry record's metadata payload.

4.2 Event Classification

4.2.1 Each detected near-miss event MUST be assigned a severity classification using a taxonomy that includes at minimum: (a) event class identifier drawn from a published taxonomy maintained by the operator or relevant regulatory body; (b) estimated severity score on a defined ordinal scale of no fewer than four levels; (c) the primary causal attribution category (sensor failure, perception error, planning error, external actor, infrastructure condition, or unknown); and (d) whether a human override was initiated.

4.2.2 Classification MUST be performed deterministically at the edge and MUST NOT rely on connectivity to a remote model or cloud inference endpoint as a prerequisite for initial classification.

4.2.3 Where the on-device classifier assigns a causal attribution of "unknown," the event MUST be automatically escalated to the highest severity tier for transmission and review purposes.

4.2.4 The operator MUST maintain a documented classification schema that is version-controlled, publicly available to regulatory authorities on request, and reviewed at minimum annually or following any event in which post-incident investigation reveals that the classification schema produced a materially incorrect severity assignment.

4.3 Telemetry Transmission Integrity

4.3.1 Near-miss telemetry records MUST be transmitted using a protocol that provides end-to-end delivery acknowledgment at the record level, such that the transmitting agent can confirm that a specific record has been received and committed by the receiving infrastructure.

4.3.2 In the absence of a delivery acknowledgment within a configurable timeout period (default SHOULD be no greater than 300 seconds), the agent MUST retain the record in its local buffer and reattempt transmission using an exponential backoff schedule.
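A minimal sketch of 4.3.1 through 4.3.3, assuming a caller-supplied `send` callable that returns True only once the receiving infrastructure confirms the record is committed (the function and parameter names are hypothetical):

```python
import time

def backoff_schedule(base_s: float = 5.0, cap_s: float = 300.0):
    """Exponential backoff delays, capped at the acknowledgment timeout ceiling."""
    delay = base_s
    while True:
        yield delay
        delay = min(delay * 2, cap_s)

def transmit_with_ack(record: dict, send, delays=None,
                      max_attempts: int = 8) -> bool:
    """Keep retrying until the record is acknowledged as committed (4.3.1).
    The record is only removed from the local buffer by the caller, and
    only after this returns True (4.3.3)."""
    delays = delays if delays is not None else backoff_schedule()
    for _attempt, delay in zip(range(max_attempts), delays):
        if send(record):     # True only on confirmed commit by the receiver
            return True
        time.sleep(delay)    # a production agent would reschedule, not block
    return False             # record stays buffered for the next cycle
```

The blocking sleep is for illustration only; on an agent this loop runs as a background daemon over the persistent buffer.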

4.3.3 Telemetry records MUST NOT be deleted from local storage until a confirmed delivery acknowledgment has been received.

4.3.4 The operator MUST monitor the ratio of transmitted records to acknowledged records across the fleet and MUST generate an operational alert if that ratio falls below 0.995 (99.5% delivery confirmation rate) within any rolling 24-hour window for any individual agent or any geographic segment of the fleet.
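The 4.3.4 monitor reduces to a rolling-window ratio of acknowledged to transmitted records per agent or segment. A sketch with illustrative class and method names:

```python
from collections import deque

class DeliveryRatioMonitor:
    """Tracks transmitted vs acknowledged records over a rolling window
    and raises an alert below the 99.5% confirmation floor (4.3.4)."""
    def __init__(self, window_s: float = 24 * 3600, floor: float = 0.995):
        self.window_s, self.floor = window_s, floor
        self.events = deque()  # (timestamp, acknowledged?) pairs

    def record(self, ts: float, acked: bool) -> None:
        self.events.append((ts, acked))

    def ratio(self, now: float) -> float:
        # Evict entries older than the rolling window before computing.
        while self.events and self.events[0][0] < now - self.window_s:
            self.events.popleft()
        if not self.events:
            return 1.0
        acked = sum(1 for _, a in self.events if a)
        return acked / len(self.events)

    def alert(self, now: float) -> bool:
        return self.ratio(now) < self.floor
```

One monitor instance per agent and per geographic segment satisfies the requirement's granularity; the alert itself would feed the operator's normal operational alerting channel.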

4.3.5 Telemetry payloads MUST be cryptographically signed at the point of creation using a key bound to the specific agent's hardware security module or equivalent tamper-resistant key store, such that the receiving infrastructure can verify that the record was generated by the attested agent and has not been modified in transit.
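A shape-only sketch of 4.3.5. Standard-library HMAC with a per-device secret stands in here for what would, in production, be an asymmetric signature generated inside the agent's hardware security module; the envelope layout is an assumption:

```python
import hashlib
import hmac
import json

def sign_record(record: dict, device_key: bytes) -> dict:
    """Attach a signature computed over a canonical serialisation (4.3.5).
    HMAC is a stand-in: a real agent signs with an HSM-resident key."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    return {"payload": record, "signature": sig}

def verify_record(envelope: dict, device_key: bytes) -> bool:
    """Receiving infrastructure re-derives the signature to detect tampering."""
    payload = json.dumps(envelope["payload"], sort_keys=True,
                         separators=(",", ":")).encode()
    expected = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(envelope["signature"], expected)
```

Canonical serialisation (sorted keys, fixed separators) matters: signer and verifier must byte-identically reproduce the payload or verification fails spuriously.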

4.3.6 The agent MUST include a monotonically increasing sequence number in each telemetry record, and the receiving infrastructure MUST flag and investigate any gaps in the sequence that are not accounted for by acknowledged deletion.
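Gap flagging under 4.3.6 reduces to finding missing runs in each agent's observed sequence numbers (acknowledged deletions, if any, would be subtracted before this check):

```python
def find_gaps(seen: list[int]) -> list[tuple[int, int]]:
    """Return (first_missing, last_missing) ranges in an agent's
    sequence numbers, for investigation under 4.3.6."""
    gaps = []
    ordered = sorted(set(seen))
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    return gaps

print(find_gaps([1, 2, 3, 7, 8, 12]))  # [(4, 6), (9, 11)]
```

Run per agent over the receiving infrastructure's committed records; any returned range is, by construction, telemetry the agent claims to have created but the store never received.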

4.4 Storage and Retention

4.4.1 Near-miss telemetry records MUST be retained in their original unmodified form for a minimum of seven years from the date of the event, or for such longer period as is required by the applicable regulatory framework in the jurisdiction(s) of operation.

4.4.2 The storage system MUST implement write-once or append-only semantics for raw telemetry records, preventing retroactive modification of original event data.

4.4.3 Derived analytics products, summaries, and classification overrides generated through post-hoc review MUST be stored as separate linked records and MUST NOT replace or modify the original raw record.

4.4.4 The operator MUST maintain a documented data lineage record for each near-miss event that traces the event from on-device capture through transmission, storage, any classification review, and any resulting action taken.

4.4.5 Storage systems MUST be geographically replicated with a recovery point objective of no greater than one hour for near-miss records classified at severity tier 3 or above.

4.5 Cross-Fleet and Cross-Jurisdiction Aggregation

4.5.1 The operator MUST aggregate near-miss telemetry across all agents in a fleet into a unified analytics environment capable of identifying patterns that are not visible at the individual-agent level, including recurrent event classes, geographic clustering, time-of-day correlations, and firmware or model version correlations.

4.5.2 Where the fleet operates across multiple jurisdictions, the aggregation layer MUST store the jurisdiction-specific classification alongside the operator's unified classification for each event, enabling both local regulatory reporting and cross-fleet pattern analysis without loss of jurisdictional fidelity.

4.5.3 The operator MUST implement automated anomaly detection over the aggregated telemetry stream capable of surfacing novel event classes — event types not previously observed in the fleet's history — within 48 hours of the third occurrence of that class across any combination of agents.

4.5.4 Fleet-level anomaly signals MUST be routed to a named human review function within the operator's safety governance structure within four hours of the automated signal being raised.

4.6 Structured Learning Pipeline

4.6.1 The operator MUST maintain a documented near-miss learning pipeline that defines the process by which classified and reviewed near-miss events inform updates to: (a) on-device perception or planning models; (b) operational policy parameters such as speed limits, geofenced restricted zones, or time-of-day operational constraints; (c) training data curation for future model versions; and (d) simulation scenario libraries used in pre-deployment testing.

4.6.2 The learning pipeline MUST be governed by a formally constituted review function — a near-miss review board or equivalent — that meets at a defined cadence of no less than once per calendar fortnight during active fleet operations.

4.6.3 The near-miss review board MUST include at minimum one safety engineer with formal qualification in functional safety (or equivalent), one representative from the AI/ML model development function, and one representative empowered to authorise operational policy changes.

4.6.4 The operator MUST define, document, and enforce escalation thresholds that trigger an extraordinary review board session outside the standard fortnightly cadence, including: any single event classified at the highest severity tier; any event class whose frequency increases by more than 50% within a 14-day rolling window; any novel event class identified under 4.5.3.
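The 4.6.4 triggers compose into a single predicate. A hedged sketch with invented names, comparing 14-day rolling counts for the frequency condition:

```python
def extraordinary_session_needed(current_14d: int, previous_14d: int,
                                 highest_tier_event: bool,
                                 novel_class: bool) -> bool:
    """Escalation triggers from 4.6.4: any highest-severity event, a >50%
    frequency increase over a 14-day rolling window, or a novel event class."""
    frequency_spike = previous_14d > 0 and current_14d > previous_14d * 1.5
    return highest_tier_event or frequency_spike or novel_class
```

The rolling counts would come from the aggregation layer described in Section 4.5; the predicate itself is deliberately simple so that the trigger logic is auditable.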

4.6.5 The near-miss review board MUST document all decisions, including decisions not to act, with explicit rationale, and those records MUST be retained as part of the operator's safety case.

4.6.6 The learning pipeline MUST survive organisational change events. The operator MUST designate a governance continuity owner responsible for ensuring that the near-miss review function is formally reconstituted within 10 business days of any merger, acquisition, restructuring, or senior personnel departure that affects the board's quorum.

4.7 Regulatory Reporting

4.7.1 The operator MUST produce near-miss telemetry reports in the format and at the frequency required by each jurisdiction in which the fleet operates, without requiring manual reformatting or re-extraction of data from the primary telemetry store.

4.7.2 The operator MUST maintain a regulatory reporting matrix that maps each applicable jurisdiction's near-miss reporting requirements to the fields in the operator's telemetry schema, is reviewed at minimum annually, and is updated within 30 days of any regulatory change.

4.7.3 For cross-border incidents — near-miss events occurring within 10 kilometres of a jurisdictional boundary or events whose causal chain involves infrastructure or actors from multiple jurisdictions — the operator MUST notify all relevant regulatory authorities within the timeframe required by the most stringent applicable national requirement.

4.7.4 Regulatory reports MUST be generated directly from the immutable primary telemetry store and MUST be accompanied by a cryptographic hash of the source records used, enabling regulators to verify the report's fidelity to the underlying data.
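One way to satisfy 4.7.4 is an order-independent digest over canonically serialised source records. A Merkle tree would serve equally well; this flat construction is simply the smallest sketch:

```python
import hashlib
import json

def report_digest(records: list[dict]) -> str:
    """Deterministic hash over the source records backing a regulatory
    report (4.7.4): canonical per-record digests, sorted so the result
    is independent of retrieval order, then hashed together."""
    leaf = sorted(
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    )
    return hashlib.sha256("".join(leaf).encode()).hexdigest()
```

The regulator recomputes the digest over the records the operator identifies as the report's sources; any mismatch indicates the report and the immutable store have diverged.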

4.8 Human Oversight and Escalation

4.8.1 The operator MUST maintain a 24/7 operational safety function capable of receiving automated escalation alerts from the near-miss telemetry system and taking operational interventions — including remote operational constraint, speed reduction, or fleet suspension — within defined response time windows commensurate with the severity classification of the triggering event.

4.8.2 The operator MUST define and document maximum autonomous operation periods for each agent class — periods during which no human has reviewed any near-miss events for that agent — and MUST enforce a mandatory review trigger when that period is exceeded.

4.8.3 Any near-miss event in which the agent's decision was later determined to be inconsistent with its operational design domain MUST be escalated to the operator's chief safety officer or equivalent within 24 hours.

4.9 Third-Party and Supplier Obligations

4.9.1 Where near-miss detection, classification, or telemetry transmission capabilities are provided by a third-party supplier, the operator MUST contractually require that supplier to comply with the technical requirements of this dimension and MUST conduct at minimum an annual conformance audit of the supplier's implementation.

4.9.2 The operator MUST retain ownership and control of all near-miss telemetry records generated by its fleet, regardless of which party operates the telemetry infrastructure, and MUST ensure that supplier agreements do not permit the supplier to delete, aggregate, or transform raw records without the operator's explicit written authorisation.

4.9.3 Third-party suppliers providing on-device classification models MUST provide the operator with sufficient documentation of the model's classification logic to enable the operator to satisfy its obligations under Section 4.2.4, and MUST notify the operator within 5 business days of any identified defect in the classification model that could result in material misclassification of safety-relevant events.

Section 5: Rationale

5.1 Why Near-Miss Telemetry Is a Structural Safety Control, Not an Operational Nicety

The transport safety literature, dating from Heinrich's triangle (1931) through to more recent probabilistic safety models such as the Swiss cheese model and STAMP (Systems-Theoretic Accident Model and Processes), consistently establishes that for every fatal or serious injury event there exists a statistically predictable precursor population of near misses and minor incidents sharing the same causal factors. In autonomous transport systems, this relationship is not merely probabilistic but mechanistic: the same perception model, planning algorithm, or sensor configuration that produces a near miss under condition set A will produce a collision under condition set B (different speed, different lighting, different surface friction). Near-miss telemetry is therefore not a retrospective reporting obligation but a prospective safety instrument. An operator that collects, analyses, and acts on near-miss data at scale is not merely being diligent — it is operating a functional leading-indicator safety system. An operator that does not is flying blind.

5.2 Why Detective Control Is the Appropriate Classification

This dimension is classified as Detective rather than Preventive because its primary function is to surface latent conditions after they have manifested as near misses, enabling corrective action before those conditions produce harm. The Preventive controls in this landscape (operational safety envelope monitoring, geofencing, speed governance) are designed to stop individual incidents. Near-Miss Telemetry Governance operates at a different temporal scale — it is the feedback mechanism by which the preventive controls are themselves improved. Without this feedback loop, preventive controls become static against a dynamic operational environment.

5.3 The Edge-Cloud Integrity Problem

Autonomous transport agents at the edge operate under conditions that are structurally hostile to reliable telemetry transmission: intermittent connectivity, competing bandwidth demands from primary operational data streams, power interruptions, and firmware update cycles that may reset transmission state. The requirements in Section 4.3 — delivery acknowledgment, local buffering, cryptographic signing, sequence numbering — address this structural reality. The failure mode in Example 3.1 is not an exotic edge case; it is the default failure mode of any telemetry system that does not explicitly engineer for it. Silent packet loss is particularly dangerous because it produces no operational alert, no error state, and no visible gap in dashboards that aggregate by received records rather than expected records.

5.4 The Cross-Jurisdictional Classification Problem

Autonomous transport is inherently cross-jurisdictional in a way that most regulated industries are not. A truck crossing from one EU member state to another during a single working shift may be subject to two or three materially different near-miss classification standards within hours. The conventional solution — harmonising to a single standard — is politically and practically unavailable in the near term. The requirements in Sections 4.1.4, 4.5.2, and 4.7.2 adopt a practical alternative: capture at the most conservative threshold, store all applicable classifications in parallel, and maintain a regulatory reporting matrix that enables jurisdiction-specific reporting without re-engineering the underlying data. This approach is more expensive to implement but is the only approach that simultaneously satisfies local regulatory obligations and enables cross-fleet learning.

5.5 Organisational Resilience of the Learning Pipeline

Example 3.3 illustrates a failure mode that is structurally distinct from data loss or classification error: the learning pipeline exists and functions correctly, but the human governance layer that acts on its outputs is suspended due to organisational change. This failure is common in AI governance generally — the technical infrastructure outlasts the governance process it was designed to support. Section 4.6.6's requirement for a governance continuity owner and a 10-business-day reconstitution obligation is specifically designed to close this gap. The requirement is deliberately operational rather than aspirational: it imposes a named individual, a defined maximum gap, and a documented reconstitution process, all of which are auditable.

Section 6: Implementation Guidance

Pattern 6.1.1 — Dual-Write Local Buffer Architecture

Implement near-miss event capture as a dual-write operation: simultaneously writing to the agent's primary on-device event log and to a dedicated near-miss telemetry buffer resident on a separate storage partition with an independent power path. The dedicated buffer should implement a circular log with configurable retention depth. When connectivity is available, a background transmission daemon reads from the buffer, transmits records with acknowledgment confirmation, and marks confirmed records for eventual compaction (but not deletion, pending a configurable grace period). This architecture isolates near-miss record integrity from failures in the primary operational data pipeline.
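The confirm-then-compact lifecycle above can be sketched as follows. The class is a hypothetical in-memory stand-in; a real buffer writes to non-volatile storage on its own partition:

```python
class NearMissBuffer:
    """Sketch of the dedicated buffer in Pattern 6.1.1: records are marked
    confirmed on acknowledgment but only compacted after a grace period."""
    def __init__(self, grace_s: float = 3600.0):
        self.grace_s = grace_s
        self.records = {}       # seq -> record (write-once)
        self.confirmed_at = {}  # seq -> timestamp of delivery confirmation

    def write(self, seq: int, record: dict) -> None:
        self.records[seq] = record

    def confirm(self, seq: int, now: float) -> None:
        self.confirmed_at[seq] = now  # never delete on confirmation alone

    def compact(self, now: float) -> int:
        """Remove only records whose confirmation has aged past the grace
        period; unconfirmed records are untouchable (4.3.3)."""
        expired = [s for s, t in self.confirmed_at.items()
                   if now - t >= self.grace_s]
        for s in expired:
            del self.records[s], self.confirmed_at[s]
        return len(expired)
```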

Pattern 6.1.2 — Jurisdiction-Aware Classification Engine

Implement the on-device classification engine as a pluggable rules engine whose active ruleset is determined at startup by a jurisdiction configuration file loaded from a cryptographically signed manifest. The manifest maps geofenced zones to applicable classification rulesets and supports real-time ruleset switching when the agent crosses a jurisdictional boundary. The active ruleset version is embedded in every telemetry record. This enables the operator to maintain a single firmware image while supporting jurisdiction-specific classification without conditional logic branches in the core detection code.
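A sketch of manifest-driven ruleset resolution, assuming signature verification has already happened at load time. The manifest fields and zone names are invented for illustration:

```python
# Hypothetical manifest structure; a production manifest would be
# cryptographically signed and verified before load (Pattern 6.1.2).
MANIFEST = {
    "version": "rules-2026.04",
    "zones": [
        {"zone": "state_a_highway", "ruleset": "state_a_v3"},
        {"zone": "state_b_urban", "ruleset": "state_b_v7"},
    ],
    "default_ruleset": "conservative_union_v1",
}

def active_ruleset(zone: str, manifest: dict = MANIFEST) -> dict:
    """Resolve the classification ruleset for the agent's current geofenced
    zone, falling back to the conservative union when the zone is unmapped."""
    for entry in manifest["zones"]:
        if entry["zone"] == zone:
            name = entry["ruleset"]
            break
    else:
        name = manifest["default_ruleset"]
    # The ruleset version travels with every telemetry record (4.1.5).
    return {"ruleset": name, "manifest_version": manifest["version"]}
```

Falling back to the conservative union, rather than to any single jurisdiction's ruleset, keeps unmapped zones consistent with requirement 4.1.4.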

Pattern 6.1.3 — Event Class Registry with Novelty Detection

Maintain a centralised event class registry that records the first occurrence date, cumulative frequency, and last-occurrence date for every observed event class across the fleet. Route all incoming classified telemetry through a novelty detection filter that queries the registry: if the event class has fewer than three prior occurrences, generate an automatic escalation signal. This implements the 4.5.3 requirement without requiring a full anomaly detection model, using a simple frequency-based signal that is transparent, auditable, and computationally inexpensive.
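The registry's novelty filter is essentially a counter with an escalation floor. A minimal sketch (dates omitted, names illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class EventClassRegistry:
    """Frequency-based novelty filter (Pattern 6.1.3 / requirement 4.5.3):
    escalate while an event class has fewer than three prior occurrences."""
    counts: dict = field(default_factory=dict)
    escalation_floor: int = 3

    def observe(self, event_class: str) -> bool:
        """Record one occurrence; return True if an escalation signal fires."""
        prior = self.counts.get(event_class, 0)
        self.counts[event_class] = prior + 1
        return prior < self.escalation_floor
```

The first three occurrences of any class each escalate, which is what guarantees the 48-hour-of-third-occurrence bound in 4.5.3 regardless of which agents the events came from.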

Pattern 6.1.4 — Immutable Telemetry Store with Append-Only Write Path

Implement the central telemetry storage layer using an append-only log structure (conceptually equivalent to an event sourcing pattern) where raw records are written once and never modified. All derived views — severity summaries, regulatory reports, trend dashboards — are computed over the immutable log via read-only query paths. Classification overrides and post-hoc annotations are stored as separate linked records with explicit reference to the original record's hash. This satisfies Section 4.4.2 and Section 4.4.3 simultaneously while enabling rich analytics without compromising evidentiary integrity.
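A condensed sketch of the append-only log with hash-linked overrides; persistence, access control, and the full audit fields of Anti-Pattern 6.2.6 are omitted for brevity:

```python
import hashlib
import json

class ImmutableTelemetryStore:
    """Append-only raw log plus linked annotation records (4.4.2 / 4.4.3):
    overrides reference the original record's hash, never mutate it."""
    def __init__(self):
        self.log = []          # write-once raw records
        self.annotations = []  # linked post-hoc records

    @staticmethod
    def digest(record: dict) -> str:
        return hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()

    def append(self, record: dict) -> str:
        self.log.append(record)
        return self.digest(record)

    def override_classification(self, record_hash: str, new_severity: int,
                                reviewer: str, rationale: str) -> None:
        self.annotations.append({
            "ref": record_hash, "new_severity": new_severity,
            "reviewer": reviewer, "rationale": rationale,
        })

    def effective_severity(self, record: dict) -> int:
        """Latest linked override wins; the raw record remains untouched."""
        h = self.digest(record)
        sev = record["severity"]
        for a in self.annotations:
            if a["ref"] == h:
                sev = a["new_severity"]
        return sev
```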

Pattern 6.1.5 — Near-Miss Review Board Scheduling as a System Dependency

Treat the near-miss review board meeting schedule as a first-class operational dependency of the fleet management system. The fleet management platform should track the date of the last completed review board session, the number of unreviewed classified events, and the number of open escalation signals, and should surface a fleet operational status indicator that degrades from green to amber if the review board has not met within the standard fortnightly cadence plus a 48-hour grace period. This operationalises Section 4.6.6's continuity requirement by making the governance gap visible in the same operational dashboard used to monitor fleet performance.
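The degradation rule is simple enough to state directly; the thresholds follow the fortnightly-plus-48-hours wording in the pattern, and the function name is illustrative:

```python
def board_status(days_since_last_session: float, grace_hours: int = 48) -> str:
    """Fleet status indicator for review-board cadence (Pattern 6.1.5):
    fortnightly cadence (14 days) plus a 48-hour grace period, then amber."""
    limit_days = 14 + grace_hours / 24
    return "green" if days_since_last_session <= limit_days else "amber"
```

Wiring this into the same dashboard as fleet performance metrics is the point: a suspended review board (Example 3.3) becomes an operational alarm rather than a silent governance gap.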

6.2 Explicit Anti-Patterns

Anti-Pattern 6.2.1 — Fire-and-Forget Telemetry Transmission

Implementing telemetry upload as a UDP-style fire-and-forget operation with no delivery acknowledgment, on the grounds that network reliability is "good enough" in primary operating areas. This is the failure mode of Example 3.1. Network reliability in operational transport environments is never uniformly good enough for safety-critical data. The cost of implementing acknowledgment-based transmission is low relative to the cost of silent data loss.

Anti-Pattern 6.2.2 — Single Jurisdiction Classification Filter

Configuring the on-device classification engine with a single jurisdiction's thresholds and treating events that do not meet those thresholds as non-events at the point of capture. This is the failure mode of Example 3.2. Once a potential near-miss event is discarded at capture, it cannot be retrospectively recovered. The correct approach is to capture at the most conservative applicable threshold and filter for reporting purposes downstream.

Anti-Pattern 6.2.3 — Near-Miss Telemetry Processed in the Same Pipeline as Performance Telemetry

Routing near-miss telemetry through the same ingestion pipeline as performance telemetry (fuel consumption, route deviation, cargo temperature, etc.) without priority differentiation. Under high-load conditions, performance telemetry volume will dominate the pipeline and near-miss records will be queued, delayed, or dropped. Near-miss telemetry should be carried on a logically or physically separate ingestion path with head-of-line priority.

Anti-Pattern 6.2.4 — Aggregated Summaries as the System of Record

Replacing raw near-miss telemetry records with daily or weekly aggregated summaries in the long-term storage tier after a short raw-record retention window (e.g., 90 days). Aggregated summaries cannot satisfy evidentiary requirements in post-incident investigation, cannot support cross-fleet pattern analysis at the event level, and cannot be used for model retraining. Raw records must be the system of record for the full regulatory retention period.

Anti-Pattern 6.2.5 — Near-Miss Learning Pipeline Tied to a Specific Individual

Implementing the near-miss review board and learning pipeline as informal processes dependent on a single safety champion or subject matter expert, with no documented handover procedures, no quorum rules, and no formal governance charter. This is the structural precondition for the failure in Example 3.3. Governance processes that are person-dependent, rather than role-dependent with documented succession, are inherently fragile.

Anti-Pattern 6.2.6 — Post-Hoc Classification Overrides Without Audit Trail

Permitting reviewers to modify the classification of a near-miss event in the primary telemetry store without creating a linked audit record of the original classification, the new classification, the reviewer's identity, and the rationale. Classification overrides are legitimate and necessary — automated classifiers are imperfect — but they must be transparent and auditable.

6.3 Industry Considerations

Autonomous Road Vehicles: The primary regulatory frameworks (EU AI Act High-Risk classification, UN Regulation 157 on ALKS, UNECE WP.29) require systematic incident and anomaly recording. Near-miss telemetry governance must align with the event data recorder requirements under these frameworks while extending beyond the minimum collision-triggered recording window to capture the broader class of near-miss events.

Autonomous Rail: Rail safety regimes typically operate under a safety case model where the operator must demonstrate that all foreseeable hazards have been identified and mitigated to ALARP (As Low As Reasonably Practicable). Near-miss telemetry is a primary evidence source for the ongoing validity of the safety case. The learning pipeline in Section 4.6 directly supports the hazard log maintenance obligation in most national rail safety regimes.

Maritime Autonomous Surface Vessels: The IMO's Maritime Autonomous Surface Ships (MASS) regulatory framework is still maturing, but the SOLAS requirement for voyage data recorder (VDR) retention of 12 months provides a minimum floor. Near-miss telemetry governance for MASS systems should treat the VDR as a component of the near-miss buffer rather than a substitute for it, given the VDR's limited classification and analytics capability.

Warehouse and Depot Robotics: Operating in environments shared with human workers, where the relevant safety standard is typically ISO 10218 (robot safety) or ISO/TS 15066 (collaborative robots), near-miss events include any protective stop triggered within the operator's configurable safety zone, any unexpected contact detection event, and any path planning failure that resulted in an unplanned static stop. These systems typically have excellent connectivity and low transmission latency, but near-miss governance frameworks designed for on-road vehicles may need to be adapted for the much higher event frequency typical in warehouse environments.

6.4 Maturity Model

| Level | Descriptor | Characteristics |
| --- | --- | --- |
| 1 — Initial | Ad hoc | Near-miss events captured only when they trigger visible incidents; no systematic telemetry; no analytics; no learning pipeline |
| 2 — Defined | Basic capture | Structured on-device capture with local storage; manual review of severe events; no cross-fleet aggregation; regulatory reporting relies on manual extraction |
| 3 — Managed | Systematic transmission | Acknowledged transmission to central store; cross-fleet aggregation; regular (though less frequent than fortnightly) review board; partial learning pipeline connection |
| 4 — Quantitatively Managed | Full governance | All Section 4 MUSTs met; automated novelty detection; fortnightly review board with documented decisions; automated regulatory reporting; formally governed learning pipeline |
| 5 — Optimising | Continuous improvement | Closed-loop learning with measurable near-miss frequency reduction correlated with model/policy updates; predictive escalation based on precursor signals; cross-industry near-miss data sharing where permitted |

Section 7: Evidence Requirements

7.1 Artefacts Required for Conformance Demonstration

| Artefact | Description | Retention Period |
| --- | --- | --- |
| Near-miss telemetry raw record archive | Complete, immutable, cryptographically signed records for all near-miss events across all agents | 7 years minimum; longer if required by applicable jurisdiction |
| Transmission acknowledgment log | Per-record acknowledgment receipts with timestamps, enabling verification of delivery confirmation rate | 7 years minimum |
| Sequence number gap analysis report | Periodic (minimum monthly) automated report identifying any gaps in agent-level sequence numbers, with documented resolution for each gap | 3 years |
| Classification schema documentation | Version-controlled taxonomy, threshold definitions, and causal attribution categories with effective dates | Duration of operation plus 7 years |
| Jurisdiction regulatory reporting matrix | Current mapping of each jurisdiction's near-miss reporting requirements to the operator's telemetry schema, with version history | Duration of operation plus 3 years |
| Near-miss review board minutes | Dated records of all board sessions, events reviewed, decisions taken, escalations raised, and actions assigned with owners and due dates | 7 years minimum |
| Learning pipeline action log | Record of all model retraining requests, policy parameter changes, simulation scenario additions, and training data updates initiated as a result of near-miss review, with traceability to the triggering event(s) | 7 years minimum |
| Governance continuity documentation | Named governance continuity owner, documented reconstitution procedure, and records of any reconstitution events | Duration of operation plus 3 years |
| Third-party supplier audit records | Annual conformance audit reports for any third-party provider of near-miss detection, classification, or telemetry infrastructure | 5 years |
| Regulatory submission records | All reports submitted to regulatory authorities, with cryptographic hash linking to source telemetry records | 10 years |
| Data lineage records | Per-event lineage traces from capture through transmission, storage, review, and action | 7 years |
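The sequence number gap analysis report above can be driven by a simple gap detector. The sketch below assumes per-agent, monotonically increasing sequence numbers; the function name `find_sequence_gaps` is illustrative:

```python
def find_sequence_gaps(seqs):
    """Return inclusive (gap_start, gap_end) ranges of missing per-agent
    sequence numbers; each returned range warrants a documented resolution."""
    ordered = sorted(set(seqs))
    gaps = []
    for prev, curr in zip(ordered, ordered[1:]):
        if curr - prev > 1:
            # Records prev+1 .. curr-1 were assigned but never received.
            gaps.append((prev + 1, curr - 1))
    return gaps
```

Run per agent over each reporting window; any non-empty result indicates records that were captured on-device but never confirmed in the central store, which is precisely the silent-loss mode described in Example 3.1.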

7.2 Evidence Quality Standards

All artefacts must be producible within 5 business days of a regulatory request. Raw telemetry records must be producible both in their original format and in any format specified by the applicable jurisdiction's regulatory reporting standards. Evidence must be accompanied by a chain-of-custody attestation signed by the operator's designated data governance officer. Cryptographic signatures on raw records must remain verifiable for the full retention period, which requires proactive key management: signing keys and verification infrastructure must be maintained for as long as the records they cover.
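As an illustration of the sign/verify pairing that must survive the retention period, the sketch below uses a stdlib HMAC purely for simplicity; a production archive would more plausibly use asymmetric signatures (e.g. Ed25519) so that verification does not require custody of the signing secret:

```python
import hashlib
import hmac

def sign_record(record_bytes: bytes, key: bytes) -> str:
    """Compute a MAC over the raw record bytes (stdlib-only illustration;
    a real archive would typically use asymmetric signatures instead)."""
    return hmac.new(key, record_bytes, hashlib.sha256).hexdigest()

def verify_record(record_bytes: bytes, key: bytes, tag: str) -> bool:
    """Constant-time verification; possible only while the key material
    referenced in the evidence quality standard above is retained."""
    return hmac.compare_digest(sign_record(record_bytes, key), tag)
```

The point of the key-management requirement is visible in the second function: if the key (or, in the asymmetric case, the verification key and its trust chain) is lost, the archive's integrity claims become unverifiable even though the records themselves are intact.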

Section 8: Test Specification

8.1 Test: Local Buffer Persistence Under Connectivity Loss

Maps to: 4.1.3, 4.3.2, 4.3.3

Procedure: Simulate a connectivity outage of 72 hours for a test agent operating at the 99th percentile near-miss event rate observed in

Section 9: Regulatory Mapping

| Regulation | Provision | Relationship Type |
| --- | --- | --- |
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |

EU AI Act — Article 9 (Risk Management System)

Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Near-Miss Telemetry Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-545 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.

NIST AI RMF — GOVERN 1.1, MAP 3.2, MANAGE 2.2

GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-545 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.

ISO 42001 — Clause 6.1, Clause 8.2

Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Near-Miss Telemetry Governance implements a risk treatment control within the AI management system, directly satisfying the requirement for structured risk mitigation.

Section 10: Failure Severity

| Field | Value |
| --- | --- |
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |

Consequence chain: Without near-miss telemetry governance, the governance framework has a structural gap, and precursor signals accumulate unobserved. The failure mode is insidious rather than abrupt: suppressed, dropped, or unclassified events silently erode the evidence base until a catastrophic threshold is crossed, at which point the operator can neither explain the incident nor demonstrate what it knew and when. The immediate consequence is unmanaged risk within the scope of AG-545, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.

Cite this protocol
AgentGoverning. (2026). AG-545: Near-Miss Telemetry Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-545