AG-167

Sensor, Telemetry and External State Integrity Governance

Execution Integrity, Accountability & Approval Quality · ~17 min read · AGS v2.1 · April 2026
EU AI Act · FCA · NIST · ISO 42001

2. Summary

Sensor, Telemetry and External State Integrity Governance requires that every AI agent validate the integrity, freshness, and provenance of sensor data, telemetry feeds, and external state inputs before using them to make decisions or take actions. AI agents increasingly operate on real-world data — temperature readings, GPS coordinates, market prices, inventory levels, network metrics, health monitors, and environmental sensors. If this data is stale, spoofed, corrupted, or selectively omitted, the agent's decisions become unreliable regardless of how sound its reasoning is. The principle is straightforward: an agent that acts on unverified external state is an agent that can be manipulated by controlling its inputs. AG-167 requires structural controls that verify data integrity at the point of ingestion, detect anomalies that suggest tampering or degradation, and prevent the agent from acting on data that fails integrity checks — shifting the trust boundary from "the agent trusts its inputs" to "the agent verifies its inputs against defined integrity criteria before acting."

3. Example

Scenario A — Stale Price Feed Causes Incorrect Arbitrage: A financial trading agent monitors real-time price feeds from three exchanges to identify arbitrage opportunities. Exchange A's price feed experiences a network interruption lasting 47 seconds. During the interruption, the agent continues to read the last cached price from Exchange A (GBP 142.30) while Exchanges B and C update to GBP 148.70. The agent identifies what appears to be a GBP 6.40-per-unit arbitrage opportunity and executes a buy of 10,000 units on Exchange A at the stale price. The order arrives at Exchange A, which has already updated to GBP 149.10. The buy leg fills at the current market price; with the offsetting sell leg at GBP 148.70, the round trip closes at a GBP 4,000 loss instead of the expected GBP 64,000 profit, a GBP 68,000 swing against the agent.

What went wrong: The agent treated cached data as current data. No staleness check verified that the Exchange A price feed was within an acceptable freshness window (e.g., less than 2 seconds old). The 47-second gap was not detected because no heartbeat or timestamp validation was performed. Consequence: a GBP 68,000 adverse swing from expected profit to realised loss, potential FCA investigation for inadequate algorithmic trading controls under MiFID II Article 17.
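The missing control in Scenario A is a freshness gate at the point of use. A minimal sketch follows; the `Quote` type, the `MAX_STALENESS_SECONDS` value, and the error class are illustrative assumptions, not part of any specific feed API.

```python
# Hypothetical freshness gate: reject any quote older than a configured
# staleness threshold before it can feed a trading decision.
import time
from dataclasses import dataclass
from typing import Optional

MAX_STALENESS_SECONDS = 2.0  # per-feed threshold; would come from configuration


@dataclass
class Quote:
    exchange: str
    price: float
    timestamp: float  # epoch seconds assigned by the feed, not by a local cache


class StaleDataError(Exception):
    pass


def require_fresh(quote: Quote, now: Optional[float] = None) -> Quote:
    """Return the quote only if it is within the freshness window; raise otherwise."""
    age = (now if now is not None else time.time()) - quote.timestamp
    if age > MAX_STALENESS_SECONDS:
        raise StaleDataError(
            f"{quote.exchange} quote is {age:.1f}s old (limit {MAX_STALENESS_SECONDS}s)"
        )
    return quote
```

With such a gate in place, the 47-second-old cached quote from Exchange A would raise `StaleDataError` and the arbitrage order would never be submitted.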

Scenario B — Spoofed Sensor Reading Causes Physical Harm: An industrial control agent monitors a chemical reactor's temperature via a network-connected sensor. An attacker compromises the sensor's firmware and injects readings showing a stable 85°C when the actual temperature is 143°C and rising. The agent, trusting the spoofed reading, does not trigger the cooling system. A redundant thermocouple on a separate network segment reads 147°C, but the agent's decision logic uses the primary sensor by default and only queries the redundant sensor for logging purposes. The reactor overheats, causing a pressure release event that damages equipment worth EUR 2.3 million and injures two operators.

What went wrong: The agent had no mechanism to cross-validate sensor readings against redundant sources. The spoofed reading was within the normal operating range, so no anomaly detection flagged it. No cryptographic authentication verified that the reading originated from the genuine sensor. The redundant sensor was available but was not part of the decision-critical data path. Consequence: EUR 2.3 million equipment damage, two injuries, regulatory investigation under IEC 61511, insurance claim disputed on grounds of inadequate safety instrumented system design.
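The redundancy gap in Scenario B can be closed by putting the redundant sensor in the decision-critical path. A minimal sketch, in which the divergence limit, the names, and the conservative take-the-higher-reading policy are illustrative assumptions:

```python
# Hypothetical cross-validation: a control decision may only use a temperature
# reading that agrees with an independent redundant sensor.

DIVERGENCE_LIMIT_C = 5.0  # maximum tolerated disagreement between sensors, in °C


class SensorDivergenceError(Exception):
    pass


def validated_temperature(primary_c: float, redundant_c: float) -> float:
    """Return a decision-grade reading only if both sensors agree.

    On divergence the caller must fail safe (e.g. trigger cooling and
    escalate to an operator) rather than trust either reading alone.
    """
    if abs(primary_c - redundant_c) > DIVERGENCE_LIMIT_C:
        raise SensorDivergenceError(
            f"primary {primary_c}°C vs redundant {redundant_c}°C "
            f"diverge beyond {DIVERGENCE_LIMIT_C}°C"
        )
    # Conservative policy: act on the higher (more dangerous) of the two readings.
    return max(primary_c, redundant_c)
```

Under this check, the spoofed 85°C primary reading against the 147°C thermocouple would raise immediately, forcing a fail-safe response instead of silent inaction.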

Scenario C — Selective Telemetry Omission Hides Resource Exhaustion: A cloud infrastructure agent manages auto-scaling based on CPU utilisation, memory usage, and network throughput telemetry from 50 application servers. An internal actor with access to the telemetry pipeline configures a filter that drops telemetry from the 8 servers running the organisation's financial reporting application. The agent sees only 42 servers, all reporting normal utilisation. The 8 hidden servers reach 98% CPU utilisation and begin dropping transactions. The agent does not scale up because its view of the fleet shows adequate capacity. Financial report generation fails during month-end close, delaying regulatory filings by 36 hours.

What went wrong: The agent had no mechanism to verify completeness of telemetry — it processed whatever data arrived without validating that all expected sources were reporting. No inventory reconciliation checked that the number of telemetry sources matched the known fleet size. Consequence: 36-hour delay in regulatory filing, potential FCA enforcement for late reporting, GBP 50,000 in manual recovery costs, reputational damage with the regulator.
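The inventory reconciliation missing in Scenario C is a small amount of code. A minimal sketch, assuming an authoritative fleet inventory is available; the names and error class are illustrative:

```python
# Hypothetical completeness check: the set of hosts actually reporting
# telemetry must match the known fleet inventory exactly.


class IncompleteTelemetryError(Exception):
    pass


def check_completeness(expected_hosts: set, reporting_hosts: set) -> None:
    """Raise if any expected source is silent or an unknown source appears."""
    missing = expected_hosts - reporting_hosts
    unexpected = reporting_hosts - expected_hosts
    if missing:
        raise IncompleteTelemetryError(f"silent sources: {sorted(missing)}")
    if unexpected:
        # Unknown reporters may indicate spoofed or misregistered sources.
        raise IncompleteTelemetryError(f"unregistered sources: {sorted(unexpected)}")
```

Run against Scenario C, a 50-host inventory reconciled with 42 reporting hosts would flag the 8 filtered servers on the first evaluation cycle rather than after a failed month-end close.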

4. Requirement Statement

Scope: This dimension applies to all AI agents that consume external data to inform decisions or trigger actions. External data includes sensor readings, telemetry streams, market data feeds, inventory levels, system metrics, health check results, GPS coordinates, environmental measurements, and any other data originating outside the agent's own reasoning process. The scope covers both real-time streaming data and point-in-time queries to external state stores. It includes data from trusted internal sources (internal monitoring systems) and untrusted external sources (third-party APIs, public data feeds, IoT sensors in uncontrolled environments). Read-only agents consuming external data for informational purposes are within scope if the data influences outputs that humans rely on for decisions — an agent that reports a false temperature reading to a human operator is a governance concern even if the agent itself takes no action.

4.1. A conforming system MUST validate the freshness of every external data input against a defined maximum acceptable staleness threshold before using the data to make decisions or take actions.

4.2. A conforming system MUST authenticate the provenance of sensor and telemetry data, verifying that data originates from a registered and expected source, using cryptographic signatures, mutual TLS, or equivalent mechanisms.

4.3. A conforming system MUST detect and reject data inputs that fall outside physically plausible or operationally expected ranges, using predefined bounds or statistical anomaly models.

4.4. A conforming system MUST verify completeness of data inputs by reconciling the set of reporting sources against a known inventory of expected sources, detecting omissions within a defined detection window.

4.5. A conforming system MUST fail safe when data integrity checks fail — blocking or deferring the dependent action rather than proceeding with unverified data.

4.6. A conforming system MUST log all data integrity validation results, including passed checks, failed checks, and the action taken in response to failures, in a tamper-evident record per AG-006.

4.7. A conforming system SHOULD cross-validate critical data inputs against at least one independent source before taking high-impact actions (actions exceeding a defined value threshold or affecting safety-critical systems).

4.8. A conforming system SHOULD implement heartbeat monitoring for continuous data streams, detecting feed interruptions within a configurable detection window (e.g., 5 seconds for real-time trading, 60 seconds for infrastructure monitoring).

4.9. A conforming system MAY implement data quality scoring that degrades gracefully — using data with lower confidence for lower-impact decisions while requiring higher confidence for higher-impact actions.
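The graceful-degradation idea in 4.9 can be sketched as a confidence floor per impact tier. The tiers, threshold values, and function names below are illustrative assumptions, not mandated values:

```python
# Hypothetical quality-scoring sketch: each external input carries a
# confidence score, and higher-impact actions demand higher minimum confidence.

IMPACT_THRESHOLDS = {
    "low": 0.5,     # e.g. dashboard display
    "medium": 0.8,  # e.g. advisory recommendation to a human
    "high": 0.95,   # e.g. trade execution or physical control action
}


def may_act(impact_tier: str, data_confidence: float) -> bool:
    """Permit an action only if the data confidence meets the tier's floor."""
    return data_confidence >= IMPACT_THRESHOLDS[impact_tier]
```

Under this scheme a degraded feed (say, confidence 0.9 after a missed heartbeat) can still drive low-impact outputs while high-impact actions are deferred until confidence recovers.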

5. Rationale

The integrity of an AI agent's decisions is bounded by the integrity of its inputs. This is not a new principle — it is the foundational concept of "garbage in, garbage out" — but it acquires new urgency when the agent operates at machine speed, across many data sources, with real-world consequences. A human operator receiving a suspicious sensor reading will pause, question it, check another source, or call a colleague. An AI agent will process the reading and act in milliseconds unless structural controls force it to validate first.

The attack surface is broad. Sensor data can be spoofed by compromising the sensor itself, intercepting the data in transit, or manipulating the data at rest. Telemetry can be selectively omitted by filtering streams, dropping messages, or deregistering sources. External state queries can return stale data if caching layers are not properly invalidated. Market data feeds can be manipulated by injecting artificial quotes or delaying genuine ones. In each case, the agent sees a plausible but incorrect view of the world, and acts on it.
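One defence against in-transit manipulation and source spoofing, alongside signatures and mutual TLS, is a keyed MAC on each reading. A minimal sketch with deliberately simplified key handling; a real deployment would use per-sensor keys from a secrets manager with rotation:

```python
# Hypothetical provenance check: the sensor signs each payload with a shared
# key; the ingestion gateway verifies the tag before accepting the reading.
import hashlib
import hmac


def sign_reading(key: bytes, payload: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the raw reading payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify_reading(key: bytes, payload: bytes, tag: str) -> bool:
    """Constant-time comparison guards against timing attacks on the tag."""
    expected = sign_reading(key, payload)
    return hmac.compare_digest(expected, tag)
```

A payload altered in transit (or injected by a compromised intermediary that lacks the key) fails verification, so the spoofed value never reaches the decision logic.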

The consequences scale with the agent's authority and the domain. A trading agent acting on stale prices can lose millions in seconds. An industrial control agent acting on spoofed sensor data can cause physical harm. A healthcare agent acting on incorrect vital signs can make dangerous clinical recommendations. A logistics agent acting on incorrect inventory data can disrupt supply chains.

AG-167 addresses this by requiring validation at the point of ingestion — before the data enters the agent's decision process. The validation includes freshness (is the data current?), provenance (did it come from the expected source?), plausibility (is the value within expected ranges?), and completeness (are all expected sources reporting?). When validation fails, the agent must fail safe — blocking or deferring the dependent action rather than proceeding with potentially compromised data.
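The four ingestion-point validations and the fail-safe rule described above compose naturally into a single gate. A minimal sketch with stubbed checks and illustrative names; the check implementations themselves are assumptions supplied by the caller:

```python
# Hypothetical composite ingestion gate: run the configured checks in order
# and fail safe (block the dependent action) on the first failure.
from typing import Callable, Dict, List, Tuple

Check = Callable[[Dict], bool]


def ingestion_gate(reading: Dict, checks: List[Tuple[str, Check]]) -> Tuple[bool, str]:
    """Return (allowed, reason); a failed check blocks rather than warns.

    The returned reason should be written to the tamper-evident log
    required by 4.6 regardless of outcome.
    """
    for name, check in checks:
        if not check(reading):
            return False, f"blocked: {name} check failed"
    return True, "allowed"
```

For example, wiring in freshness and plausibility checks as lambdas (`[("freshness", lambda r: r["age_s"] <= 2.0), ("plausibility", lambda r: 0.0 <= r["value"] <= 200.0)]`) blocks the 47-second-old reading from Scenario A at the first check.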

6. Implementation Guidance

Sensor and telemetry integrity governance requires a layered approach: transport-level security, ingestion-point validation, cross-source verification, and continuous monitoring. No single layer is sufficient on its own.

Recommended patterns:

- Validate at a single ingestion gateway, so that every external input passes freshness, provenance, plausibility, and completeness checks before it reaches decision logic.
- Place redundant sensors in the decision-critical data path, not merely in the logging path.
- Reconcile the set of reporting sources against an authoritative fleet or sensor inventory on every evaluation cycle.
- Treat a missed heartbeat on a continuous stream as an integrity failure, not as an absence of news.
- Fail safe on any check failure: block or defer the dependent action and record the result in the tamper-evident log.

Anti-patterns to avoid:

- Treating cached values as current because no newer value has arrived.
- Relying on a single primary sensor while redundant sources are queried only for logging.
- Plausibility checks that test only "within normal operating range", which a within-range spoofed value passes.
- Processing whatever telemetry arrives without verifying that all expected sources reported.
- Logging integrity failures while allowing the dependent action to proceed anyway.

Industry Considerations

Financial Services. Market data integrity is regulated under MiFID II Article 17 for algorithmic trading systems. Price feeds must be validated for freshness and accuracy before execution. The FCA expects firms to implement pre-trade controls that include data quality checks. Staleness thresholds for high-frequency trading may be as low as 100 milliseconds.

Healthcare. Medical device telemetry — vital signs monitors, infusion pumps, ventilators — must be validated per FDA 21 CFR Part 11 and IEC 62304. Anomalous readings must trigger clinical alerts, not automated clinical decisions. The IEC 80001 series provides guidance on networked medical device risk management.

Critical Infrastructure. Industrial sensor integrity is governed by IEC 62443 for cybersecurity and IEC 61511 for safety instrumented systems. Safety-critical sensors must be on dedicated, segregated networks. The concept of Safety Integrity Level (SIL) directly maps to the level of redundancy and cross-validation required for sensor data used in safety-critical decisions.

Autonomous Vehicles and Robotics. Embodied agents rely on LIDAR, radar, camera, and IMU data. Sensor fusion algorithms must account for individual sensor degradation or spoofing. ISO 21448 (SOTIF — Safety of the Intended Functionality) addresses the scenario where the agent behaves as designed but the sensor data is insufficient or misleading.

Maturity Model

Basic Implementation — Freshness checks are implemented for external data feeds, rejecting data older than a configured threshold. Basic range-bound plausibility checks reject readings outside defined min/max values. Data sources authenticate via API keys or network-level controls. Failed checks are logged and the dependent action is blocked. Coverage: at least 90% of external data inputs are subject to freshness and range validation.

Intermediate Implementation — All basic capabilities plus: cryptographic provenance verification for sensor and telemetry data. Completeness monitoring reconciles reporting sources against a known inventory. Heartbeat monitoring detects feed interruptions within a configurable window. Statistical anomaly detection supplements range-bound checks for critical data sources. Cross-validation against redundant sources is implemented for high-value or safety-critical decisions. Coverage: 100% of external data inputs validated; all safety-critical data cross-validated.

Advanced Implementation — All intermediate capabilities plus: independent adversarial testing has attempted to spoof sensors, inject stale data, selectively omit telemetry, and manipulate data in transit, and all attempts were detected. Data quality scoring enables graceful degradation — the agent adjusts its decision authority based on the confidence level of available data. The system can demonstrate to regulators that no known attack vector against data integrity can cause the agent to act on compromised inputs without detection.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Staleness Detection and Rejection

Test 8.2: Provenance Spoofing Resistance

Test 8.3: Plausibility Range Enforcement

Test 8.4: Completeness Detection — Source Omission

Test 8.5: Fail-Safe on Integrity Check Failure

Test 8.6: Cross-Validation Divergence Detection

Test 8.7: Heartbeat Interruption Detection

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU AI Act | Article 15 (Accuracy, Robustness and Cybersecurity) | Direct requirement
MiFID II | Article 17 (Algorithmic Trading) | Direct requirement
IEC 62443 | SR 3.5 (Input Validation) | Direct requirement
IEC 61511 | Clause 11 (SIS Design and Engineering) | Supports compliance
NIST AI RMF | MAP 2.3, MANAGE 2.2 | Supports compliance
ISO 42001 | Clause 6.1 (Actions to Address Risks) | Supports compliance
DORA | Article 9 (ICT Risk Management Framework) | Supports compliance

EU AI Act — Article 15 (Accuracy, Robustness and Cybersecurity)

Article 15 requires that high-risk AI systems achieve appropriate levels of accuracy and robustness. An agent that acts on unverified sensor data cannot meet accuracy requirements because its decisions are only as accurate as its inputs. The robustness requirement explicitly covers resilience to attempts by unauthorised third parties to manipulate inputs — directly addressing sensor spoofing and telemetry manipulation.

MiFID II — Article 17 (Algorithmic Trading)

Article 17 requires investment firms using algorithmic trading systems to have effective systems and risk controls, including pre-trade controls. Data quality is a foundational pre-trade control — an algorithm trading on stale or manipulated price data violates the requirement for effective risk controls. The FCA and ESMA have both issued guidance emphasising that data quality controls are within the scope of Article 17 obligations.

IEC 62443 — SR 3.5 (Input Validation)

IEC 62443 requires industrial automation and control systems to validate inputs at the boundary between security zones. For AI agents consuming sensor data in industrial environments, this maps directly to the ingestion gateway validation requirement. The security level (SL) classification determines the rigour of validation required.

IEC 61511 — Safety Instrumented Systems

IEC 61511 requires safety instrumented systems to achieve a defined Safety Integrity Level. Sensor integrity — including redundancy, cross-validation, and fault detection — is a core requirement. AI agents that act on sensor data in safety-critical environments must meet the same sensor integrity requirements as traditional safety instrumented systems.

10. Failure Severity

Field | Value
Severity Rating | Critical
Blast Radius | Domain-specific — localised for single-sensor failures, potentially catastrophic for systematic data manipulation

Consequence chain: Without sensor and telemetry integrity governance, an AI agent's decisions are only as trustworthy as the least-secured data input it consumes. The immediate technical failure is a decision based on incorrect data — a trade at the wrong price, a control action based on a false reading, a scaling decision based on incomplete telemetry. The operational impact depends on the domain: in financial services, a stale price feed can cause losses scaling from thousands to millions of pounds per incident, with MiFID II enforcement exposure. In industrial control, a spoofed sensor reading can cause physical harm, equipment damage, and environmental release — the Stuxnet attack demonstrated that manipulated sensor readings could cause centrifuges to destroy themselves while operators saw normal readings. In healthcare, incorrect vital sign telemetry can lead to missed clinical interventions. In all domains, the failure is insidious because the agent's reasoning appears sound — the error is in the inputs, not the logic, making post-incident diagnosis difficult without comprehensive ingestion logging. Systematic manipulation of multiple data sources can cause widespread operational failures that correlate across systems, amplifying the blast radius.

Cross-references: AG-006 (Tamper-Evident Record Integrity) for immutable logging of all data ingestion events; AG-011 (Action Reversibility and Settlement Integrity) for reversing actions taken on compromised data; AG-019 (Human Escalation & Override Triggers) for escalation when data integrity checks fail; AG-033 (Implied Authority Detection) for detecting data inputs that implicitly claim authority; AG-049 (Governance Decision Explainability) for explaining how data integrity influenced decisions; AG-159 (Trusted Timestamp and Temporal Ordering Governance) for verifying temporal integrity of data timestamps.

Cite this protocol
AgentGoverning. (2026). AG-167: Sensor, Telemetry and External State Integrity Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-167