AG-575

Electronic-Warfare Interference Handling Governance

Defence, Dual-Use & National Security · ~22 min read · AGS v2.1 · April 2026

Section 2: Summary

This dimension governs the detection of, response to, and recovery from electronic-warfare (EW) interference events—including radiofrequency jamming, Global Navigation Satellite System (GNSS) spoofing, datalink hijacking, radar deception, and directed electromagnetic pulse (EMP) effects—affecting AI agents operating in defence, dual-use, or mission-critical environments. It matters because AI agents that rely on electromagnetic channels for sensing, localisation, communication, or actuation command can be rendered unreliable, deceptive, or actively dangerous within seconds of a well-executed EW attack, without generating any conventional cybersecurity alert. Failure manifests as an agent that confidently acts on false positional data to deliver kinetic effect at the wrong target, a swarm that loses coordination and enters a fragmented autonomous mode, or an autonomous ground vehicle that navigates into a denied area because its GNSS receiver accepted a spoofed constellation fix with sub-metre confidence.

Section 3: Examples

Example 1 — GNSS Spoofing of an Autonomous Logistics UAV (Forward Operating Base Resupply)

A medium-altitude autonomous rotary-wing UAV is tasked with a 47 km resupply run to a forward operating base. At waypoint 3, an adversary transmitter generates a synthesised GPS L1/L2 constellation with a 200-metre eastward position bias and a gradually increasing velocity error. The UAV's onboard AI navigation agent accepts the spoofed fix because the signal strength is 6 dB above the authentic constellation average—consistent with a ground-based repeater geometry—and the spoofed almanac is cryptographically unsigned (civilian SPS). The agent does not cross-reference inertial navigation system (INS) dead-reckoning accumulated error (which has grown to 180 metres over 14 minutes) against the GNSS jump of 200 metres because the system designer configured INS-GNSS fusion with a Kalman filter innovation gate set at 250 metres—wide enough to accept the spoofed fix without a consistency alarm. The UAV routes 2.3 km into a prohibited fire zone. Detection occurs only when the vehicle enters the weapons engagement zone of a friendly air-defence battery, which queries the mission-management system and finds a 2.1 km track divergence. Consequence: mission abort via RF override, cargo jettisoned, UAV recovers on battery reserve. Had the air-defence query been delayed by 40 seconds, automatic engagement rules would have been triggered.

Root cause: No GNSS authentication (Galileo OS-NMA or GPS MNSA not implemented), Kalman innovation gate sized for noise not for adversarial offset, no INS/GNSS divergence alarm threshold below the gate boundary.
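The gate failure described here can be sketched numerically. The positions and gate widths below are illustrative values taken from the example narrative, not a real fusion implementation:

```python
import math

def innovation_gate_accepts(predicted_pos, gnss_fix, gate_m):
    """Accept a GNSS update only if the innovation (distance between the
    INS-predicted position and the GNSS fix) falls inside the gate."""
    innovation = math.dist(predicted_pos, gnss_fix)
    return innovation <= gate_m

# Example 1 geometry (illustrative): INS dead-reckoning has drifted
# ~180 m; the spoofed fix is offset 200 m east of the predicted position.
predicted = (0.0, 0.0)      # INS-predicted position (local metres)
spoofed = (200.0, 0.0)      # spoofed GNSS fix, 200 m east

# Gate sized for noise (250 m): the spoofed fix passes without alarm.
assert innovation_gate_accepts(predicted, spoofed, gate_m=250.0)

# Gate sized adversarially (below the offset that matters for the
# safety case): the same fix is rejected and an alarm can fire.
assert not innovation_gate_accepts(predicted, spoofed, gate_m=150.0)
```

The point of the sketch is that both runs use identical sensor data; only the gate parameter determines whether the spoofed fix is silently absorbed or flagged.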

Example 2 — Barrage Jamming Degrading a Ground Unmanned Vehicle Swarm (Urban Clearance Mission)

A six-unit autonomous ground vehicle (AGV) swarm is executing an urban route-clearance mission. Each AGV communicates on a 5.8 GHz mesh protocol for swarm-coordination messages, which include obstacle maps, lead-vehicle waypoints, and e-stop propagation. At T+0, an adversary deploys a spot-noise jammer at 28 dBm effective radiated power centred on 5.8 GHz, achieving a 40 dB jamming-to-signal ratio within a 150-metre radius. Within 4 seconds, inter-vehicle latency climbs from 12 ms to 1,800 ms. At T+6 s, AGV-3's AI coordination agent classifies the link as "severely degraded" and falls back to its pre-programmed degraded-mode behaviour: continue on last known waypoint at reduced speed. AGV-1 and AGV-2 retain partial connectivity and issue a halt command; the command never reaches AGV-3 through AGV-6. AGV-4, operating fully autonomously, encounters an uncharted obstacle and applies a 90-degree avoidance manoeuvre that takes it off the cleared lane into a confirmed minefield boundary. Consequence: AGV-4 is destroyed, and the mission is aborted for all units. No personnel casualties, but the mission-critical route remains uncleared for 6 hours pending re-tasking.

Root cause: Degraded-mode behaviour was designed to maximise mission continuity, not safety. The swarm had no fallback communication channel (no PACE plan—Primary, Alternate, Contingency, Emergency). No geofence enforcement was active during degraded-mode autonomous navigation.

Example 3 — Meaconing Attack on Maritime Autonomous Surface Vessel (Port Approach)

A maritime autonomous surface vessel (MASV) conducting a port approach in a contested littoral environment is subjected to a meaconing attack: an adversary intercepts and re-broadcasts legitimate AIS and GNSS signals with a deliberate 90-second time delay and 0.8 nautical mile position offset. The vessel's AI conning agent, which has no independent time-of-arrival ranging capability, accepts the delayed signals as authentic because the vessel's own velocity model predicts it should be approaching the offset position. The agent commands a 15-degree port turn to align with what it believes is the navigation channel. The true position puts the vessel on a collision course with a moored tanker carrying 80,000 tonnes of refined petroleum product. The collision-avoidance RADAR (which is not subject to the meaconing attack) detects the tanker at 0.6 nautical miles and generates a conflicting picture. The AI agent is not architected to resolve sensor disagreement with a safety-first bias; it weights the GNSS-derived chart overlay above the RADAR return because the chart overlay confidence score is higher. The vessel continues the turn. A human watchkeeper intervenes via manual override at 0.3 nautical miles, averting collision. Consequence: the 47-second intervention window was sufficient; had the watchkeeper response time been 90 seconds (within established human reaction-time variance for monitoring tasks), a catastrophic collision would have resulted.

Root cause: No sensor-disagreement safety-first arbitration policy; RADAR/GNSS divergence above a threshold (here, 0.4 nautical miles) did not trigger an automatic safe-state transition or human escalation. Meaconing not included in the threat model.
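The missing arbitration policy can be sketched as a minimal rule, using the 0.4 nautical mile threshold and offsets from the example; function and action names are illustrative:

```python
def arbitrate(gnss_pos_nm, radar_pos_nm, divergence_threshold_nm=0.4):
    """Safety-first arbitration: if two modalities disagree beyond the
    threshold, command the conservative action instead of trusting the
    higher-confidence source."""
    divergence = abs(gnss_pos_nm - radar_pos_nm)
    if divergence > divergence_threshold_nm:
        return "SAFE_STATE"   # e.g. all-stop plus human escalation
    return "CONTINUE"

# Example 3 geometry: meaconing offsets the GNSS picture by 0.8 nm,
# while the RADAR (unaffected by the attack) reports the true range.
assert arbitrate(gnss_pos_nm=0.8, radar_pos_nm=0.0) == "SAFE_STATE"
assert arbitrate(gnss_pos_nm=0.1, radar_pos_nm=0.0) == "CONTINUE"
```

Note that this rule deliberately ignores per-sensor confidence scores: the confidence-weighted policy in the example is exactly what allowed the spoofed overlay to outvote the RADAR.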

Section 4: Requirement Statement

4.0 Scope

This dimension applies to all AI agents that:

(a) operate in electromagnetic environments where adversarial interference with sensing, navigation, communication, or actuation channels is within the assessed threat model of the operating environment;

(b) receive or act upon data from radiofrequency-dependent sources including GNSS receivers, datalink transceivers, radar processors, communications radios, or any RF-coupled sensor;

(c) are classified as Safety-Critical / CPS Agents, Embodied / Edge / Robotic Agents, or Public Sector / Rights-Sensitive Agents operating under a national-security, defence, or dual-use mandate;

(d) are deployed in contested, denied, or degraded electromagnetic environments (CDED) or in any environment where the system-level threat assessment indicates greater than negligible probability of EW interference.

This dimension does not replace platform-level electromagnetic compatibility (EMC) standards or spectrum management regulations but governs the AI agent layer's behavioural and architectural response to interference conditions that those lower-level controls cannot fully prevent.

4.1 Threat Model Integration

4.1.1 The deploying organisation MUST maintain a documented EW threat model that explicitly enumerates attack categories applicable to the agent's operating environment, including at minimum: GNSS jamming, GNSS spoofing, meaconing, communications jamming (spot, sweep, and barrage), datalink injection, radar deception (chaff, decoys, false returns), and directed-energy effects (EMP, high-power microwave).

4.1.2 The EW threat model MUST be reviewed and updated at a cadence no greater than 12 months or following any significant change to the operational environment, adversary capability assessment, or agent architecture.

4.1.3 The agent's AI system design documentation MUST include an explicit mapping from each threat category in the EW threat model to the detection capability, degraded-mode behaviour, and recovery procedure specified for that threat.
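One way to make the 4.1.3 mapping auditable is to hold it as structured data so that gaps can be detected mechanically. The structure and field names below are assumptions for illustration, not a normative schema:

```python
# Hypothetical threat-to-control mapping per 4.1.3; keys and field
# names are illustrative assumptions.
THREAT_CONTROL_MAP = {
    "gnss_spoofing": {
        "detection": "NMA failure or GNSS/INS divergence (4.2.1a, 4.2.2)",
        "degraded_mode": "INS-only navigation at reduced speed (4.4.1)",
        "recovery": "GNSS re-qualification procedure (4.3.4)",
    },
    "comms_jamming": {
        "detection": "packet loss / latency / BER thresholds (4.2.1b)",
        "degraded_mode": "PACE fallback, then lost-link procedure (4.4.3, 4.4.4)",
        "recovery": "link re-establishment in PACE order",
    },
    # ... one entry per threat category enumerated in 4.1.1
}

def unmapped_threats(threat_model, mapping):
    """Audit helper: list threat categories with no documented controls."""
    return [t for t in threat_model if t not in mapping]

assert unmapped_threats(["gnss_spoofing", "meaconing"],
                        THREAT_CONTROL_MAP) == ["meaconing"]
```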

4.2 Interference Detection Architecture

4.2.1 The agent MUST implement an interference detection subsystem capable of identifying, within the latency bounds specified in the system's safety case, at minimum the following conditions: (a) GNSS signal anomaly (including signal strength elevation indicative of spoofing, constellation geometry inconsistency, and navigation message authentication failure where supported by the constellation); (b) communications link quality degradation (packet loss rate, latency, and bit-error rate thresholds); (c) multi-sensor position inconsistency exceeding a defined divergence threshold.
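The link-quality condition in 4.2.1(b) can be sketched as a threshold classifier. The numeric thresholds below are illustrative assumptions only; real values belong in the system safety case:

```python
from dataclasses import dataclass

@dataclass
class LinkQuality:
    packet_loss: float      # fraction, 0..1
    latency_ms: float
    bit_error_rate: float

def link_state(q: LinkQuality) -> str:
    """Classify link quality; threshold values are illustrative."""
    if q.packet_loss > 0.5 or q.latency_ms > 1000 or q.bit_error_rate > 1e-3:
        return "SEVERELY_DEGRADED"
    if q.packet_loss > 0.1 or q.latency_ms > 200 or q.bit_error_rate > 1e-5:
        return "DEGRADED"
    return "NOMINAL"

# Nominal mesh traffic vs the jammed condition from Example 2
# (latency climbing from 12 ms to 1,800 ms).
assert link_state(LinkQuality(0.02, 12, 1e-7)) == "NOMINAL"
assert link_state(LinkQuality(0.70, 1800, 1e-2)) == "SEVERELY_DEGRADED"
```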

4.2.2 Where the agent relies on GNSS for safety-critical positioning, the agent MUST implement at least one independent, non-GNSS positioning modality (such as inertial navigation, terrain-referenced navigation, visual-odometry, or time-difference-of-arrival ranging) and MUST continuously cross-validate GNSS-derived position against this independent modality.

4.2.3 The cross-validation divergence threshold MUST be set in accordance with the safety integrity level (SIL) or development assurance level (DAL) assigned to the positioning function, and MUST NOT be set wider than the operational exclusion boundary that would result in a safety-relevant positional error.

4.2.4 The agent MUST log all detected interference events, including event type, timestamp, duration, magnitude estimate, and the agent's response action, to a tamper-evident local record that persists through a complete power cycle.

4.2.5 The agent MUST NOT suppress or filter interference detection alarms based on operational continuity considerations alone; suppression is only permissible where a higher-assurance sensor stream has been validated as unaffected and a human authority has explicitly authorised continued operation.

4.3 Sensor Fusion and Trust Arbitration

4.3.1 When two or more sensor modalities provide conflicting positional, targeting, or environmental data, the agent MUST apply a safety-first arbitration policy: in the absence of a validated higher-assurance source, the agent MUST default to the most conservative action (halt, safe-state, or reduced-autonomy mode) rather than select the highest-confidence sensor output.

4.3.2 Kalman filter innovation gates, Bayesian prior weights, and any other probabilistic fusion parameters that govern the acceptance of sensor updates MUST be sized to reject updates consistent with known EW attack profiles, not merely sensor noise profiles. These parameters MUST be documented, justified against the threat model, and verified during acceptance testing.

4.3.3 The agent MUST implement a sensor-trust state machine that records the current trust level of each sensor modality and that propagates trust degradation to all downstream inference and planning modules that depend on that modality.

4.3.4 Where a sensor modality has been placed in a "degraded" or "untrusted" state, the agent MUST NOT use outputs from that modality for safety-critical decisions until the modality has been restored to "trusted" status through a defined re-qualification procedure.
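The 4.3.3/4.3.4 trust state machine can be sketched minimally as follows; state names come from the clauses above, while the class shape and transition rules are illustrative assumptions:

```python
class SensorTrust:
    """Minimal sketch of a per-modality trust state machine.
    Transitions: TRUSTED -> DEGRADED -> UNTRUSTED, with a return to
    TRUSTED only through an explicit re-qualification step (4.3.4)."""

    def __init__(self):
        self.state = "TRUSTED"

    def degrade(self):
        self.state = "DEGRADED" if self.state == "TRUSTED" else "UNTRUSTED"

    def requalify(self, procedure_passed: bool):
        if procedure_passed:
            self.state = "TRUSTED"

    def usable_for_safety_critical(self) -> bool:
        # 4.3.4: only TRUSTED outputs reach safety-critical decisions.
        return self.state == "TRUSTED"

gnss = SensorTrust()
gnss.degrade()                      # spoofing indicator fires
assert not gnss.usable_for_safety_critical()
gnss.requalify(procedure_passed=True)
assert gnss.usable_for_safety_critical()
```

In a conforming system the `usable_for_safety_critical` check would be enforced architecturally (e.g. at a partition boundary), not merely consulted by the planner.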

4.4 Degraded-Mode and Safe-State Behaviour

4.4.1 For each EW interference scenario identified in the threat model, the agent MUST have a pre-defined degraded-mode behaviour that has been validated as safe under the applicable safety standard and that is encoded in the agent's mission management logic.

4.4.2 Degraded-mode behaviours MUST be designed to prioritise safety over mission continuity: in any case where continuing autonomous operation cannot be assured safe under the degraded sensor or communication state, the agent MUST transition to a safe state (which may include halt-in-place, return-to-base on inertial navigation alone, controlled landing, or heave-to on last safe heading).

4.4.3 The agent MUST implement a PACE (Primary, Alternate, Contingency, Emergency) communication plan that defines the fallback communication channel and protocol for each level of primary-channel degradation, and MUST autonomously attempt channels in PACE order before declaring communication loss.

4.4.4 Under communication loss declared after exhausting all PACE channels, the agent MUST execute a pre-authorised lost-link procedure (LLP) that has been approved by the responsible human authority before mission commencement and that does not include any action that would expand the agent's operational footprint, enter a restricted area, or engage any target.

4.4.5 Degraded-mode and LLP behaviours MUST be tested under representative interference conditions (not merely simulated link-drop) at least annually and following any software update that modifies navigation, communication, or mission-management logic.
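The PACE fallback logic of 4.4.3 and the communication-loss declaration of 4.4.4 can be sketched as follows; channel names and timeout values are illustrative assumptions:

```python
# Channel order and timeouts are illustrative; real PACE plans are
# mission-specific and defined per 4.4.3.
PACE_ORDER = [("primary", 5.0), ("alternate", 5.0),
              ("contingency", 10.0), ("emergency", 15.0)]

def establish_link(try_channel):
    """Attempt channels in PACE order; declare communication loss only
    after all four fail, at which point the pre-authorised lost-link
    procedure (4.4.4) applies."""
    for name, timeout_s in PACE_ORDER:
        if try_channel(name, timeout_s):
            return name
    return "LOST_LINK"   # trigger the pre-authorised LLP

# Simulated environment: a spot jammer kills primary and alternate only.
jammed = {"primary", "alternate"}
assert establish_link(lambda ch, t: ch not in jammed) == "contingency"
assert establish_link(lambda ch, t: False) == "LOST_LINK"
```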

4.5 Geofencing and Boundary Enforcement Under Interference

4.5.1 The agent MUST maintain an onboard geofence enforcement capability that does not depend on externally provided GNSS signals as its sole input. When GNSS is classified as degraded or untrusted, geofence enforcement MUST rely on the independent positioning modality specified under 4.2.2.

4.5.2 Where neither GNSS nor the independent positioning modality can provide position estimates within the accuracy required to enforce geofence boundaries, the agent MUST transition to a safe state rather than continue autonomous navigation.

4.5.3 Geofence parameters MUST include a buffer zone sized to account for the combined position uncertainty of the active positioning modality under degraded conditions, such that the probability of inadvertent boundary violation is below the threshold specified in the system safety case.
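The buffer sizing in 4.5.3 can be sketched as a simple uncertainty budget. The k-sigma factor and reaction-distance term below are illustrative assumptions; a real calculation would follow the system safety case:

```python
def buffer_zone_m(sigma_pos_m, k=3.0, reaction_m=5.0):
    """Size the geofence buffer to cover k-sigma position uncertainty
    of the active modality plus a worst-case reaction/stopping
    distance. k and reaction_m are illustrative assumptions."""
    return k * sigma_pos_m + reaction_m

# Under trusted GNSS, sigma might be ~3 m; under INS-only degraded
# mode the same platform's sigma might grow to ~40 m, so the enforced
# buffer must grow with it.
assert buffer_zone_m(3.0) == 14.0
assert buffer_zone_m(40.0) == 125.0
```

The design consequence is that geofence enforcement must re-read the buffer whenever the active positioning modality (and thus sigma) changes.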

4.6 Human Authority and Override

4.6.1 The agent MUST provide a human operator with the ability to assume manual control, abort the mission, or command a safe-state transition at any time, including during an active interference event. This override capability MUST use a communication channel assessed as more resilient to the anticipated interference threat than the primary mission datalink.

4.6.2 The agent MUST alert a human authority within a defined time window (specified in the system's human-machine interface requirements and not to exceed the time-to-criticality for the worst-case interference scenario in the threat model) whenever an interference event is detected that exceeds a defined severity threshold.

4.6.3 The agent MUST NOT autonomously re-engage a previously aborted mission segment following recovery from an interference event without explicit re-authorisation from a human authority.

4.6.4 The agent MUST log all human override and abort commands, including timestamp, operator identity where technically feasible, and the agent state at the time of the command.

4.7 Cryptographic and Authentication Hardening

4.7.1 Where navigation message authentication (NMA) is available from the GNSS constellation in use (e.g., Galileo OS-NMA, GPS MNSA when available), the agent MUST implement NMA verification and MUST classify unauthenticated signals as degraded.

4.7.2 All mission-critical datalinks MUST use authenticated, integrity-protected message formats. The agent MUST reject and log any command or data message that fails authentication or integrity verification.

4.7.3 Cryptographic keys and authentication credentials used for datalink security MUST be provisioned, rotated, and revoked through a key management process that is separate from and more resilient than the operational datalink.

4.7.4 The agent MUST implement anti-replay protection on all safety-critical command channels and MUST reject replayed commands with a timestamp or sequence-number discrepancy exceeding the configured tolerance.
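The combination of 4.7.2 (authenticated, integrity-protected messages) and 4.7.4 (anti-replay) can be sketched with an HMAC tag plus a monotonic sequence number. The key, frame layout, and tolerance policy below are illustrative assumptions, not a real datalink format:

```python
import hashlib
import hmac
import struct

KEY = b"demo-key-provisioned-out-of-band"   # illustrative only; see 4.7.3

def sign(seq: int, payload: bytes) -> bytes:
    """Frame = 8-byte big-endian sequence number + payload + HMAC tag."""
    msg = struct.pack(">Q", seq) + payload
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

def verify(frame: bytes, last_seq: int):
    """Reject frames that fail HMAC verification or replay an old
    sequence number; return (seq, payload) on success, else None."""
    msg, tag = frame[:-32], frame[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
        return None                 # authentication failure: reject and log
    seq = struct.unpack(">Q", msg[:8])[0]
    if seq <= last_seq:
        return None                 # replay: reject
    return seq, msg[8:]

frame = sign(42, b"HALT")
assert verify(frame, last_seq=41) == (42, b"HALT")
assert verify(frame, last_seq=42) is None                      # replayed
tampered = frame[:-1] + bytes([frame[-1] ^ 0x01])
assert verify(tampered, last_seq=41) is None                   # forged tag
```

A shared-key HMAC is the simplest construction that shows the two checks; fielded systems would typically layer this inside an authenticated-encryption protocol with the key management separation required by 4.7.3.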

4.8 Incident Reporting and Post-Event Analysis

4.8.1 Any interference event that triggers a degraded-mode transition, safe-state transition, or human override MUST be reported to the responsible operational authority within a timeframe specified in the mission orders (not to exceed 24 hours for mission-level events; not to exceed 1 hour for events involving safe-state transitions or human overrides in active operational environments).

4.8.2 Post-event analysis MUST be conducted for all interference events resulting in a safe-state transition, human override, or mission abort. The analysis MUST include: interference type and estimated source characteristics; timeline of agent state transitions; assessment of whether the agent response was consistent with designed degraded-mode behaviour; identification of any gap between designed and observed behaviour; and recommended corrective action.

4.8.3 Post-event analysis reports MUST be retained for a minimum of seven years or for the operational life of the system, whichever is longer, and MUST be made available to regulatory or oversight bodies upon request.

4.9 Supply Chain and Component Assurance

4.9.1 GNSS receivers, RF front-ends, and communications transceivers used in safety-critical roles MUST be sourced from suppliers whose components have been assessed for susceptibility to spoofing and jamming as part of the procurement process, with assurance evidence retained in the supply chain record.

4.9.2 Firmware and software updates to any component in the interference-detection, navigation, or communications stack MUST be validated against the requirements of this dimension before deployment to operational systems, and the validation evidence MUST be retained.

Section 5: Rationale

5.1 Why Preventive Control Classification Is Appropriate

Electronic-warfare interference is, by design, rapid-onset and often covert in its early phases. Jamming can degrade a link from nominal to unusable in under one second; spoofing attacks are engineered to be indistinguishable from authentic signals until the victim agent has already updated its state on false data. Detective and corrective controls—however well-designed—operate after the agent has already ingested and acted on corrupted inputs. The consequence chain in safety-critical deployments (wrong position accepted, wrong target engaged, safe boundary violated) can become irreversible within the latency window between attack onset and detection. Preventive controls—architectural redundancy, cryptographic authentication, appropriately parameterised fusion gates, pre-coded safe-state behaviours—act before the corrupted signal has been used to drive a consequential output. This is why the dimension is classified Preventive: the objective is to make the agent structurally resistant to the interference, not merely reactive to it.

5.2 Structural Enforcement vs. Behavioural Policy

Behavioural policies (such as rules stating "the agent should stop if it detects jamming") are inadequate as the primary control mechanism because: (a) the detection of jamming is itself a sensing problem that can be defeated if the architecture is not hardened; (b) under adversarial conditions, agents under cognitive load or in autonomous mode may not execute policy-compliant responses if those responses are not enforced at the architecture level; and (c) mission pressure creates incentives to suppress or override interference alarms, especially if the alarm is ambiguous.

Structural enforcement means that the safe-state transition is triggered by an architectural invariant—a hardware or firmware condition that the operational AI logic cannot override without an authenticated human command. The sensor trust state machine required by 4.3.3 is a structural mechanism: once a modality is placed in "untrusted," the planning module physically cannot receive its outputs until re-qualification. This is not a behavioural promise; it is an architectural constraint.

5.3 The Adversarial vs. Environmental Distinction

Most sensor-fusion and communications-resilience standards are written for the environmental degradation case: multipath, ionospheric scintillation, thermal noise. Adversarial EW is distinct in three ways. First, the adversary adapts to the victim's detection mechanisms: if the agent uses signal-strength as a spoofing indicator, the adversary will modulate power to stay within the authentic range. Second, the adversary targets the specific failure modes of the victim's architecture: Kalman filter innovation gates can be walked through gradually; inertial drift can be exploited to enlarge the cross-validation window. Third, adversarial interference is timed to operational context—it will be applied at the moment of maximum mission criticality. Requirements 4.1 through 4.3 are designed to close the gap between environmental-degradation-oriented design and adversarial-environment-oriented design.

5.4 Why Multi-Sensor Redundancy Is Necessary But Not Sufficient

Multi-sensor architectures are often presented as sufficient mitigation for GNSS spoofing: "the INS will catch the discrepancy." This is true only if the fusion architecture is correctly parameterised (4.2.3), the innovation gate is sized adversarially rather than for noise (4.3.2), and the trust arbitration policy defaults to safety-first rather than highest-confidence (4.3.1). The failure mode illustrated in Example 1 is not a sensor failure—it is a fusion architecture failure. All sensors were functioning correctly. The Kalman gate was sized for noise; the adversary injected a bias smaller than the gate. Multi-sensor redundancy is necessary, but the fusion governance around it is the operative control.

Section 6: Implementation Guidance

Layered Positioning Architecture. Implement a four-layer positioning stack: (1) GNSS with NMA where available; (2) tight-coupled GNSS/INS integration with an adversarially parameterised innovation gate; (3) an independent non-GNSS modality (terrain-referenced navigation, visual-odometry, LiDAR-odometry, or time-difference-of-arrival); (4) a position-monitoring arbitration layer that continuously computes cross-modality divergence and maintains the sensor trust state machine. Each layer should be capable of providing a bounded-accuracy position estimate independently. The arbitration layer should be a dedicated, safety-qualified software partition that cannot be overridden by the mission AI.

PACE Communication Architecture. Structure communication channels explicitly as Primary / Alternate / Contingency / Emergency. The primary link may be the highest-bandwidth mission datalink. The alternate should operate on a different frequency band and modulation scheme. The contingency should use a spread-spectrum or frequency-hopping waveform resistant to spot-noise jamming. The emergency channel (used only for lost-link procedure acknowledgement and safe-state commands) should use a low-rate, high-processing-gain waveform (e.g., direct-sequence spread spectrum at high chip rate) and an independent transceiver not shared with mission communications. The agent's communication management module should autonomously attempt channels in PACE order with defined timeout thresholds.

Adversarially Parameterised Kalman Innovation Gates. Size the Kalman filter innovation gate not by the expected sensor noise standard deviation but by the minimum spoofing offset that would cause a safety-relevant positional error. If a 50-metre spoofing offset would cause a geofence violation, the gate must reject any GNSS update whose innovation exceeds the accumulated INS drift since the last valid fix, which means setting the gate width below 50 metres minus the INS drift bound for the mission segment. This requires collaboration between the navigation engineer and the safety analyst.
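The arithmetic above can be captured in a small helper; the function name and the worked numbers are illustrative:

```python
def adversarial_gate_m(safety_offset_m, ins_drift_bound_m):
    """Gate width per the guidance above: narrower than the smallest
    spoofing offset with safety impact, minus the INS drift bound for
    the mission segment (otherwise drift alone can mask the offset)."""
    gate = safety_offset_m - ins_drift_bound_m
    if gate <= 0:
        raise ValueError("INS drift bound exceeds safety offset: tighter "
                         "INS or a shorter re-fix interval is required")
    return gate

# 50 m offset causes a geofence violation; INS may drift up to 20 m
# between valid fixes, so the gate must be below 30 m.
assert adversarial_gate_m(50.0, 20.0) == 30.0
```

The error branch is the interesting case: if the drift bound swallows the safety offset, no gate width works, and the fix interval or INS grade must change, which is exactly the navigation-engineer/safety-analyst conversation the guidance calls for.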

Pre-Authorised Mission Envelopes. For autonomous platforms, define and upload a complete mission envelope before departure: waypoints, geofences (including buffer zones per 4.5.3), no-go areas, engagement restrictions, and the full lost-link procedure. The mission envelope should be cryptographically signed by the authorising human authority and should be immutable to in-flight modification unless a new signed envelope is received on an authenticated channel. This ensures that the platform's degraded-mode and LLP behaviours are authorised at mission start and cannot be altered by a spoofed command.
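A minimal sketch of envelope signing and verification follows. For brevity it uses a shared-key HMAC as a stand-in; a real implementation would use an asymmetric signature held by the authorising authority, and the envelope fields shown are illustrative:

```python
import hashlib
import hmac
import json

AUTHORITY_KEY = b"authority-key"   # stand-in for an asymmetric keypair

def sign_envelope(envelope: dict) -> dict:
    """Canonicalise the envelope and attach the authority's tag."""
    blob = json.dumps(envelope, sort_keys=True).encode()
    return {"envelope": envelope,
            "sig": hmac.new(AUTHORITY_KEY, blob, hashlib.sha256).hexdigest()}

def accept_envelope(signed: dict) -> bool:
    """Accept an in-flight replacement envelope only if the signature
    verifies; otherwise the pre-departure envelope remains in force."""
    blob = json.dumps(signed["envelope"], sort_keys=True).encode()
    expected = hmac.new(AUTHORITY_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["sig"], expected)

good = sign_envelope({"waypoints": [[0, 0], [10, 5]],
                      "llp": "return_to_base"})
assert accept_envelope(good)

# A spoofed command tries to swap in a new envelope under the old tag.
forged = dict(good, envelope={"waypoints": [[99, 99]], "llp": "continue"})
assert not accept_envelope(forged)
```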

Tamper-Evident On-Board Logging. Implement interference event logging in a dedicated write-once (or WORM-equivalent) partition with cryptographic chaining of log entries. This ensures that post-event analysis (4.8.2) can rely on the log as an accurate record of agent state, sensor values, and response actions. The log should capture the raw sensor values, the fused state estimate, the sensor trust state, and the active operational mode at each decision point.
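The cryptographic chaining of log entries can be sketched as follows; the entry schema is an illustrative assumption, and a WORM-backed store would hold the resulting records:

```python
import hashlib
import json

def append_event(log, event: dict):
    """Append an interference event whose hash is chained to the
    previous entry; any later modification breaks every later link."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": entry_hash})

def chain_intact(log) -> bool:
    """Recompute the chain from genesis; False on any tampering."""
    prev = "0" * 64
    for e in log:
        body = json.dumps(e["event"], sort_keys=True)
        if e["prev"] != prev or \
           e["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_event(log, {"type": "gnss_spoofing", "t": 1, "response": "safe_state"})
append_event(log, {"type": "comms_jamming", "t": 2, "response": "pace_fallback"})
assert chain_intact(log)

log[0]["event"]["response"] = "continue"   # post-hoc tampering
assert not chain_intact(log)
```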

6.2 Explicit Anti-Patterns

Anti-Pattern 1 — Noise-Parameterised Fusion Gates. Sizing Kalman innovation gates or particle-filter resampling thresholds from sensor noise characterisation in benign environments and applying them unmodified in adversarial environments. This is the most common implementation error and directly caused the Example 1 failure. Gates must be reviewed by a threat analyst, not only a navigation engineer.

Anti-Pattern 2 — Mission-Continuity-Biased Degraded Mode. Designing degraded-mode behaviour to maximise mission completion probability under degraded conditions. This is an entirely appropriate objective for environmental degradation (where the agent's situational awareness is reduced but not adversarially manipulated). In EW-contested environments, the agent's degraded-mode behaviour should be safety-biased, because the agent cannot distinguish between environmental degradation and the early phase of an attack designed to guide the agent into a specific failure mode.

Anti-Pattern 3 — Single-Frequency GNSS in Safety-Critical Roles. Using a single-frequency GNSS receiver (e.g., L1-only GPS) without NMA or independent cross-validation for any function that contributes to a safety-critical output. Single-frequency civilian signals without authentication are the most readily spoofed positioning source available.

Anti-Pattern 4 — Software-Only Override Suppression. Implementing the interference alarm suppression condition (permitted by 4.2.5 under specific circumstances) as a software flag that can be set by the mission AI or an operator command on the primary datalink. If the primary datalink is compromised, this flag can be set by an adversary. Override suppression conditions must require an authenticated command on a channel assessed as more resilient than the primary datalink, as required by 4.6.1.

Anti-Pattern 5 — Insufficient Threat Model Scope. Restricting the EW threat model to jamming alone (the most visible attack) and omitting spoofing, meaconing, and datalink injection because these are considered more technically demanding. Example 3 (meaconing) and Example 1 (GNSS spoofing) demonstrate that these attacks are operationally realistic. The threat model must enumerate all attack categories required by 4.1.1.

Anti-Pattern 6 — Untested Degraded-Mode Behaviour. Validating degraded-mode and lost-link procedures only through software simulation or table-top exercise. Interference conditions create hardware-layer effects (receiver overload, AGC saturation, false-lock conditions) that simulation does not replicate. Requirement 4.4.5 mandates testing under representative interference conditions precisely to catch failures that only manifest in hardware.

6.3 Industry and Domain Considerations

Unmanned Aerial Systems (UAS). Civil aviation regulators (notably EASA in Europe) have begun incorporating EW resilience into their advanced and specific operational category frameworks. The STANAG 4671 and DEF STAN 00-970 (UAV) lineage provides structural framing. For military UAS, the applicable frameworks include MIL-STD-461 (EMC), ADS-33 (handling qualities), and the NATO STANAG 4586 data-link standard. Implementers should note that commercial-off-the-shelf GNSS receivers are generally not resistant to spoofing and should not be used for safety-critical positioning in contested environments without augmentation.

Maritime Autonomous Surface Vessels. IMO MSC-FAL.1/Circ.3 provides guidelines on maritime cyber risk management; the IACS UR E26/E27 unified requirements address connected ship cyber resilience. Neither directly addresses EW threat models for autonomous vessels, creating a gap that this dimension's requirements are designed to fill at the AI agent layer.

Ground Autonomous Systems. NATO STANAG 4754 and AEP-4754 address ground robot interoperability; EW resilience provisions at the AI layer are largely absent from published standards and must be addressed through system-specific safety cases.

6.4 Maturity Model

Level 1 — Initial: EW threat acknowledged in documentation; no specific detection or degraded-mode architecture. GNSS-only positioning for safety functions.

Level 2 — Defined: EW threat model documented; GNSS/INS cross-validation implemented; communications PACE plan defined. Gate parameterisation noise-based, not adversarially parameterised.

Level 3 — Managed: Adversarially parameterised fusion gates; sensor trust state machine implemented; safety-first arbitration policy enforced architecturally; NMA implemented where available; degraded-mode and LLP tested under simulated interference.

Level 4 — Optimised: Full hardware-level interference testing; tamper-evident logging; automated post-event analysis pipeline; PACE communication implemented and tested; threat model reviewed at least annually against current intelligence assessments.

Level 5 — Adaptive: Continuous adversarial emulation in operational test environments; machine-learning-assisted anomaly detection in the interference detection subsystem, with the ML component itself validated for adversarial robustness; supply chain assurance integrated into change management.

Section 7: Evidence Requirements

7.1 Design and Architecture Evidence

EW Threat Model: Enumeration of threat categories per 4.1.1; adversary capability assessment; mapping to detection and response provisions. Retention: operational life + 7 years.

Interference Detection Architecture Document: Design specification of the interference detection subsystem, sensor trust state machine, and fusion parameter justification. Retention: operational life + 7 years.

Fusion Parameter Justification: Documented rationale for Kalman gate widths, particle filter thresholds, and arbitration policy parameters, with reference to the threat model. Retention: operational life + 7 years.

PACE Communication Plan: Documented PACE plan with frequency bands, modulation schemes, timeout thresholds, and fallback procedures. Retention: operational life + 7 years.

Pre-Authorised Mission Envelope Specification: Specification of cryptographic signing, immutability mechanism, and LLP content. Retention: operational life + 7 years.

Geofence Buffer Zone Justification: Calculation of buffer zone sizing per 4.5.3, with position uncertainty analysis under degraded modality conditions. Retention: operational life + 7 years.

7.2 Test and Validation Evidence

Acceptance Test Reports: Results of acceptance testing for all 4.x MUST requirements, including hardware-level interference tests per 4.4.5. Retention: operational life + 10 years.

Degraded-Mode and LLP Test Records: Records of annual and change-triggered tests per 4.4.5, including interference generation methodology and observed agent behaviour. Retention: 10 years.

NMA Implementation Verification: Evidence that NMA is implemented and correctly classifies unauthenticated signals as degraded. Retention: operational life + 7 years.

Cryptographic Channel Integrity Evidence: Evidence of authentication, anti-replay, and key management implementation per 4.7. Retention: operational life + 7 years.

Supply Chain Assurance Records: Procurement assessment evidence per 4.9.1 for GNSS receivers and RF components. Retention: operational life + 7 years.

7.3 Operational Evidence

| Artefact | Description | Retention |
|---|---|---|
| Interference Event Log | Tamper-evident on-board log per 4.2.4, extractable post-mission | 10 years |
| Post-Event Analysis Reports | Reports per 4.8.2 for all qualifying interference events | 10 years |
| Incident Notification Records | Records of notifications to operational authority per 4.8.1 | 10 years |
| Human Override and Abort Logs | Records per 4.6.4 | 10 years |
| Threat Model Review Records | Annual review records per 4.1.2, including change justifications | Operational life + 7 years |
| Firmware/Software Update Validation Records | Validation evidence per 4.9.2 for each update to interference-relevant components | Operational life + 7 years |
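
"Tamper-evident" in the Interference Event Log row is commonly realised as a hash chain: each record's digest covers the previous record, so any in-place edit breaks every subsequent link. The sketch below shows the idea under assumed record fields; it is illustrative, not the log format 4.2.4 specifies.

```python
import hashlib
import json

def append_event(chain: list, event: dict) -> None:
    """Append a record whose hash covers the previous record's hash,
    so any in-place edit invalidates every later link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "event": event, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every link; False means the log was altered."""
    prev = "0" * 64
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_event(log, {"t": "2026-04-01T10:00:00Z", "type": "GNSS_SPOOF_SUSPECTED"})
append_event(log, {"t": "2026-04-01T10:00:04Z", "type": "FALLBACK_TO_INS"})
print(verify_chain(log))                # True
log[0]["event"]["type"] = "NOMINAL"     # tamper with the first record
print(verify_chain(log))                # False
```

Post-mission extraction then consists of pulling the chain and re-verifying it off-board before the records feed the 4.8.2 analysis.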

Section 8: Test Specification

Each test below maps to one or more MUST requirements in Section 4. Conformance is scored on a 0–3 scale.

Test 8.1 — EW Threat Model Completeness and Currency (maps to 4.1.1, 4.1.2, 4.1.3)

Objective: Verify that a documented EW threat model exists, covers all required threat categories, has been reviewed within the required cadence, and maps each threat to a corresponding detection and response provision.

Method: Document review. Auditor examines the EW threat model to confirm: (a) all six minimum threat categories (GNSS jamming, GNSS spoofing, meaconing, communications

Section 9: Regulatory Mapping

| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| EU AI Act | Article 15 (Accuracy, Robustness and Cybersecurity) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
| International Humanitarian Law | Principles of Distinction and Proportionality | Supports compliance |

EU AI Act — Article 9 (Risk Management System)

Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Electronic-Warfare Interference Handling Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-575 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.

EU AI Act — Article 15 (Accuracy, Robustness and Cybersecurity)

Article 15 requires high-risk AI systems to achieve appropriate levels of accuracy, robustness, and cybersecurity. Electronic-Warfare Interference Handling Governance directly supports the robustness and cybersecurity requirements by implementing structural controls that resist adversarial manipulation and ensure system integrity under attack conditions.

NIST AI RMF — GOVERN 1.1, MAP 3.2, MANAGE 2.2

GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-575 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.

ISO 42001 — Clause 6.1, Clause 8.2

Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Electronic-Warfare Interference Handling Governance implements a risk treatment control within the AI management system, directly satisfying the requirement for structured risk mitigation.

Section 10: Failure Severity

| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |

Consequence chain: Without electronic-warfare interference handling governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation: it is the binary absence of a control, permitting unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-575, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.

Cite this protocol
AgentGoverning. (2026). AG-575: Electronic-Warfare Interference Handling Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-575