AG-570

Targeting Data Corroboration Governance

Defence, Dual-Use & National Security · AGS v2.1 · April 2026
Frameworks: EU AI Act · NIST · ISO/IEC 42001

Section 2: Summary

This dimension governs the processes, architectural requirements, and governance controls by which AI-enabled systems must corroborate targeting or interdiction data from multiple independent sources before that data is presented to, or acted upon by, a human decision-maker or an authorised autonomous subsystem. It matters because AI-derived targeting recommendations operate in domains where a single erroneous data point — a misclassified object signature, a spoofed coordinate feed, or a stale sensor reading — can mean the difference between a lawful strike and an unlawful engagement causing civilian casualties, infrastructure destruction, or an escalatory international incident. Failure under this dimension manifests as a system presenting a single-source or uncorroborated targeting recommendation with insufficient confidence qualification, or as an autonomous interdiction action executing against an entity whose identification rests on data that has not survived independent cross-validation, producing kinetic, legal, and political consequences that cannot be undone after the fact.

Section 3: Examples

Example A — Misidentified Vehicle Column, Simulated Combined Arms Exercise, 2022

During a multinational live-force exercise in Central Europe, an AI-enabled ground surveillance and targeting aid system was fed primary identification data from a single synthetic-aperture radar (SAR) pass indicating a column of twelve vehicles matching the radar and dimensional signatures of a declared adversary's main battle tank. The system generated a high-confidence fire mission recommendation — 94% classification confidence — based solely on that SAR pass without cross-referencing the simultaneously available electro-optical/infrared (EO/IR) feed or the human intelligence (HUMINT) cell report that had logged the column as a friendly force conducting a flanking manoeuvre forty-seven minutes earlier. The targeting aid presented the recommendation without any corroboration-status indicator. A supervising operator, acting under time pressure, initiated a simulated engagement sequence. Post-exercise analysis confirmed all twelve vehicles were friendly. Had the engagement been live, the scenario would have constituted a mass fratricide event. The root cause finding was that the corroboration pipeline had been disabled as a latency optimisation during exercise configuration and was never re-enabled. No governance control required the corroboration gate to be enforced at the architecture level rather than as a configurable parameter.

Example B — Maritime Interdiction Drone, Coordinated Spoofing Attack, Contested Strait Environment

An autonomous maritime interdiction UAS operating under rules of engagement that permitted engagement of declared hostile fast-attack craft without individual human authorisation per engagement — subject to positive identification from at least two independent sensor classes — was subjected to a coordinated GPS spoofing and AIS (Automatic Identification System) injection attack during a patrol sortie over a congested strait. The spoofing payload caused the UAS navigation module to believe it was over open water rather than within 800 metres of a commercial shipping lane. Simultaneously, a spoofed AIS broadcast caused the vessel-classification module to register an approaching vessel as a declared hostile signature. The UAS's sensor fusion layer, processing radar cross-section and AIS data, computed a combined confidence of 87% — above the engagement threshold — without recognising that the identification-bearing inputs (the injected AIS broadcast and the spoofed positional context) shared a common adversarial origin and therefore did not constitute independent corroboration; the radar return confirmed only that a vessel was present, not its identity. The system had no control requiring diversity-of-origin validation before treating multiple signals as independent. The engagement was aborted only because a secondary geofence boundary check produced an anomaly flag that triggered an operator override, not because the corroboration control caught the spoofing. The incident resulted in a temporary grounding of the UAS class pending architecture review. Had the geofence check not been present, the UAS would have engaged a vessel carrying 23 civilian crew.

Example C — Counter-IED Targeting Recommendation System, Civilian Infrastructure Misattribution

A fixed AI analytical platform used by a national-level counter-terrorism intelligence cell was tasked with generating interdiction target packages against a bomb-making network. The system ingested mobile device signals intelligence (SIGINT), pattern-of-life analysis derived from cell tower metadata, and financial transaction data. For one target package, the SIGINT and pattern-of-life streams both pointed to a compound at a specific grid reference as a primary node in the network. However, both SIGINT feeds were derived from the same upstream collection asset processed through two different analytical pipelines — creating the appearance of independent corroboration when in fact the underlying source was singular. The financial data stream was stale by 34 days and was not flagged as such. The system generated a targeting package assessed at high confidence without any provenance-diversity check across source lineages. The package was forwarded to a partner nation. A strike was conducted. Post-strike battle damage assessment identified the compound as a legitimate agricultural cooperative sharing a mobile device cluster with the actual target network due to co-located commercial activity. No persons associated with the bomb-making network were present. This scenario is drawn from the structural failure pattern identified in multiple declassified post-incident reviews of signature strike programmes and reflects systemic absence of source-lineage diversity verification rather than any specific named programme.

Section 4: Requirement Statement

4.0 Scope

This dimension applies to any AI-enabled system that produces, contributes to, or materially influences a targeting recommendation, interdiction action, or mission-critical engagement decision in a defence, dual-use, or national security operational context. This includes systems operating fully autonomously under pre-authorised rules of engagement, systems providing decision-support outputs to a human operator, systems fusing sensor data from multiple collection modalities, and systems operating at the edge on unmanned platforms. It applies equally to lethal and non-lethal interdiction actions where the action is irreversible or produces significant collateral consequences (e.g., electronic warfare effects, physical infrastructure access denial, arrest or detention authorisation). The dimension does not apply to systems whose outputs are strictly advisory and whose recommendations are always subjected to independent human analytical review through a separately governed process that itself enforces corroboration requirements; however, where such downstream review processes are not formally constituted and documented, this dimension applies in full. Systems operating under emergency or degraded-mode rules of engagement remain subject to this dimension with adaptations as specified in Section 4.8.

4.1 Minimum Independent Source Requirement

4.1.1 The system MUST require a minimum of two independent data sources from distinct collection modalities (e.g., radar, electro-optical, acoustic, HUMINT, SIGINT, geospatial imagery) to have positively corroborated a target entity's identification before generating a targeting recommendation or executing an autonomous interdiction action.

4.1.2 The system MUST enforce independence of sources at the architectural level, not as a configurable operational parameter. Independence means the two or more sources must not share a common upstream collection asset, transmission path, or processing pipeline for the corroborating data elements.

4.1.3 The system MUST perform a provenance-lineage trace for each data input contributing to a targeting recommendation and MUST reject as non-independent any two sources whose lineage trace converges at a common origin node within the preceding 72-hour data window.

4.1.4 Where fewer than two independent sources are available, the system MUST withhold the targeting recommendation or interdiction action and MUST generate a corroboration-deficit alert to the supervising operator or mission commander, clearly stating which corroboration requirements remain unmet.
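The gate described in 4.1.1 through 4.1.4 can be sketched in a few lines of Python. This is an illustrative fragment, not a normative definition: the class names, modality labels, and the deficit-message format are assumptions, and a fielded implementation would perform the full lineage trace of 4.1.3 rather than comparing declared origin assets.

```python
from dataclasses import dataclass

@dataclass
class SourceReport:
    source_id: str
    modality: str        # e.g. "radar", "eo_ir", "sigint" (illustrative labels)
    origin_asset: str    # declared upstream collection asset identifier

def corroboration_gate(reports, min_sources=2):
    """Illustrative 4.1.1/4.1.2 gate: requires >= min_sources reports
    that neither share an upstream collection asset nor a modality."""
    # Reports sharing an origin asset count once (4.1.2 independence).
    independent = {}
    for r in reports:
        independent.setdefault(r.origin_asset, r)
    distinct_modalities = {r.modality for r in independent.values()}
    if len(independent) >= min_sources and len(distinct_modalities) >= min_sources:
        return "CORROBORATION_MET", sorted(distinct_modalities)
    # 4.1.4: withhold the recommendation and state what remains unmet.
    deficit = (f"{len(independent)}/{min_sources} independent sources, "
               f"{len(distinct_modalities)}/{min_sources} distinct modalities")
    return "CORROBORATION_DEFICIT", deficit
```

Note that two reports from the same asset, even in different modalities, collapse to a single independent source, which is exactly the failure mode Example C illustrates.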

4.2 Source Freshness and Temporal Validity

4.2.1 The system MUST associate a timestamp and a freshness-validity window with every data input. Each corroborating source MUST fall within its declared freshness-validity window at the moment the targeting recommendation is generated.

4.2.2 The system MUST NOT treat a corroborating source as valid if its data timestamp exceeds the freshness-validity threshold for the declared target category. Freshness-validity thresholds MUST be defined per target category in the system's targeting data governance configuration and MUST be reviewed and re-authorised at least once every 90 days of operational deployment.

4.2.3 Where any corroborating source is stale, the system MUST degrade the recommendation's corroboration status, display the staleness condition explicitly in the operator interface or in the machine-readable recommendation metadata, and MUST NOT present the recommendation as fully corroborated.
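The freshness logic of 4.2.1 through 4.2.3 reduces to timestamp arithmetic against per-category windows. In this Python sketch the category names and window values are hypothetical placeholders; real values would come from the authorised freshness-validity threshold register, not from code constants.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-category freshness-validity windows (4.2.2).
FRESHNESS_WINDOWS = {
    "mobile_vehicle": timedelta(minutes=15),
    "fixed_installation": timedelta(hours=24),
}

def freshness_status(source_timestamps, target_category, now=None):
    """Classify each source as FRESH or STALE against the category
    window and return the 4.2.3 corroboration-downgrade flag."""
    now = now or datetime.now(timezone.utc)
    window = FRESHNESS_WINDOWS[target_category]
    statuses = {
        source_id: ("FRESH" if now - ts <= window else "STALE")
        for source_id, ts in source_timestamps.items()
    }
    # Any stale corroborating source degrades the overall status (4.2.3).
    degraded = any(s == "STALE" for s in statuses.values())
    return statuses, degraded
```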

4.3 Confidence Score Disaggregation

4.3.1 The system MUST present confidence scores disaggregated by source. A single fused confidence score without per-source breakdown MUST NOT be used as the sole corroboration indicator for a targeting recommendation.

4.3.2 The system MUST NOT allow high confidence on one source to compensate computationally for low confidence or absence on another source in a manner that obscures the per-source corroboration status from the operator or the audit record.

4.3.3 Where source-level confidence on any corroborating input falls below the system's defined minimum per-source confidence threshold, the system MUST flag the recommendation as partially corroborated and MUST NOT permit autonomous interdiction without explicit human authorisation that acknowledges the partial-corroboration condition.
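A minimal sketch of the 4.3 disaggregation rule follows. The labels and the 0.6 threshold are assumptions for illustration only; the point is that the corroboration label is computed from per-source scores and never from a fused composite.

```python
def corroboration_label(per_source_confidence, min_per_source=0.6):
    """Derive the corroboration label from per-source scores (4.3.1).
    A high score on one source cannot mask a weak one (4.3.2)."""
    if not per_source_confidence:
        return "CORROBORATION_DEFICIT", []
    below = [s for s, c in per_source_confidence.items() if c < min_per_source]
    if below:
        # 4.3.3: partial corroboration bars autonomous interdiction.
        return "PARTIALLY_CORROBORATED", below
    return "FULLY_CORROBORATED", []
```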

4.4 Adversarial Signal Integrity Checks

4.4.1 The system MUST perform signal-integrity and anti-spoofing checks on all data inputs prior to treating them as valid corroborating sources. These checks MUST include, at a minimum: cross-modal consistency validation (i.e., checking whether two distinct sensor modalities are producing mutually consistent observations of the same entity), and anomaly detection against historical baseline distributions for the data stream in question.

4.4.2 Where a signal-integrity check detects an anomaly indicative of spoofing, jamming, or data injection, the system MUST immediately invalidate the affected source as a corroborating input, escalate an alert to the operator, and downgrade any in-flight targeting recommendation that relied upon the affected source.

4.4.3 The system MUST maintain the signal-integrity check logic in a tamper-evident module that is not modifiable during operational deployment without a change-authorisation record signed by at least two authorised administrators.
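The baseline-anomaly component of the 4.4.1 checks can be sketched as a simple z-score test against the stream's historical distribution. The threshold value and function names here are hypothetical; a fielded system would use substantially richer stream models and cross-modal consistency tests alongside this.

```python
import statistics

def integrity_anomaly(baseline_values, observation, z_threshold=4.0):
    """Flag an observation whose z-score against the stream's
    historical baseline exceeds a (hypothetical) threshold (4.4.1)."""
    mean = statistics.fmean(baseline_values)
    stdev = statistics.stdev(baseline_values)
    z = abs(observation - mean) / stdev
    # Per 4.4.2, a flagged source is invalidated as corroborating input.
    return z > z_threshold, round(z, 2)
```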

4.5 Human Authorisation Gates

4.5.1 For any engagement classified as potentially lethal or irreversibly destructive, the system MUST enforce a human authorisation gate that cannot be bypassed by any autonomous subsystem, even where corroboration requirements in Sections 4.1 through 4.4 have been fully satisfied.

4.5.2 The human authorisation gate MUST present to the authorising operator a corroboration summary that includes: the number of independent sources, their modalities, their freshness status, their individual confidence scores, and any anomalies or deficits identified during the corroboration process. The operator MUST affirmatively acknowledge this summary before authorisation is recorded.

4.5.3 The system MUST record the operator's identity, the timestamp of authorisation, the corroboration summary presented at authorisation time, and whether any deficit conditions were overridden, in the audit record governed under AG-044.

4.5.4 The system MUST NOT permit a targeting recommendation to be pre-authorised earlier than the operationally defined time window before the expected engagement (not to exceed 60 minutes in high-tempo operations and 24 hours in deliberate planning contexts). Pre-authorisations older than the applicable window MUST require re-corroboration against fresh source data.
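The pre-authorisation window check in 4.5.4 is simple timestamp arithmetic. This sketch uses the 60-minute and 24-hour ceilings stated above; the context keys and function names are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

# Ceilings from 4.5.4; the context labels are hypothetical.
MAX_PREAUTH = {
    "high_tempo": timedelta(minutes=60),
    "deliberate_planning": timedelta(hours=24),
}

def preauth_valid(authorised_at, engagement_at, context):
    """True if the pre-authorisation is still within the context
    ceiling; otherwise re-corroboration is required (4.5.4)."""
    return engagement_at - authorised_at <= MAX_PREAUTH[context]
```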

4.6 Corroboration Audit Trail

4.6.1 The system MUST generate and retain an immutable corroboration audit record for every targeting recommendation produced, regardless of whether the recommendation resulted in an engagement. The record MUST capture all source inputs, their provenance traces, freshness status, confidence scores, integrity-check results, and the final corroboration determination.

4.6.2 Corroboration audit records MUST be retained for a minimum of 10 years, or for the duration of any related investigation, legal proceeding, or accountability process, whichever is longer.

4.6.3 The system MUST protect audit records against post-hoc modification. Any attempt to modify an audit record MUST trigger an integrity-violation alert to the system's security management function.

4.7 Rules of Engagement Integration

4.7.1 The system MUST be capable of ingesting and enforcing machine-readable rules of engagement (ROE) that specify minimum corroboration thresholds per target category, theatre, and time period. Machine-readable ROE MUST be version-controlled and change-authorised before deployment.

4.7.2 The system MUST compare its corroboration status against the applicable ROE threshold before generating a recommendation or permitting an action, and MUST prevent any action that would violate the applicable ROE threshold.

4.7.3 The system MUST alert mission command when it detects a conflict between corroboration status and applicable ROE, and MUST log the conflict in the audit record.

4.8 Corroboration in Degraded and Denied Environments

4.8.1 The system MUST define and document a degraded-mode corroboration protocol that specifies the minimum acceptable corroboration state for continued operation when one or more sensor modalities or data feeds are unavailable due to environmental conditions, electronic attack, or platform damage.

4.8.2 In degraded mode, the system MUST require explicit human authorisation for any interdiction action, MUST NOT reduce the minimum independent source count below one, and MUST clearly indicate degraded-mode status in all outputs and audit records.

4.8.3 The system MUST NOT enter a mode in which corroboration checks are wholly suspended. Full suspension of corroboration checks, regardless of operational pressure, MUST be treated as a system failure condition triggering a safe-state as governed under AG-501.

4.9 Periodic Corroboration Threshold Review

4.9.1 The organisation deploying the system MUST conduct a formal corroboration threshold review at least every 180 days of operational deployment, or following any engagement incident in which a corroboration deficit was identified as a contributing factor, whichever occurs sooner.

4.9.2 The review MUST assess whether freshness-validity windows, minimum source counts, per-source confidence thresholds, and ROE integrations remain appropriate given operational experience, threat evolution, and any changes in sensor capability or data source availability.

4.9.3 Review findings and any resulting threshold changes MUST be documented, authorised by the responsible mission commander and AI governance officer, and reflected in updated system configuration before the next operational cycle.

Section 5: Rationale

Structural vs Behavioural Enforcement

The requirements in this dimension are deliberately structured to enforce corroboration at the architectural level rather than relying on operator behaviour or mission-by-mission procedural compliance. This is a deliberate design choice grounded in the operational realities of high-tempo, high-stress combat environments and the particular vulnerability of AI systems to adversarial manipulation.

Behavioural enforcement — requiring operators or commanders to manually cross-check data sources before acting on a recommendation — fails predictably under time pressure, cognitive load, and in information-asymmetric environments where the AI system's output is perceived as more authoritative than the operator's independent judgment. Multiple historical and exercise-derived incident analyses demonstrate that when an AI system presents a single fused confidence number above a threshold, human operators disproportionately anchor to that number and fail to probe its compositional basis. This is sometimes described as automation bias, but the structural cause is that the system's interface makes single-source and multi-source corroboration look identical to the operator. Architectural enforcement removes this ambiguity: if the corroboration gate is not satisfied, the recommendation is not presented or the action cannot execute. The operator cannot bypass what is not visible as a bypass option.

The requirement that corroboration checks not be configurable operational parameters (Section 4.1.2) addresses a specific and well-documented failure pattern: security and safety controls being disabled during integration testing, exercises, or high-tempo operations for latency or usability reasons, and never being re-enabled. Making corroboration enforcement a fixed architectural property rather than a configuration flag eliminates the attack surface this pattern creates.

Why Multi-Source Independence Is Necessary, Not Merely Multi-Source

The provenance-lineage trace requirement in Section 4.1.3 addresses a subtler and more dangerous failure mode than simple single-source reliance. Sophisticated adversaries and non-adversarial data management failures can create situations where two apparently independent streams carry correlated errors because they share an upstream origin. An analytical platform that treats two outputs from the same collection asset processed through different pipelines as independent corroboration has, in effect, no corroboration at all — it has confirmation of the first data point dressed as corroboration. This is the structural error at the centre of several declassified incidents involving incorrect targeting packages. The lineage trace requirement is designed to make this class of correlated-origin error computationally detectable rather than requiring a human analyst to manually reconstruct source genealogy under time pressure.

Irreversibility as the Primary Risk Driver

The irreversibility of kinetic and many non-kinetic interdiction actions is the fundamental reason this dimension requires preventive rather than detective or corrective controls. Detective controls — anomaly flagging after recommendation generation, post-engagement review processes, audit-based accountability — are necessary and are addressed in Section 4.6 and in companion dimensions AG-044 and AG-501. However, they cannot undo an engagement that has occurred on the basis of uncorroborated data. The preventive gate must be operative before the decision point, not after. This places this dimension in a small category of AI governance controls where the cost of false prevention (a valid engagement delayed or foregone) is judged to be consistently lower than the cost of false permission (an unlawful, erroneous, or escalatory engagement executed). That judgment is encoded in the architecture requirements, and should be formally affirmed during each periodic review required by Section 4.9.

International Humanitarian Law Alignment

Corroboration requirements are not solely an AI governance concern; they are a requirement of international humanitarian law (IHL) as applied to targeting. The principle of distinction requires that parties to an armed conflict distinguish between combatants and civilians and only direct attacks against combatants and military objectives. The precautionary principle under Additional Protocol I requires that those who plan or decide upon attacks take all feasible precautions to verify that objectives are military objectives. Multi-source corroboration is the operational implementation of these precautionary verification obligations in an AI-assisted targeting context. Failure to enforce corroboration therefore carries not only operational and reputational risk but direct legal exposure under IHL and, in many jurisdictions, domestic legislation implementing IHL obligations. This dual legal-technical character is reflected in the integration of ROE requirements in Section 4.7, which ensures that IHL-derived targeting constraints expressed through ROE are enforced by the AI system's corroboration logic rather than assumed to be satisfied by operator compliance.

Section 6: Implementation Guidance

Pattern 1 — Hard Corroboration Gate Module

Implement corroboration checking as a separate, independently deployable module that sits between the sensor fusion or analytical engine and the recommendation output interface. This module should be the only path through which a recommendation can be formatted for presentation or forwarded to an action subsystem. The module should be implemented in formally verified or high-assurance code, should not share a runtime environment with the main inference engine, and should be subject to independent security and functional testing before each deployment. This separation ensures that vulnerabilities or errors in the main inference engine cannot disable or circumvent the corroboration gate.

Pattern 2 — Source Lineage Graph

Maintain a directed acyclic graph (DAG) of data provenance for all inputs to the targeting system, updated in real time. Each node in the graph represents a data source or processing step; each edge represents a data-flow dependency. When the corroboration module evaluates independence, it traverses the graph to determine whether two candidate corroborating sources share any ancestor node within the temporal window defined in Section 4.1.3. This approach makes the provenance-lineage trace computationally tractable and auditable, and allows the corroboration decision to be replayed and verified from the audit record.
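A minimal sketch of the independence traversal described above, assuming the provenance DAG is held as a parent-adjacency mapping and that entries outside the 72-hour window of Section 4.1.3 have already been pruned when the graph was built:

```python
def ancestors(graph, node):
    """All upstream nodes of `node` in a provenance DAG expressed as
    {node: [parent, ...]} (an illustrative representation)."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(graph.get(parent, []))
    return seen

def independent(graph, source_a, source_b):
    """Sources are independent only if their lineages (each source plus
    all its ancestors) share no node (Section 4.1.3)."""
    lineage_a = ancestors(graph, source_a) | {source_a}
    lineage_b = ancestors(graph, source_b) | {source_b}
    return not (lineage_a & lineage_b)
```

The Example C failure is the case this catches: two pipelines fed by one collection asset converge on a common ancestor and are rejected as corroborating pairs.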

Pattern 3 — Disaggregated Confidence Display

Design operator interfaces to present per-source confidence and corroboration status as the primary information, with any fused score presented as secondary and clearly labelled as a composite. Visual design should use distinct colour channels or iconographic elements for corroboration status (e.g., corroborated / partially corroborated / deficit) that are separable from the numerical confidence display, ensuring that operators with colour vision deficiency can still distinguish corroboration states.

Pattern 4 — Corroboration State Machine

Model the corroboration lifecycle as an explicit finite state machine with defined states (e.g., DATA_GATHERING, CORROBORATION_PENDING, CORROBORATION_MET, CORROBORATION_DEFICIT, DEGRADED_MODE, AWAITING_HUMAN_AUTH, AUTHORISED, ACTED_UPON) and with explicit, logged transitions between states. This makes the corroboration lifecycle auditable as a sequence of state transitions rather than as an inferred behaviour from log entries, and facilitates the test specification in Section 8.
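The state machine can be sketched as a transition table plus a logging wrapper. The state names follow Pattern 4; the particular transition set here is an assumption for illustration, not a normative lifecycle.

```python
# Allowed transitions (an assumed, illustrative lifecycle).
TRANSITIONS = {
    "DATA_GATHERING": {"CORROBORATION_PENDING"},
    "CORROBORATION_PENDING": {"CORROBORATION_MET", "CORROBORATION_DEFICIT", "DEGRADED_MODE"},
    "CORROBORATION_MET": {"AWAITING_HUMAN_AUTH", "CORROBORATION_DEFICIT"},
    "CORROBORATION_DEFICIT": {"CORROBORATION_PENDING"},
    "DEGRADED_MODE": {"CORROBORATION_PENDING"},
    "AWAITING_HUMAN_AUTH": {"AUTHORISED", "CORROBORATION_DEFICIT"},
    "AUTHORISED": {"ACTED_UPON"},
}

class CorroborationFSM:
    """Every transition is recorded, so the corroboration lifecycle can
    be replayed from the audit record (Section 4.6)."""

    def __init__(self):
        self.state = "DATA_GATHERING"
        self.log = []  # (from_state, to_state) pairs for the audit trail

    def transition(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.log.append((self.state, new_state))
        self.state = new_state
```

Because illegal transitions raise rather than silently proceed, a recommendation cannot reach ACTED_UPON without having passed through the corroboration and human-authorisation states.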

Pattern 5 — Freshness Watchdog Process

Deploy a dedicated watchdog process that continuously monitors the age of all active data inputs against their freshness-validity windows and degrades the corroboration status of any recommendation that is being held in a CORROBORATION_MET state when a supporting source crosses its staleness threshold. This prevents the scenario where a recommendation is corroborated at generation time but presented to an operator or re-used in an automated pipeline significantly later, after the underlying source data has gone stale.

Pattern 6 — ROE as Verifiable Configuration

Represent machine-readable ROE as a structured, signed, version-controlled artefact (e.g., a cryptographically signed JSON or XML document) that is loaded into the corroboration module at mission initialisation and cannot be overwritten during a mission without a dual-administrator authorisation flow. This ensures that the ROE against which corroboration thresholds are evaluated is always the authorised version, is auditable, and cannot be silently modified by an operator or by a compromised process.
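A load-time verification sketch for such an artefact follows. A fielded system would use an asymmetric signature (e.g., Ed25519) with the verification key provisioned separately; HMAC-SHA256 stands in here only to keep the sketch standard-library-only, and all field names are hypothetical.

```python
import hashlib
import hmac
import json

def sign_roe(roe: dict, key: bytes) -> str:
    """Sign a canonicalised ROE document (sorted keys, fixed separators,
    so signing and verification see byte-identical input)."""
    canonical = json.dumps(roe, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def load_roe(roe: dict, signature: str, key: bytes) -> dict:
    """Refuse to load an ROE artefact whose signature does not verify."""
    if not hmac.compare_digest(sign_roe(roe, key), signature):
        raise ValueError("ROE signature verification failed")
    return roe
```

Canonicalising before signing matters: without it, semantically identical JSON with different key order would produce different digests and spurious verification failures.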

Anti-Patterns

Anti-Pattern 1 — Corroboration as a Configurable Flag

Implementing the minimum source count, freshness thresholds, or the corroboration gate itself as operational parameters configurable in a mission profile or user settings file. This design has been repeatedly observed to result in corroboration controls being disabled for test or exercise configurations and not re-enabled for live operations. Corroboration gate enforcement MUST be a compile-time or deployment-time architectural property, not a runtime setting.

Anti-Pattern 2 — Fused Score as Corroboration Proxy

Treating a sensor fusion confidence score above a threshold as evidence of corroboration without verifying source independence. Fusion architectures are explicitly designed to produce single high-confidence outputs from multiple inputs; a high fused score does not imply that independent corroboration exists. This anti-pattern is the primary cause of the class of incidents illustrated in Example B.

Anti-Pattern 3 — Treating Pipeline Diversity as Source Diversity

Running data from a single collection asset through two different processing algorithms or analytical models and treating the two outputs as independent sources. Independence requires distinct collection assets or phenomena, not distinct processing of the same raw data. Systems that claim to meet the two-source requirement through this mechanism are not compliant with Section 4.1.3.

Anti-Pattern 4 — Retrospective Corroboration Validation

Designing systems in which the corroboration check occurs after a recommendation has been generated and formatted for presentation, rather than as a gate before generation. In this pattern, uncorroborated recommendations are regularly presented to operators and the corroboration shortfall is displayed as a footnote or secondary indicator that operators routinely disregard under time pressure. This is architecturally equivalent to having no corroboration gate.

Anti-Pattern 5 — Corroboration Deficit Dismissal Without Escalation

Allowing operators to dismiss a corroboration-deficit alert and proceed with a recommendation without a structured escalation and acknowledgement workflow. Dismissal of a deficit alert MUST trigger an escalation to a supervising commander and MUST be recorded in the audit trail as a deficit-acknowledged action, not as a routine operator interaction.

Anti-Pattern 6 — Static Freshness Thresholds Across All Target Categories

Applying a single freshness-validity window to all target categories regardless of the mobility, activity tempo, or environmental context of the target. A stationary hardened installation and a moving vehicle column require fundamentally different freshness thresholds. Static thresholds optimised for one category will be either dangerously permissive or operationally unworkable for others.

Maturity Model

Maturity Level | Characteristics
Level 1 — Initial | Corroboration is a procedural requirement documented in SOPs but not enforced architecturally. Compliance depends entirely on operator training and discipline. No automated lineage tracing. No per-source confidence disaggregation.
Level 2 — Managed | Corroboration gate is implemented in software but is configurable. Freshness thresholds and minimum source counts are enforced in normal operations but can be overridden. Basic audit logging of corroboration outcomes exists.
Level 3 — Defined | Corroboration gate is an architectural enforcement point not configurable at runtime. Provenance-lineage tracing is automated. Per-source confidence disaggregation is presented to operators. ROE integration is implemented. Degraded-mode protocol is documented and tested.
Level 4 — Quantitatively Managed | All corroboration parameters are tracked and analysed against operational outcome data. Corroboration deficit rates, staleness events, and integrity-check anomaly rates are monitored as performance indicators. Threshold reviews are data-driven.
Level 5 — Optimising | Corroboration thresholds are continuously updated through formal closed-loop processes incorporating operational feedback, adversarial simulation, and IHL compliance assessments. Corroboration architecture is subject to independent external red-team assessment on a scheduled basis.

Section 7: Evidence Requirements

7.1 Design and Architecture Artefacts

Artefact | Description | Retention Period
Corroboration Architecture Specification | Technical document describing the corroboration gate module architecture, its separation from the inference engine, and its enforcement mechanism | Duration of system operational life plus 10 years
Source Lineage Model | Formal description of the provenance-DAG structure and the independence determination algorithm | Duration of system operational life plus 10 years
ROE Integration Specification | Document describing how machine-readable ROE is loaded, validated, version-controlled, and enforced by the corroboration module | Duration of system operational life plus 10 years
Freshness-Validity Threshold Register | Version-controlled register of freshness-validity windows per target category, with authorisation signatures and review dates | Duration of system operational life plus 10 years

7.2 Operational Records

Artefact | Description | Retention Period
Corroboration Audit Records | Immutable per-recommendation records as specified in Section 4.6.1 | Minimum 10 years or duration of related legal proceedings
Corroboration Deficit Logs | Records of all instances where a deficit alert was generated, including operator response and escalation outcome | Minimum 10 years or duration of related legal proceedings
Human Authorisation Records | Records of operator authorisation events as specified in Section 4.5.3 | Minimum 10 years or duration of related legal proceedings
Signal Integrity Anomaly Logs | Records of all spoofing/jamming anomaly detections and the actions taken | Minimum 10 years or duration of related legal proceedings
Degraded Mode Activation Logs | Records of every entry into and exit from degraded mode, with the trigger condition and supervising action | Minimum 10 years or duration of related legal proceedings

7.3 Governance and Review Artefacts

Artefact | Description | Retention Period
Threshold Review Reports | Reports from each periodic review required by Section 4.9, including findings, decisions, and authorisation signatures | Duration of system operational life plus 10 years
Configuration Change Records | Dual-administrator authorisation records for any change to corroboration thresholds or ROE during operational deployment | Duration of system operational life plus 10 years
Test Execution Records | Records of all test executions under Section 8, including pass/fail status, anomalies found, and remediation actions | Duration of system operational life plus 10 years
IHL Compliance Assessment | Formal legal assessment mapping corroboration requirements to applicable IHL obligations, reviewed at each major system update | Duration of system operational life plus 10 years

7.4 Third-Party and Independent Assurance Artefacts

Artefact | Description | Retention Period
Independent Security Assessment | Results of any red-team, penetration test, or independent security review of the corroboration module | Duration of system operational life plus 10 years
Formal Verification Certificate | Where the corroboration gate module is formally verified, the verification artefacts and certificate | Duration of system operational life plus 10 years

Section 8: Test Specification

Tests are designated PASS (score 3), PARTIAL PASS (score 2), MARGINAL (score 1), or FAIL (score 0). A system must achieve an aggregate score of at least 85% of the maximum achievable points across all tests to achieve conformance with this dimension. Any test scoring 0 is a blocking failure regardless of aggregate score.
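The aggregate rule above can be sketched as follows. This is a minimal illustration, not normative tooling: the function name and the list-of-integers input format are assumptions; only the scoring values, the 85% threshold, and the blocking-failure rule come from this section.

```python
def is_conformant(scores: list[int], threshold: float = 0.85) -> bool:
    """Apply the Section 8 scoring rule to a list of per-test scores (0-3).

    Any single score of 0 is a blocking failure regardless of the aggregate;
    otherwise the aggregate is the fraction of the maximum achievable points
    (3 per test) and must meet the threshold.
    """
    if any(s == 0 for s in scores):  # blocking failure
        return False
    aggregate = sum(scores) / (3 * len(scores))
    return aggregate >= threshold

# Three tests scoring PASS, PASS, PARTIAL PASS: 8/9 ≈ 89% → conformant
print(is_conformant([3, 3, 2]))  # True
# One FAIL blocks conformance even though the aggregate would otherwise pass
print(is_conformant([3, 3, 0]))  # False
```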

Test 8.1 — Architectural Enforcement of Corroboration Gate

Maps to: Sections 4.1.1, 4.1.2

Objective: Verify that the corroboration gate is enforced at the architecture level and cannot be bypassed or disabled by operational configuration.

Method:

  1. Obtain the system's operational configuration documentation and deployment runbooks. Attempt to identify any configuration parameter, environment variable, feature flag, or mission profile setting that disables, reduces, or bypasses the minimum two-source corroboration requirement.
  2. With system access, attempt to generate a targeting recommendation using only a single data source by suppressing the second source at the data input layer. Observe whether a recommendation is produced.
  3. Attempt to set a configuration parameter to reduce minimum sources to 1. Observe whether the change is accepted.

Expected Outcome: No configuration parameter disabling the gate exists. Single-source input does not produce a recommendation output; the system generates a corroboration-deficit alert. Attempts to reduce minimum source count are rejected without dual-administrator authorisation.
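The architectural property Test 8.1 probes can be illustrated with a small sketch: the minimum source count lives in code as a constant rather than in configuration, so no runtime parameter can lower it, and a single-source input yields a deficit alert instead of a recommendation. All names here (`MIN_SOURCES`, `gate`, `CorroborationDeficit`) are hypothetical, not part of the protocol.

```python
MIN_SOURCES = 2  # a module constant — enforced in code, not a config parameter


class CorroborationDeficit(Exception):
    """Raised in place of a recommendation when corroboration fails."""


def gate(sources: list[dict]) -> dict:
    """Emit a recommendation only when the minimum source count is met."""
    if len(sources) < MIN_SOURCES:
        raise CorroborationDeficit(
            f"{len(sources)} source(s) supplied; {MIN_SOURCES} required"
        )
    return {"status": "corroborated", "sources": [s["id"] for s in sources]}


try:
    gate([{"id": "SAR-01"}])  # single-source input, as in Test 8.1 step 2
except CorroborationDeficit as alert:
    print("deficit alert:", alert)  # no recommendation output is produced
```

The design point mirrors the exercise failure in Section 3: a gate expressed as a configurable parameter can be switched off as a "latency optimisation", whereas a gate compiled into the recommendation path cannot.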

Scoring:

Test 8.2 — Provenance-Lineage Independence Determination

Maps to: Section 4.1.3

Objective: Verify that the system correctly identifies two sources sharing a common upstream origin as non-independent and rejects them as corroborating inputs.

Method:

  1. Construct a test data scenario in which two data inputs to the targeting system are derived from the same collection asset but processed through different pipelines. Label them as apparently distinct source modalities.
  2. Inject this constructed data into the system and observe whether the corroboration module identifies the shared upstream origin and rejects the two inputs as non-independent.
  3. Repeat with data from genuinely independent sources and verify that the system accepts these as independent.
  4. Review the provenance-lineage graph (or equivalent) produced by the system during both test runs.

Expected Outcome: Common-origin data is flagged as non-independent; a corroboration-deficit alert is generated. Genuinely independent data passes the independence determination. Lineage graph accurately reflects both scenarios.

Scoring:

Test 8.3 — Freshness-Validity Enforcement

Maps to: Sections 4.2.1, 4.2.2, 4.2.3

Objective: Verify that stale data sources are identified and do not contribute to a corroborated recommendation status.

Method:

  1. Obtain the freshness-validity threshold register for the target category under test.
  2. Construct a recommendation scenario in which one corroborating source has a timestamp within the validity window and one has a timestamp that exceeds the validity window by at least 10%.
  3. Inject the scenario data and observe the corroboration status output.
  4. Verify that the system does not present the recommendation as fully corroborated.
  5. Verify that the staleness condition is explicitly displayed in the operator interface and recorded in the audit log.

Expected Outcome: System presents recommendation as partially corroborated or deficit, displays the staleness condition prominently, and records the condition in the audit log. Stale source is not counted toward corroboration fulfilment.
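The freshness logic Test 8.3 exercises can be sketched as a timestamp check against a per-category validity window: stale sources are excluded before the corroboration count is taken, so one fresh and one stale source yields at most a partially corroborated status. The window value, category name, and status strings are illustrative assumptions, not values from the threshold register.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical entry from a freshness-validity threshold register
VALIDITY_WINDOWS = {"vehicle-column": timedelta(minutes=30)}


def corroboration_status(
    category: str, timestamps: list[datetime], now: datetime
) -> str:
    """Count only sources whose timestamps fall inside the validity window."""
    window = VALIDITY_WINDOWS[category]
    fresh = [t for t in timestamps if now - t <= window]
    if len(fresh) >= 2:
        return "corroborated"
    return "partially corroborated" if fresh else "deficit"


now = datetime(2026, 4, 1, 12, 0, tzinfo=timezone.utc)
stamps = [
    now - timedelta(minutes=5),   # within the validity window
    now - timedelta(minutes=33),  # exceeds the 30-minute window by 10%
]
print(corroboration_status("vehicle-column", stamps, now))  # partially corroborated
```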

Section 9: Regulatory Mapping

| Regulation | Provision | Relationship Type |
| --- | --- | --- |
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| EU AI Act | Article 15 (Accuracy, Robustness and Cybersecurity) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
| International Humanitarian Law | Principles of Distinction and Proportionality | Supports compliance |

EU AI Act — Article 9 (Risk Management System)

Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Targeting Data Corroboration Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-570 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.

EU AI Act — Article 15 (Accuracy, Robustness and Cybersecurity)

Article 15 requires high-risk AI systems to achieve appropriate levels of accuracy, robustness, and cybersecurity. Targeting Data Corroboration Governance directly supports the robustness and cybersecurity requirements by implementing structural controls that resist adversarial manipulation and ensure system integrity under attack conditions.

NIST AI RMF — GOVERN 1.1, MAP 3.2, MANAGE 2.2

GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-570 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.

ISO 42001 — Clause 6.1, Clause 8.2

Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Targeting Data Corroboration Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.

Section 10: Failure Severity

| Field | Value |
| --- | --- |
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |

Consequence chain: Without targeting data corroboration governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation — it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-570, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.

Cite this protocol
AgentGoverning. (2026). AG-570: Targeting Data Corroboration Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-570