AG-131

Source Conflict Escalation Governance

Data-Layer Governance & Evidence · ~16 min read · AGS v2.1 · April 2026
EU AI Act · GDPR · FCA · NIST · ISO 42001

2. Summary

Source Conflict Escalation Governance requires that when an AI agent receives contradictory data from two or more sources for the same fact, entity, or decision input, the system detects the conflict, prevents the agent from resolving it silently through its own reasoning, and escalates it to a defined resolution path — whether automated (applying a deterministic precedence policy) or human-mediated. Without this control, agents faced with conflicting inputs either pick one arbitrarily, blend the contradictions into an averaged output, or hallucinate a reconciliation that satisfies neither source. AG-131 mandates that conflicts are detected structurally, that resolution follows a governed precedence hierarchy informed by AG-128 source classifications, and that every conflict and its resolution are recorded for audit.

3. Example

Scenario A — Contradictory Credit Scores Drive Wrong Lending Decision: A lending agent consumes credit risk data from two sources: an internal credit model (score: 680, recommendation: approve with conditions) and a third-party credit bureau (score: 540, recommendation: decline). The two sources use different scoring methodologies and assessment dates. The agent, receiving no guidance on how to handle the conflict, averages the two scores to produce 610 and approves the loan with minimal conditions. The borrower defaults within 6 months, causing a £175,000 loss. The post-mortem reveals that the internal model was using stale income data from 18 months ago, and the credit bureau score reflected a recent county court judgement. Had the conflict been escalated, a human reviewer would have investigated the divergence and discovered the stale input.

What went wrong: The agent received contradictory data and resolved the conflict through averaging — a strategy that was not authorised by any governance policy. No conflict detection mechanism existed. No precedence hierarchy defined which source should take priority for credit decisions. The averaged result was worse than either source alone because it masked the signal (the low bureau score) that correctly predicted default.

Scenario B — Conflicting Patient Allergy Records: A clinical decision support agent retrieves a patient's allergy records from two systems: the hospital's electronic health record (EHR) system, which lists "penicillin — anaphylaxis," and a recently migrated pharmacy database, which lists "no known allergies." The pharmacy database was migrated from a legacy system and the allergy field was mapped incorrectly during migration, defaulting to "no known allergies" for 12,000 patient records. The agent, with no conflict detection, uses the pharmacy database (which it accessed most recently) and recommends a penicillin-class antibiotic. The prescribing physician, trusting the AI recommendation, does not independently verify allergies. The patient experiences an anaphylactic reaction requiring emergency intervention.

What went wrong: Two authoritative-seeming sources contradicted each other on a safety-critical data element. The agent resolved the conflict implicitly by using the most recently retrieved source — a strategy with no clinical basis. No conflict detection flagged the discrepancy. No escalation path existed to route allergy conflicts to a pharmacist or physician for manual verification.

Scenario C — Conflicting Regulatory Status Across Jurisdictions: A cross-border compliance agent checks whether a corporate entity is subject to sanctions. Source A (the agent's internal sanctions database, updated weekly) shows the entity as "clear." Source B (a real-time API to OFAC's SDN list) shows the entity as "designated" — the designation was published 3 days ago, after the internal database's last weekly update. The agent, receiving conflicting signals, follows a "majority rules" heuristic: since its internal database (which it considers more reliable because it is internal) says "clear" and only one external source says "designated," it processes the transaction. The transaction violates sanctions, exposing the organisation to enforcement action with potential penalties exceeding $10 million per violation.

What went wrong: The agent applied an implicit "majority rules" conflict resolution strategy that had no governance basis. For sanctions screening, the correct precedence rule is "most restrictive source wins" — any positive sanctions match should block the transaction regardless of what other sources say. No conflict escalation mechanism existed to route the disagreement to a compliance officer.

4. Requirement Statement

Scope: This dimension applies to every AI agent that consumes data from two or more sources for the same decision input — whether those sources are different databases, different APIs, different documents, different sections of the same document, or different time-stamped versions of the same source. The scope includes explicit conflicts (two sources provide different values for the same field) and semantic conflicts (two sources provide information that, while not directly contradictory, implies different conclusions when applied to the agent's decision). The scope covers runtime inference (data consumed during a request), retrieval-augmented generation (conflicting chunks retrieved from a vector store), and training data (contradictory labels or facts in training datasets). Agents that consume data from a single source with no replication or alternative are outside scope for that specific data element, but few production agents consume only single-source data.

4.1. A conforming system MUST implement conflict detection that identifies when two or more data sources provide contradictory values or semantically incompatible information for the same decision input.
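A minimal sketch of the detection requirement in 4.1, assuming the simplest case of structured fields: the system compares the values each source reports for the same decision input and treats any divergence as a detected conflict. Function and source names here are illustrative, not part of the protocol.

```python
# Illustrative conflict detection for a structured field: the input maps
# source identifier -> reported value, and any disagreement is a conflict.

def detect_conflicts(readings: dict) -> bool:
    """Return True if two or more sources report different values."""
    return len(set(readings.values())) > 1

# Scenario A in miniature: two credit sources contradict each other.
scores = {"internal_model": 680, "credit_bureau": 540}
print(detect_conflicts(scores))  # True — the contradiction must not be
                                 # absorbed silently by the agent
```

Semantic conflicts in unstructured content need richer comparison (entailment checks, normalised entity matching), but the governance point is the same: the comparison happens outside the agent's reasoning.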

4.2. A conforming system MUST prevent agents from silently resolving detected conflicts through unstructured reasoning, averaging, majority voting, or any other implicit resolution strategy.

4.3. A conforming system MUST define a conflict resolution precedence hierarchy for each data type and use case, specifying which source takes priority when conflicts arise, based on source classification (AG-128), freshness (AG-129), and domain-specific precedence rules.
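One way to express the precedence hierarchy in 4.3 is as declarative policy keyed by data type, so resolution is deterministic and auditable rather than left to the agent. The rule names, source names, and data types below are assumptions for illustration; the rules themselves echo the scenarios above (adverse credit signal wins, sanctions are most-restrictive, allergy conflicts always escalate).

```python
# Hypothetical precedence policy table. Each data type names a governed
# resolution rule; none of these identifiers are mandated by AG-131.
PRECEDENCE = {
    "credit_score":     {"rule": "most_adverse"},      # adverse signal wins
    "sanctions_status": {"rule": "most_restrictive"},  # any match blocks
    "patient_allergy":  {"rule": "escalate_always"},   # clinician reviews
    "market_price":     {"rule": "ordered",            # exchange-direct first
                         "order": ["exchange_direct", "aggregated_feed"]},
}

def resolve(data_type: str, readings: dict):
    """Apply the governed rule; return (value, resolution_method)."""
    policy = PRECEDENCE[data_type]
    rule = policy["rule"]
    if rule == "escalate_always":
        return None, "human_escalation"
    if rule == "ordered":
        for source in policy["order"]:
            if source in readings:
                return readings[source], f"precedence:{source}"
        return None, "human_escalation"
    if rule == "most_restrictive":
        return any(readings.values()), "most_restrictive"
    if rule == "most_adverse":
        return min(readings.values()), "most_adverse"  # lower score = worse
    return None, "human_escalation"   # no rule defined: 4.4 applies
```

The key design property is that every branch returns an explicit resolution method string, which feeds directly into the 4.5 audit record.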

4.4. A conforming system MUST escalate conflicts that cannot be resolved by the precedence hierarchy to human review within defined escalation timeframes.

4.5. A conforming system MUST log every detected conflict, including: the conflicting source identities, the conflicting values, the resolution method applied (precedence rule or human escalation), the resolution outcome, and the identity of the resolver (system rule ID or human reviewer).

4.6. A conforming system MUST block actuation on the conflicted data element until the conflict is resolved — the agent may continue processing non-conflicted elements but must not act on the contested input.

4.7. A conforming system SHOULD implement confidence-weighted conflict detection that accounts for the inherent uncertainty in different source types — a 2% discrepancy between two market data feeds may be within normal variance, while a binary contradiction on an allergy record is always a genuine conflict.
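The distinction 4.7 draws — normal variance versus genuine contradiction — can be sketched as a per-type tolerance band for numeric sources, with exact comparison for binary or categorical fields. The tolerance values and type names below are assumptions.

```python
# Hypothetical tolerance table: numeric discrepancies within the band are
# treated as source noise, not conflicts. Categorical fields get no band.
TOLERANCES = {"market_price": 0.02}  # assume 2% is normal feed variance

def is_genuine_conflict(data_type, a, b) -> bool:
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        tol = TOLERANCES.get(data_type, 0.0)
        baseline = max(abs(a), abs(b)) or 1.0
        return abs(a - b) / baseline > tol
    return a != b  # binary contradiction (e.g. an allergy record) always counts

print(is_genuine_conflict("market_price", 100.0, 101.0))     # within 2%: False
print(is_genuine_conflict("allergy", "penicillin", "none"))  # True
```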

4.8. A conforming system SHOULD aggregate conflict frequency metrics per source pair to identify systematic data quality issues — a source pair that generates conflicts on more than 5% of shared data elements indicates a systemic quality problem requiring investigation.
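The per-source-pair aggregation in 4.8 reduces to counting conflicts against total shared comparisons and flagging pairs above the threshold. A minimal sketch, with event shape and field names assumed:

```python
from collections import Counter

def conflict_rates(events):
    """events: iterable of (source_a, source_b, conflicted: bool)."""
    conflicts, totals = Counter(), Counter()
    for a, b, conflicted in events:
        pair = tuple(sorted((a, b)))   # unordered pair
        totals[pair] += 1
        conflicts[pair] += conflicted  # bool counts as 0/1
    return {pair: conflicts[pair] / totals[pair] for pair in totals}

def flag_systemic(rates, threshold=0.05):  # the 5% threshold from 4.8
    return [pair for pair, rate in rates.items() if rate > threshold]

# Scenario B's migrated pharmacy database in miniature: 8 conflicts in
# 100 shared elements is an 8% rate — a systemic quality problem.
events = [("ehr", "pharmacy", True)] * 8 + [("ehr", "pharmacy", False)] * 92
print(flag_systemic(conflict_rates(events)))  # [('ehr', 'pharmacy')]
```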

4.9. A conforming system MAY implement automated conflict resolution for defined safe categories where the precedence rule is deterministic and the consequences of incorrect resolution are bounded (e.g., choosing the more recent timestamp when two sources provide different "last updated" dates for the same record).
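The "last updated" example in 4.9 is deterministic enough to sketch directly: pick the more recent timestamp and record which source won. Source names are illustrative.

```python
from datetime import datetime

def resolve_last_updated(readings: dict) -> tuple:
    """readings: source -> ISO 8601 timestamp. Returns (value, winning source).

    A bounded-consequence "safe category" resolver: the precedence rule is
    deterministic (more recent wins) and an incorrect pick only affects a
    metadata field, not a decision input.
    """
    source, value = max(readings.items(),
                        key=lambda kv: datetime.fromisoformat(kv[1]))
    return value, source

value, source = resolve_last_updated(
    {"crm": "2026-03-01T10:00:00", "billing": "2026-04-15T09:30:00"})
print(value, source)  # 2026-04-15T09:30:00 billing
```

Even for safe categories, 4.5 still applies: the automated choice and the rule that made it must be logged.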

5. Rationale

Source Conflict Escalation Governance addresses a systemic weakness in AI agent deployments: agents are remarkably capable of producing plausible outputs from contradictory inputs. When a human analyst encounters two sources that disagree, the disagreement itself is a signal — it prompts investigation, verification, and judgement about which source to trust. When an AI agent encounters contradictory inputs, it typically produces a confident output that incorporates, averages, or selectively cites the inputs in ways that mask the underlying conflict.

This behaviour is not a bug — it is an inherent property of how large language models and similar systems process information. They optimise for coherent output, not for flagging incoherent input. An agent instructed to "assess the credit risk" will produce a risk assessment even when the input data is contradictory — it will not spontaneously report "I cannot assess risk because my input data contradicts itself." This means conflict detection must be structural, not left to the agent's reasoning.

The governance requirement is not that all conflicts must be resolved before any action is taken — that would be operationally impractical. The requirement is that conflicts must be (1) detected, (2) resolved through a governed process rather than implicit agent reasoning, and (3) recorded for audit. The governed process may be fully automated (a precedence hierarchy that deterministically selects Source A over Source B based on their AG-128 classifications) or human-mediated (escalation to a domain expert) — but it must be explicit, documented, and auditable.

The precedence hierarchy depends on AG-128 source classifications. An "authoritative" source should generally take precedence over a "supplementary" source. A fresher source (AG-129) should generally take precedence over a staler source for time-sensitive data. But domain-specific rules may override these defaults — in sanctions screening, the most restrictive source wins regardless of trust tier; in clinical decisions, the source closest to the patient record of truth wins regardless of freshness.

6. Implementation Guidance

The core implementation artefact is a conflict detection and resolution engine — a component that sits between data retrieval and agent reasoning, comparing values from multiple sources and applying resolution rules before the agent processes the data.
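A minimal end-to-end sketch of that engine, under the assumption of structured fields: for each data element it detects disagreement, applies a governed precedence rule when one is defined, and otherwise escalates and blocks actuation on that element (4.4, 4.6). All names are illustrative.

```python
def process_element(field_name, readings, precedence_rules):
    """Sits between retrieval and agent reasoning; readings maps
    source identifier -> value for one decision input."""
    values = set(readings.values())
    if len(values) <= 1:                       # sources agree: pass through
        return {"value": next(iter(values)), "status": "clean"}
    rule = precedence_rules.get(field_name)
    if rule is None:                           # no governed rule: escalate,
        return {"value": None,                 # block actuation on this
                "status": "escalated",         # element only (4.6)
                "block_actuation": True}
    return {"value": rule(readings),           # governed resolution (4.3)
            "status": "resolved",
            "rule": rule.__name__}             # rule ID for the audit log

def most_adverse(readings):                    # illustrative credit rule:
    return min(readings.values())              # lower score = worse

result = process_element("credit_score",
                         {"internal_model": 680, "credit_bureau": 540},
                         {"credit_score": most_adverse})
print(result)  # {'value': 540, 'status': 'resolved', 'rule': 'most_adverse'}
```

The decisive property is placement: because the engine runs before the agent sees the data, the agent never has the opportunity to average, vote, or reason its way past a contradiction.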

Recommended patterns:

Anti-patterns to avoid:

Industry Considerations

Financial Services. Conflict resolution for market data should follow established best-execution standards: use the exchange-direct feed over aggregated feeds, use real-time over delayed, use the primary listing exchange over secondary exchanges. For credit data, regulatory guidance typically requires that adverse information takes precedence — a negative signal should not be averaged away by a positive signal from another source. For sanctions screening, the resolution rule is absolute: any positive match from any source blocks the transaction.

Healthcare. Allergy and medication interaction conflicts are safety-critical and must always escalate to a clinician. The EHR should be the source of truth for patient clinical data, with any conflict from pharmacy, laboratory, or external systems flagged for clinician review. False negatives (missing an allergy conflict) are more dangerous than false positives (escalating a non-issue).

Legal and Compliance. Regulatory status conflicts (e.g., is this entity licensed, sanctioned, or under investigation?) should resolve to the most restrictive determination pending human review. An entity that appears "clear" in one source and "sanctioned" in another must be treated as sanctioned until a compliance officer investigates.
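The "most restrictive determination" rule can be sketched as a severity ordering over status values, taking the worst reported status pending human review. The ordering below is an assumption for illustration, not a legal taxonomy.

```python
# Hypothetical severity ranking: higher number = more restrictive outcome.
SEVERITY = {"clear": 0, "under_investigation": 1, "sanctioned": 2}

def most_restrictive(statuses) -> str:
    """Return the worst reported status; the entity is treated as such
    until a compliance officer resolves the conflict."""
    return max(statuses, key=lambda s: SEVERITY[s])

# Scenario C in miniature: one source says clear, one says sanctioned.
print(most_restrictive(["clear", "sanctioned"]))  # sanctioned
```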

Maturity Model

Basic Implementation — The organisation has defined precedence hierarchies for high-risk data types (financial values, regulatory status, patient safety data). Conflict detection is implemented for structured data fields where multiple sources are queried. Resolution follows the precedence hierarchy or escalates to human review. Conflicts are logged. This level meets the minimum mandatory requirements but does not address semantic conflicts in unstructured data, does not monitor systematic conflict patterns, and relies on manual escalation management.

Intermediate Implementation — Conflict detection covers both structured fields and semantic conflicts in retrieved content (RAG chunks, document extractions). A tiered resolution cascade handles automatic resolution, confidence-threshold escalation, and mandatory human escalation. Conflict audit records are immutable and queryable. Systematic conflict monitoring identifies high-conflict source pairs for investigation. Actuation is structurally blocked on conflicted data elements until resolution completes.

Advanced Implementation — All intermediate capabilities plus: machine learning-assisted semantic conflict detection identifies implicit contradictions that simple comparison would miss. Conflict resolution precedence hierarchies are dynamically adjusted based on historical resolution outcomes and source reliability tracking. Real-time conflict dashboards feed into source quality governance under AG-128. Independent adversarial testing verifies that no mechanism can bypass conflict detection or force resolution without proper escalation.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Testing AG-131 compliance requires verifying that conflicts are detected, resolution follows governed rules, and silent resolution by the agent is prevented.

Test 8.1: Structured Data Conflict Detection

Test 8.2: Silent Resolution Prevention

Test 8.3: Precedence Hierarchy Application

Test 8.4: Human Escalation Path

Test 8.5: Actuation Block on Unresolved Conflicts

Test 8.6: Conflict Audit Record Completeness

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 10 (Data and Data Governance) | Supports compliance
EU AI Act | Article 14 (Human Oversight) | Supports compliance
GDPR | Article 5(1)(d) (Accuracy) | Supports compliance
MiFID II | Article 27 (Best Execution) | Supports compliance
FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance
NIST AI RMF | MAP 2.3, MANAGE 2.2, MANAGE 3.1 | Supports compliance
ISO 42001 | Clause 8.2 (AI Risk Assessment), Clause 8.4 (AI System Development) | Supports compliance
DORA | Article 9 (ICT Risk Management Framework) | Supports compliance

EU AI Act — Article 10 (Data and Data Governance)

Article 10 requires that data governance practices address the identification of data gaps and shortcomings. Conflicting data from multiple sources is a data shortcoming that must be identified and addressed. AG-131's conflict detection directly implements this requirement by ensuring that contradictions in input data are detected rather than silently absorbed by the AI system.

EU AI Act — Article 14 (Human Oversight)

Article 14 requires that high-risk AI systems are designed to allow human oversight. Source conflict escalation is a direct implementation of human oversight for data quality — when the system cannot deterministically resolve a contradiction, a human reviewer is brought into the decision process. This ensures that AI systems do not autonomously resolve ambiguities that require human judgement.

MiFID II — Article 27 (Best Execution)

When an investment agent receives conflicting price data from multiple venues, the resolution directly affects best execution. Article 27 requires firms to take sufficient steps to obtain the best possible result for clients. AG-131's conflict resolution precedence hierarchy, when aligned with best execution policies (e.g., prioritising the primary listing exchange's price), ensures that conflicting market data is resolved in the client's interest rather than by arbitrary agent reasoning.

GDPR — Article 5(1)(d) (Accuracy)

The accuracy principle requires controllers to take "every reasonable step" to ensure that inaccurate personal data is rectified or erased without delay. When two sources provide contradictory personal data, at least one source is inaccurate. AG-131's conflict detection identifies this inaccuracy, and the resolution process ensures that the accurate value is used for processing — fulfilling the "every reasonable step" standard.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Decision-specific, but systemic where conflicts arise from source-level data quality failures affecting multiple records

Consequence chain: When an agent silently resolves a data conflict, the resolution is ungoverned — it may be correct, partially correct, or catastrophically wrong, and no audit trail exists to determine which. In financial services, a silently averaged credit score masked a genuine default risk signal, resulting in a £175,000 loss on a single loan (Scenario A); scaled across a lending portfolio, the systemic impact of ungoverned conflict resolution could reach millions. In healthcare, a silently resolved allergy record conflict led to a near-fatal anaphylactic event (Scenario B). In compliance, a silently resolved sanctions conflict resulted in a prohibited transaction with potential penalties exceeding $10 million (Scenario C). The common thread is that the conflict itself was a valuable signal — it indicated that something was wrong with the data — and silent resolution destroyed that signal.

Cross-references: AG-128 (Data Source Classification Governance) provides the trust tier metadata that informs conflict resolution precedence; AG-129 (Stale Data Actuation Prevention) provides freshness metadata for time-based precedence; AG-132 (Vector Store and RAG Governance) must implement conflict detection for contradictory retrieved chunks; AG-133 (Source Record Lineage Governance) enables tracing which source contributed to a conflicted decision.

Cite this protocol
AgentGoverning. (2026). AG-131: Source Conflict Escalation Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-131