AG-498

Upstream Policy Compatibility Governance

Third-Party, Supply Chain & Open Source · AGS v2.1 · April 2026
EU AI Act · GDPR · SOX · FCA · NIST · ISO 42001

2. Summary

Upstream Policy Compatibility Governance requires that organisations systematically evaluate whether the policies, terms, and conditions imposed by upstream component providers — including licence terms, acceptable use policies, data handling requirements, telemetry obligations, export restrictions, and usage constraints — are compatible with the organisation's own governance requirements, regulatory obligations, and operational mandates before integrating those components into AI agent systems. Third-party components do not arrive as neutral building blocks; they carry policy obligations that flow downstream to every system that incorporates them. When an upstream policy conflicts with an internal governance requirement or a regulatory mandate, the conflict creates a compliance impossibility — the organisation cannot simultaneously satisfy both the upstream obligation and its own requirement, forcing either a governance violation, a contractual breach, or an emergency component replacement. This dimension mandates proactive compatibility analysis, continuous monitoring for upstream policy changes, and formal conflict resolution procedures.

3. Example

Scenario A — Upstream Telemetry Policy Conflicts with Data Sovereignty Requirements: A public sector AI agent deployed by a European government agency for citizen benefit eligibility assessment uses a commercial inference optimisation library. At integration time, the engineering team evaluates the library's technical capabilities and licence cost but does not review its data handling policy in detail. The library's terms of service include a clause permitting the vendor to collect anonymised usage telemetry, including input tensor shapes, inference latency distributions, and model architecture metadata. The telemetry is transmitted to the vendor's servers located in a non-EU jurisdiction. The government agency operates under strict data sovereignty requirements mandating that all data generated during citizen interactions — including metadata about the processing of citizen data — must remain within EU borders. Nine months after deployment, a data protection audit identifies the telemetry transmission. The agency discovers that the library has been transmitting processing metadata for 340,000 citizen benefit assessments to servers outside the EU. The data protection authority opens an investigation into potential GDPR Article 44 violations for international data transfers without adequate safeguards. The library must be replaced immediately, but no drop-in replacement exists — the migration requires 14 weeks of re-engineering.

What went wrong: The upstream component's data handling policy was not evaluated for compatibility with the agency's data sovereignty requirements before integration. The telemetry clause was present in the terms of service but was not identified as a conflict because no policy compatibility review process existed. The technical evaluation focused on performance and functionality, treating the policy dimension as an afterthought. Consequence: 340,000 citizen records potentially exposed to non-EU jurisdiction, GDPR investigation with potential fine of up to 4% of budget equivalent, 14-week emergency migration, suspension of new benefit assessments during migration period affecting an estimated 12,000 citizens.

Scenario B — Acceptable Use Policy Prohibits Financial Decision-Making: A financial-value agent performing automated credit scoring integrates a machine learning framework whose acceptable use policy (AUP) was updated 6 months after initial integration. The updated AUP now states: "This software may not be used in systems that make or materially contribute to decisions about credit, employment, housing, insurance, or other consequential decisions about individuals." The financial institution's agent uses the framework as the core inference engine for credit scoring — a direct violation of the updated AUP. The conflict is not detected for 8 months because no monitoring process exists for upstream policy changes. When the conflict is discovered during a vendor compliance review, the institution faces a dilemma: continuing to use the framework violates the AUP (and potentially the licence, rendering use unauthorised), while replacing the framework requires rebuilding the inference pipeline — a 20-week project. During the 20-week migration, every credit decision made by the agent is potentially tainted by the unauthorised use of the framework. The vendor, upon learning of the violation, terminates the licence and demands removal within 30 days, compressing the 20-week migration into a 4-week emergency.

What went wrong: No monitoring process existed for upstream policy changes post-integration. The initial AUP was compatible, but the vendor unilaterally updated it to restrict consequential decision-making use cases. The organisation had no mechanism to detect the policy change, evaluate its impact, or trigger a migration before the conflict became acute. Consequence: 8 months of unauthorised software use, licence termination with 30-day removal demand, £3.2 million emergency migration cost, 4-week period of degraded credit scoring operations affecting approximately 85,000 applications, reputational risk from vendor publicly disclosing the AUP violation.

Scenario C — Open-Source Contribution Policy Conflicts with Proprietary Model Protection: A crypto/Web3 agent uses an open-source model serving framework whose contributor licence agreement (CLA) includes a clause requiring that any modifications to the framework, including configuration-as-code and integration adapters, must be contributed back to the upstream project under the same open-source licence. The organisation's AI team has written custom integration adapters that expose details of the proprietary trading model's architecture — specifically, the batching strategy, latency optimisation parameters, and feature preprocessing pipeline. Under the CLA's contribution-back requirement, these adapters must be open-sourced. Publishing them would reveal competitive intelligence about the trading model's architecture. The conflict is discovered when a developer attempts to upstream a bug fix and the CLA review process flags the contribution-back obligation. Investigation reveals that 14 custom adapters containing proprietary trading architecture details are subject to the contribution-back requirement.

What went wrong: The CLA's contribution-back clause was not evaluated for compatibility with the organisation's intellectual property protection requirements before integration. The engineering team treated the framework as a standard open-source dependency without analysing the policy implications of creating custom modifications. Consequence: 14 proprietary adapters potentially subject to forced disclosure, £890,000 legal review and remediation cost, framework replacement required at £1.4 million, 6-month delay in trading model deployment while the serving infrastructure is rebuilt on a policy-compatible framework.

4. Requirement Statement

Scope: This dimension applies to every AI agent deployment that incorporates third-party components carrying policy obligations — including but not limited to: software licence terms (commercial and open-source), acceptable use policies, data handling and privacy policies, telemetry and data collection terms, export control restrictions, contributor licence agreements, service-level agreements with policy provisions, terms of service, and any other binding or quasi-binding policy instrument that constrains how the component may be used, modified, distributed, or integrated. The scope covers policies at the time of initial integration and changes to those policies throughout the component's lifecycle. Components with no discernible policy obligations (e.g., public domain code with no licence, no terms, and no conditions) are technically exempt but should be documented as policy-evaluated to provide evidence of due diligence. The scope explicitly includes cloud platform policies, API terms of service, and marketplace distribution policies that may constrain agent deployment, in addition to traditional software licence terms.

4.1. A conforming system MUST perform a formal policy compatibility assessment for every third-party component before integration into an AI agent system, evaluating the component's policy obligations against the organisation's internal governance requirements, applicable regulatory mandates, and contractual obligations to customers and partners.

4.2. A conforming system MUST maintain a policy obligation register that documents, for every integrated third-party component, the specific policy obligations imposed by the upstream provider, the internal governance requirements and regulatory mandates against which each obligation was evaluated, the compatibility determination (compatible, conditionally compatible with stated conditions, or incompatible), and the date and author of the assessment.
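The register in 4.2 could be modelled as a structured record. The following sketch is illustrative only — AG-498 does not mandate any schema, and the class and field names (`RegisterEntry`, `PolicyObligation`, `is_deployable`) are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Determination(Enum):
    """The three compatibility outcomes named in requirement 4.2."""
    COMPATIBLE = "compatible"
    CONDITIONAL = "conditionally compatible"
    INCOMPATIBLE = "incompatible"


@dataclass
class PolicyObligation:
    """One obligation imposed by the upstream provider (e.g. a telemetry clause)."""
    description: str
    source_document: str          # e.g. "Terms of Service, telemetry clause"
    evaluated_against: list[str]  # internal requirements / regulatory mandates
    determination: Determination
    conditions: list[str] = field(default_factory=list)  # required if CONDITIONAL


@dataclass
class RegisterEntry:
    """Per-component register row: obligations plus assessment date and author."""
    component: str
    version: str
    obligations: list[PolicyObligation]
    assessed_on: date
    assessed_by: str

    def is_deployable(self) -> bool:
        """Deployable only if no obligation was determined incompatible."""
        return all(o.determination is not Determination.INCOMPATIBLE
                   for o in self.obligations)
```

Recording the assessment date and author per entry gives auditors the evidence trail the requirement asks for, and `is_deployable` gives downstream tooling a single predicate to gate on.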

4.3. A conforming system MUST implement continuous monitoring for upstream policy changes — including licence term modifications, AUP updates, data handling policy revisions, and new restriction announcements — and trigger a re-assessment of policy compatibility within 30 days of detecting any change.

4.4. A conforming system MUST define and follow a formal conflict resolution procedure for cases where an upstream policy obligation is determined to be incompatible with an internal governance requirement or regulatory mandate, with resolution options including: component replacement, policy negotiation with the upstream provider, architectural isolation of the conflicting component, or formal risk acceptance with documented compensating controls.

4.5. A conforming system MUST block the integration of any component whose policy obligations are determined to be incompatible with applicable regulatory mandates, unless a formal risk-acceptance exception is approved by the compliance function and the designated risk owner, with the exception subject to time-bounded renewal no longer than 90 days and an active remediation plan.

4.6. A conforming system MUST evaluate policy compatibility across all jurisdictions in which the AI agent operates, identifying jurisdiction-specific conflicts where a policy obligation is compatible in one jurisdiction but incompatible in another.
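A per-component jurisdiction matrix makes the cross-jurisdiction check in 4.6 mechanical. A minimal sketch, assuming the matrix maps jurisdiction codes to compatibility determinations (the fail-closed treatment of unassessed jurisdictions is a design choice here, not a mandate):

```python
def jurisdiction_conflicts(matrix: dict[str, str],
                           operating_jurisdictions: set[str]) -> set[str]:
    """Return the operating jurisdictions where a component is not usable.

    `matrix` maps jurisdiction code -> determination for one component,
    e.g. {"EU": "incompatible", "UK": "compatible"}. A jurisdiction with
    no assessment is treated as a conflict until assessed (fail closed).
    """
    return {j for j in operating_jurisdictions
            if matrix.get(j, "unassessed") != "compatible"}
```

A non-empty result identifies exactly the jurisdiction-specific conflicts the requirement asks the organisation to surface — compatible in one jurisdiction, incompatible (or unassessed) in another.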

4.7. A conforming system SHOULD implement automated policy change detection that monitors upstream provider websites, licence repositories, and policy announcement channels for changes to the policies governing integrated components.

4.8. A conforming system SHOULD maintain a pre-assessed catalogue of approved components whose policy obligations have been determined to be compatible with the organisation's standard governance requirements, enabling rapid integration without per-instance full assessment for catalogued components.
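The catalogue fast path in 4.8 reduces to a routing decision at component intake. A sketch, assuming the catalogue records pre-assessed versions per component (versions matter because licence and policy terms can change between releases):

```python
def assessment_route(component: str, version: str,
                     catalogue: dict[str, set[str]]) -> str:
    """Decide whether a component takes the catalogue fast path (4.8)
    or needs a full per-instance compatibility assessment (4.1)."""
    if version in catalogue.get(component, set()):
        return "fast-path"       # pre-assessed, policy-compatible version
    return "full-assessment"     # new component, or an un-assessed version
```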

4.9. A conforming system MAY implement policy compatibility simulation — the ability to model the impact of a proposed upstream policy change on all integrated components and affected agents before the change takes effect, enabling pre-emptive migration planning.
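The simulation capability in 4.9 amounts to computing the blast radius of a proposed policy change across the component and agent inventories. A minimal sketch under stated assumptions — the two input mappings and the function name are illustrative, and a production version would draw them from the SBOM and agent registry:

```python
def simulate_policy_change(
        changed_policy: str,
        component_policies: dict[str, set[str]],
        agent_components: dict[str, set[str]]) -> dict[str, set[str]]:
    """Model which components and agents a proposed policy change would hit.

    `component_policies` maps component -> policy documents governing it;
    `agent_components` maps agent -> components it integrates.
    """
    affected_components = {c for c, policies in component_policies.items()
                           if changed_policy in policies}
    affected_agents = {a for a, comps in agent_components.items()
                       if comps & affected_components}
    return {"components": affected_components, "agents": affected_agents}
```

Run before a vendor's announced change takes effect, the output is exactly the input needed for pre-emptive migration planning.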

5. Rationale

Software components are not policy-neutral. Every library, framework, service, and platform carries a set of policy obligations that bind the consumer — the organisation that integrates the component into its systems. These obligations may restrict how the component can be used (acceptable use policies), what data the component collects and where it sends it (data handling policies), who can access systems incorporating the component (export restrictions), what modifications must be disclosed (contribution-back requirements), and under what conditions use can be terminated (licence revocation clauses). When an organisation integrates a component without understanding these obligations, it is making policy commitments it may not be able to keep.

The risk is particularly acute for AI agent systems because of the regulatory density surrounding AI. AI agents operating in financial services, healthcare, public sector, or safety-critical domains are subject to regulatory requirements that constrain data handling, decision-making transparency, jurisdictional data residency, and algorithmic accountability. An upstream policy that conflicts with any of these regulatory requirements creates a compliance impossibility — the organisation literally cannot comply with both the upstream obligation and the regulatory mandate simultaneously. This is not a theoretical risk: Scenario A illustrates a real pattern where a vendor's data collection policy conflicts with data sovereignty requirements, and Scenario B illustrates a real pattern where an AUP is updated to prohibit the very use case the organisation depends on.

The challenge is compounded by three factors. First, upstream policies are unilaterally modifiable. Unlike negotiated contracts, many upstream policies — particularly for open-source components and cloud services — can be changed by the provider at any time, often with minimal notice. The organisation integrates a component under one set of policy conditions and discovers months later that the conditions have changed. Without monitoring, the changed conditions may not be detected until an audit, an incident, or a vendor enforcement action. Second, policy obligations are transitive. If Component A is subject to a contribution-back requirement, and Component B is a modification of Component A, Component B is also subject to the requirement — even though the organisation's policy assessment focused on Component A. Transitive policy obligations flow through the dependency tree, creating hidden compliance exposure. Third, policy language is often ambiguous. Terms like "anonymised data," "usage telemetry," "material contribution," and "derivative work" have different interpretations in different legal jurisdictions. A policy clause that is benign under one interpretation may be restrictive under another.
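The transitivity point above — obligations flowing through the dependency tree — is straightforward to operationalise as a graph traversal. A sketch, assuming a mapping from each component to the upstream components it modifies or embeds (the mappings and function name are illustrative):

```python
def inherited_obligations(
        component: str,
        derived_from: dict[str, list[str]],
        direct_obligations: dict[str, set[str]]) -> set[str]:
    """Collect the policy obligations a component inherits transitively.

    `derived_from` maps a component to the upstream components it modifies
    or embeds; obligations flow down every such edge. Cycle-safe via `seen`.
    """
    seen: set[str] = set()
    stack = [component]
    obligations: set[str] = set()
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        obligations |= direct_obligations.get(current, set())
        stack.extend(derived_from.get(current, []))
    return obligations
```

In Scenario C's terms: a custom adapter derived from a CLA-governed framework inherits the framework's contribution-back obligation even though the adapter itself was never directly assessed.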

The regulatory landscape reinforces the need for upstream policy compatibility governance. The EU AI Act requires that providers of high-risk AI systems maintain documentation covering the supply chain, including the components used and their conditions of use. If an upstream policy contradicts the EU AI Act's requirements — for example, a component whose data handling policy is incompatible with GDPR — the provider cannot demonstrate compliant use of that component. DORA requires financial entities to assess and manage third-party ICT risks, explicitly including the contractual and policy dimensions of third-party relationships. SOX requires that internal controls are not undermined by third-party dependencies with conflicting obligations. Across regulatory frameworks, the consistent message is that organisations are responsible for ensuring that their third-party dependencies do not create compliance conflicts.

The cost of reactive discovery — finding a policy conflict after integration, after deployment, after months of operation — is orders of magnitude higher than proactive assessment. Scenario B illustrates a £3.2 million emergency migration that a pre-integration policy review would have prevented entirely (by selecting a framework without AUP restrictions on consequential decision-making). Proactive policy compatibility assessment is not an overhead — it is a risk-reduction investment with clear and quantifiable return.

6. Implementation Guidance

Upstream Policy Compatibility Governance requires a structured process that evaluates policy obligations before integration, monitors for changes after integration, and resolves conflicts promptly when they arise. The core principle is that policy compatibility is a first-class integration criterion, evaluated with the same rigour as technical compatibility, performance, and security.

Recommended patterns:

- Treat policy review as a mandatory gate in component intake, with sign-off recorded in the policy obligation register before any integration work begins.
- Snapshot the exact text of every governing policy instrument (licence, AUP, data handling terms) at assessment time, so that later changes can be detected by comparison.
- Subscribe to vendor policy announcement channels and licence repositories, and route detected changes into the 30-day re-assessment workflow.
- Route components in the critical path of regulated functions through legal review, not engineering review alone.
- Maintain the approved component catalogue so that routine integrations take a pre-assessed fast path rather than repeating full assessments.

Anti-patterns to avoid:

- Evaluating components on technical capability and licence cost alone, treating policy terms as an afterthought (Scenario A).
- Assuming that a policy assessed at integration time remains valid indefinitely, with no monitoring for unilateral upstream changes (Scenario B).
- Treating CLAs and contribution-back clauses as boilerplate when building custom modifications on open-source infrastructure (Scenario C).
- Relying on engineers' informal reading of ambiguous policy language ("anonymised data", "derivative work") without legal interpretation in each relevant jurisdiction.

Industry Considerations

Financial Services. Financial institutions face a dense intersection of upstream policies and regulatory requirements. Model serving frameworks with AUPs prohibiting consequential decision-making (Scenario B) are a direct threat to credit scoring, fraud detection, and trading agents. Data handling policies that permit cross-border telemetry transmission conflict with data localisation requirements imposed by multiple financial regulators. Financial institutions should require legal review of all upstream policies for components in the critical path of financial decision-making and should maintain pre-negotiated enterprise agreements that override standard AUPs where possible.

Crypto/Web3. Crypto and Web3 agents face unique policy compatibility challenges: some upstream components have acceptable use policies that restrict cryptocurrency-related use cases, some jurisdictions restrict the use of certain cryptographic libraries, and the contribution-back obligations of certain open-source licences may conflict with the need to protect proprietary trading strategies (Scenario C). Organisations should pay particular attention to the IP implications of CLA terms when building custom modifications to open-source infrastructure.

Public Sector. Government agencies face sovereign-specific policy requirements that may conflict with the global-by-default policies of many upstream providers. Data sovereignty, security classification handling, procurement regulations (requiring domestic sourcing preferences), and accessibility mandates create a complex policy landscape. Agencies should maintain jurisdiction-specific policy compatibility matrices and should favour components with government-specific licence tiers that accommodate sovereign requirements.

Safety-Critical / CPS. Safety-critical deployments face the additional constraint that upstream policy changes — particularly termination or revocation clauses — could force the removal of a component from a system that cannot tolerate downtime. A vendor who revokes a licence gives the organisation a choice between operating without a valid licence and shutting down a safety-critical system. AG-496 (Escrow and Source-Access Governance) provides a partial mitigation, but policy compatibility assessment should specifically flag termination clauses that could create safety-critical availability conflicts.

Maturity Model

Basic Implementation — The organisation performs a policy compatibility assessment for every new third-party component before integration, using a standardised checklist. The policy obligation register documents the key obligations for each integrated component. Policy conflicts identified at integration time are resolved before the component is deployed. The assessment covers licence terms, acceptable use policies, and data handling provisions. Monitoring for policy changes is manual (periodic review of vendor policy pages).

Intermediate Implementation — All basic capabilities plus: automated policy change detection monitors upstream policy sources and generates alerts when changes are detected. Re-assessment is completed within 30 days of a detected change. The approved component catalogue enables rapid integration of pre-cleared components. Jurisdiction-specific policy matrices are maintained for cross-border agents. The policy obligation register includes all policy dimensions (licence, AUP, data handling, telemetry, export, CLA, termination). Legal review is mandatory for components in the critical path of regulated functions.

Advanced Implementation — All intermediate capabilities plus: policy compatibility simulation models the impact of proposed upstream policy changes across all integrated components and affected agents. The approved component catalogue is integrated with the SBOM and the component lifecycle inventory (AG-497), enabling unified management of technical, security, lifecycle, and policy dimensions. Transitive policy obligation tracking identifies policy constraints inherited through the dependency tree. The organisation can demonstrate that every component in every agent's dependency tree has a current, valid policy compatibility assessment with no unresolved conflicts.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Pre-Integration Policy Compatibility Assessment Enforcement

Test 8.2: Policy Obligation Register Completeness

Test 8.3: Upstream Policy Change Detection and Re-Assessment

Test 8.4: Incompatible Component Integration Block

Test 8.5: Cross-Jurisdiction Policy Compatibility

Test 8.6: Conflict Resolution Procedure Execution

Test 8.7: Risk-Acceptance Exception Governance for Policy Conflicts

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU AI Act | Article 17 (Quality Management System) | Direct requirement
SOX | Section 404 (Internal Controls Over Financial Reporting) | Supports compliance
FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance
NIST AI RMF | GOVERN 1.5, MAP 3.4 | Supports compliance
ISO 42001 | Clause 8.4 (Operation of AI Systems) | Supports compliance
DORA | Article 28 (ICT Third-Party Risk) | Direct requirement

EU AI Act — Article 17 (Quality Management System)

Article 17 requires providers of high-risk AI systems to implement a quality management system that includes procedures for the management of the AI system's supply chain, including documentation of the components used and their conditions of use. "Conditions of use" directly encompasses the policy obligations imposed by upstream providers. If an upstream component's conditions of use conflict with the EU AI Act's own requirements — for example, a data handling policy incompatible with the Act's transparency obligations — the provider cannot demonstrate that its quality management system ensures compliant use of the component. AG-498 provides the systematic process for evaluating and managing these conditions of use throughout the system's lifecycle.

DORA — Article 28 (ICT Third-Party Risk)

DORA Article 28 imposes specific requirements on financial entities regarding their management of ICT third-party risk, including requirements to assess the terms and conditions of third-party arrangements. The article requires financial entities to ensure that contractual arrangements with ICT third-party service providers include provisions on data access, security, and termination rights. AG-498 extends this principle beyond formal contractual arrangements to cover all policy instruments (AUPs, licence terms, data handling policies) that create binding obligations, ensuring that financial entities assess policy compatibility not only for contracted vendors but for all upstream components including open-source libraries and cloud platform services that may not have formal contracts but still impose policy obligations.

SOX — Section 404 (Internal Controls Over Financial Reporting)

For AI agents involved in financial reporting, an upstream policy conflict that forces emergency component replacement represents a control disruption. If a vendor revokes a licence or updates an AUP to prohibit financial use cases, the agent's continued operation is jeopardised. SOX auditors assess whether the organisation has identified and managed risks to the continuity and integrity of its internal controls. Unmanaged upstream policy risks — policies that could force emergency changes to financial processing systems — represent a control environment weakness. AG-498 ensures that these risks are identified, assessed, and managed before they materialise.

FCA SYSC — 6.1.1R (Systems and Controls)

The FCA expects firms to manage risks arising from third-party dependencies, including the risk that third-party policy changes could disrupt regulated activities. A firm that has not assessed whether its component providers' policies are compatible with its regulatory obligations has not established adequate systems and controls for managing third-party risk. AG-498 provides the specific governance framework for this assessment.

NIST AI RMF — GOVERN 1.5, MAP 3.4

GOVERN 1.5 addresses ongoing monitoring and periodic review of the AI risk management process, which includes monitoring third-party component risks. MAP 3.4 addresses the identification and documentation of risks arising from third-party data and components. Upstream policy incompatibilities are a specific third-party risk that must be identified, documented, and managed within the NIST AI RMF structure. AG-498 provides the operational controls for this risk category.

ISO 42001 — Clause 8.4 (Operation of AI Systems)

ISO 42001 requires organisations to manage the operational aspects of AI systems, including relationships with suppliers and third parties. Policy compatibility is a dimension of supplier relationship management — the organisation must ensure that supplier-imposed conditions do not conflict with its own AI management system requirements. Organisations pursuing ISO 42001 certification must demonstrate that upstream policy obligations have been systematically evaluated and managed.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Component-scoped initially, but can escalate to agent-level or organisation-level if the incompatible component is widely used or if the conflict triggers licence revocation or regulatory enforcement

Consequence chain: An upstream policy obligation conflicts with an internal governance requirement or regulatory mandate, and the conflict is not detected or resolved. The immediate state is latent non-compliance — the organisation is simultaneously bound by two incompatible obligations and is necessarily violating one. If the violated obligation is the upstream policy (e.g., operating in breach of an AUP), the consequence trajectory includes licence revocation, vendor enforcement action, and loss of the right to use the component — potentially forcing emergency migration with all the costs and risks illustrated in Scenario B. If the violated obligation is the regulatory mandate (e.g., transmitting data to a non-EU jurisdiction in violation of GDPR), the consequence trajectory includes regulatory investigation, enforcement action, fines (up to 4% of annual turnover under GDPR), and potential suspension of the agent's operations pending remediation. In either case, the downstream consequences compound: emergency migration disrupts operations, introduces regression risk, and consumes engineering capacity that would otherwise be allocated to risk-reducing activities. The reputational impact extends beyond the immediate incident — regulators and auditors who discover unmanaged policy conflicts will increase scrutiny across the organisation's entire governance programme, questioning whether other third-party risks are similarly unmanaged. For cross-border agents, a policy conflict in one jurisdiction can trigger enforcement actions in multiple jurisdictions simultaneously, as a data handling policy violation may be relevant under GDPR, local data protection laws, and sector-specific regulations all at once. The cost of proactive policy compatibility assessment is a fraction of the cost of reactive conflict resolution — prevention is not merely preferable but economically imperative.

Cross-references: AG-007 (Governance Configuration Control) provides the configuration management foundation that AG-498 extends to upstream policy configurations. AG-489 (Open-Source Licence Policy Binding Governance) addresses the specific licence dimension of upstream policy, while AG-498 covers the full spectrum of policy obligations. AG-490 (Maintainer Trust and Project Health Governance) provides upstream health signals that may predict policy instability. AG-491 (Dependency Provenance and SBOM Attestation Governance) provides the component inventory that AG-498's policy obligation register maps against. AG-495 (Procurement Security Requirement Governance) addresses security requirements in procurement, complementing AG-498's broader policy compatibility scope. AG-497 (End-of-Support Migration Governance) addresses the migration process that may be triggered when a policy conflict requires component replacement. AG-020 (Regulatory Change Detection) detects changes in the regulatory mandates against which upstream policies are evaluated. AG-048 (Cross-Border Data Sovereignty Governance) addresses the data sovereignty requirements that frequently conflict with upstream data handling policies.

Cite this protocol
AgentGoverning. (2026). AG-498: Upstream Policy Compatibility Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-498