AI Component Bill of Materials Governance requires that every AI agent deployment maintains a comprehensive, machine-readable inventory of all software components, models, adapters, plugins, data sources, and external dependencies that constitute or influence the agent's behaviour. This inventory — the AI Bill of Materials (AI-BOM) — must be versioned, cryptographically signed, and updated whenever any component changes. Without a complete and accurate component inventory, an organisation cannot assess supply-chain risk, respond to vulnerability disclosures, verify regulatory compliance, or determine whether a deployed agent matches its approved configuration. The AI-BOM is the foundational artefact for all other supply-chain governance dimensions: you cannot govern what you have not inventoried.
Scenario A — Undetected Vulnerable Dependency in Production Agent: An organisation deploys an enterprise workflow agent built on an open-source orchestration framework (version 3.2.1) that incorporates 147 transitive dependencies. Six months after deployment, a critical remote code execution vulnerability (CVE-2025-41923, CVSS 9.8) is disclosed in one of the transitive dependencies — a serialisation library three levels deep in the dependency tree. The security team receives the advisory and asks: "Are any of our agents affected?" Without an AI-BOM, the answer requires manual inspection of every agent's build environment, which takes the team 11 days. During those 11 days, the vulnerable agent remains in production processing 2,400 customer requests per day.
What went wrong: The organisation had no component inventory for its AI agents. The vulnerability existed in a transitive dependency that no one had explicitly chosen or reviewed. The 11-day identification window created exposure to a known critical vulnerability across approximately 26,400 customer interactions. With an AI-BOM, the query "which agents include library X version Y" would have returned results in seconds. Consequence: 11 days of known-vulnerable production operation, potential regulatory notification obligations under DORA Article 19, customer data exposure risk, and remediation costs including emergency patching, regression testing, and incident response.
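The seconds-scale query described above is mechanically trivial once AI-BOMs exist as structured data. A minimal sketch, assuming each BOM is held as a plain dictionary (the field names and agent data here are invented for illustration, not a standard schema):

```python
# Illustrative sketch: answer "which agents include library X version Y?"
# against a set of AI-BOM documents held as plain dicts. Field names
# (agent, components, name, version) are assumptions, not a standard schema.

def affected_agents(boms, lib_name, lib_version):
    """Return agent identifiers whose AI-BOM lists the given component."""
    hits = []
    for bom in boms:
        for comp in bom["components"]:
            if comp["name"] == lib_name and comp["version"] == lib_version:
                hits.append(bom["agent"])
                break  # one match per agent is enough
    return hits

boms = [
    {"agent": "workflow-agent", "components": [
        {"name": "fastser", "version": "2.4.1"},   # vulnerable transitive dep
        {"name": "orchestrator", "version": "3.2.1"},
    ]},
    {"agent": "analytics-agent", "components": [
        {"name": "fastser", "version": "2.5.0"},   # patched version
    ]},
]

print(affected_agents(boms, "fastser", "2.4.1"))  # ['workflow-agent']
```

With machine-readable inventories, the 11-day manual audit collapses into a lookup of this shape, run against every deployed agent's BOM.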
Scenario B — Model Component Substitution Without Detection: A research team deploys an agent that uses a base language model (7B parameters, version hash abc123) with a custom LoRA adapter trained on proprietary medical data. During a routine infrastructure update, the container image is rebuilt from the latest base image, which silently pulls a newer version of the base model (version hash def456). The adapter was trained against the original base model; its behaviour against the new base model has not been validated. The agent begins producing subtly different clinical recommendations. The drift is not detected for 3 weeks because the outputs remain superficially plausible.
What went wrong: Had an AI-BOM existed, it would have recorded the exact model version hash, and a rebuild that changed the base model hash would have been flagged as a component change requiring re-validation. Without the AI-BOM, the model swap was invisible to governance processes. Consequence: 3 weeks of unvalidated clinical recommendations, potential patient safety incidents, regulatory exposure under medical device regulations, and loss of confidence in the agent's outputs requiring full re-validation of all recommendations issued during the affected period.
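The missed detection in Scenario B is straightforward to automate once successive AI-BOM versions exist. A hedged sketch, assuming each BOM version has been reduced to a component-name-to-hash mapping (the schema and hashes are invented for the example):

```python
# Illustrative sketch: diff two AI-BOM versions and flag any component whose
# cryptographic hash changed -- exactly what a rebuild that silently swapped
# the base model would trigger. The name -> hash schema is an assumption.

def changed_components(old_bom, new_bom):
    """Return names of components present in both BOMs whose hash differs."""
    return sorted(
        name for name, digest in old_bom.items()
        if name in new_bom and new_bom[name] != digest
    )

before = {"base-model": "abc123", "medical-lora": "77fe01"}
after  = {"base-model": "def456", "medical-lora": "77fe01"}  # silent rebuild

print(changed_components(before, after))  # ['base-model']
```

A non-empty result from a check of this shape would gate the deployment and route the change into re-validation, rather than letting it reach production unnoticed for three weeks.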
Scenario C — Licence Compliance Failure at Scale: An organisation builds an AI agent pipeline incorporating 12 models, 34 Python packages, and 8 external API integrations. One of the Python packages (included as a transitive dependency of a transitive dependency) is licensed under AGPL-3.0, which requires disclosure of source code for any networked service using the library. The organisation's legal team was never informed of this dependency. The agent is deployed as a customer-facing SaaS product. A competitor discovers the AGPL dependency through reverse engineering and files a licence compliance complaint.
What went wrong: No component inventory existed to flag licence obligations. The AGPL-licensed library was four levels deep in the dependency tree and was never explicitly selected by any developer. An AI-BOM with licence metadata would have flagged the AGPL component during the admission review. Consequence: Legal costs for licence compliance remediation, potential requirement to open-source proprietary code or replace the dependency, reputational damage, and delay to product roadmap while the dependency is replaced.
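The licence flag described here amounts to checking AI-BOM licence metadata against a policy deny-list at admission time. An illustrative sketch (the SPDX licence identifiers are real; the component names and the deny-list policy are assumptions for the example):

```python
# Illustrative sketch: surface copyleft obligations at admission review by
# scanning AI-BOM licence metadata against a policy deny-list. Component
# names are invented; licence strings use real SPDX identifiers.

DENY_LIST = {"AGPL-3.0-only", "AGPL-3.0-or-later"}

def licence_violations(components):
    """Return (name, licence) pairs whose licence is on the deny-list."""
    return [(c["name"], c["licence"]) for c in components
            if c["licence"] in DENY_LIST]

components = [
    {"name": "http-client-lib", "licence": "Apache-2.0"},
    {"name": "deep-transitive-dep", "licence": "AGPL-3.0-only"},  # 4 levels deep
]

print(licence_violations(components))  # [('deep-transitive-dep', 'AGPL-3.0-only')]
```

Because the check runs over the full dependency tree recorded in the AI-BOM, depth is irrelevant: a component four levels down is flagged as readily as a direct dependency.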
Scope: This dimension applies to all AI agent deployments where the agent's behaviour is influenced by software components, models, adapters, plugins, data pipelines, or external dependencies that are not wholly developed and maintained by the deploying organisation. This includes: base models (whether open-weight or accessed via API), fine-tuned models and adapters, orchestration frameworks, tool-calling libraries, vector databases and embedding models, retrieval-augmented generation pipelines, external API integrations, and any software library that processes, transforms, or routes agent inputs or outputs. The scope explicitly includes transitive dependencies — components included indirectly through other components. An agent that uses a single open-source library with 200 transitive dependencies has 201 components in scope. Read-only analytical agents that do not affect external state remain in scope because their component integrity affects the accuracy and reliability of their outputs.
4.1. A conforming system MUST maintain a machine-readable AI Bill of Materials (AI-BOM) for every deployed agent, listing all components that constitute or influence the agent's behaviour, including base models, adapters, plugins, libraries, data sources, and external API dependencies.
4.2. A conforming system MUST record, for each component in the AI-BOM, at minimum: component name, version identifier, cryptographic hash (SHA-256 or stronger), supplier or source, licence type, and the date the component was added or last updated.
4.3. A conforming system MUST version the AI-BOM as an atomic artefact such that any change to any component produces a new AI-BOM version with a unique identifier and timestamp.
4.4. A conforming system MUST update the AI-BOM within 24 hours of any component change in a deployed agent, including transitive dependency updates, model version changes, and plugin additions or removals.
4.5. A conforming system MUST block deployment of any agent whose AI-BOM contains components that have not passed the organisation's admission review process (see AG-088).
4.6. A conforming system MUST support automated queries against the AI-BOM to identify all agents affected by a specific component, version, or vulnerability identifier (e.g., CVE) and return results within 5 minutes for inventories of up to 10,000 components.
4.7. A conforming system SHOULD sign AI-BOM artefacts cryptographically to ensure tamper evidence, consistent with AG-006 (Tamper-Evident Record Integrity).
4.8. A conforming system SHOULD include transitive dependency depth information in the AI-BOM, recording how each component entered the dependency tree (direct dependency, transitive via component X, etc.).
4.9. A conforming system SHOULD integrate AI-BOM generation into the CI/CD pipeline such that every build automatically produces an updated AI-BOM without manual intervention.
4.10. A conforming system MAY implement AI-BOM differencing to automatically highlight changes between successive versions and flag changes that alter the risk profile (e.g., new licence types, new suppliers, deprecated components).
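Requirements 4.2 and 4.3 together imply a concrete data shape: a per-component record carrying the six minimum fields, and a BOM version identifier that changes whenever any record changes. One possible (non-normative) realisation, deriving the version identifier as a SHA-256 over a canonical serialisation of the component list:

```python
# Non-normative sketch of requirements 4.2 and 4.3: minimum per-component
# fields, plus an atomic BOM version id derived from canonical JSON so that
# any change to any component yields a new identifier. The hashing scheme is
# one possible choice, not mandated by the requirement text.

import hashlib
import json

def component(name, version, sha256, supplier, licence, updated):
    """The minimum record fields required by 4.2."""
    return {"name": name, "version": version, "sha256": sha256,
            "supplier": supplier, "licence": licence, "updated": updated}

def bom_version_id(components):
    """Derive a unique AI-BOM version id from the canonical component list."""
    canonical = json.dumps(sorted(components, key=lambda c: c["name"]),
                           sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

bom = [component("base-model", "7b-v1", "abc123", "model-provider",
                 "proprietary", "2025-01-10")]
v1 = bom_version_id(bom)

bom[0]["sha256"] = "def456"   # any component change...
v2 = bom_version_id(bom)
print(v1 != v2)               # ...produces a new BOM version id: True
```

In a production system the derived identifier would be paired with a timestamp and committed to the append-only store; the point of the sketch is that atomicity falls out naturally when the version id is a function of the full component list.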
AI agents are not monolithic software artefacts — they are assemblies of dozens to hundreds of components, each with its own provenance, licence terms, vulnerability surface, and update cadence. A single enterprise workflow agent might incorporate a base language model from one provider, a fine-tuned adapter trained internally, an orchestration framework from an open-source project, 15 tool-calling plugins from various sources, a vector database, an embedding model, and 200+ transitive library dependencies. Each of these components can introduce risk: security vulnerabilities, licence compliance obligations, behavioural changes from version updates, or supply-chain attacks through compromised dependencies.
Traditional software bill of materials (SBOM) practices, as codified in standards like SPDX and CycloneDX, address conventional software dependencies but do not cover the AI-specific components that dominate an agent's risk profile: models, adapters, training data provenance, embedding configurations, and prompt templates. AG-087 extends the SBOM concept to encompass the full range of components that influence an AI agent's behaviour.
The urgency of this dimension is driven by the speed at which AI supply-chain attacks are evolving. Model poisoning through compromised training data, trojanised adapters distributed through public model registries, and dependency confusion attacks targeting AI-specific package managers have all been demonstrated in practice. Without a component inventory, an organisation cannot determine whether it is affected by a disclosed vulnerability, cannot verify that a deployed agent matches its approved configuration, and cannot respond to supply-chain incidents within the timeframes required by regulations such as DORA (72-hour notification) or the EU AI Act (post-market monitoring).
The AI-BOM also serves as the foundation for all sibling dimensions in this landscape. AG-088 (Third-Party Tool Admission) needs the AI-BOM to verify which components have been admitted. AG-089 (Third-Party Tool Revocation) needs the AI-BOM to identify which agents are affected by a revocation. AG-090 (Fine-Tune and Adapter Provenance) needs the AI-BOM to track adapter lineage. Without AG-087, these sibling dimensions operate without a single source of truth for what is deployed.
The AI-BOM should be implemented as a structured, machine-readable artefact — not a prose document or spreadsheet. Recommended formats include CycloneDX (which has native support for machine-learning model components as of version 1.5) and SPDX 3.0 (which supports AI and dataset profiles). The AI-BOM should be stored in a versioned, append-only data store where each version is immutable once committed.
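CycloneDX 1.5 defines a `machine-learning-model` component type alongside conventional `library` components, so model and software dependencies can coexist in one document. A minimal illustrative fragment in that format follows; the component names are invented and the hash value is a placeholder, not an elided real digest:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "machine-learning-model",
      "name": "base-language-model",
      "version": "7b-v1",
      "supplier": { "name": "example-model-provider" },
      "hashes": [
        { "alg": "SHA-256", "content": "<sha256-of-model-weights>" }
      ]
    },
    {
      "type": "library",
      "name": "serialisation-lib",
      "version": "2.4.1",
      "licenses": [ { "license": { "id": "MIT" } } ]
    }
  ]
}
```

Keeping both component classes in one artefact is what makes the cross-cutting queries in 4.6 possible: a single CVE lookup or licence scan covers models and libraries alike.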
Recommended patterns:
Anti-patterns to avoid:
Financial Services. DORA Article 28 requires financial entities to maintain a register of all ICT third-party service arrangements. For AI agents, the AI-BOM extends this register to cover model and algorithm dependencies. The register must support the 72-hour incident notification requirement by enabling rapid identification of affected agents when a supply-chain vulnerability is disclosed. Integration with existing vendor risk management systems is recommended.
Healthcare. Medical device regulations (FDA 21 CFR 820, EU MDR) require documented design history files including all components. For AI agents operating in clinical contexts, the AI-BOM serves as part of the design history file and must be maintained to the same standards. Changes to any component may trigger re-validation requirements under the quality management system.
Critical Infrastructure. IEC 62443 requires asset inventory for industrial automation and control systems. AI agents operating in critical infrastructure must have AI-BOMs that integrate with existing asset management systems and support the identification of components affected by industrial control system advisories (ICS-CERT).
Basic Implementation — The organisation maintains a manually curated list of direct components for each deployed agent, including model name and version, primary libraries, and external API integrations. The list is updated during major releases but may not reflect interim changes or transitive dependencies. Vulnerability checks are performed manually when advisories are received. The component list is stored as a document or spreadsheet.
Intermediate Implementation — The AI-BOM is generated automatically during the CI/CD pipeline in a machine-readable format (CycloneDX or SPDX). Transitive dependencies are included. The AI-BOM is versioned and stored in an immutable data store. Automated vulnerability scanning correlates the AI-BOM against vulnerability databases on a daily schedule. Deployment is blocked if the AI-BOM contains components not in the approved registry. Queries against the AI-BOM return results within 5 minutes for inventories of up to 10,000 components.
Advanced Implementation — All intermediate capabilities plus: the AI-BOM is cryptographically signed and bound to the deployment manifest. Runtime verification confirms that deployed components match the AI-BOM. Automated differencing highlights risk-relevant changes between AI-BOM versions. Vulnerability correlation operates in near-real-time (within 1 hour of advisory publication). The AI-BOM integrates with the organisation's CMDB and vendor risk management systems. Historical AI-BOM versions support point-in-time compliance queries for regulatory audit. The AI-BOM covers all component layers including model weights, adapter parameters, embedding models, prompt templates, and infrastructure dependencies.
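The runtime verification step in the advanced tier reduces to hashing each deployed artefact on disk and comparing the digests against those recorded in the signed AI-BOM. A minimal sketch, with invented paths and an invented expected-hash mapping:

```python
# Illustrative sketch of runtime verification: hash each deployed artefact
# and compare against the AI-BOM's recorded SHA-256 digests. The paths and
# the expected-hash mapping are invented for the example.

import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_deployment(expected, root):
    """Return relative paths whose on-disk hash diverges from the AI-BOM."""
    return [rel for rel, digest in expected.items()
            if sha256_of(Path(root) / rel) != digest]

# Usage sketch (names are hypothetical):
#   drift = verify_deployment(bom_hashes, "/opt/agent")
#   if drift: refuse to start the agent and raise a supply-chain alert
```

A non-empty result means the deployment no longer matches its approved configuration, which is precisely the condition Scenario B failed to detect.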
Required artefacts:
Retention requirements:
Access requirements:
Testing AG-087 compliance requires verifying both the completeness of the AI-BOM and its operational utility during incident response.
Test 8.1: AI-BOM Completeness Verification
Test 8.2: AI-BOM Update Timeliness
Test 8.3: Vulnerability Query Response
Test 8.4: Deployment Blocking for Unadmitted Components
Test 8.5: AI-BOM Versioning and Immutability
Test 8.6: Cryptographic Hash Accuracy
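Tests 8.5 and 8.6 both hinge on recomputing a cryptographic fingerprint and comparing it to what was recorded when the AI-BOM version was committed. A minimal sketch of the mechanic; a production system would use signatures per AG-006 rather than a bare hash, and the BOM content here is invented:

```python
# Illustrative sketch of tamper evidence for a committed AI-BOM version:
# fingerprint the stored bytes at commit time, then detect any later edit by
# recomputing. A real system would sign the artefact (AG-006); the bare
# SHA-256 here only demonstrates the mechanic.

import hashlib
import json

def fingerprint(bom_bytes):
    return hashlib.sha256(bom_bytes).hexdigest()

committed = json.dumps(
    {"version": "v17", "components": ["lib==2.4.1"]}
).encode()
record = fingerprint(committed)   # stored alongside the immutable artefact

tampered = committed.replace(b"2.4.1", b"2.5.0")  # post-hoc edit

print(fingerprint(committed) == record)   # True  -- intact copy verifies
print(fingerprint(tampered) == record)    # False -- tampering is evident
```

Test 8.5 then amounts to asserting that every historical version still matches its recorded fingerprint, and Test 8.6 to asserting that each component's recorded hash matches a fresh hash of the corresponding artefact.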
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| EU AI Act | Article 11 (Technical Documentation) | Direct requirement |
| EU AI Act | Article 61 (Post-Market Monitoring) | Supports compliance |
| DORA | Article 28 (Register of ICT Third-Party Arrangements) | Direct requirement |
| DORA | Article 9 (ICT Risk Management Framework) | Supports compliance |
| US Executive Order 14028 | Section 4 (Software Supply Chain Security) | Direct requirement |
| NIST AI RMF | MAP 3.2, GOVERN 1.4, MANAGE 2.3 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
| FDA 21 CFR 820 | Quality Management System — Design History File | Supports compliance |
Article 11 requires providers of high-risk AI systems to draw up technical documentation demonstrating compliance, including a general description of the AI system including "the elements of the AI system and of the process for its development." The AI-BOM directly satisfies this requirement by providing a complete, machine-readable enumeration of all components. Without it, the technical documentation is incomplete and cannot demonstrate that the deployed system matches its approved configuration. The requirement for post-market monitoring under Article 61 further reinforces the need for a live AI-BOM that supports ongoing compliance verification throughout the system's lifecycle.
Article 28 requires financial entities to maintain and update a register of information on all contractual arrangements on the use of ICT services provided by third-party providers. For AI agents, every model provider, API integration, and open-source component maintainer is a de facto ICT third-party provider. The AI-BOM extends the Article 28 register to cover the full supply chain of AI-specific components. The register must support the 72-hour incident notification requirement under Article 19 by enabling rapid identification of affected agents.
Section 4 of EO 14028 requires SBOM provision for software sold to the federal government. For AI agents deployed in public sector or government-adjacent contexts, the AI-BOM satisfies and extends the SBOM requirement to cover AI-specific components. NIST guidance under this EO (NIST SP 800-218, SSDF) establishes minimum SBOM requirements that the AI-BOM must meet or exceed.
MAP 3.2 addresses understanding the AI system's context and dependencies; GOVERN 1.4 addresses organisational governance of AI systems including supply-chain considerations; MANAGE 2.3 addresses ongoing monitoring and management of AI system risks. The AI-BOM supports all three by providing the authoritative inventory of components against which risk assessments, governance decisions, and monitoring activities operate.
The AI management system requires organisations to identify and assess risks related to AI systems. Component supply-chain risk is a primary risk category that can only be assessed if the components are known. The AI-BOM provides the input to the risk assessment process required by Clause 8.2.
For AI agents operating in clinical or medical device contexts, the AI-BOM forms part of the design history file required by the quality management system. Any component change may constitute a design change requiring formal change control and potentially re-validation under 21 CFR 820.30.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents share supply-chain components |
Consequence chain: Without a component inventory, the organisation operates blind to its AI supply-chain risk. When a vulnerability is disclosed in a widely-used component (e.g., a serialisation library, an embedding model, or an orchestration framework), the organisation cannot determine which agents are affected, cannot prioritise remediation, and cannot provide regulators with the required incident notifications within mandated timeframes. The failure mode is not a single incident but a persistent inability to respond to supply-chain events. Each day without an accurate AI-BOM is a day where a disclosed vulnerability could be present in production agents without the organisation's knowledge. The blast radius extends across all agents that share affected components — a single vulnerable library present in 50 agents means 50 agents operating with known risk. For financial services organisations, the inability to maintain the Article 28 register and respond within DORA's 72-hour notification window creates direct regulatory exposure. For healthcare organisations, the inability to demonstrate design history compliance creates product liability exposure. The severity compounds over time: as the AI-BOM drifts further from reality, the organisation's ability to govern its supply chain degrades progressively until a supply-chain incident reveals the gap under the worst possible circumstances.
Cross-references: AG-014 (External Dependency Integrity) establishes integrity verification for external dependencies that the AI-BOM inventories. AG-048 (AI Model Provenance and Integrity) provides model-specific provenance requirements that complement the AI-BOM's model layer. AG-006 (Tamper-Evident Record Integrity) governs the integrity of the AI-BOM artefact itself. AG-007 (Governance Configuration Control) governs changes to AI-BOM policies and thresholds. AG-088 (Third-Party Tool Admission Governance) depends on the AI-BOM to verify admitted components. AG-089 (Third-Party Tool Revocation Governance) depends on the AI-BOM to identify agents affected by revocation. AG-090 (Fine-Tune and Adapter Provenance Governance) depends on the AI-BOM to track adapter lineage.