This dimension governs the structural segregation of intelligence sources feeding AI agents that operate in defence, dual-use, and national security contexts. It ensures that data originating from distinct compartments — whether classified signals intelligence, open-source collection, human intelligence reporting, or allied partner feeds — is processed, stored, and reasoned over within appropriately bounded compartments. The dimension is critical because AI agents that synthesise inputs across compartment boundaries without explicit authorisation create uncontrolled inference channels: such channels can reconstitute classified intelligence from individually unclassified elements, violate bilateral information-sharing treaties, and expose sources and methods to adversarial exploitation. Failure in this dimension manifests as cross-compartment bleed during inference — for example, an agent trained or prompted on SIGINT-derived features producing outputs that implicitly reveal collection capabilities to operators cleared only for HUMINT products — or as model weights that have absorbed compartmented signal and cannot be safely deployed in lower-classification environments.
Scenario A — Coalition Interoperability Inference Breach

A NATO-interoperable AI-assisted targeting recommendation system is integrated with feeds from three member states: State A contributes satellite-derived geospatial intelligence tagged NOFORN/REL-A; State B contributes signals intelligence tagged REL-AB; State C contributes open-source imagery tagged UNCLASSIFIED. The agent is architected with a unified vector store indexed without compartment isolation. During a live operation, a State C operator — cleared for UNCLASSIFIED product only — queries the agent for route analysis in a specific grid square. The agent's retrieval mechanism surfaces a ranked embedding cluster that happens to be geometrically adjacent to REL-A geospatial features encoded during training. The agent's output includes a route avoidance recommendation that implicitly encodes awareness of a denied-area boundary derivable only from the REL-A source. State A subsequently identifies the boundary inference in the operator's logged query response and initiates a formal treaty violation review under the NATO Security Committee framework. The agent is immediately withdrawn from coalition deployment. Remediation requires retraining, full compartment-isolated vector store reconstruction, and a 4-month operational gap at an estimated cost of USD 23 million in contract penalties and operational delay.
Scenario B — Edge Robotic Platform Source Exposure

An autonomous ground vehicle operating in a forward-operating-base perimeter-security role carries an onboard inference module trained on a fused dataset: perimeter radar returns (UNCLASSIFIED operational data), acoustic signature libraries (SECRET — Special Access Program), and adversary vehicle pattern-of-life data (TOP SECRET/SCI — HUMINT-sourced). The vehicle is captured intact by an adversarial actor during a contested withdrawal. Post-capture analysis of the onboard model weights using a commercially available model inversion toolkit recovers acoustic feature centroids corresponding to 14 known adversary platform types. Cross-referencing these centroids with open-source acoustic databases allows the adversary to infer which specific platform types were under active collection — revealing HUMINT placement and access with high confidence. The breach triggers a HUMINT source review across 3 active operations, mandatory extraction of 2 human sources at a combined risk exposure assessed as life-threatening, and a DoD Inspector General investigation into compartmentalisation practice failures in the platform's AI acquisition programme.
Scenario C — Domestic Law Enforcement Dual-Use Bleed

A national-level law enforcement agency deploys a predictive analytics agent to support organised crime investigation. The agent's training corpus was assembled by a joint intelligence cell and includes: open police records (OFFICIAL), financial intelligence unit data (OFFICIAL-SENSITIVE), and a counter-terrorism watchlist derived from a classified foreign partner data exchange (SECRET — originator-controlled). The system is subsequently made available under a data-sharing agreement to 43 regional police forces, none of which hold the appropriate clearances for SECRET originator-controlled material. An audit triggered by a judicial disclosure request in a criminal prosecution reveals that the agent's risk-score outputs for 17 individuals are partially attributable to features derived from the SECRET watchlist. The prosecution collapses. The originating foreign partner invokes originator control provisions and suspends the data-sharing agreement. A parliamentary oversight committee opens an inquiry. The total consequential cost — including case retrials, civil liability exposure, and diplomatic remediation — is assessed at GBP 8.4 million.
This dimension applies to all AI agents, inference systems, and AI-adjacent data pipelines that ingest, process, store, or produce outputs derived from intelligence sources carrying formal sensitivity or compartment designations, including but not limited to: classified government intelligence products; allied or partner-nation shared intelligence; special access programme data; law enforcement intelligence carrying originator-control restrictions; dual-use research outputs with controlled dissemination requirements; and any fusion product combining sources of differing classification or compartment lineage. The requirements apply regardless of whether compartmentalisation is enforced at the platform, model, dataset, or runtime layer, and apply with equal force to training-time data governance, inference-time input handling, output generation, model storage, and deployment environment management. Systems that process exclusively UNCLASSIFIED or publicly available open-source intelligence with no fusion dependency on controlled sources are outside scope, provided that status is formally attested by a designated security authority.
4.1.1 The deploying organisation MUST maintain a formally approved Compartment Registry that enumerates every intelligence source feeding the agent system, specifying: the compartment designation or classification marking, the originator authority, the authorised releasability boundary, and the maximum sensitivity level of any derivative product.
4.1.2 Every intelligence source ingested by the system MUST be assigned a Compartment Identifier (CID) at point of ingestion, and that CID MUST be preserved through all data transformation, embedding, storage, and retrieval operations without truncation or normalisation.
4.1.3 The Compartment Registry MUST be reviewed and re-approved by the designated Security Authority at intervals not exceeding 90 days or upon any addition of a new intelligence source, whichever is sooner.
4.1.4 Systems MUST NOT ingest intelligence from sources not listed in the approved Compartment Registry, and any attempted ingestion from an unregistered source MUST trigger an automated alert to the Security Authority within 5 minutes of detection.
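The registry-gated ingestion path implied by 4.1.1, 4.1.2, and 4.1.4 can be sketched as follows. The class and field names are illustrative assumptions, not mandated structures, and the alert list stands in for a real Security Authority notification channel:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class RegistryEntry:
    cid: str                  # Compartment Identifier assigned at ingestion (4.1.2)
    originator: str           # originator authority
    releasability: frozenset  # authorised releasability boundary
    max_derivative: str       # maximum sensitivity of any derivative product

class CompartmentRegistry:
    def __init__(self):
        self._entries = {}
        self.alerts = []      # stand-in for the Security Authority alert channel

    def register(self, entry: RegistryEntry):
        self._entries[entry.cid] = entry

    def admit(self, cid: str, payload: bytes) -> dict:
        """Default-deny ingestion gate: unregistered sources are rejected
        and an alert is raised (4.1.4)."""
        if cid not in self._entries:
            self.alerts.append(f"ALERT: ingestion attempt from unregistered source {cid}")
            raise PermissionError(f"source {cid} not in approved Compartment Registry")
        # Tag the payload with its CID so downstream transformations can
        # preserve it without truncation or normalisation (4.1.2).
        return {"cid": cid, "sha256": hashlib.sha256(payload).hexdigest(), "data": payload}
```

In a production system the alert path would need to satisfy the 5-minute notification window of 4.1.4 rather than an in-memory list.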
4.2.1 The system MUST implement physically or logically isolated data stores for each defined compartment boundary, such that data originating in one compartment cannot be retrieved, indexed, or served to a query process operating under a different compartment clearance without an explicit, logged, and authorised cross-compartment access event.
4.2.2 Vector embeddings, feature representations, or any latent encoding of compartmented intelligence MUST be stored in compartment-scoped embedding spaces, and cross-compartment similarity searches MUST be architecturally prevented unless the requesting process holds authorisation spanning all contributing compartments.
4.2.3 Intermediate computational artefacts — including attention caches, retrieval buffers, and inference-time memory states — that contain or are derived from compartmented data MUST be cleared or cryptographically zeroised between sessions operating under different compartment authorities.
4.2.4 The system MUST enforce a default-deny posture for cross-compartment data flows; any cross-compartment transfer MUST require explicit authorisation from the Security Authority and MUST generate an immutable audit record per the requirements of AG-112.
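The compartment-scoped retrieval architecture required by 4.2.1 and 4.2.2 can be sketched with an in-memory index standing in for a real vector store; names and the dot-product scoring are illustrative. The key property is that the authorisation check is structural, covering every compartment the search would touch, rather than a post-hoc filter on results:

```python
class CompartmentScopedIndex:
    """One embedding index per compartment; a cross-compartment search is only
    possible when the requesting process holds every contributing compartment."""

    def __init__(self):
        self._indices = {}  # cid -> list of (vector, doc_id)

    def add(self, cid, vector, doc_id):
        self._indices.setdefault(cid, []).append((vector, doc_id))

    def search(self, query, authorised_cids, cids):
        # Default-deny: refuse before touching any index (4.2.4 posture).
        if not set(cids) <= set(authorised_cids):
            raise PermissionError("query scope exceeds requester's compartment authorisations")
        hits = []
        for cid in cids:
            for vec, doc_id in self._indices.get(cid, []):
                score = sum(q * v for q, v in zip(query, vec))
                hits.append((score, cid, doc_id))
        return sorted(hits, reverse=True)
```

Because each compartment lives in its own index, embeddings from a higher compartment cannot shape the geometry of a lower compartment's result set, which is the inference-channel weakness of a unified store.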
4.3.1 Any model trained on data spanning multiple compartments MUST be assigned a composite classification marking equal to the highest-sensitivity compartment represented in its training corpus, and this marking MUST be embedded in the model's metadata in a tamper-evident format.
4.3.2 Organisations MUST NOT deploy a model trained on data from a higher-classification compartment into an environment cleared only for a lower classification, unless a formally documented and Security-Authority-approved declassification or sanitisation review has been completed and its findings are traceable to the specific model version deployed.
4.3.3 Training pipelines MUST implement source-tagged dataset provenance records that allow post-hoc reconstruction of which compartment contributed which training examples to which model checkpoint, retaining sufficient granularity to support targeted retraining if a compartment source is subsequently assessed as compromised or improperly included.
4.3.4 Where model inversion, membership inference, or feature attribution attacks are assessed as a plausible threat, the system MUST implement technical mitigations — including differential privacy mechanisms, output perturbation, or training data scrubbing — to reduce the probability of compartmented source recovery to a level approved by the Security Authority.
4.4.1 At inference time, the system MUST validate the compartment clearance of the requesting operator or process against the compartment scope of all data sources that may contribute to the requested output, and MUST refuse to produce any output that would require access to compartments for which the requester lacks authorisation.
4.4.2 Every inference output MUST carry a derivation classification marking indicating the highest-sensitivity compartment that contributed to that output, and this marking MUST be transmitted with the output to any downstream system or operator.
4.4.3 The system MUST implement output sanitisation controls that detect and suppress any response element that would allow a cleared-lower operator to infer the existence, content, or collection methodology of a higher compartment through logical deduction from the output alone.
4.4.4 Where output sanitisation is applied, the system MUST log the sanitisation event, the suppressed content category (without reproducing the suppressed content in logs accessible below the relevant compartment threshold), and the operator identity, and MUST notify the Security Authority of repeated sanitisation triggers from the same operator or query pattern.
4.5.1 The system MUST integrate with an authoritative identity and clearance management service that provides current clearance status at session initiation, and MUST re-verify clearance status for sessions exceeding 4 hours in duration.
4.5.2 Operators MUST NOT be granted access to compartmented outputs solely on the basis of role or rank; the system MUST enforce need-to-know validation as a separate gate from clearance-level verification.
4.5.3 Clearance verification failures MUST cause immediate session termination with no partial output delivery, and MUST generate an alert to the Security Authority within 60 seconds.
4.5.4 The system MUST NOT cache or retain operator clearance assertions beyond the validated session boundary; each new session MUST re-authenticate against the authoritative clearance service.
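The session-lifecycle rules in 4.5.1 and 4.5.3–4.5.4 can be sketched as a guard object. The clearance service here is a stand-in callable, and the injected clock exists only to make the 4-hour re-verification window testable:

```python
import time

REVERIFY_INTERVAL_S = 4 * 3600  # 4.5.1: re-verify sessions exceeding 4 hours

class Session:
    def __init__(self, operator, clearance_service, now=time.monotonic):
        self._now = now
        self.operator = operator
        self._svc = clearance_service
        # Authenticate against the authoritative service at session start;
        # no cached assertion from a prior session is reused (4.5.4).
        self.active = self._svc(operator)
        self._verified_at = self._now()

    def check(self):
        """Call before serving any output; a clearance failure terminates the
        session with no partial output delivery (4.5.3)."""
        if self._now() - self._verified_at >= REVERIFY_INTERVAL_S:
            self.active = self._svc(self.operator)
            self._verified_at = self._now()
        if not self.active:
            raise PermissionError(f"clearance failure for {self.operator}: session terminated")
```

The 60-second alert requirement of 4.5.3 would be handled by whatever catches the `PermissionError` in the serving layer.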
4.6.1 The system MUST maintain a compartment-aware audit log that records, for every query and output event: the operator identity, the compartment scope of the query, the compartments accessed during retrieval, the derivation classification of the output, and any sanitisation events triggered.
4.6.2 Audit logs MUST themselves be classified at the highest compartment level represented in the events they record, and MUST be stored in a compartment-appropriate system with access controls equivalent to those governing the source data.
4.6.3 Audit logs MUST be retained for a minimum of 7 years for systems operating under national security mandates, or for the period specified by the relevant security authority if longer, and MUST be stored in a format that supports cryptographic integrity verification.
4.6.4 The system MUST support automated anomaly detection over audit log patterns to identify potential compartment boundary probing behaviours, including high-frequency queries near compartment filter thresholds, queries that systematically vary parameters in ways consistent with boundary mapping, and unusual cross-compartment access sequences.
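One simple form of the anomaly detection described in 4.6.4 counts refused or sanitised queries per operator; the event schema and threshold are illustrative, and a production detector would also look at query-parameter variation and cross-compartment access sequences:

```python
from collections import Counter

def flag_boundary_probing(events, refusal_threshold=5):
    """Flag operators whose volume of refused or sanitised queries suggests
    systematic compartment-boundary mapping (4.6.4)."""
    refusals = Counter(
        e["operator"] for e in events
        if e["outcome"] in ("refused", "sanitised")
    )
    return {op for op, n in refusals.items() if n >= refusal_threshold}
```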
4.7.1 AI agent components processing compartmented intelligence at SECRET or above MUST be deployed on infrastructure that is physically or network-isolated from systems processing lower-classification data, consistent with the applicable national information assurance framework.
4.7.2 Edge and embodied agent deployments operating in environments where physical capture is a credible threat MUST implement secure enclave execution, hardware-backed key management, and automated data zeroisation on tamper detection or power loss, for all compartmented model weights and runtime data.
4.7.3 The system MUST maintain a current deployment topology record identifying the physical and logical location of every component that touches compartmented data, and this record MUST be reviewed and attested by the Security Authority at intervals not exceeding 60 days.
4.7.4 Remote or cloud-adjacent components MUST NOT process compartmented intelligence without explicit written authorisation from the Security Authority specifying the approved cloud tier, encryption standard, and sovereign boundary constraints.
4.8.1 The organisation MUST maintain a documented Compartment Breach Response Plan specific to AI-driven compartmentalisation failures, covering detection, containment, notification, assessment, and recovery procedures.
4.8.2 Upon detection of a credible compartment breach, the system MUST be capable of entering a controlled isolation state — suspending all new queries and outputs while preserving audit log integrity — within 15 minutes of breach detection trigger.
4.8.3 Compartment breach notifications MUST be transmitted to the Security Authority, the originator authority of any affected compartment, and any allied or partner-nation data principals within the timeframes specified by applicable treaty obligations or, where not specified, within 4 hours of confirmed breach detection.
4.8.4 Post-breach, the system MUST NOT be returned to operational status without a formal Security Authority sign-off that addresses: root cause identification, technical remediation verification, assessment of downstream exposure, and updated risk acceptance.
4.9.1 Any third-party supplier providing model weights, training data, inference infrastructure, or AI tooling that touches compartmented intelligence MUST be contractually bound to compartmentalisation requirements equivalent to those imposed on the deploying organisation, and compliance MUST be verified through security assurance assessments before system integration.
4.9.2 The deploying organisation MUST NOT transmit compartmented training data or model weights to third-party suppliers for fine-tuning, evaluation, or benchmarking without explicit Security Authority authorisation and an approved data transfer agreement.
4.9.3 Third-party AI components integrated into compartmented pipelines SHOULD be subject to supply chain integrity verification, including software bill of materials review and, where technically feasible, model provenance attestation.
4.9.4 Supplier access to compartmented systems MAY be granted on a time-limited, purpose-bound basis subject to appropriate personnel security clearance verification and real-time monitoring of all access sessions.
Traditional compartmentalisation governance was designed for human analysts and document management systems, where information flows are sequential, discrete, and explicitly authorised at each step. AI inference systems violate every one of these assumptions. A retrieval-augmented agent with a unified embedding space does not retrieve documents — it retrieves latent semantic representations that may fuse signals from multiple source compartments through mathematical proximity rather than explicit join operations. The output of such a system is not a document from a single compartment; it is a synthetic artefact whose epistemic provenance may be fundamentally unauditable without purpose-built compartment tracking at the embedding level.
This creates a new class of information security risk that existing data classification frameworks did not anticipate: the inference channel. Even if no document from a higher compartment is ever directly served to a lower-cleared operator, the geometric relationships between embeddings trained on cross-compartment data can encode classified relationships in ways that produce classifiable outputs from innocuous queries. This is not a theoretical concern — it is a demonstrated capability of standard model inversion and membership inference techniques available in open research literature. The structural controls required by this dimension — compartment-scoped embedding spaces, training-time source tagging, output derivation marking, and deployment isolation — exist precisely because behavioural controls at the application layer are insufficient to prevent inference-channel leakage from a model that has absorbed cross-compartment signal at training time.
Access control lists and role-based query filters are necessary but not sufficient conditions for compartment integrity in AI systems. A model trained on SECRET-compartment data will encode SECRET-derived features in its weights even if access control prevents a lower-cleared operator from directly querying SECRET documents. Those features will influence outputs on UNCLASSIFIED queries in ways that may be detectably correlated with the SECRET training signal. This is the weight-level bleed problem, and it has no purely behavioural solution — it requires structural intervention at the training pipeline, the model storage, and the deployment environment layers.
Furthermore, AI agents operating in agentic chains — where one agent's output becomes another agent's input across a multi-step reasoning pipeline — create compartment propagation pathways that are difficult to audit in real time. Each hop in an agent chain may introduce or suppress compartment signals in ways that are not transparent to the final output's classification marking. This dimension's requirements for derivation classification marking and compartment-aware audit logging at every hop are specifically designed to make these propagation pathways visible and auditable.
In coalition and allied contexts, compartmentalisation failure is not merely an internal security lapse — it is a treaty violation. Information sharing agreements between NATO member states, Five Eyes partners, and bilateral intelligence relationships impose legally binding originator-control obligations that travel with the data regardless of the processing architecture. An AI system that reconstitutes a coalition partner's intelligence through inference is violating those obligations as surely as if it had transmitted the raw intelligence to an unauthorised party. The penalties — suspension of data sharing, diplomatic consequences, criminal liability for responsible officials — are severe and well-documented in the precedent cases that informed the scenarios in Section 3.
Compartment-Native Vector Store Architecture

Rather than attempting to retrofit compartment controls onto a unified vector store, deploying organisations should architect separate embedding indices for each compartment boundary from the outset. Each index should be encrypted under compartment-specific keys managed by a hardware security module with access controls tied to the clearance verification service. Cross-compartment fusion, where authorised, should occur in a dedicated cross-domain solution layer that enforces explicit marking of any fused output with the highest contributing compartment's classification.
Source-Tagged Training Pipelines

All training data should be ingested through a provenance-preserving pipeline that attaches a Compartment Identifier to every training example at point of ingestion and propagates that identifier through preprocessing, tokenisation, and dataset serialisation. Checkpoint metadata should include a compartment provenance summary allowing a Security Authority to determine, for any given model version, the complete set of compartments represented in the training corpus. This is the minimum requirement for supporting sanitisation decisions and deployment environment matching.
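A sketch of CID propagation through preprocessing and into a checkpoint-level provenance summary, under an assumed per-example tagging scheme; the example schema and the whitespace tokeniser are placeholders for a real pipeline:

```python
def preprocess(example):
    """Any transformation must carry the CID forward unchanged (4.1.2)."""
    tokens = example["text"].lower().split()  # stand-in for real tokenisation
    return {"cid": example["cid"], "tokens": tokens}

def checkpoint_manifest(dataset):
    """Checkpoint-level compartment provenance summary: which CID contributed
    how many examples (supports the 4.3.3 reconstruction requirement)."""
    counts = {}
    for ex in dataset:
        counts[ex["cid"]] = counts.get(ex["cid"], 0) + 1
    return counts
```

The manifest is what lets a Security Authority answer, for a given model version, "which compartments are represented, and in what proportion" without re-reading the raw corpus.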
Differential Privacy as a Structural Control

Where a model must be trained on multi-compartment data and deployed in a lower-classification environment after sanitisation review, differential privacy mechanisms applied during training can provide a formal mathematical bound on the probability that any individual training example — and by extension any compartment-specific feature — can be recovered from the model's outputs. The privacy budget (epsilon) required to achieve operationally acceptable reconstruction resistance should be determined in consultation with the Security Authority and documented in the system's risk acceptance record. This is not a substitute for structural isolation but can be an approved element of a sanitisation package.
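For intuition about how the privacy budget translates into concrete noise, the classical Gaussian-mechanism bound gives the noise scale needed for a single (epsilon, delta)-differentially-private release of a query with known L2 sensitivity. Real training pipelines would use a DP-SGD privacy accountant rather than this single-query bound; the sketch only illustrates the calibration principle:

```python
from math import log, sqrt

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Noise standard deviation for the classical Gaussian mechanism
    achieving (epsilon, delta)-DP on a query with the given L2 sensitivity.
    The classical bound is only valid for 0 < epsilon < 1."""
    if not 0 < epsilon < 1:
        raise ValueError("this classical bound assumes 0 < epsilon < 1")
    return sqrt(2 * log(1.25 / delta)) * sensitivity / epsilon
```

The inverse relationship is the operational point: halving epsilon (a stronger guarantee, and stronger reconstruction resistance for compartmented features) roughly doubles the noise, which is why the budget is a negotiated risk-acceptance figure rather than an engineering default.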
Hardware-Backed Zeroisation for Edge Deployments

Embodied and edge agents operating in forward environments where physical capture is assessed as a credible threat should implement hardware security modules with platform integrity attestation and automated zeroisation on tamper detection. The zeroisation scope must include model weights, embedding indices, session keys, and any cached inference state. The trigger conditions for zeroisation — including power loss, physical breach detection, and remote command — should be defined in the system's security architecture and tested as part of acceptance testing.
Compartment-Aware Output Marking Pipelines

Output marking should be implemented as a structural component of the inference pipeline rather than a post-hoc annotation step. The derivation classification of an output should be computed as the union of compartment labels on all data sources accessed during retrieval, plus the training-time compartment marking of the model itself, with the highest applicable marking assigned. This computation should occur inside the trusted execution boundary, not in a user-facing application layer that could be manipulated.
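The derivation computation described above reduces to a maximum over an ordered set of markings. The ordering below is illustrative; a real deployment would use the full national marking lattice, including caveats:

```python
# Illustrative ordering, lowest to highest sensitivity.
LEVELS = ["UNCLASSIFIED", "OFFICIAL", "SECRET", "TOP SECRET"]

def derivation_marking(retrieved_markings, model_marking):
    """Highest marking across all retrieved sources plus the model's own
    training-time composite marking (4.4.2 and 4.3.1)."""
    contributing = list(retrieved_markings) + [model_marking]
    return max(contributing, key=LEVELS.index)
```

Including the model's own marking in the maximum is what prevents an output from being marked below the sensitivity of the weights that produced it, even when every retrieved source is low-side.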
Anti-Pattern 1 — Unified Semantic Index with Access Control Overlay

Deploying a single vector database containing embeddings from multiple compartments and applying query-time access control filters at the application layer is not an acceptable compartmentalisation architecture. Application-layer filters can be bypassed through adversarial query construction, implementation bugs, or configuration errors. The geometric proximity of embeddings from different compartments in a shared index space means that even a correctly functioning filter does not prevent inference-channel leakage — a lower-cleared query may retrieve results that are semantically shaped by the presence of higher-compartment embeddings in the same space, even if those embeddings are filtered from the explicit result set.
Anti-Pattern 2 — Classification Marking in Unprotected Metadata Fields

Storing compartment designations as plain-text metadata fields in a standard database, file system attribute, or document header without cryptographic binding to the content is insufficient. Metadata can be stripped, overwritten, or corrupted during data transformation, ETL processes, or system migration. Compartment identifiers must be cryptographically bound to the content they annotate and verified at every processing stage.
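The required binding can be sketched with a keyed MAC computed over the identifier and content together, so that a stripped, swapped, or altered marking fails verification. Key management and distribution are out of scope for this sketch:

```python
import hashlib
import hmac

def bind_cid(content: bytes, cid: str, key: bytes) -> str:
    """Cryptographically bind a Compartment Identifier to content.
    The separator byte prevents ambiguity between cid and content bytes."""
    return hmac.new(key, cid.encode() + b"\x00" + content, hashlib.sha256).hexdigest()

def verify_cid(content: bytes, cid: str, key: bytes, tag: str) -> bool:
    """Verify the binding at a processing stage; constant-time comparison."""
    return hmac.compare_digest(bind_cid(content, cid, key), tag)
```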
Anti-Pattern 3 — Training on Fused Data and Deploying on Clearance Assumption

Assuming that because all personnel in a given deployment environment hold the appropriate clearances, a model trained on multi-compartment fused data can be deployed without further controls is operationally dangerous. Personnel clearances may lapse, deployments may expand to additional sites, or the model may be copied to a test environment that does not meet the original deployment's security posture. The model's classification derives from its training corpus, not from the clearances of its current users.
Anti-Pattern 4 — Relying on Prompt-Level Compartment Enforcement

Instructing an AI agent through system prompts not to reveal information from certain sources, or to treat certain topics as off-limits, is not a compartmentalisation control. Prompt-level instructions can be overridden by adversarial inputs, forgotten in long context windows, or systematically probed to reconstruct suppressed information. Compartmentalisation must be enforced at the data and model architecture layer, not at the instruction layer.
Anti-Pattern 5 — Treating Sanitisation as Binary

Assuming that a model is either fully sanitised or fully compartmented, with no intermediate states, leads to false assurance. Sanitisation is a probabilistic process with a residual risk that must be quantified and accepted by the Security Authority. Deploying a "sanitised" model without a formal residual risk assessment and acceptance record is not compliant with the requirements of this dimension.
Level 1 — Initial: Compartment designations exist in documentation but are not enforced at the data or model layer. Access controls are purely role-based at the application layer. No training-time provenance tracking. No deployment-time classification marking on outputs.
Level 2 — Managed: Compartment-scoped data stores exist but may share underlying infrastructure. Training pipelines tag data with compartment identifiers but provenance is not preserved through all transformation steps. Output marking is applied post-inference by the application layer. Audit logs exist but are not compartment-aware.
Level 3 — Defined: Architecturally isolated compartment stores with cryptographic key separation. Training-time provenance tracking through all pipeline stages. Output derivation classification computed inside the trusted boundary. Compartment-aware audit logging. Clearance verification integrated with an authoritative identity service.
Level 4 — Quantitatively Managed: All Level 3 controls plus: formal quantification of inference-channel leakage risk (e.g., differential privacy epsilon values, membership inference attack success rates measured in controlled adversarial testing). Automated anomaly detection over audit logs. Regular penetration testing of compartment boundaries.
Level 5 — Optimising: All Level 4 controls plus: continuous monitoring of compartment boundary integrity, automated zeroisation and isolation triggers, formal verification of compartment enforcement logic in critical components, and active participation in threat intelligence sharing about novel AI-specific compartment attack techniques.
7.1 Compartment Registry

A current, Security-Authority-approved Compartment Registry listing all intelligence sources, their CIDs, originator authorities, releasability boundaries, and maximum derivative sensitivity levels. Must be retained for the operational life of the system plus 7 years. Format: formally signed document or equivalent in a controlled document management system.

7.2 Training Data Provenance Records

Source-tagged dataset manifests for every model version deployed, identifying which compartment contributed which training data subsets. Must be retained for the operational life of the model plus 7 years to support post-incident analysis. Format: machine-readable provenance graph with human-readable summary, stored in a compartment-appropriate system.

7.3 Model Classification Certificates

For every model trained on compartmented data, a formal Security Authority certification stating the model's assigned composite classification, the basis for that classification, any sanitisation review findings, and the approved deployment environment. Retained for model operational life plus 7 years.

7.4 Compartment-Aware Audit Logs

Complete audit logs per 4.6.1 requirements for every operational session. Retained for 7 years minimum, or longer as directed by the Security Authority. Must support cryptographic integrity verification and be stored in a compartment-appropriate system.

7.5 Clearance Verification Records

Evidence of clearance verification service integration, including configuration records showing re-verification frequency and session termination behaviour on clearance failure. Retained for 3 years. Format: system configuration artefacts plus periodic verification service health check reports.

7.6 Deployment Topology Records

Current and historical deployment topology diagrams with Security Authority attestation, showing all components that process compartmented data. Retained for system operational life plus 5 years.

7.7 Breach Response Plan

Current documented Compartment Breach Response Plan, version-controlled and Security-Authority-approved. Retained for operational life plus 5 years; previous versions retained as superseded.

7.8 Supplier Security Assurance Assessments

Pre-integration and periodic security assurance assessment reports for all third-party suppliers touching compartmented data or models. Retained for 5 years from assessment date.

7.9 Penetration and Adversarial Test Reports

Reports from compartment boundary penetration tests and inference-channel attack assessments, including test methodology, findings, and remediation status. Retained for 7 years.

7.10 Risk Acceptance Records

Formal Security Authority risk acceptance records for any residual compartmentalisation risk, including sanitisation residual risk assessments. Retained for system operational life plus 7 years.
Each test below maps to one or more MUST requirements from Section 4. Scoring uses a 0–3 conformance scale: 0 = requirement not met; 1 = partially met with significant gaps; 2 = substantially met with minor gaps; 3 = fully met and evidenced.
Test 8.1 — Compartment Registry Completeness and Currency
Maps to: 4.1.1, 4.1.3
Objective: Verify that a complete, current, and Security-Authority-approved Compartment Registry exists and covers all active intelligence sources.
Procedure:
Pass Criteria: Registry is complete (zero unregistered active sources), current (approval within 90 days), and all post-addition reviews are evidenced.
Scoring:
Test 8.2 — Compartment Boundary Isolation Verification
Maps to: 4.2.1, 4.2.2, 4.7.1
Objective: Verify that data from distinct compartments cannot be retrieved or cross-referenced by a process operating below the required clearance threshold.
Procedure:
Pass Criteria: Zero outputs contain content attributable to higher compartments. Embedding architecture documentation confirms cryptographic key separation per compartment boundary.
Scoring:
Test 8.3 — Training Data Provenance Reconstruction
Maps to: 4.3.1, 4.3.3
Objective: Verify that for any deployed model version, the complete compartment provenance of the training corpus can be reconstructed, and the model's assigned classification marking is consistent with its training data.
Procedure:
Pass Criteria: All three model versions have complete provenance records with full CID coverage; all classification markings are correct and tamper-evident.
Scoring:
Test 8.4 — Runtime Clearance Enforcement and Output Derivation Marking
Maps to: 4.4.1, 4.4.2, 4.5.1, 4.5.3
Objective: Verify that the system correctly enforces compartment clearance at runtime, produces correctly marked outputs, and handles clearance failures appropriately.
Procedure:
Pass Criteria: Revoked session terminated with no output and alert generated within 60 seconds; all valid outputs carry correct derivation markings; all cross-compartment queries refused and logged.
Scoring:
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| EU AI Act | Article 15 (Accuracy, Robustness and Cybersecurity) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
| International Humanitarian Law | Principles of Distinction and Proportionality | Supports compliance |
Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Intelligence Source Compartmentalisation Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-574 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.
Article 15 requires high-risk AI systems to achieve appropriate levels of accuracy, robustness, and cybersecurity. Intelligence Source Compartmentalisation Governance directly supports the robustness and cybersecurity requirements by implementing structural controls that resist adversarial manipulation and ensure system integrity under attack conditions.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-574 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Intelligence Source Compartmentalisation Governance implements a risk treatment control within the AI management system, directly satisfying the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without intelligence source compartmentalisation governance, the broader governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation; it is the binary absence of a control, permitting unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-574, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.