AG-554

Lawful Intercept Segregation Governance

Telecom, Cloud & Digital Infrastructure · AGS v2.1 · April 2026

Frameworks: EU AI Act · GDPR · NIST · ISO 42001

Section 2: Summary

This dimension governs the strict architectural and operational separation of lawful intercept (LI) functions — including mediation devices, intercept access points, handover interfaces, and warrant management workflows — from all ordinary operational, administrative, and AI-assisted network management access within carrier, cloud, and digital infrastructure environments. The control is necessary because LI infrastructure is simultaneously a legal obligation, a national-security asset, and a catastrophic liability: it represents a pre-positioned capability that, if reached by an adversarial agent, an over-privileged workflow automation, or a misconfigured AI assistant, provides complete visibility into communications traffic at scale with no inherent rate limit or alarm condition to alert the target. Failure manifests as AI agents that inadvertently traverse LI mediation paths while executing network-management tasks, as over-privileged automation pipelines querying intercept delivery functions that surface as generic API endpoints, or as rogue agents deliberately exfiltrating intercept configurations to identify who is under surveillance — outcomes that violate telecommunications law, expose law-enforcement operations, create extraditable criminal liability for the operating organisation, and in the most severe cases endanger the physical safety of protected witnesses, undercover officers, and national-security sources.

Section 3: Examples

Example A — AI Network Automation Agent Traverses Mediation Device Namespace (Carrier Environment)

A tier-1 mobile network operator deploys an AI-assisted capacity-planning agent responsible for identifying congested radio access nodes and recommending transport reconfigurations. The agent is granted a broad read scope across the operator's internal network management system (NMS) API, including the element management layer. During a routine optimisation pass across 14,000 nodes, the agent's graph-traversal logic follows a configuration dependency link from a packet-core gateway element to a mediation device that had been provisioned inside the same element-management namespace as part of a decade-old LI deployment. The mediation device exposes a REST API under the same authentication domain as the NMS. The agent enumerates the mediation device's interface table, pulls 47 active intercept session identifiers, and caches them in its working memory store as part of routine state compression. The session identifiers correspond to 47 active judicial intercept warrants. The agent's session log is replicated to a shared observability platform with a 90-day retention window accessible to 340 operations staff. The LI breach goes undetected for 11 days until a routine warrant audit identifies a discrepancy in access logs. Consequences: the operating organisation faces criminal investigation under national telecommunications interception law; 9 of the 47 warrant subjects are notified by their legal counsel that a data disclosure has occurred; three law-enforcement operations are compromised; estimated regulatory fine exposure exceeds €40 million; the CISO and two network architects face personal criminal liability in the relevant jurisdiction.

Example B — Cross-Border Orchestration Agent Leaks Intercept Topology to Non-Authorised Jurisdiction (Cloud Infrastructure Provider)

A hyperscale cloud infrastructure provider operates a cross-border AI orchestration agent that manages virtual network function (VNF) placement across 23 data centres in 11 countries. The agent is authorised to query network topology APIs to optimise placement latency. In one jurisdiction, the provider has implemented a Lawful Interception Management System (LIMS) within the same software-defined networking control plane used for commercial VNF placement. The LIMS nodes are tagged with a jurisdiction-specific access policy, but the orchestration agent's service account was provisioned in a parent organisational unit that inherited permissions before the LI tagging policy was applied. During a VNF migration operation, the agent reads LIMS node metadata — including intercept probe placement, mirroring port assignments, and the IP addresses of law-enforcement handover nodes — and includes this topology data in a placement-optimisation report written to an object-storage bucket shared with the provider's operations teams in three countries, two of which have no lawful basis to receive that data. The report is accessed by engineers in a non-authorised jurisdiction within 6 hours. Under the receiving jurisdiction's intelligence-sharing treaty obligations, the data is reportable to a foreign government. The provider faces simultaneous regulatory actions in the originating jurisdiction (breach of LI secrecy obligations) and the receiving jurisdiction (unauthorised foreign-intelligence disclosure). Remediation requires physical isolation of the LIMS control plane across all 23 sites at an estimated infrastructure cost of $18 million and an operational freeze lasting 72 hours.

Example C — Edge Robotics Agent Inadvertently Exposes LI Configuration During Network Diagnostics (Carrier Edge Deployment)

A telecommunications carrier deploys edge-based robotic process automation agents on customer-premises equipment (CPE) management nodes to perform automated fault diagnosis and firmware update management across an enterprise managed-service estate of 8,200 CPE devices. One subset of CPE nodes (31 devices) is deployed at a data centre that also hosts lawful intercept probes for a national law-enforcement body as part of a government-mandated co-location arrangement. The intercept probes share the same out-of-band (OOB) management VLAN as the CPE devices due to a provisioning error made during initial site deployment 18 months prior to the AI agent rollout. The edge agent, performing a scheduled diagnostic scan, broadcasts an ARP probe across the OOB management VLAN and receives responses from the intercept probes, cataloguing their MAC addresses and management IP addresses. This catalogue is transmitted to the central agent orchestration platform as part of a standard asset-inventory telemetry payload. The asset inventory is subsequently ingested by an AI-powered IT service management tool used by 80 helpdesk staff, rendering the physical location and management addresses of active law-enforcement intercept infrastructure visible in a commercial ITSM dashboard. A sub-contractor with dashboard access identifies the anomaly, cross-references the IP ranges against public WHOIS records, and correctly infers the function of the devices. The sub-contractor sells the information. Law enforcement is notified 14 days later through an unrelated channel. Consequences include compromise of three active criminal investigations, a €7 million fine, contract termination of the managed-service agreement, and a mandatory security architecture review imposed by the national regulatory authority lasting 8 months.

Section 4: Requirement Statement

4.0 Scope

This dimension applies to all AI agents — including but not limited to enterprise workflow agents, safety-critical cyber-physical system agents, embodied and edge agents, and cross-border or multi-jurisdiction agents — operating within telecommunications network infrastructure, cloud infrastructure provider environments, digital service infrastructure, and any co-located or adjacent computing environment where lawful intercept functions, components, data stores, management interfaces, or communications pathways are present or reachable. The scope includes direct operational access, indirect traversal through shared management planes, API federation, telemetry aggregation, topology discovery, diagnostic scanning, asset inventory collection, and any other mechanism by which an agent might encounter, enumerate, read, write, copy, cache, transmit, or influence LI-related infrastructure. The scope explicitly includes agents that are not designed or intended to interact with LI functions but whose operational context makes inadvertent access possible.

4.1 Architectural Isolation Requirement

The agent governance framework MUST enforce a complete architectural boundary between LI infrastructure — including mediation devices, intercept access points (IAPs), lawful interception management systems (LIMS), handover interfaces (HI1, HI2, HI3), warrant management systems, and any associated storage or logging components — and all network management, orchestration, telemetry, and operational support systems accessible to AI agents. This boundary MUST be implemented at the network layer (separate VLANs, VRFs, or physical segments), the identity and access management layer (non-overlapping authentication domains), and the API layer (distinct API gateways with no cross-federation of service accounts). AI agent service accounts, credential sets, and identity tokens MUST NOT be capable of authenticating to any LI component under any operational condition, including emergency, break-glass, or degraded-mode scenarios.

4.2 Discovery and Enumeration Prevention

AI agents MUST be prevented from discovering, enumerating, or cataloguing LI components through any passive or active network mechanism, including but not limited to: ARP probing, DNS enumeration, SNMP walks, ICMP sweeps, topology-graph traversal, API namespace enumeration, configuration file parsing, and telemetry aggregation. Network controls MUST suppress LI component responses to any discovery mechanism that could be exercised by an agent operating within ordinary management network segments. Where suppression is technically impossible, the LI components MUST be placed on isolated physical or logical segments with no route adjacency to agent-accessible networks.

4.3 Telemetry and Observability Exclusion

Agent telemetry pipelines, observability platforms, log aggregation systems, asset inventory tools, and AI-powered IT service management systems MUST explicitly exclude all data originating from LI infrastructure components. Data ingestion pipelines MUST implement allow-list filtering, not deny-list filtering, such that only explicitly authorised infrastructure telemetry sources are accepted. Any telemetry payload that contains IP addresses, MAC addresses, interface identifiers, configuration fragments, or any other attribute attributable to an LI component MUST be quarantined, flagged for human review, and deleted without being written to any shared storage system.
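The allow-list rule above can be sketched as a minimal ingestion filter. This is an illustrative sketch only: the address ranges, field names, and the `classify_telemetry` function are invented for the example, not drawn from any real deployment.

```python
import ipaddress

# Hypothetical allow-list of authorised telemetry sources. Section 4.3
# requires allow-listing: anything not explicitly listed is suspect.
AUTHORISED_SOURCES = {
    ipaddress.ip_network("10.20.0.0/16"),   # example: commercial NMS segment
}

def classify_telemetry(record: dict) -> str:
    """Accept only records from explicitly authorised sources;
    everything else is quarantined and flagged for human review,
    never written to shared storage."""
    src = ipaddress.ip_address(record["source_ip"])
    if any(src in net for net in AUTHORISED_SOURCES):
        return "accept"
    return "quarantine"
```

Note the default branch quarantines rather than silently drops: a payload from an unlisted source may indicate an LI component leaking into the telemetry path, which itself warrants review.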

4.4 Credential and Service Account Segregation

Agent service accounts, API keys, OAuth client credentials, and any other programmatic identity used by AI agents MUST be provisioned exclusively within organisational units, directories, or identity namespaces that have no inheritance relationship, delegation chain, or trust federation with the identity stores used to authenticate human operators or systems with LI access. Privilege inheritance paths MUST be audited at agent provisioning time and re-audited on a schedule not exceeding 90 days. Any service account found to have a resolvable privilege path to LI components MUST be immediately revoked and re-provisioned in a compliant identity scope.
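The privilege-path audit can be modelled as reachability over the graph of inheritance, delegation, and federation edges. A hedged sketch, with all account and organisational-unit names hypothetical:

```python
from collections import deque

def has_privilege_path(edges: dict, start: str, li_targets: set) -> bool:
    """Breadth-first search over identity inheritance edges. Any
    resolvable path from an agent service account to an LI-scoped
    node is a finding requiring immediate revocation (Section 4.4)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node in li_targets:
            return True
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

In practice the edge set would be exported from the identity provider; the point of the sketch is that the audit is a mechanical graph query that can run on the required 90-day cycle without manual review of every account.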

4.5 Cross-Jurisdiction Data Boundary Enforcement

AI agents operating across multiple jurisdictions MUST enforce jurisdiction-specific access policies such that LI-related metadata, configuration data, topology data, or any artefact derived from LI infrastructure in jurisdiction A is never transmitted to, stored in, or processed within jurisdiction B unless an explicit, documented, and legally validated bilateral data-sharing agreement authorises that specific category of data transfer. Agents MUST maintain a jurisdiction-aware data classification policy that tags all network topology outputs, placement reports, and configuration artefacts with the jurisdiction of origin and enforces egress controls accordingly. Cross-jurisdiction orchestration agents MUST treat LI-tagged data as untransferable by default.
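The default-deny egress rule can be expressed as a small policy check. A minimal sketch, assuming artefacts carry a jurisdiction tag and an `li_tagged` flag (both illustrative field names):

```python
def egress_allowed(artifact: dict, dest_jurisdiction: str,
                   approved_transfers: set) -> bool:
    """LI-tagged artefacts are untransferable by default; any other
    cross-jurisdiction transfer requires an entry in the set of
    legally validated bilateral agreements (Section 4.5)."""
    origin = artifact["jurisdiction"]
    if origin == dest_jurisdiction:
        return True
    if artifact.get("li_tagged", False):
        return False                      # untransferable by default
    return (origin, dest_jurisdiction) in approved_transfers
```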

4.6 Audit and Non-Repudiation

All access attempts — successful or unsuccessful — to any system, API, network segment, or data store within the LI protection perimeter by any AI agent or agent-associated process MUST be logged to an immutable, append-only audit store that is operationally segregated from the agent's own logging infrastructure. Audit records MUST include: agent identity, timestamp with sub-second resolution, the specific resource against which access was attempted, the outcome, the initiating operation context, and the agent's current task state. Audit logs MUST be retained for a minimum of 7 years or the duration specified by the applicable national telecommunications law, whichever is longer. Audit logs MUST NOT be accessible to the agent that generated them, to the agent's orchestration layer, or to any automated process capable of modifying or deleting entries.
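One way to satisfy the append-only property is a hash chain, in which each entry binds the digest of its predecessor so that any retroactive edit breaks every later link. This is an illustrative sketch, not a prescribed implementation:

```python
import hashlib
import json

def append_audit(chain: list, record: dict) -> list:
    """Append a record to the chain; each entry commits to the
    previous entry's hash. Returns a new list (never mutates)."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    return chain + [{"record": record, "prev": prev, "hash": digest}]

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampered record or reordered
    entry causes verification to fail."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```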

4.7 Human-Only Warrant and Intercept Administration

AI agents MUST NOT be permitted to read, write, modify, approve, reject, queue, route, or otherwise process lawful intercept warrants, warrant metadata, warrant identifiers, or any data structure that encodes, references, or is derived from a judicial intercept authorisation. Warrant administration MUST be restricted to verified human operators authenticated through hardware-backed multi-factor authentication within a physically and logically isolated administrative enclave. AI agents MUST NOT be used to assist in the drafting, review, formatting, or delivery of warrant documentation, even in an advisory or read-only capacity.

4.8 Incident Response and Containment

Any detection of an AI agent accessing, approaching, or receiving data from within the LI protection perimeter MUST trigger an immediate automated containment response: the agent MUST be suspended, its active session terminated, its working memory flushed, and its most recent outputs quarantined pending human review. The containment event MUST be reported to the organisation's designated LI compliance officer, legal counsel, and the relevant national regulatory authority within the timeframe mandated by applicable law (not to exceed 72 hours in jurisdictions governed by GDPR-aligned telecommunications regulations). The agent MUST NOT be reinstated without a formal root-cause analysis, a remediation certification, and written approval from the LI compliance officer.
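The containment sequence can be expressed as a single idempotent state transition, so that repeated triggers cannot partially reinstate an agent. Field names below are illustrative:

```python
def contain_agent(agent: dict) -> dict:
    """Apply the Section 4.8 containment actions in one step:
    suspend, terminate the session, flush working memory, and
    quarantine recent outputs. Reinstatement is deliberately not
    representable here; it requires human approval out of band."""
    return {**agent,
            "status": "suspended",
            "session": None,
            "working_memory": {},
            "outputs_quarantined": True,
            "reinstatement": "pending_li_compliance_officer_approval"}
```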

4.9 Vendor and Third-Party Agent Governance

Where AI agents are supplied by third-party vendors, system integrators, or managed-service providers, the operating organisation MUST contractually require adherence to all requirements in Sections 4.1 through 4.8 as a condition of deployment. Third-party agents MUST undergo a pre-deployment LI segregation assessment conducted by the operating organisation's security team or an accredited independent assessor. The assessment MUST verify that the agent's service account scope, network access profile, telemetry destinations, and API integration points comply with the architectural boundary requirements of this dimension before any production deployment. Ongoing compliance MUST be verified through annual reassessment or following any material change to the agent's integration architecture.

Section 5: Rationale

Structural vs Behavioural Enforcement

Lawful intercept segregation cannot be achieved through behavioural controls alone — that is, through agent instruction, policy prompting, or training-time alignment nudges. Behavioural controls are necessary complements but are structurally insufficient because: (a) AI agents are subject to goal-directed traversal logic that will follow valid access paths regardless of stated intent; (b) LI components that share management namespaces with legitimate operational infrastructure present no behavioural signal distinguishable from authorised infrastructure; and (c) the consequences of a single traversal event are legally irreversible — the exposure of warrant metadata, once it has occurred, cannot be undone through subsequent remediation. Structural controls — architectural segmentation, non-overlapping identity domains, network-layer suppression, allow-list telemetry filtering — are the only controls that can provide categorical prevention guarantees independent of agent behaviour.

Why AI Agents Present a Qualitatively Different Risk Profile

Traditional LI segregation frameworks were designed around the access patterns of human operators and static automated systems. AI agents introduce three risk vectors that existing frameworks do not adequately address. First, AI agents perform autonomous graph traversal and resource discovery as a functional necessity, not as a deliberate attack behaviour — the same capability that makes them valuable for network optimisation makes them capable of inadvertent LI enumeration. Second, AI agents accumulate and compress state into working memory and external caches that may persist LI-derived artefacts well beyond the immediate operational context, creating secondary exposure pathways. Third, AI agents operating in multi-tenant or cross-border orchestration architectures can transmit discovered topology data to systems and personnel in jurisdictions where receipt of that data constitutes a separate legal violation. The control requirements in Section 4 are calibrated specifically to address these agent-specific risk vectors.

Secrecy Obligations and Strict Liability

LI frameworks in all major telecommunications regulatory regimes impose secrecy obligations on the existence of intercept warrants, the identity of warrant subjects, and the technical means of interception. These obligations extend to all personnel, systems, and — by interpretive extension in most jurisdictions — automated processes with access to LI infrastructure. An AI agent that inadvertently reads LI session data does not benefit from an "inadvertent access" defence in most telecommunications law frameworks: liability attaches to the operating organisation for failing to prevent the access, not to the specific actor that performed it. The severity of this liability — which in many jurisdictions includes criminal penalties for responsible officers — necessitates preventive controls that eliminate the possibility of access rather than merely detecting and responding to it.

The Principle of Categorical Separation

The fundamental principle underlying this dimension is categorical separation: LI infrastructure must exist in a category of systems that is definitionally unreachable by AI agents, not merely a category that agents are instructed not to reach. This principle mirrors the air-gap requirements applied to safety-critical industrial control systems and the physical security perimeters applied to classified government networks. In each of these analogues, the recognised best practice is to make unauthorised access structurally impossible through physical and logical architecture, not to rely on the good behaviour of systems that have logical access. The MUST requirements in Section 4 implement this principle across the network, identity, telemetry, and operational layers of a typical carrier or cloud infrastructure environment.

Section 6: Implementation Guidance

Pattern 1 — Dedicated LI Management Plane with Zero Route Adjacency

Deploy all LI components (mediation devices, LIMS, HI interfaces, probe management) on a dedicated out-of-band management network with no IP routing adjacency to any network segment accessible by AI agents. Use separate physical switches or dedicated VRF instances with explicit route-policy denial at all boundary points. Ensure DNS zones for LI management IP ranges are hosted on resolvers unreachable from agent-accessible networks. Validate zero-adjacency through quarterly network topology audits using path-trace verification tools.
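Zero-adjacency validation can be partially automated by checking agent-segment route tables for overlap with the documented LI ranges. A sketch, assuming routes and LI ranges are available as CIDR strings (a supplement to, not a substitute for, live path-trace testing):

```python
import ipaddress

def adjacency_findings(agent_routes, li_ranges):
    """Flag any route-table entry on an agent-accessible segment
    whose destination overlaps a documented LI management range.
    A non-empty result means the zero-adjacency requirement fails."""
    findings = []
    for route in agent_routes:                     # CIDR strings
        dst = ipaddress.ip_network(route)
        for li in li_ranges:
            if dst.overlaps(ipaddress.ip_network(li)):
                findings.append((route, li))
    return findings
```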

Pattern 2 — Allow-List Telemetry Ingestion with Source Validation

Implement telemetry ingestion pipelines that operate on an explicit allow-list of authorised source IP ranges, device identifiers, and SNMP community strings. Any telemetry source not present on the allow-list is rejected at the ingestion boundary and quarantined for manual review. Allow-lists are reviewed and re-certified every 30 days. Source IP validation is performed through reverse-DNS resolution and cross-reference against the authoritative CMDB — not against agent-discovered asset inventories.
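The 30-day re-certification rule lends itself to an automated staleness check that can gate continued ingestion. A sketch with hypothetical entry fields:

```python
from datetime import datetime, timedelta, timezone

def stale_allowlist_entries(entries, now=None, max_age_days=30):
    """Return the sources whose last certification is older than the
    30-day window required by Pattern 2; any finding should block
    further ingestion from that source until re-certified."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [e["source"] for e in entries if e["certified_at"] < cutoff]
```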

Pattern 3 — Identity Namespace Hard Partitioning

Maintain two entirely separate identity namespaces: one for all AI agent service accounts and one for all LI-authorised human operators and LI system service accounts. These namespaces MUST have no trust federation, no shared certificate authority, no common LDAP/directory root, and no cross-namespace group membership. Implement automated scanning of identity namespace configurations on a 30-day cycle to detect any inadvertent trust relationship that may have been introduced through infrastructure changes.

Pattern 4 — LI-Aware API Gateway Boundary

Where operational infrastructure APIs are exposed to AI agents through an API gateway or service mesh, implement a dedicated LI exclusion policy layer at the gateway that explicitly rejects any request to a URI pattern, resource identifier, or service name that matches a curated LI component registry. The registry is maintained by the LI compliance function, not the network operations team, and is version-controlled with change management approval required for all updates.
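The gateway exclusion layer amounts to a reject-before-route check against the registry. The URI patterns below are invented for illustration; in practice they would come from the version-controlled registry owned by the LI compliance function:

```python
import re

# Illustrative LI component registry entries (hypothetical paths).
LI_REGISTRY = [re.compile(p) for p in (
    r"^/api/v\d+/lims(/|$)",        # LIMS resource namespace
    r"^/api/v\d+/mediation(/|$)",   # mediation device APIs
    r"(^|/)hi[123](/|$)",           # handover interface endpoints
)]

def gateway_decision(path: str) -> str:
    """Reject any request whose URI matches the LI registry before
    it reaches routing; everything else falls through to the normal
    authorisation pipeline."""
    if any(p.search(path) for p in LI_REGISTRY):
        return "reject"
    return "forward"
```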

Pattern 5 — Agent Working Memory Flush on Segment Boundary Crossing

Implement an agent lifecycle policy that requires a verified working memory flush any time an agent's execution context transitions between network management domains. The flush is verified through a cryptographic attestation mechanism that confirms the agent's in-context state has been cleared before the new operational context is initialised. This prevents LI-adjacent data from being carried across context boundaries even if a traversal event occurs.
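The attestation can be as simple as comparing a digest of the in-context memory against the known empty-state digest. A minimal sketch, with the state layout invented for illustration:

```python
import hashlib
import json

# Digest of an empty working-memory object; the orchestrator accepts
# a domain transition only when the agent's attestation matches this.
EMPTY_DIGEST = hashlib.sha256(b"{}").hexdigest()

def flush(state: dict) -> dict:
    """Clear working memory at a domain boundary; task metadata
    unrelated to in-context state is preserved."""
    cleared = dict(state)
    cleared["working_memory"] = {}
    return cleared

def attest(state: dict) -> str:
    """Digest of the in-context memory; equals EMPTY_DIGEST only
    when the flush actually cleared everything."""
    return hashlib.sha256(
        json.dumps(state["working_memory"], sort_keys=True).encode()
    ).hexdigest()
```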

Pattern 6 — Jurisdiction-Tagged Topology Outputs

For cross-border orchestration agents, implement a data-classification wrapper that automatically tags all network topology outputs, placement reports, and configuration artefacts with the jurisdiction code of the infrastructure source. Egress policies enforce that jurisdiction-tagged artefacts cannot be written to storage buckets, shared with service accounts, or transmitted to orchestration components associated with a different jurisdiction code without explicit policy approval from the LI compliance function.

Explicit Anti-Patterns

Anti-Pattern 1 — Shared Element Management Namespace

Do not deploy LI mediation devices, LIMS nodes, or intercept probes within the same element management system namespace as commercial network infrastructure. Even if access controls are applied at the object level, namespace co-habitation creates enumeration risks through graph traversal, configuration dependency links, and shared index structures that AI agents will follow as part of legitimate optimisation tasks.

Anti-Pattern 2 — Deny-List Telemetry Filtering

Do not implement LI exclusion through deny-list filtering of telemetry ingestion. Deny-lists are inherently incomplete: newly provisioned LI components, IP address changes, and configuration migrations will create windows where LI telemetry is ingested before the deny-list is updated. Allow-list filtering is the only pattern that provides categorical exclusion.

Anti-Pattern 3 — Inherited Privilege from Parent Organisational Units

Do not provision AI agent service accounts in parent organisational units that have inherited permission policies from historic deployments. Inherited permissions are the most common vector through which agent service accounts acquire unintended LI access. All agent service accounts MUST be provisioned in purpose-built, minimally scoped organisational units with explicit permission grants only.

Anti-Pattern 4 — Co-Located OOB Management VLANs

Do not share out-of-band management VLANs between LI probe infrastructure and commercial CPE or network equipment. This is the pattern responsible for the failure chain in Example C. Even when the co-location appears administratively convenient (a single data centre, a single managed-service arrangement), the OOB management plane must be physically or logically separate for any site hosting LI infrastructure.

Anti-Pattern 5 — LI Awareness as Agent-Level Policy

Do not attempt to implement LI segregation by instructing AI agents to avoid LI infrastructure through system prompts, fine-tuning, or policy configuration. Agent-level behavioural policy cannot be relied upon for a control of this severity. Structural architecture must make LI infrastructure unreachable; agent-level policy may complement but must never substitute for architectural controls.

Anti-Pattern 6 — Shared Observability Platforms

Do not ingest LI management logs, probe health telemetry, or any LI-adjacent operational data into shared observability platforms, SIEM systems, or log aggregators that are also used for commercial network operations. Even metadata-level co-location (log timestamps, source IP patterns, event correlation) can expose LI operational patterns to AI-assisted analytics tools processing the shared platform.

Maturity Model

Level 1 — Initial: LI components co-located with operational infrastructure; no formal agent access controls specific to LI; reliance on general network access policies.
Level 2 — Developing: LI components identified and inventoried; VLAN separation in place; agent service accounts scoped to exclude known LI management IPs; telemetry deny-listing implemented.
Level 3 — Defined: Full architectural isolation with separate management plane; allow-list telemetry ingestion; identity namespace hard partitioning; pre-deployment LI segregation assessments for new agents.
Level 4 — Managed: Automated privilege-path auditing (≤90-day cycle); agent working memory flush on domain transitions; jurisdiction-aware topology tagging; LI-aware API gateway enforcement; immutable audit logging operational.
Level 5 — Optimising: Continuous real-time detection of LI proximity events; automated containment with sub-minute response time; quarterly network topology zero-adjacency verification; cross-jurisdiction policy engine with regulatory update automation; annual external assurance on LI segregation architecture.

Section 7: Evidence Requirements

7.1 Architecture Documentation

The operating organisation MUST maintain current-state network architecture diagrams that explicitly depict: the LI management plane boundary; all network segments, VLANs, and VRFs with route adjacency to agent-accessible networks; all points of demarcation between operational and LI infrastructure; and the identity namespace topology showing the separation between agent service account stores and LI-authorised identity stores. Architecture documentation MUST be reviewed and re-certified following any material infrastructure change and at minimum annually. Retention period: 10 years.

7.2 Identity Namespace Audit Records

Records of all identity namespace audits conducted in accordance with Section 4.4 MUST be retained, including: the date of audit, the auditor identity, the scope of systems reviewed, any privilege paths to LI components identified, and the remediation actions taken. Retention period: 7 years.

7.3 Pre-Deployment LI Segregation Assessment Reports

For all AI agents subject to Section 4.9, pre-deployment assessment reports MUST be retained, documenting: the agent's service account scope at assessment time, the network access profile, the telemetry destinations, the API integration points reviewed, the assessor's findings, and the certification outcome. Retention period: 7 years or the operational lifetime of the agent, whichever is longer.

7.4 Immutable Audit Logs

Audit logs required under Section 4.6 MUST be stored in an immutable, append-only system with cryptographic integrity verification (e.g., hash-chain or Merkle-tree integrity). Log integrity MUST be verifiable through periodic checks on a schedule not exceeding 30 days. Retention period: 7 years minimum or the duration specified by applicable national telecommunications law, whichever is longer.

7.5 Containment Event Records

Records of all LI proximity containment events triggered under Section 4.8 MUST be retained, including: the triggering event description, the agent identity and task context, the containment actions taken and their timestamps, the root-cause analysis outcome, the remediation certification, and the reinstatement approval. Retention period: 10 years.

7.6 Regulatory Notification Records

Where a containment event has resulted in a regulatory notification obligation, records of the notification — including the notification content, the recipient authority, the submission timestamp, and any regulatory response — MUST be retained. Retention period: 10 years.

7.7 Telemetry Pipeline Allow-List Certification Records

Records of monthly allow-list reviews conducted in accordance with Pattern 2 MUST be retained, documenting the reviewer identity, the sources reviewed, any sources added or removed, and the certification outcome. Retention period: 3 years.

7.8 Third-Party Vendor Compliance Declarations

For third-party agents subject to Section 4.9, signed vendor compliance declarations confirming adherence to all requirements in Sections 4.1 through 4.8 MUST be retained. Annual re-certification declarations MUST be obtained and retained. Retention period: 7 years.

Section 8: Test Specification

Scoring Key

0 — Requirement not met; critical finding
1 — Partial implementation; significant gaps identified
2 — Substantially met; minor deficiencies noted
3 — Fully met; evidence complete and contemporaneous

Test 8.1 — Architectural Isolation Verification

Maps to: Section 4.1

Objective: Verify that a complete architectural boundary exists between LI infrastructure and all AI agent-accessible systems across network, identity, and API layers.

Method:

  1. Obtain current-state network architecture diagrams and verify they explicitly identify the LI management plane boundary.
  2. Using network topology verification tools, execute path-trace tests from a representative sample of at least 5 agent service account network origins toward the documented LI management IP ranges. Confirm that no routable path exists.
  3. Audit the identity management system to confirm that agent service account organisational units have no inheritance relationship, delegation chain, or trust federation with LI-authorised identity stores. Review at minimum 10 agent service accounts and trace their full permission inheritance graph.
  4. Confirm that no API gateway federation, service mesh policy, or API key cross-authorisation exists that would permit an agent credential to authenticate to any LI component API.
  5. Review break-glass and emergency access procedures to confirm that emergency credentials used in degraded-mode scenarios are also excluded from LI access scope.

Pass Criteria: Zero routable paths identified; zero inheritance relationships identified; zero API cross-authorisation identified; emergency procedures explicitly exclude LI scope.

Scoring: Apply the Scoring Key (0–3) to the findings against the Pass Criteria above.

Test 8.2 — Discovery and Enumeration Prevention Verification

Maps to: Section 4.2

Objective: Verify that AI agents cannot discover, enumerate, or catalogue LI components through any passive or active network mechanism.

Method:

  1. From a network segment representative of agent deployment (using a test agent identity with production-equivalent network access), execute the following discovery operations: ARP broadcast across all accessible subnets; DNS enumeration of all resolvable hostnames; SNMP walk of all reachable management IP ranges; ICMP sweep of all documented IP address ranges; TCP port scan of management port ranges (22, 23, 161, 443, 8080, 8443) against all IPs in the agent-accessible address space.
  2. Cross-reference all responses received against the authoritative LI component inventory.
  3. Execute a simulated API namespace enumeration using agent-equivalent API credentials against all API gateways accessible to agent service accounts. Confirm that no LI component identifiers, resource paths, or service names are returned.
  4. Review the telemetry ingestion pipeline configuration to confirm allow-list filtering is in place and that LI source identifiers are not present on any allow-list.

Pass Criteria: Zero LI component responses received in discovery operations; zero LI identifiers surfaced through API enumeration; allow-list filtering confirmed with LI sources absent.
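The cross-referencing in step 2 can be expressed as a simple set intersection between discovery responses and the LI inventory. The inventory contents and probe names below are illustrative assumptions, not real addresses.

```python
# Hypothetical authoritative LI component inventory (controlled-access
# data in production; assumed values here).
LI_INVENTORY = {"10.99.0.5", "10.99.0.6"}

def li_hits(scan_responses):
    """Cross-reference discovery responses against the LI inventory.

    scan_responses: dict mapping probe type ("arp", "snmp", "icmp", ...)
    to the set of responding IP addresses. The pass criterion requires
    an empty result: zero LI component responses across all probes.
    """
    return {probe: ips & LI_INVENTORY
            for probe, ips in scan_responses.items()
            if ips & LI_INVENTORY}

# A clean sweep versus an ICMP sweep that surfaced a mediation device.
assert li_hits({"arp": {"10.20.1.1"}, "snmp": {"10.20.1.9"}}) == {}
assert li_hits({"icmp": {"10.20.1.1", "10.99.0.5"}}) == {"icmp": {"10.99.0.5"}}
```

Keeping the result keyed by probe type preserves which discovery mechanism leaked the component, which is the detail the remediation work needs.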

Scoring:

Test 8.3 — Telemetry and Observability Exclusion Audit

Maps to: Section 4.3

Objective: Verify that agent telemetry pipelines, observability platforms, and asset inventory systems contain no data attributable to LI infrastructure components.

Method:

  1. Query the production telemetry ingestion system for all data records with source IP addresses matching the LI management IP ranges (obtained from the authoritative LI component inventory under controlled access conditions).
  2. Query all asset inventory systems for any asset records with IP addresses, MAC addresses, or interface identifiers matching known LI components.
  3. Review the telemetry ingestion pipeline source allow-list and confirm that no LI source is present.
  4. Inject a synthetic telemetry payload with a source IP address within the LI management range into the telemetry ingestion pipeline (using a test harness that does not interact with production LI infrastructure) and verify that the payload is rejected at the ingestion boundary and quarantined rather than ingested.
  5. Review the AI-powered ITSM system's asset database for any entries attributable to LI infrastructure.

Pass Criteria: Zero LI data found in production telemetry or asset inventory systems; synthetic payload rejected and quarantined; allow-list confirmed LI-source-free.
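The ingestion-boundary behaviour expected by steps 3 and 4 can be sketched as an allow-list filter that quarantines rather than ingests. The ranges, allow-list, and payload shape are assumptions for illustration; the real pipeline would enforce this at its ingestion gateway.

```python
import ipaddress

# Assumed LI management range and telemetry source allow-list.
LI_RANGE = ipaddress.ip_network("10.99.0.0/24")
ALLOW_LIST = {ipaddress.ip_network("10.20.0.0/16")}

quarantine = []

def ingest(payload):
    """Allow-list filter at the telemetry ingestion boundary (a sketch).

    Anything outside the allow-list is rejected and quarantined, never
    ingested. Because LI sources must not appear on any allow-list, a
    synthetic LI-range payload is always rejected at this boundary.
    """
    src = ipaddress.ip_address(payload["source_ip"])
    if not any(src in net for net in ALLOW_LIST):
        quarantine.append(payload)  # step 4: rejected and quarantined
        return False
    assert src not in LI_RANGE      # defence in depth: allow-list is LI-free
    return True

assert ingest({"source_ip": "10.20.3.7", "metric": "cpu"}) is True
assert ingest({"source_ip": "10.99.0.42", "metric": "synthetic"}) is False
assert len(quarantine) == 1
```

The final assertion inside `ingest` encodes the allow-list review in step 3: even if a misconfiguration added an LI source to the allow-list, the filter would fail loudly rather than ingest the data.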

Scoring:

Test 8.4 — Credential and Service Account Segregation Audit

Maps to: Section 4.4

Objective: Verify that agent service accounts have no resolvable privilege path to LI components, and that the 90-day re-audit cycle is being executed.

Method:

  1. For a representative sample of at least 15 agent service accounts (covering enterprise workflow, edge, and cross-border agent profiles), execute a full privilege-inheritance graph analysis using the organisation's identity audit tooling.
  2. For each service account, confirm: (a) provisioned in a purpose-built, minimally scoped OU; (b) no inheritance from parent OUs with LI access; (c) no group memberships that transitively grant LI access; (d) no delegated credential relationships with LI-authorised accounts.
  3. Review records of the last three identity namespace audits and confirm they were conducted within 90-day intervals.
  4. Identify any service accounts that were modified or re-scoped since the last audit and verify they underwent an ad-hoc privilege-path analysis at the time of modification.

Pass Criteria: Zero resolvable privilege paths across all sampled accounts; 90-day audit cadence confirmed; modification-triggered audits evidenced.
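The privilege-inheritance analysis in steps 1 and 2 amounts to a reachability search over the identity graph. A minimal sketch, assuming a toy graph whose account, OU, and group names are entirely hypothetical:

```python
from collections import deque

# Hypothetical identity graph: edges are "inherits from / member of /
# delegated to" relationships, as enumerated in step 2.
EDGES = {
    "svc-agent-edge-01": ["ou-agents"],
    "ou-agents": [],                     # purpose-built OU, no parent
    "svc-netops-07": ["grp-netops", "ou-staff"],
    "grp-netops": ["grp-li-operators"],  # transitive LI grant: a finding
}
LI_AUTHORISED = {"grp-li-operators", "ou-li-admins"}

def privilege_path(account):
    """Breadth-first search of the inheritance graph.

    Returns the first resolvable path from `account` to an LI-authorised
    principal, or None. The pass criterion requires None for every
    sampled agent service account.
    """
    queue = deque([(account, [account])])
    seen = set()
    while queue:
        node, path = queue.popleft()
        if node in LI_AUTHORISED:
            return path
        if node in seen:
            continue
        seen.add(node)
        for parent in EDGES.get(node, []):
            queue.append((parent, path + [parent]))
    return None

assert privilege_path("svc-agent-edge-01") is None  # pass: no path
assert privilege_path("svc-netops-07") == [
    "svc-netops-07", "grp-netops", "grp-li-operators"]
```

Returning the full path rather than a boolean is deliberate: the audit evidence needs to show which intermediate group or delegation created the transitive grant, not merely that one exists.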

Scoring:

Test 8.5 — Cross-Jurisdiction Data Boundary Enforcement Audit

Maps to: Section 4

Section 9: Regulatory Mapping

| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
| NIS2 Directive | Article 21 (Cybersecurity Risk Management Measures) | Supports compliance |

EU AI Act — Article 9 (Risk Management System)

Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Lawful Intercept Segregation Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-554 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.

NIST AI RMF — GOVERN 1.1, MAP 3.2, MANAGE 2.2

GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-554 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.

ISO 42001 — Clause 6.1, Clause 8.2

Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Lawful Intercept Segregation Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.

Section 10: Failure Severity

| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |

Consequence chain: Without lawful intercept segregation governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation — it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-554, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.

Cite this protocol
AgentGoverning. (2026). AG-554: Lawful Intercept Segregation Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-554