AG-300

Client-Tenant Segregation Governance

Access, Segmentation & Least Privilege · AGS v2.1 · April 2026

2. Summary

Client-Tenant Segregation Governance requires that in any multi-tenant deployment — where a single AI agent platform serves multiple clients, customers, or organisational tenants — each tenant's data, state, configuration, and operational context is completely isolated from every other tenant at the infrastructure layer. No agent action, query, or reasoning process operating on behalf of Tenant A may access, infer, or contaminate data belonging to Tenant B. This is not a permission model refinement; it is a structural guarantee that tenants are invisible to each other. The failure mode is among the most damaging in AI governance: cross-tenant data leakage in a SaaS AI platform exposes every customer simultaneously and typically triggers contractual, regulatory, and legal consequences across multiple jurisdictions.

3. Example

Scenario A — Shared Vector Store Cross-Tenant Leakage: A SaaS provider operates an AI assistant platform serving 340 enterprise clients. Each client's knowledge base is embedded into a shared vector database with tenant identifiers stored as metadata. A query from Client A's agent searches for "Q3 revenue projections." The vector similarity search returns the 10 nearest neighbours, 8 from Client A and 2 from Client B, because Client B's Q3 revenue projections happen to be semantically similar. The tenant metadata filter is applied in the application layer after retrieval. Due to a caching bug, the filter fails to remove Client B's results for 0.3% of queries under high load. Over 6 months, approximately 4,100 queries leak cross-tenant data.

What went wrong: Tenant isolation relied on application-layer filtering rather than infrastructure-level separation. The vector store was shared, and the tenant filter was a post-retrieval application check that failed under load. The caching bug was intermittent and escaped testing. Consequence: Data breach affecting 340 enterprise clients, mandatory notification to 23 data protection authorities across 14 jurisdictions, class action lawsuit from affected clients, estimated liability of $12.8 million, loss of 89 enterprise contracts within 90 days.
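The structural alternative to Scenario A's post-retrieval filter is to scope the search itself: each tenant gets its own index, so there is never a shared candidate set to filter. The sketch below is a toy in-memory store, not a real vector-database client; the class and method names are hypothetical.

```python
import math
from collections import defaultdict

class TenantScopedVectorStore:
    """Toy vector store with one index per tenant. A query physically
    cannot see another tenant's vectors -- the isolation boundary is
    structural, not a metadata filter applied after retrieval."""

    def __init__(self):
        self._indices = defaultdict(list)  # tenant_id -> [(doc_id, vector)]

    def add(self, tenant_id, doc_id, vector):
        self._indices[tenant_id].append((doc_id, vector))

    def query(self, tenant_id, vector, k=10):
        # Search is confined to the caller's own index from the start.
        index = self._indices[tenant_id]
        scored = sorted(index, key=lambda item: -_cosine(item[1], vector))
        return [doc_id for doc_id, _ in scored[:k]]

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

With this layout, the 0.3% filter failure in Scenario A has no equivalent: there is no code path in which Client B's vectors are ever candidates for Client A's query.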

Scenario B — Shared Context Window State Contamination: A multi-tenant AI platform reuses agent instances across tenants for efficiency. Between tenant sessions, the platform clears the conversation history but does not fully reset the model's context state. Residual token patterns from Tenant A's session influence the agent's responses to Tenant B. In one incident, a healthcare client's patient diagnosis discussion leaves statistical patterns in the context that cause the agent to reference "patient outcomes" terminology when responding to a subsequent retail client's inventory query. The retail client escalates, concerned about data contamination.

What went wrong: Tenant isolation did not extend to the full computational state of the agent. Session clearing removed conversation history but not all residual state. The shared agent instance created a channel for cross-tenant information leakage through statistical context contamination. Consequence: Healthcare client invokes breach notification clause, retail client terminates contract citing security concerns, regulatory investigation by HHS Office for Civil Rights under HIPAA, platform-wide security audit costing $2.1 million.

Scenario C — Cross-Tenant Configuration Drift Through Shared Infrastructure: A multi-tenant platform deploys agents on shared Kubernetes infrastructure. Tenant-specific configuration is managed through ConfigMaps and environment variables. A deployment error causes Tenant A's database connection string to be injected into Tenant B's agent pod. Tenant B's agent begins writing operational data to Tenant A's database. The error persists for 72 hours before monitoring detects the anomalous write pattern. During that period, 14,000 records from Tenant B are written to Tenant A's database, and Tenant B's agent has read access to Tenant A's data through the same connection.

What went wrong: Shared infrastructure without structural tenant isolation at the deployment layer. Configuration injection relied on correct labelling and deployment ordering rather than structural guarantees. No runtime verification confirmed that each agent's data connections pointed to the correct tenant's resources. Consequence: Bilateral data breach between Tenant A and Tenant B, mandatory deletion and certification of deleted data, contractual penalties of $1.4 million, FCA investigation for the financial services tenant.
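The missing runtime verification in Scenario C can be a fail-closed startup check: before the agent boots, confirm that the injected connection string actually references this tenant's resources. The sketch assumes a naming convention (database name prefixed with the tenant ID), which is hypothetical; a real deployment would verify against a signed tenant-resource manifest rather than a string prefix.

```python
from urllib.parse import urlparse

class TenantConfigMismatch(Exception):
    """Raised when an injected resource does not belong to this tenant."""

def verify_database_binding(tenant_id: str, dsn: str) -> None:
    """Fail-closed startup check: refuse to boot if the injected DSN
    does not reference this tenant's database. The tenant-prefix naming
    convention is an assumption for illustration."""
    db_name = urlparse(dsn).path.lstrip("/")
    if not db_name.startswith(f"{tenant_id}_"):
        raise TenantConfigMismatch(
            f"DSN database {db_name!r} is not bound to tenant {tenant_id!r}"
        )
```

Had Tenant B's pod run this check against the mistakenly injected connection string, the deployment would have failed immediately instead of writing to Tenant A's database for 72 hours.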

4. Requirement Statement

Scope: This dimension applies to any AI agent platform, service, or deployment model where agents or agent infrastructure is shared across multiple clients, customers, or organisational tenants. A tenant is any entity whose data must be isolated from other entities using the same platform — this includes external customers in a SaaS model, internal business units treated as separate tenants, or partner organisations sharing a federated platform. Single-tenant deployments where one organisation exclusively owns and operates the entire infrastructure are excluded, though organisations using third-party AI platforms should verify that their provider's multi-tenant implementation meets this dimension's requirements. The scope extends to all shared components: compute, storage, networking, caching layers, vector databases, model context, logging infrastructure, configuration management, and any other component where tenant data or state could intermingle.

4.1. A conforming system MUST ensure that each tenant's data — including input data, output data, intermediate state, embeddings, logs, configuration, and cached artefacts — is isolated from every other tenant at the infrastructure layer.

4.2. A conforming system MUST implement tenant isolation such that no agent action, query, or reasoning process operating on behalf of one tenant can access, retrieve, infer, or modify data belonging to another tenant.

4.3. A conforming system MUST prevent cross-tenant data leakage through shared components including but not limited to: shared vector stores, shared caches, shared model instances, shared logging pipelines, shared configuration stores, and shared temporary storage.

4.4. A conforming system MUST verify tenant identity at the infrastructure layer for every data access operation, not solely at session initiation.
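Per-request verification under 4.4 can be sketched as a wrapper that re-checks tenant identity on every data access rather than trusting a check done once at session start. The `Session` and `Resource` types and their attribute names are assumptions for illustration; in production the check would sit in the data-access layer itself, not in application code.

```python
import functools
from dataclasses import dataclass

class TenantAccessDenied(Exception):
    pass

@dataclass
class Session:
    tenant_id: str

@dataclass
class Resource:
    owner_tenant: str
    payload: str

def require_tenant(fn):
    """Re-verify the caller's tenant on every data access (default deny)."""
    @functools.wraps(fn)
    def wrapper(session, resource, *args, **kwargs):
        if session.tenant_id != resource.owner_tenant:
            raise TenantAccessDenied(
                f"{session.tenant_id!r} may not access "
                f"{resource.owner_tenant!r} data"
            )
        return fn(session, resource, *args, **kwargs)
    return wrapper

@require_tenant
def read_resource(session, resource):
    return resource.payload
```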

4.5. A conforming system MUST ensure that agent instances serving one tenant carry no residual state, context, or cached data from previous tenant sessions when reused across tenants.
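One way to satisfy 4.5 is to never scrub a reused instance at all: destroy the old instance and construct a fresh one, then verify the fresh instance's state is empty. The `AgentInstance` stand-in below is a minimal sketch; a real platform would destroy the container or process, not just the object.

```python
class AgentInstance:
    """Minimal stand-in for an agent runtime with mutable state."""
    def __init__(self, tenant_id):
        self.tenant_id = tenant_id
        self.context = []  # conversation/context state

    def observe(self, text):
        self.context.append(text)

def fresh_agent_for(tenant_id, previous=None):
    """Destroy-and-recreate sketch: the old instance is discarded rather
    than scrubbed, and the replacement is verified to carry no state."""
    del previous  # drop the reference; never hand the old object on
    agent = AgentInstance(tenant_id)
    assert agent.context == [], "fresh agent must carry no residual state"
    return agent
```

This is the pattern that would have prevented Scenario B: Tenant A's session state cannot leak into Tenant B's responses because Tenant B never receives the same instance.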

4.6. A conforming system SHOULD implement tenant isolation through physically or logically separate data stores per tenant, rather than shared data stores with application-layer filtering.

4.7. A conforming system SHOULD conduct automated cross-tenant access testing on a continuous basis, verifying that operations executed on behalf of Tenant A cannot retrieve data belonging to Tenant B.
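The continuous testing in 4.7 is often implemented as a canary probe: seed each tenant with marker documents, then assert that queries issued as every other tenant never return them. The `ToyStore` and the `store.query(tenant_id, text)` interface below are assumptions for illustration.

```python
class ToyStore:
    """Tiny tenant-scoped store used only to exercise the probe."""
    def __init__(self):
        self.docs = {}  # tenant_id -> {doc_id: text}

    def add(self, tenant_id, doc_id, text):
        self.docs.setdefault(tenant_id, {})[doc_id] = text

    def query(self, tenant_id, text):
        return [d for d, t in self.docs.get(tenant_id, {}).items() if text in t]

def cross_tenant_probe(store, tenant_pairs, canary_ids):
    """For each (querying, victim) pair, search as `querying` and record
    any of `victim`'s canary documents that come back. An empty result
    list means no cross-tenant retrieval path was observed."""
    violations = []
    for querying, victim in tenant_pairs:
        for doc_id in store.query(querying, "canary"):
            if doc_id in canary_ids.get(victim, set()):
                violations.append((querying, doc_id))
    return violations
```

Run on a schedule against production-equivalent infrastructure, a non-empty violations list becomes a paging alert rather than a six-month silent leak.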

4.8. A conforming system SHOULD implement tenant-specific encryption keys such that even infrastructure-level access to the shared storage layer cannot read another tenant's data without that tenant's key.
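The property 4.8 describes can be made concrete with a toy sealing scheme: data sealed under one tenant's key cannot be opened with another's. This is NOT production cryptography; a real system would use an established AEAD (e.g. AES-GCM) with per-tenant keys held in a KMS. The stdlib-only construction below exists purely to demonstrate the key-isolation property.

```python
import hashlib, hmac, os

class WrongTenantKey(Exception):
    pass

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Toy per-tenant sealing: XOR stream plus HMAC tag (illustrative only)."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct

def open_sealed(key: bytes, blob: bytes) -> bytes:
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise WrongTenantKey("tag mismatch: wrong tenant key or tampered data")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

Even an operator with raw access to the shared storage layer sees only sealed blobs; without a given tenant's key, `open_sealed` fails closed.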

4.9. A conforming system MAY implement tenant isolation verification as part of the CI/CD pipeline, blocking deployments that introduce shared components without tenant isolation controls.

5. Rationale

Multi-tenant AI platforms are the dominant deployment model for enterprise AI. Economics drive this: shared infrastructure reduces cost per tenant, shared model instances reduce compute requirements, and shared operational tooling reduces management overhead. But multi-tenancy creates a fundamental tension with data isolation: every shared component is a potential channel for cross-tenant data leakage.

The risk is structurally different from traditional multi-tenant SaaS. In a traditional SaaS application, cross-tenant leakage requires a specific bug — a missing WHERE clause, a broken authorisation check. In an AI platform, the leakage channels are more numerous and more subtle. Vector similarity search can return cross-tenant results if the index is shared. Model context can carry statistical residue from previous sessions. Caching layers can serve responses intended for one tenant to another. Logging pipelines can intermingle tenant data in ways that create cross-tenant visibility for operational staff.

The consequence of cross-tenant leakage in an AI platform is disproportionate to the technical failure. A single cross-tenant data leak affects every tenant on the platform simultaneously — not because every tenant's data leaked, but because every tenant must assume their data could have leaked and must be notified accordingly. The regulatory burden multiplies across jurisdictions: 340 enterprise clients across 14 jurisdictions means 23 data protection authority notifications. The reputational damage is existential for the platform provider.

AG-300 requires that tenant isolation be structural — enforced by the infrastructure, not by the application's correctness. The principle is that even a bug in the application layer cannot cause cross-tenant data leakage because the infrastructure does not permit it. This is the multi-tenant equivalent of AG-001's principle that enforcement must be at the infrastructure layer, not in the agent's reasoning.

6. Implementation Guidance

Tenant segregation must be implemented as defence in depth across every shared component. The governing principle is that no single-layer failure should create a cross-tenant data path.

Recommended patterns: dedicated per-tenant data stores with separate credentials; per-tenant vector indices or namespaces rather than shared indices; tenant identity injected and verified at the infrastructure layer on every request; agent containers destroyed and recreated between tenant sessions; tenant-specific encryption keys for data at rest; continuous automated cross-tenant access testing.

Anti-patterns to avoid: shared data stores with application-layer tenant filtering; post-retrieval tenant filters on shared vector indices; tenant checks performed only at session initiation; reusing agent instances across tenants with partial state clearing; configuration injection that relies on correct labelling and deployment ordering; shared logging pipelines that intermingle tenant data.
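The defence-in-depth principle can be sketched as independent layers that must all agree before data is released, so that a bug disabling any one layer still leaves no cross-tenant path. The layer functions and document field names below are hypothetical stand-ins for namespace scoping, row-level filtering, and a post-retrieval check.

```python
def namespace_check(tenant_id, doc):
    return doc["namespace"] == tenant_id

def row_filter(tenant_id, doc):
    return doc["owner"] == tenant_id

def post_retrieval_check(tenant_id, doc):
    return doc["owner"] == tenant_id and doc["namespace"] == tenant_id

LAYERS = (namespace_check, row_filter, post_retrieval_check)

def defended_read(tenant_id, doc, layers=LAYERS):
    """Release data only if every independent layer agrees; otherwise
    default deny. Disabling a layer (simulated by passing a shorter
    `layers` tuple) must not open a cross-tenant path."""
    if all(layer(tenant_id, doc) for layer in layers):
        return doc["payload"]
    return None  # default deny
```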

Industry Considerations

Financial Services. Multi-tenant AI platforms serving financial institutions must demonstrate tenant isolation equivalent to the segregation requirements for client assets under CASS rules. The FCA expects that one client's data is as isolated from another's as their client money would be. Firms using third-party multi-tenant AI platforms should require contractual guarantees of infrastructure-level tenant isolation and the right to audit.

Healthcare. HIPAA requires that covered entities ensure their business associates (including AI platform providers) protect PHI. In a multi-tenant AI platform, cross-tenant leakage of PHI triggers breach notification obligations for every affected covered entity. BAA agreements should specify infrastructure-level tenant isolation as a required safeguard.

Legal Services. Attorney-client privilege creates an absolute obligation to prevent cross-client data leakage. A law firm using a multi-tenant AI platform cannot risk any channel for cross-client data exposure. The duty of competence (ABA Model Rule 1.1) requires lawyers to understand the technology sufficiently to ensure client confidentiality.

Maturity Model

Basic Implementation — Tenant data is stored in separate database schemas or tables with tenant ID columns. The application layer filters queries by tenant ID. Agent instances are assigned to tenants at session initiation. Logs include tenant identifiers. Limitations: reliance on application-layer filtering for tenant isolation; shared vector stores with metadata-based filtering; no automated cross-tenant access testing.

Intermediate Implementation — Tenant data is stored in dedicated per-tenant data stores with separate credentials. Vector stores are partitioned by tenant with separate indices or namespace isolation. Agent instances are reset between tenant sessions with verified state clearing. Tenant identity is injected at the infrastructure layer. Automated cross-tenant access testing runs weekly. Tenant-specific encryption keys protect data at rest.

Advanced Implementation — All intermediate capabilities plus: tenant isolation has been verified through independent adversarial testing including side-channel analysis, cache timing attacks, and statistical context leakage detection. Agent containers are destroyed and recreated between tenant sessions. Tenant isolation is verified in the CI/CD pipeline — deployments that introduce shared components without isolation controls are blocked. Real-time anomaly detection flags any data access pattern that could indicate cross-tenant leakage. The platform can demonstrate to each tenant's auditors that no known attack vector permits cross-tenant data access.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Cross-Tenant Data Retrieval Prevention

Test 8.2: Cross-Tenant Write Prevention

Test 8.3: Residual State Leakage Detection

Test 8.4: Shared Component Isolation Under Load

Test 8.5: Tenant Identity Verification Per Request

Test 8.6: Logging Pipeline Tenant Isolation

Test 8.7: Default Deny on Missing Tenant Context

Conformance Scoring

9. Regulatory Mapping

Regulation · Provision · Relationship Type
GDPR · Article 28 (Processor Obligations) · Direct requirement
GDPR · Article 32 (Security of Processing) · Direct requirement
EU AI Act · Article 15 (Accuracy, Robustness, Cybersecurity) · Supports compliance
FCA SYSC · 8.1.1R (Client Assets Segregation) · Direct requirement
HIPAA · §164.314(a) (Business Associate Contracts) · Direct requirement
SOC 2 · CC6.1 (Logical and Physical Access Controls) · Direct requirement
DORA · Article 9 (ICT Risk Management Framework) · Supports compliance
ISO 27001 · A.8.31 (Separation of Development, Test, Production) · Supports compliance

GDPR — Article 28 (Processor Obligations)

Article 28 requires data processors to implement appropriate technical and organisational measures to ensure processing meets GDPR requirements. In a multi-tenant AI platform, the platform provider is a processor for each tenant (controller). Cross-tenant data leakage constitutes a processing activity without lawful basis for the receiving tenant. The processor must implement technical measures — infrastructure-level tenant isolation — that prevent cross-tenant processing. Article 28(3)(f) specifically requires the processor to assist the controller in ensuring compliance with data breach notification obligations. Structural tenant isolation reduces the risk of triggering those obligations.

GDPR — Article 32 (Security of Processing)

Article 32 requires appropriate technical measures to ensure a level of security appropriate to the risk, including "the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services." In a multi-tenant AI platform, ongoing confidentiality requires structural tenant isolation that cannot be defeated by application bugs, load spikes, or operational errors. The reference to "resilience" implies that isolation must hold under adverse conditions, not just normal operation.

FCA SYSC — 8.1.1R (Client Assets Segregation)

SYSC 8.1.1R requires firms to segregate client assets from the firm's own assets and from other clients' assets. For AI platforms processing financial data for multiple financial services clients, the principle of client asset segregation extends to client data. The FCA has signalled that it expects the same rigour in data segregation as in asset segregation, particularly where AI systems process confidential client information.

HIPAA — §164.314(a) (Business Associate Contracts)

The HIPAA Security Rule requires covered entities to obtain satisfactory assurances from business associates that they will appropriately safeguard ePHI. In a multi-tenant AI platform processing ePHI for multiple covered entities, the business associate must demonstrate that one covered entity's ePHI cannot be accessed by another covered entity's agents. The BAA should specify infrastructure-level tenant isolation as a required safeguard.

SOC 2 — CC6.1 (Logical and Physical Access Controls)

SOC 2 CC6.1 requires that the entity implements logical access security software, infrastructure, and architectures over protected information assets to protect them from security events. Multi-tenant AI platforms seeking SOC 2 certification must demonstrate that tenant isolation is implemented at the infrastructure layer and verified through testing. Auditors will specifically test for cross-tenant data access paths.

10. Failure Severity

Severity Rating: Critical
Blast Radius: Multi-tenant — potentially affecting every tenant on the platform simultaneously

Consequence chain: Cross-tenant data leakage in a multi-tenant AI platform creates cascading consequences that multiply across all affected tenants. The immediate technical failure is data exposure: Tenant A's data becomes accessible to Tenant B's agent, or vice versa. Because the leakage may be intermittent or subtle (e.g., statistical context leakage from shared model state), it may persist for months before detection, during which the scope of affected data grows continuously. The regulatory consequence multiplies across jurisdictions: each affected tenant must be notified, and each tenant may have notification obligations to their own regulators, customers, and data subjects. A platform with 340 tenants across 14 jurisdictions faces 23 data protection authority notifications simultaneously. The contractual consequence is severe: multi-tenant platforms typically have contractual obligations for tenant isolation, and a breach of those obligations triggers liability clauses, termination rights, and indemnification claims. The reputational consequence is existential for the platform provider: trust in multi-tenant isolation is the foundation of the business model, and a demonstrated failure of that isolation undermines the entire value proposition.

Cross-references: AG-299 (Workspace Segmentation Governance) addresses within-organisation segmentation that complements cross-tenant segregation. AG-308 (Context Window Segmentation Governance) addresses the specific risk of cross-tenant leakage through shared model context. AG-015 (Organisational Namespace Isolation) provides the namespace isolation foundation. AG-081 (Shared Context Isolation) addresses shared context risks that are particularly acute in multi-tenant deployments. AG-013 (Data Sensitivity and Exfiltration Prevention) provides data protection controls relevant to cross-tenant leakage detection. AG-162 (Least-Agency Provisioning) ensures agents receive only tenant-scoped access.

Cite this protocol
AgentGoverning. (2026). AG-300: Client-Tenant Segregation Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-300