Cross-User Memory and Persona Compartmentalisation Governance requires that AI agents with persistent memory or persona adaptation capabilities maintain strict isolation between different users' data, preferences, interaction history, and persona configurations — preventing information leakage, behavioural contamination, and identity confusion across user boundaries.

As agents become more capable, they accumulate persistent memory (facts, preferences, interaction patterns) and adapt their persona (communication style, domain expertise, decision heuristics) to individual users. Without compartmentalisation, one user's data can leak into another user's interactions, one user's preferences can contaminate another user's experience, and persona adaptations learned from one user can inappropriately influence the agent's behaviour with another user. This is a preventive control: it establishes the boundaries before cross-user contamination can occur.
Scenario A — Persistent Memory Leaks Confidential Financial Data: A wealth management agent serves 340 high-net-worth clients. The agent uses persistent memory to recall each client's portfolio preferences, risk tolerance, and recent transactions. Due to a shared memory architecture, the agent's memory retrieval is not scoped to the current user. During a session with Client A, the agent retrieves memory entries from Client B's recent £2.3 million property transaction and references it: "Given your recent property acquisition of £2.3 million, you may want to rebalance your portfolio." Client A has made no such acquisition — this is Client B's confidential financial information. Client A recognises the error and escalates. Client B is notified of the data breach.
What went wrong: The memory store was not partitioned by user identity. Memory retrieval queries returned entries from any user whose data matched the query context, not just the current user. The agent's context window incorporated cross-user memories without access control. Consequence: Breach of Client B's financial confidentiality, FCA notification under SYSC data breach requirements, potential regulatory action under the DPA 2018, loss of both clients' trust.
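The storage-layer partitioning that would have prevented this leak can be sketched in a few lines. This is a minimal illustration, not an API from any particular framework; the names MemoryStore, write, and retrieve are assumptions for the example.

```python
from collections import defaultdict

class MemoryStore:
    """Per-user partitioned memory: every entry is written into the
    partition of the user who created it, and retrieval only ever
    searches the authenticated user's partition."""

    def __init__(self):
        self._partitions = defaultdict(list)  # user_id -> list of entries

    def write(self, user_id, entry):
        self._partitions[user_id].append(entry)

    def retrieve(self, user_id, predicate):
        # Other users' partitions are structurally unreachable here,
        # however semantically relevant their entries might be.
        return [e for e in self._partitions[user_id] if predicate(e)]

store = MemoryStore()
store.write("client_a", {"topic": "pensions", "text": "prefers low-risk funds"})
store.write("client_b", {"topic": "property", "text": "acquired property, 2.3m"})

# In Client A's session, a query about property transactions finds
# nothing: Client B's transaction cannot leak across the boundary.
leaked = store.retrieve("client_a", lambda e: e["topic"] == "property")
```

Because the scoping happens at the storage layer, a bug in application-level filtering cannot re-expose another user's entries.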
Scenario B — Persona Contamination Across User Boundaries: A customer service agent adapts its communication style based on user interactions. After 200 interactions with User X, who prefers aggressive negotiation, uses profanity freely, and demands maximum concessions, the agent's persona has adapted to match: more direct, more confrontational, more willing to offer discounts. User Y — a formal, conservative client — contacts the agent for the first time. The agent's persona, contaminated by User X's interaction history, responds with the aggressive, informal style learned from User X. User Y receives a response that opens with slang, offers an unsolicited 40% discount (the rate User X typically demands), and uses a tone wholly inappropriate for the interaction.
What went wrong: Persona adaptation was not compartmentalised by user. The behavioural patterns learned from User X influenced the agent's behaviour with User Y. The agent's persona was a shared, mutable state affected by all user interactions rather than a per-user configuration. Consequence: Customer complaint, unsolicited 40% discount representing £12,000 in unnecessary concessions, brand reputation damage, potential discrimination concern if aggressive persona correlates with protected characteristics.
Scenario C — Cross-User Context Injection via Shared Memory: An enterprise agent serves employees across departments. User A in the finance department stores sensitive notes about an upcoming acquisition in the agent's memory: "Project Titan: acquiring CompanyX for £45 million, announce Q3." User B in the marketing department asks the agent a general question about competitive landscape. The agent, retrieving contextually relevant memories, surfaces the acquisition details: "Based on our strategic plans, CompanyX will be part of our portfolio by Q3, so we should consider them partners rather than competitors." This constitutes material non-public information (MNPI) leakage to a user who is not cleared for the information.
What went wrong: Memory compartmentalisation did not enforce department-level or clearance-level boundaries. The memory retrieval system optimised for contextual relevance without access control constraints. The agent treated all stored memories as available to all users. Consequence: MNPI leakage constituting potential market abuse, mandatory regulatory reporting, halting of the acquisition pending investigation, potential personal liability for the individual whose notes were leaked.
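One way to enforce the missing clearance- and department-level boundaries is a label-dominance check applied before any memory entry reaches the retrieval ranker. The level names and field names below are illustrative assumptions, not prescribed by this dimension.

```python
# Ordered sensitivity levels; a higher value dominates a lower one.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "mnpi": 3}

def can_read(user_clearance, user_dept, entry_label, entry_dept):
    """'No read up' check: the user's clearance must dominate the
    entry's label, and department-scoped entries are readable only
    by members of the owning department."""
    level_ok = LEVELS[user_clearance] >= LEVELS[entry_label]
    dept_ok = entry_dept is None or user_dept == entry_dept
    return level_ok and dept_ok

# User A's Project Titan note, labelled MNPI and scoped to finance:
titan = {"label": "mnpi", "dept": "finance"}

# User B in marketing with internal clearance: the note is filtered
# out BEFORE the retrieval system can rank it for relevance.
b_allowed = can_read("internal", "marketing", titan["label"], titan["dept"])
```

Applying this check as a hard pre-filter means contextual relevance can never override the information barrier.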
Scope: This dimension applies to all AI agents that (a) maintain persistent memory across sessions for any user, (b) adapt persona or behavioural characteristics based on user interactions, or (c) serve multiple users from a shared instance or shared infrastructure. It applies to enterprise agents serving multiple employees, customer-facing agents serving multiple customers, and multi-tenant agent platforms where different organisations' users share underlying infrastructure. Agents that are single-user with no persistent memory and no persona adaptation are excluded. The scope explicitly includes indirect memory paths — an agent that does not have explicit persistent memory but maintains vector embeddings, fine-tuned weights, or cached context that persists across user sessions is within scope.
4.1. A conforming system MUST enforce user-level compartmentalisation of all persistent memory, ensuring that memory entries created during interactions with User A are never retrievable during interactions with User B unless explicitly authorised by a cross-user sharing policy.
4.2. A conforming system MUST implement memory access controls that bind each memory entry to the user identity (per AG-012) of the user who created it, with access restricted to that user's sessions by default.
4.3. A conforming system MUST prevent persona adaptations learned from one user's interactions from influencing the agent's behaviour with a different user, maintaining per-user persona state or resetting to a baseline persona at each user boundary crossing.
4.4. A conforming system MUST enforce department-level, clearance-level, or organisational-level memory boundaries in enterprise deployments, preventing information classified at one level from being accessible to users at a different level — aligned with AG-015 namespace isolation.
4.5. A conforming system MUST log all memory retrieval operations with the requesting user identity, the memory entries accessed, and the compartment from which they were retrieved, enabling audit of compartmentalisation effectiveness.
4.6. A conforming system MUST detect and block cross-compartment memory retrieval attempts, logging them as potential compartmentalisation failures for investigation.
4.7. A conforming system MUST treat vector embeddings, fine-tuned model weights, cached contexts, and any other persistent state derived from user interactions as subject to the same compartmentalisation requirements as explicit memory entries.
4.8. A conforming system SHOULD implement memory compartment integrity verification — a periodic check that confirms no memory entries have migrated across compartment boundaries through bugs, race conditions, or configuration errors.
4.9. A conforming system SHOULD support explicit cross-user memory sharing with audit trail, where authorised users can share specific memory entries across compartments — for example, a team sharing project context — with the sharing decision logged and revocable.
4.10. A conforming system SHOULD implement persona reset or baseline enforcement at user boundary crossings, ensuring that the agent starts each user interaction from a governed baseline rather than a contaminated state.
4.11. A conforming system MAY implement differential privacy techniques for aggregate memory patterns (e.g., usage statistics) that are used across user boundaries, ensuring that individual user data cannot be reconstructed from aggregate patterns.
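The differential privacy option in 4.11 can be illustrated with the classic Laplace mechanism applied to a counting query. This is a sketch under stated assumptions (sensitivity 1, a single released statistic); the function name dp_count and the rng parameter are illustrative, and a production deployment would need privacy-budget accounting across repeated releases.

```python
import math
import random

def dp_count(true_count, epsilon, rng=None):
    """Laplace mechanism for a counting query (sensitivity 1): the
    released aggregate carries noise with scale 1/epsilon, so no
    individual user's presence is recoverable from the figure."""
    rng = rng or random.Random()
    u = rng.random() - 0.5          # uniform on (-0.5, 0.5)
    scale = 1.0 / epsilon
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Example: publishing "how many users asked about feature X this week"
# as a cross-compartment statistic with epsilon = 1.0.
noisy = dp_count(100, 1.0, rng=random.Random(7))
```

The noisy figure is safe to share across user boundaries precisely because it no longer depends deterministically on any one compartment's contents.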
Cross-User Memory and Persona Compartmentalisation Governance addresses a class of risks unique to AI agents with persistent state: the leakage, contamination, and confusion that occur when an agent's memory and behaviour are not isolated per user.
Traditional multi-user software systems enforce data isolation through database access controls, session management, and authentication boundaries. A CRM system does not show Client A's records to Client B because database queries are scoped to the authenticated user. These are well-understood controls with decades of engineering practice behind them.
AI agents introduce three new isolation challenges that traditional controls do not fully address. First, persistent memory is semantically indexed — retrieved by meaning, not by primary key. A query about "recent property transactions" returns semantically relevant memories regardless of which user created them, unless the retrieval system enforces user-scoped access controls. Traditional database queries operate on structured fields with explicit access control clauses; semantic retrieval operates on embedding similarity without inherent access boundaries.
Second, persona adaptation is a continuous, implicit process. The agent does not store a discrete "persona record" that can be scoped to a user. Instead, the agent's behaviour subtly shifts based on interaction patterns — becoming more formal or informal, more or less risk-seeking, more or less detailed. This adaptation may occur in model weights (for fine-tuned deployments), in context caching, or in prompt engineering parameters that evolve over time. Compartmentalising this adaptation requires either per-user model instances, per-user prompt configurations, or explicit persona reset mechanisms.
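For deployments that keep persona in configuration rather than weights, per-user persona snapshots with a governed baseline can be sketched as follows. The field names and the PersonaManager class are assumptions for illustration.

```python
import copy

# Governed baseline persona; every first contact starts from here.
BASELINE = {"formality": "formal", "tone": "neutral", "max_discount": 0.0}

class PersonaManager:
    """One persona snapshot per user: adaptation during User X's
    sessions mutates only User X's snapshot, and a first-time user
    receives a fresh copy of the baseline, never another user's
    adapted state."""

    def __init__(self, baseline):
        self._baseline = baseline
        self._per_user = {}

    def load(self, user_id):
        if user_id not in self._per_user:
            # deepcopy prevents shared-mutable-state contamination.
            self._per_user[user_id] = copy.deepcopy(self._baseline)
        return self._per_user[user_id]

    def adapt(self, user_id, **changes):
        self.load(user_id).update(changes)

pm = PersonaManager(BASELINE)
pm.adapt("user_x", formality="informal", max_discount=0.40)
persona_y = pm.load("user_y")  # first contact: clean baseline
```

This is the structure Scenario B lacked: User X's aggressive negotiation style never reaches User Y's snapshot.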
Third, modern AI agent memory architectures use vector stores where proximity-based retrieval does not inherently enforce access boundaries. A vector embedding of Client B's transaction is close in embedding space to a query about transactions generally. Without explicit compartmentalisation at the retrieval layer, cross-user retrieval is not a bug but a feature of how vector similarity search works.
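A minimal sketch of retrieval-layer compartmentalisation for a vector store: the user filter restricts the candidate set before similarity ranking, so a semantically closer neighbour owned by another user can never be returned. Function and field names here are illustrative, not the API of any specific vector database.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def scoped_search(index, query_vec, current_user, k=3):
    """Mandatory pre-filter: only the current user's vectors are
    candidates; similarity ranking happens after the access check."""
    candidates = [v for v in index if v["user_id"] == current_user]
    ranked = sorted(candidates,
                    key=lambda v: cosine(v["embedding"], query_vec),
                    reverse=True)
    return ranked[:k]

index = [
    {"user_id": "client_a", "embedding": [0.1, 0.9], "text": "pension query"},
    {"user_id": "client_b", "embedding": [0.99, 0.01], "text": "2.3m property deal"},
]
# The query is nearly identical to Client B's entry -- but in Client A's
# session, Client B's vector is never even a candidate.
hits = scoped_search(index, [1.0, 0.0], current_user="client_a")
```

Most production vector stores support this pattern natively as metadata pre-filtering; the essential property is that the filter is mandatory, not an optional query parameter.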
The consequence of compartmentalisation failure varies by domain. In financial services, it is a data breach with regulatory consequences. In healthcare, it is a HIPAA violation that can result in fines up to $1.5 million per violation category per year. In enterprise settings, it is a confidentiality breach that may constitute market abuse if the leaked information is MNPI. In all cases, the breach undermines user trust in the agent system — if users cannot trust that their data stays within their compartment, they will either stop using the system or stop providing accurate information, both of which destroy the system's value.
Memory compartmentalisation must be enforced at the storage layer, the retrieval layer, and the presentation layer. A failure at any single layer can result in cross-user leakage.
Recommended patterns:
- Attach a user_id metadata field to every vector at creation, and include user_id = current_user as a mandatory filter in every retrieval query.

Anti-patterns to avoid:
Financial Services. Memory compartmentalisation must prevent cross-client information leakage that could constitute a breach of Chinese Wall obligations. For wealth management agents serving multiple clients, compartmentalisation boundaries map directly to client relationship boundaries. For agents with access to MNPI, compartmentalisation boundaries must enforce information barrier requirements. The FCA's market conduct expectations require demonstrable controls against MNPI leakage.
Healthcare. Per-patient memory compartmentalisation maps directly to HIPAA minimum necessary requirements. An agent serving multiple patients must not surface Patient A's medical history during interactions about Patient B. For agents used by multiple clinicians, the compartmentalisation must also consider clinician-level access — a specialist should only access memories relevant to their specialty and their patients.
Public Sector. Agents serving citizens must enforce per-citizen compartmentalisation that prevents the government from inadvertently correlating data across service boundaries. A citizen's interaction with the tax service agent must not leak into their interaction with the benefits service agent unless explicit legislative authority permits the sharing.
Multi-Tenant Platforms. SaaS agent platforms serving multiple organisations must enforce organisational-level compartmentalisation in addition to user-level compartmentalisation. Organisation A's data must never be accessible to Organisation B, even if both organisations share the same underlying infrastructure. This maps to AG-015 (Organisational Namespace Isolation).
Basic Implementation — User-level memory compartmentalisation enforced at the storage layer. Each user's memories are in a separate logical partition. Memory retrieval queries are scoped to the current user's partition. Persona is reset to a baseline at each new user session. Cross-user retrieval attempts are blocked and logged.
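The "blocked and logged" behaviour in the basic implementation could take the shape of a deny-and-log wrapper around every memory access. The exception and event names below are illustrative assumptions.

```python
class CompartmentViolation(Exception):
    """Raised when a retrieval crosses a compartment boundary."""

audit_log = []  # in production: an append-only, tamper-evident log

def guarded_fetch(entry, requesting_user):
    """Deny-and-log access wrapper: a cross-compartment read is
    blocked and recorded as a potential compartmentalisation failure;
    a legitimate read is recorded for routine audit."""
    if entry["owner"] != requesting_user:
        audit_log.append({"event": "cross_compartment_blocked",
                          "requester": requesting_user,
                          "owner": entry["owner"]})
        raise CompartmentViolation(f"{requesting_user} -> {entry['owner']}")
    audit_log.append({"event": "retrieval", "requester": requesting_user})
    return entry["text"]

note = {"owner": "user_a", "text": "private note"}
blocked = False
try:
    guarded_fetch(note, "user_b")   # cross-compartment attempt
except CompartmentViolation:
    blocked = True
```

Logging the blocked attempt, not just denying it, is what turns isolated failures into an investigable signal of systemic compartmentalisation bugs.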
Intermediate Implementation — Department-level and clearance-level compartmentalisation in enterprise deployments. Scoped vector retrieval with mandatory user-identity pre-filtering. Per-user persona snapshots loaded at session start. Memory compartment integrity audit running daily. Explicit cross-user sharing with full audit trail. Vector embeddings carry user identity metadata from creation.
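The daily compartment integrity audit mentioned above amounts to a sweep that cross-checks each entry's owner metadata against the partition it lives in. A minimal sketch, assuming entries carry an owner field:

```python
def integrity_sweep(partitions):
    """Periodic compartment check (per 4.8): every entry in a
    partition must carry the owner id of that partition; any mismatch
    indicates an entry migrated across a boundary through a bug,
    race condition, or configuration error."""
    violations = []
    for owner, entries in partitions.items():
        for entry in entries:
            if entry.get("owner") != owner:
                violations.append({"partition": owner, "entry": entry})
    return violations

partitions = {
    "user_a": [{"owner": "user_a", "text": "ok"}],
    "user_b": [{"owner": "user_a", "text": "migrated!"}],  # planted fault
}
found = integrity_sweep(partitions)
```

Running the sweep against planted faults, as here, doubles as a test that the audit itself still works.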
Advanced Implementation — All intermediate capabilities plus: differential privacy for aggregate memory patterns. Multi-tenant organisational isolation verified by independent penetration testing. Persona contamination detection algorithms that identify when cross-user behavioural influence has occurred despite compartmentalisation. Cross-compartment information flow analysis using formal methods. Independent adversarial testing of compartmentalisation boundaries including semantic retrieval attacks, persona contamination attacks, and vector similarity-based information extraction.
Required artefacts:
Retention requirements:
Access requirements:
Testing AG-179 compliance requires verifying memory compartmentalisation, persona isolation, vector retrieval scoping, and compartment integrity.
Test 8.1: Cross-User Memory Isolation
Test 8.2: Persona Isolation
Test 8.3: Vector Retrieval Scoping
Test 8.4: Department-Level Compartmentalisation
Test 8.5: Compartment Integrity Audit Detection
Test 8.6: Explicit Sharing and Revocation
Test 8.7: Indirect State Compartmentalisation
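Test 8.6 (explicit sharing and revocation) could be exercised against a minimal sharing-policy sketch like the following; the class and method names are illustrative, not prescribed by this dimension.

```python
class SharingPolicy:
    """Explicit, revocable cross-user grants (per 4.9) with an
    append-only audit trail: by default only the owner may read;
    a grant opens one specific entry to one specific grantee, and
    a revoke closes it again."""

    def __init__(self):
        self._grants = set()   # (owner, grantee, entry_id) tuples
        self.audit = []        # append-only trail of sharing decisions

    def grant(self, owner, grantee, entry_id):
        self._grants.add((owner, grantee, entry_id))
        self.audit.append(("grant", owner, grantee, entry_id))

    def revoke(self, owner, grantee, entry_id):
        self._grants.discard((owner, grantee, entry_id))
        self.audit.append(("revoke", owner, grantee, entry_id))

    def allowed(self, owner, grantee, entry_id):
        return owner == grantee or (owner, grantee, entry_id) in self._grants

policy = SharingPolicy()
policy.grant("alice", "bob", "note-17")
shared = policy.allowed("alice", "bob", "note-17")   # True while granted
policy.revoke("alice", "bob", "note-17")
after = policy.allowed("alice", "bob", "note-17")    # False after revocation
```

Because every grant and revoke lands in the audit trail, the test can verify both the access behaviour and the evidential record it leaves behind.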
| Regulation | Provision | Relationship Type |
|---|---|---|
| GDPR | Article 5(1)(b) (Purpose Limitation) | Direct requirement |
| GDPR | Article 5(1)(f) (Integrity and Confidentiality) | Direct requirement |
| GDPR | Article 25 (Data Protection by Design) | Direct requirement |
| DPA 2018 | Section 57 (Safeguards for Processing) | Supports compliance |
| EU AI Act | Article 10 (Data and Data Governance) | Direct requirement |
| FCA SYSC | 10A.1.6R (Chinese Wall Requirements) | Direct requirement |
| HIPAA | § 164.312 (Technical Safeguards) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks) | Supports compliance |
Personal data collected for one purpose (serving User A) must not be processed for an incompatible purpose (surfacing in User B's session). Cross-user memory leakage is a purpose limitation violation — User A's data was collected to serve User A, not to inform interactions with User B. AG-179 compartmentalisation directly prevents this violation.
Article 25 requires that data protection principles be implemented by design and by default. Storage-layer memory compartmentalisation is data protection by design — the architectural decision to partition memory by user ensures that data protection is structural, not dependent on application-layer filtering that may fail.
Article 10 requires that training data and data used by AI systems be subject to appropriate data governance practices. For agents with persistent memory, the memory store is a data asset subject to Article 10 governance. Compartmentalisation ensures that data governance practices (access control, purpose limitation, quality management) are enforceable at the user level.
For financial services agents, memory compartmentalisation maps directly to Chinese Wall requirements. An agent with access to MNPI from one client must not leak that information to another client or to the firm's own trading operations. AG-179 compartmentalisation implements the technical control that enforces the Chinese Wall for agent-based information processing.
HIPAA technical safeguards require access controls that restrict access to electronic PHI to authorised persons. Per-patient memory compartmentalisation implements this requirement for agent-based healthcare systems — each patient's data is accessible only in sessions authorised for that patient.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Cross-organisation for multi-tenant platforms; organisation-wide for enterprise deployments; per-client for customer-facing agents |
Consequence chain: Without memory and persona compartmentalisation, every interaction between the agent and any user potentially contaminates the agent's interactions with every other user. The failure mode is insidious — cross-user leakage often appears as the agent being "helpful" or "contextually aware," making it difficult to detect without deliberate testing. In financial services, cross-client information leakage constitutes a data breach, a potential Chinese Wall violation, and if MNPI is involved, potential market abuse — with personal liability under the Senior Managers Regime. In healthcare, cross-patient information leakage is a HIPAA violation with fines up to $1.5 million per category per year and potential criminal liability. In multi-tenant platforms, cross-organisation leakage is a fundamental breach of the service agreement and may constitute a data breach under multiple jurisdictions' data protection laws simultaneously. Persona contamination creates a subtler but equally damaging failure: users receive inappropriate, discriminatory, or commercially damaging responses influenced by other users' interaction patterns, eroding trust and creating legal liability.
Cross-references: This dimension is closely related to AG-015 (Organisational Namespace Isolation) which provides the organisational-level boundary framework that AG-179 extends to user-level and memory-level granularity; AG-012 (Agent Identity Assurance) which provides the user identity verification that underpins compartment access controls; AG-174 (Capability Profile and Dynamic Applicability Governance) which determines whether an agent has persistent memory capabilities requiring compartmentalisation; AG-009 (Delegated Authority Governance) which governs whether authority to access another user's compartment can be delegated; and AG-080 (Inter-Agent Trust and Attestation) which governs the trust assertions used when agents share cross-compartment information on behalf of users.