Cross-Context Behavioural Data Separation Governance requires that behavioural data collected from a data subject in one context is not silently used to influence processing, profiling, or decision-making in another context. "Context" means the distinct product, service, interaction channel, or purpose under which the data subject provided the data. An AI agent that observes browsing behaviour in a retail context and uses it to adjust insurance pricing in a financial context violates context separation. This dimension requires structural controls that enforce context boundaries on behavioural data, prevent cross-context data merging without explicit authorisation, and ensure that data subjects are informed and can object when behavioural data crosses context boundaries.
Scenario A — Retail Behaviour Influences Insurance Pricing: A conglomerate operates an online retail platform and an insurance subsidiary. Both use AI agents that access a shared customer data lake. A customer's retail purchasing behaviour — including alcohol purchases, late-night ordering patterns, and health supplement purchases — is accessed by the insurance AI agent to adjust premium pricing. The customer notices a 34% premium increase and files a complaint. Investigation reveals that retail behavioural data was used as an input feature in the insurance risk model, with no disclosure or consent for this cross-context use. Result: Joint ICO and FCA enforcement action. GBP 4.5 million fine for breach of purpose limitation (data collected in retail context used for insurance context), mandatory separation of data lakes, and required reprocessing of 120,000 insurance pricing decisions.
What went wrong: Behavioural data collected in the retail context was accessible to agents in the insurance context. No context boundary existed at the data layer. No purpose check prevented the cross-context use. The data subject was not informed that retail behaviour would influence insurance pricing.
Scenario B — Customer Service Sentiment Drives Credit Decisions: A bank's customer service AI agent records interaction sentiment scores — frustration levels, complaint frequency, escalation patterns — during support calls. The credit assessment AI agent has read access to the same customer profile database and uses the sentiment scores as input features. Customers who frequently express frustration receive lower credit scores. A discrimination complaint reveals the correlation. Result: FCA enforcement for unfair treatment, mandatory model retraining without sentiment features, and GBP 800,000 remediation fund for affected customers.
What went wrong: Sentiment data collected in the customer service context influenced decisions in the credit assessment context. No context boundary prevented the credit agent from accessing service interaction data. The data subject had no reason to expect that their complaint behaviour would affect their creditworthiness.
Scenario C — Context Separation Correctly Implemented: A multi-service fintech platform operates AI agents for payments, savings, investment advice, and insurance. Each service operates within a defined context. The data layer enforces context boundaries: the payments agent can access only payment transaction data; the savings agent can access only savings account data; the investment agent accesses investment portfolio data; the insurance agent accesses insurance application data. Cross-context data access requires: (1) a documented purpose justification, (2) a DPIA for the cross-context processing, (3) explicit data subject consent for the specific cross-context use, and (4) approval by the DPO. A quarterly audit confirms that zero unauthorised cross-context data accesses have occurred. When the product team requests cross-selling data access (retail to insurance), the request triggers the full authorisation workflow, resulting in a new consent mechanism presented to customers with clear disclosure. Result: Compliant cross-context processing with informed consent. Zero regulatory exposure.
Scope: This dimension applies to all AI agents operating within organisations that collect behavioural data in multiple contexts — multiple products, services, interaction channels, or business units. It applies whenever a data subject interacts with an organisation across more than one context and the organisation has the technical ability to link or merge data across those contexts. The scope includes: explicit merging (joining data records from different contexts using a shared identifier), implicit merging (using a model trained on data from one context to score data subjects in another context), and derived merging (using insights derived in one context to influence processing in another). Organisations that operate a single product with a single purpose and no context distinction are excluded. The scope extends to shared data infrastructure — if multiple agents access the same data store, context boundaries must be enforced at the data access layer.
4.1. A conforming system MUST define and document context boundaries for each product, service, or purpose domain that collects behavioural data, assigning a unique context identifier to each.
4.2. A conforming system MUST enforce context-based access controls at the data layer, preventing agents operating in one context from accessing behavioural data collected in a different context unless explicit cross-context authorisation exists.
4.3. A conforming system MUST require explicit data subject consent before merging or using behavioural data across contexts, with clear disclosure of which contexts will share data and for what purpose.
4.4. A conforming system MUST tag behavioural data records with the context identifier under which they were collected, creating an auditable link between data and context.
4.5. A conforming system MUST prevent models trained on data from one context from being applied to data subjects in a different context unless the cross-context use has been authorised through the purpose registry (AG-319) and a DPIA has been conducted (AG-326).
4.6. A conforming system MUST log every cross-context data access, whether authorised or blocked, including the source context, the target context, the requesting agent, and the authorisation status.
4.7. A conforming system SHOULD implement context boundaries at the shared infrastructure level (data lake partitioning, separate databases, or access control layers), rather than relying on application-level separation within agents.
4.8. A conforming system SHOULD conduct periodic cross-context data flow audits (minimum quarterly) to detect unauthorised data flows between contexts.
4.9. A conforming system MAY implement a cross-context data request workflow that enables authorised cross-context access through a governed approval process with DPO oversight.
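Requirements 4.2, 4.4, and 4.6 can be sketched together as a deny-by-default read path over context-tagged records. This is an illustrative Python sketch under assumed names (`read_record`, `ACCESS_LOG`, `AUTHORISED_CROSS_CONTEXT`); per 4.7, a production system would enforce the same check at the data infrastructure layer rather than in application code.

```python
import datetime

# Requirement 4.6: every cross-context access attempt, allowed or blocked,
# is recorded here (illustrative in-memory log).
ACCESS_LOG: list[dict] = []

# Illustrative grant table: (agent_context, data_context) pairs that have
# passed the cross-context authorisation workflow. Empty by default.
AUTHORISED_CROSS_CONTEXT: set[tuple[str, str]] = set()

def read_record(agent_context: str, record: dict):
    """Requirement 4.2 sketch: block cross-context reads unless authorised.

    Each record carries the context tag required by 4.4, e.g.
    {"context": "CTX-RETAIL", "data": {...}}.
    """
    data_context = record["context"]
    authorised = (agent_context == data_context
                  or (agent_context, data_context) in AUTHORISED_CROSS_CONTEXT)
    ACCESS_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source_context": data_context,
        "target_context": agent_context,
        "status": "allowed" if authorised else "blocked",
    })
    if not authorised:
        raise PermissionError(
            f"{agent_context} may not read data collected under {data_context}")
    return record["data"]
```

Note that blocked attempts are logged before the exception is raised, so the quarterly audit required by 4.8 can count both authorised and denied flows.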
The principle of purpose limitation (GDPR Article 5(1)(b)) requires that data collected for one purpose not be used for an incompatible purpose. Context separation is the structural implementation of purpose limitation for multi-service organisations. When a data subject provides behavioural data to an AI agent in a retail context, they have a reasonable expectation that the data will be used for retail purposes — not for insurance pricing, credit assessment, or employment screening.
The CPRA (California) introduced the concept of "cross-context behavioral advertising" as a specific regulatory category, defining it as targeting advertising based on personal information obtained from the consumer's activity across distinct businesses, distinctly-branded websites, applications, or services. This regulatory recognition reflects the growing concern about behavioural data flowing across contexts without data subject awareness.
AI agents operating on shared data infrastructure create an acute cross-context risk because they can access data from multiple contexts simultaneously. A customer data lake that serves multiple business units makes cross-context access technically trivial — the barrier is not technical but governance. Without structural context boundaries, the path of least resistance is for agents to access whatever data is available, regardless of the context in which it was collected.
The model training dimension is particularly important. A model trained on customer service interaction data learns patterns from that context. If the model is then applied to credit assessment, it imports the service context's behavioural patterns into the credit context, even though the data subjects never consented to this use. The cross-context contamination occurs at training time, making it invisible at inference time.
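Because the contamination occurs at training time, the guard belongs in the training pipeline, not at inference. A minimal sketch of such a guard, per requirement 4.5, assuming records carry the context tag from 4.4 (the function name and signature are illustrative):

```python
def validate_training_contexts(records, model_context, authorised_contexts=frozenset()):
    """Refuse a training set containing data from unauthorised contexts.

    `records` are dicts carrying a "context" tag; `model_context` is the
    context the trained model will serve; `authorised_contexts` holds any
    foreign contexts cleared via the purpose registry and DPIA.
    """
    allowed = {model_context} | set(authorised_contexts)
    foreign = {r["context"] for r in records} - allowed
    if foreign:
        raise ValueError(
            f"training set contains unauthorised contexts: {sorted(foreign)}")
    return records
```

In the Scenario B bank, this check would have rejected a credit-model training set containing records tagged with the customer service context, surfacing the cross-context use before any model was trained.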
The competitive and reputational implications of cross-context data use are severe. Consumers are increasingly aware of and hostile to organisations that use data from one context to influence outcomes in another. The perception that "this company is watching everything I do across all their services" erodes trust and triggers regulatory scrutiny.
The core architecture for AG-327 is context-partitioned data access with cross-context access controls enforced at the data infrastructure layer.
Recommended patterns:
- Partitioned data access. Partition the data store by context identifier (CTX-RETAIL, CTX-INSURANCE, CTX-BANKING). Agents are granted access only to the partition for their context. Cross-partition access requires explicit authorisation tokens that are issued through the cross-context approval workflow. Example: the insurance agent's database credentials grant access only to the CTX-INSURANCE partition. Even if the retail data exists in the same physical infrastructure, the insurance agent cannot query it without a cross-context authorisation token.
- Context-tagged access control. Tag every behavioural data record with the context identifier under which it was collected. An agent operating in a given context (e.g., CTX-RETAIL) can access records tagged CTX-RETAIL; requests for records tagged CTX-INSURANCE are blocked. This pattern works with shared data stores where physical partitioning is impractical, using row-level or column-level access control based on context tags.

Anti-patterns to avoid:
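The cross-context authorisation token mentioned above could take many forms; one minimal sketch is a signed, expiring credential bound to a specific source/target context pair. Everything here is an assumption for illustration (the token format, the `SECRET` key, and the function names), not a prescribed mechanism — in production the key would live in a secrets manager and issuance would be gated by the approval workflow.

```python
import hashlib
import hmac
import time

SECRET = b"rotate-me"  # illustrative signing key; never hard-code in practice

def issue_cross_context_token(source: str, target: str, ttl_s: int = 3600) -> str:
    """Issue a signed token for one source->target context pair, after approval."""
    expiry = int(time.time()) + ttl_s
    payload = f"{source}|{target}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def validate_token(token: str, source: str, target: str) -> bool:
    """Accept only an unexpired token whose signature and context pair match."""
    try:
        t_source, t_target, expiry, sig = token.rsplit("|", 3)
    except ValueError:
        return False  # malformed token
    payload = f"{t_source}|{t_target}|{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and (t_source, t_target) == (source, target)
            and int(expiry) > time.time())
```

Binding the token to the context pair means a token approved for retail-to-insurance access cannot be replayed to reach, say, the banking partition.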
Financial Conglomerates. Banking, insurance, and wealth management divisions within a conglomerate frequently share data infrastructure. Context separation is essential: banking transaction data should not influence insurance pricing without specific consent. Chinese wall requirements in financial services parallel AG-327's context separation requirements.
Technology Platforms. Multi-service platforms (e.g., platforms offering messaging, commerce, payments, and social networking) must separate behavioural data across services. The EU Digital Markets Act's data combination restrictions for designated gatekeepers directly align with AG-327.
Healthcare Networks. Hospital networks offering multiple services (primary care, specialist care, pharmacy, insurance) must separate patient data by service context. A patient's pharmacy purchase history should not influence their specialist care referral pathway without explicit authorisation.
Basic Implementation — Context boundaries are defined in policy. Agent data access is scoped to the agent's service context at the application layer. Cross-context access requires manual approval. Context tags are applied to data records. This level provides awareness but relies on application-layer enforcement.
Intermediate Implementation — Context boundaries are enforced at the data infrastructure layer (separate schemas, partitions, or access control lists). Context tags are immutable and enforced via row-level security. Cross-context access follows a documented approval workflow with DPO review. Quarterly cross-context data flow audits are conducted. Cross-context accesses are logged and monitored.
Advanced Implementation — All intermediate capabilities plus: real-time monitoring detects anomalous cross-context access patterns. Model training pipelines enforce context restrictions, preventing cross-context training data contamination. Data subject consent management includes cross-context sharing preferences. Independent testing verifies that context boundaries hold under adversarial conditions. Cross-border context mapping addresses jurisdictional differences in cross-context rules per AG-013.
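The immutable context tags required at the intermediate level would normally be enforced by the database (e.g. row-level security), but the idea can be sketched in application code with a read-only wrapper. The `tag_record` helper is hypothetical, used here only to show the invariant:

```python
from types import MappingProxyType

def tag_record(data: dict, context_id: str):
    """Attach an immutable context tag to a record at collection time.

    MappingProxyType yields a read-only view, so the tag cannot be
    reassigned after the record is created. Illustrative only; a real
    system enforces immutability in the data store itself.
    """
    return MappingProxyType({"context": context_id, "data": data})
```

Any later attempt to retag the record (for example, to launder retail data into an insurance context) fails rather than silently succeeding.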
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Cross-Context Access Block
Test 8.2: Context Tag Immutability
Test 8.3: Authorised Cross-Context Access
Test 8.4: Model Training Context Restriction
Test 8.5: Cross-Context Audit Detection
Test 8.6: Derived Data Context Enforcement
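As one concrete illustration, Test 8.1 (Cross-Context Access Block) could be expressed as plain-assert checks against the system's data access path. The `read_record` stand-in below is a hypothetical deny-by-default enforcement function, included only so the sketch is self-contained; in a real test suite it would be the system under test.

```python
def read_record(agent_context: str, record: dict):
    """Hypothetical stand-in for the system under test: deny by default."""
    if agent_context != record["context"]:
        raise PermissionError("cross-context access blocked")
    return record["data"]

def test_cross_context_access_blocked():
    """Test 8.1: an agent in another context must be refused."""
    record = {"context": "CTX-RETAIL", "data": {"basket": ["supplements"]}}
    try:
        read_record("CTX-INSURANCE", record)
        raise AssertionError("cross-context read should have been blocked")
    except PermissionError:
        pass

def test_same_context_access_allowed():
    """Control case: same-context access must still succeed."""
    record = {"context": "CTX-RETAIL", "data": {"basket": ["supplements"]}}
    assert read_record("CTX-RETAIL", record) == {"basket": ["supplements"]}
```

Tests 8.2 through 8.6 follow the same pattern: attempt the prohibited operation (retagging, unauthorised training, derived-data leakage) and assert that it is refused and logged.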
| Regulation | Provision | Relationship Type |
|---|---|---|
| GDPR | Article 5(1)(b) (Purpose Limitation) | Direct requirement |
| GDPR | Article 6(4) (Purpose Compatibility Assessment) | Direct requirement |
| CPRA (California) | Section 1798.140(k) (Cross-Context Behavioural Advertising) | Direct requirement |
| EU Digital Markets Act | Article 5(2) (Data Combination Restrictions) | Supports compliance |
| UK Data Protection Act 2018 | Schedule 1 (Purpose Limitation) | Direct requirement |
| LGPD (Brazil) | Article 6 (Purpose) | Supports compliance |
| EU AI Act | Article 10 (Data and Data Governance) | Supports compliance |
| NIST AI RMF | MAP 1.5, GOVERN 1.4 | Supports compliance |
Article 5(1)(b) prohibits processing that is incompatible with the original collection purpose. Using behavioural data collected in a retail context for insurance pricing is a clear example of incompatible processing. Article 6(4) provides criteria for assessing purpose compatibility, including the relationship between the original and new purposes, the context of collection, and the consequences for the data subject. AG-327 implements purpose limitation at the context level by structurally preventing cross-context data flows without explicit authorisation and compatibility assessment.
The CPRA specifically defines "cross-context behavioral advertising" and requires opt-out mechanisms. AG-327 directly supports CPRA compliance by ensuring that behavioural data does not silently cross context boundaries. The context tagging and authorisation workflow provide the structural mechanism for implementing CPRA cross-context restrictions.
The DMA prohibits designated gatekeepers from combining personal data from the core platform service with personal data from other services offered by the gatekeeper, unless the end user has been presented with a specific choice and provided consent. AG-327's cross-context consent requirement aligns directly with this provision.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Multi-service customer base — every customer whose data exists in more than one context |
Consequence chain: Unauthorised cross-context data use creates simultaneous violations of purpose limitation (GDPR Article 5(1)(b)), transparency (Articles 13-14 — data subjects were not informed), and potentially lawful basis (the original consent does not cover the cross-context purpose). For conglomerates operating across financial services, the cross-context violation may also trigger FCA/SEC enforcement for unfair treatment if insurance or credit decisions are influenced by data from unrelated contexts. The scale of exposure corresponds to the organisation's multi-service customer base — every customer who uses more than one service is potentially affected. For large conglomerates, this can mean millions of affected data subjects. The regulatory penalty reflects both the privacy violation and the potential consumer harm: insurance pricing influenced by undisclosed retail behaviour is not merely a privacy issue but a fair treatment issue. The reputational damage from "your shopping habits raised your insurance premiums" headlines is acute and lasting. The DMA's dedicated provisions for data combination demonstrate that cross-context data use is a priority enforcement area.
Cross-references: AG-059 (Data Classification & Sensitivity Labelling), AG-060 (Consent & Lawful Basis Verification), AG-061 (Data Subject Rights Execution), AG-063 (Privacy-by-Design Integration), AG-013 (Multi-Jurisdictional Compliance Mapping), AG-319 (Purpose-Consent Granularity Governance), AG-321 (Sensitive Attribute Inference Governance), AG-322 (Data Minimisation by Design Governance), AG-324 (Automated Profiling Notice Governance), AG-326 (Privacy Impact Assessment Trigger Governance), AG-328 (Data Localisation and Transfer Logging Governance).