Automated Profiling Notice Governance requires that AI agents inform affected data subjects whenever profiling or automated decision-making materially influences outcomes affecting them — including credit decisions, insurance pricing, content personalisation, access to services, and risk scoring. The system must detect when an automated process produces a decision with legal or similarly significant effects, generate a clear notice to the affected data subject explaining the decision logic, the data used, and the data subject's rights (including the right to human review), and deliver the notice proactively before or at the time the decision takes effect. This dimension implements GDPR Articles 13(2)(f), 14(2)(g), 15(1)(h), and 22, ensuring that AI-driven profiling is transparent and contestable.
Scenario A — Credit Denial Without Explanation: A financial services AI agent processes a mortgage application and generates a risk score of 0.83 (high risk) based on 47 input features including transaction patterns, income stability, debt-to-income ratio, and neighbourhood credit performance. The agent denies the application. The applicant receives a one-line notification: "Your application has been declined." No explanation of the decision logic is provided. No information about which factors influenced the decision is shared. No right to contest is communicated. The applicant files a DSAR and a complaint. Result: FCA enforcement action for failure to provide adequate reasons for credit denial, ICO enforcement for Article 22 non-compliance (an automated decision with legal effects made without adequate safeguards), and a EUR 750,000 combined fine. The organisation must retrospectively notify 18,000 declined applicants with adequate explanations.
What went wrong: The agent made a fully automated decision with legal effects (credit denial) without any notice to the data subject. No explanation was provided. No human review option was communicated. The decision was opaque and uncontestable.
Scenario B — Profiling Notice Buried in Terms: An insurance AI agent uses behavioural data to set premium pricing. The profiling is disclosed on page 47 of the terms and conditions under the heading "Data Analytics." A customer paying 23% above the standard premium for their risk category discovers the profiling only after a journalist investigation. Result: Regulatory inquiry, class action by affected policyholders, and mandatory premium recalculation for 34,000 customers who were profiled without meaningful notice.
What went wrong: The profiling notice existed but was not prominent, accessible, or meaningful. Disclosure in dense legal text does not constitute meaningful notification under GDPR transparency requirements.
Scenario C — Profiling Notice Correctly Delivered: A customer-facing AI agent for a lending platform generates a credit decision. Before the decision is communicated, the system triggers a notice workflow: (1) the decision is flagged as "automated with legal effect," (2) a notice is generated explaining the 5 most influential factors in the decision (debt-to-income ratio: 42%, payment history: 27%, income stability: 15%, credit utilisation: 10%, length of credit history: 6%), (3) the notice states the data subject's right to request human review within 30 days, (4) the notice includes contact details for the human review team, (5) the notice is delivered in the same channel as the decision (in-app notification), and (6) the notice is retained as an audit artefact. The applicant requests human review. A human reviewer examines the decision, confirms the top three factors, and upholds the decision with a written explanation. Result: Full GDPR Article 22 compliance. Zero regulatory exposure.
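For concreteness, below is a minimal sketch of what the Scenario C notice might look like as a structured record. The class and field names are illustrative assumptions, not drawn from any particular framework.

```python
# Hypothetical notice artefact mirroring Scenario C. All names are
# illustrative assumptions; values echo the scenario above.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ProfilingNotice:
    decision_id: str
    subject_id: str
    decision_type: str            # e.g. "credit_decision"
    automated_with_legal_effect: bool
    top_factors: dict[str, float]  # factor name -> relative influence
    review_deadline: datetime      # right to human review within 30 days
    review_contact: str            # contact details for the review team
    delivery_channel: str          # same channel as the decision itself

notice = ProfilingNotice(
    decision_id="dec-001843",
    subject_id="subj-77241",
    decision_type="credit_decision",
    automated_with_legal_effect=True,
    top_factors={
        "debt_to_income_ratio": 0.42,
        "payment_history": 0.27,
        "income_stability": 0.15,
        "credit_utilisation": 0.10,
        "length_of_credit_history": 0.06,
    },
    review_deadline=datetime.now(timezone.utc) + timedelta(days=30),
    review_contact="human-review@lender.example",
    delivery_channel="in_app",
)
```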
Scope: This dimension applies to all AI agents that perform profiling or automated decision-making that produces legal effects or similarly significant effects on data subjects. "Legal effects" includes: credit decisions, insurance pricing, employment decisions, access to essential services, benefit determinations, and regulatory outcomes. "Similarly significant effects" includes: decisions that materially affect access to services, pricing, or opportunities — such as personalised pricing with a variance exceeding 10% from the standard rate, content filtering that restricts access to information, or risk scoring that triggers differential treatment. The scope includes fully automated decisions (no human involvement) and semi-automated decisions where the human reviewer has no practical ability to override the automated recommendation. Agents that produce recommendations reviewed by a human with genuine override authority and sufficient information to exercise it are partially in scope — the transparency requirements apply but the Article 22-specific safeguards may not.
4.1. A conforming system MUST detect when an automated process produces a decision with legal or similarly significant effects on a data subject, using a decision classification framework that categorises decisions by impact level (a pipeline sketch covering 4.1-4.3 and 4.6 follows this list).
4.2. A conforming system MUST generate a proactive notice to the affected data subject for every high-impact automated decision, delivered before or at the time the decision takes effect.
4.3. A conforming system MUST include in the notice: the fact that automated processing was used, meaningful information about the logic involved (the top contributing factors and their relative influence), the categories of data used, the right to request human review, and the mechanism to exercise that right.
4.4. A conforming system MUST provide a functional mechanism for the data subject to request human review of the automated decision, with a defined SLA for the review (maximum 30 days).
4.5. A conforming system MUST ensure that the human reviewer has access to the decision rationale, the data used, and genuine authority to override the automated decision.
4.6. A conforming system MUST retain notice records as audit artefacts, linking each notice to the decision it relates to, the data subject, and the delivery mechanism.
4.7. A conforming system SHOULD deliver notices in plain language appropriate to the data subject's likely comprehension level, avoiding technical jargon and algorithmic terminology.
4.8. A conforming system SHOULD implement notice delivery in the same channel through which the decision is communicated (e.g., in-app, email, letter), rather than a separate system the data subject may not monitor.
4.9. A conforming system MAY implement tiered notice detail, where an initial concise notice is accompanied by a "learn more" mechanism providing progressively detailed explanation of the decision logic.
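The sketch below, referenced from 4.1, shows one minimal way to wire requirements 4.1-4.3 and 4.6 into a decision pipeline. The registry contents, record fields, and function names are assumptions for illustration, not a prescribed implementation.

```python
# A minimal sketch of a notice pipeline satisfying 4.1-4.3 and 4.6.
# Names and registry entries are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

# 4.1: impact classification is configured per decision type, never decided
# by the agent at runtime (see the registry pattern later in this section)
DECISION_REGISTRY = {"credit_decision": "HIGH", "content_ordering": "LOW"}

@dataclass
class NoticeRecord:            # 4.6: each notice retained as an audit artefact,
    decision_id: str           # linked to the decision,
    subject_id: str            # the data subject,
    delivery_channel: str      # and the delivery mechanism
    issued_at: datetime
    body: str

AUDIT_LOG: list[NoticeRecord] = []

def issue_decision(decision_id: str, subject_id: str, decision_type: str,
                   factors: dict[str, float], channel: str) -> None:
    impact = DECISION_REGISTRY.get(decision_type, "LOW")
    if impact == "HIGH":
        # 4.2 / 4.3: generate and record the notice before the decision takes
        # effect, naming the top factors and the right to human review
        ranked = sorted(factors.items(), key=lambda kv: kv[1], reverse=True)
        body = ("This decision was made by automated processing. Main factors: "
                + ", ".join(f"{name} ({weight:.0%})" for name, weight in ranked)
                + ". You may request human review within 30 days.")
        AUDIT_LOG.append(NoticeRecord(decision_id, subject_id, channel,
                                      datetime.now(timezone.utc), body))
    # the decision itself is released only after the notice is recorded
```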
GDPR Articles 13(2)(f) and 14(2)(g) require controllers to inform data subjects of "the existence of automated decision-making, including profiling... and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject." Article 15(1)(h) grants the data subject the right to obtain this information through a subject access request. Article 22 grants the right not to be subject to solely automated decisions with legal or similarly significant effects, with exceptions that require specific safeguards including the right to human review.
The rationale for AG-324 is that AI agents make automated decisions at a scale and speed that makes traditional disclosure mechanisms inadequate. An AI credit scoring agent may process 10,000 applications per day. If each decision has legal effects, each requires a notice. Manual notice generation at this scale is infeasible — the notice must be generated automatically as part of the decision pipeline.
The "meaningful information about the logic involved" requirement has been interpreted by courts and regulators to require more than merely stating "an algorithm was used." The data subject must understand which factors influenced the decision and their relative weight. For AI agents, this requires an explainability layer that can extract the top contributing factors from the decision model and present them in comprehensible terms. The French Conseil d'Etat (in its 2020 ruling on Parcoursup) held that algorithmic decision-making in public administration must be explained with sufficient detail for the affected individual to understand and contest the decision.
The human review right is a critical safeguard. Article 22(3) requires the controller to implement "suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention." This is not a pro forma review — the human must have genuine authority to override the automated decision and sufficient information to exercise that authority meaningfully. A human reviewer who rubber-stamps automated decisions without examination does not satisfy Article 22.
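Below is a sketch of how a review case record might enforce that the review was genuine rather than pro forma; the field names and the minimum-explanation threshold are illustrative assumptions.

```python
# Hypothetical review record designed to resist rubber-stamping.
from dataclasses import dataclass

@dataclass
class HumanReview:
    case_id: str
    reviewer_id: str
    examined_rationale: bool    # reviewer saw the decision rationale (4.5)
    examined_data: bool         # reviewer saw the data used (4.5)
    outcome: str                # "upheld" or "overridden" - genuine authority
    written_explanation: str

def close_review(review: HumanReview) -> None:
    # Reject reviews that could not have been meaningful (Article 22(3)).
    if not (review.examined_rationale and review.examined_data):
        raise ValueError("review closed without examining the decision inputs")
    if len(review.written_explanation.strip()) < 50:   # assumed threshold
        raise ValueError("explanation too brief to evidence a real review")
    if review.outcome not in ("upheld", "overridden"):
        raise ValueError("reviewer must record a definitive outcome")
```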
The core architecture for AG-324 is a decision classification layer that categorises automated decisions by impact level, combined with a notice generation pipeline that produces and delivers notices for high-impact decisions.
Recommended patterns:
- Decision impact classification: HIGH (legal or similarly significant effect — credit decisions, insurance pricing, employment, benefit determinations), MEDIUM (material effect on service access or pricing — personalised pricing with >10% variance, content restrictions), LOW (minimal individual impact — general recommendations, content ordering).
- Notice routing: HIGH decisions trigger mandatory Article 22 notices; MEDIUM decisions trigger transparency notices; LOW decisions are logged but do not trigger individual notices.
- Registry-driven configuration: the classification is configured per decision type in a decision registry (sketched below), not determined by the agent at runtime.

Anti-patterns to avoid:
- Burying the profiling disclosure in dense terms and conditions (Scenario B): disclosure that is not prominent, accessible, or meaningful fails GDPR transparency requirements.
- One-line decision notifications with no explanation of the factors involved (Scenario A).
- Pro forma human review that rubber-stamps automated outcomes without genuine override authority.
- Letting the agent classify decision impact at runtime, which makes notice coverage unauditable.
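A sketch of the decision registry pattern recommended above, with entries drawn from this section's examples. The schema is an assumption, and in practice the registry would live in governed configuration rather than source code.

```python
# Hypothetical decision registry: impact levels are fixed configuration per
# decision type, reviewed by governance, never inferred by the agent at runtime.
DECISION_REGISTRY = {
    # HIGH: Article 22 notice mandatory
    "credit_decision":       {"impact": "HIGH",   "notice": "article_22"},
    "insurance_pricing":     {"impact": "HIGH",   "notice": "article_22"},
    "benefit_determination": {"impact": "HIGH",   "notice": "article_22"},
    # MEDIUM: transparency notice (e.g. personalised pricing >10% variance)
    "personalised_pricing":  {"impact": "MEDIUM", "notice": "transparency"},
    "content_restriction":   {"impact": "MEDIUM", "notice": "transparency"},
    # LOW: logged, no individual notice
    "content_ordering":      {"impact": "LOW",    "notice": None},
}
```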
Financial Services. Credit decisions are subject to both GDPR Article 22 and sector-specific requirements (FCA CONC 7.9 on adequate explanations for credit refusal, Equal Credit Opportunity Act adverse action notices in the US). The notice must satisfy both regimes. Factor explanations must be specific enough for the applicant to understand what they could change.
Insurance. Automated pricing decisions based on profiling are increasingly scrutinised by regulators. The FCA's pricing practices review specifically addresses concerns about algorithmic price discrimination. Notices must explain the pricing factors, not merely state that automated processing was used.
Public Sector. Automated decisions in public administration (benefit determinations, visa processing, regulatory enforcement) require particularly robust notice and review mechanisms. The French Digital Republic Law requires that administrative decisions based on algorithmic processing disclose the principal characteristics of the algorithm's implementation.
Basic Implementation — High-impact decisions are identified in a decision registry. Notices are generated for credit and insurance decisions. The notice includes the decision outcome and a general statement about automated processing. Human review is available on request. Notices are delivered by email. This level meets minimum requirements but the notice content may lack specificity.
Intermediate Implementation — Decision impact classification covers all agent decision types. Notices include top contributing factors with relative weights, extracted from the model's explainability layer. Human review requests are managed through a case management system with SLA tracking. Notices are delivered in the same channel as the decision. Notice records are retained as audit artefacts. Plain language standards are applied.
Advanced Implementation — All intermediate capabilities plus: tiered notice detail with progressive explanation depth. Multilingual notice generation for cross-border operations. A/B testing of notice comprehensibility with data subject feedback. Real-time dashboards tracking notice delivery rates, human review request rates, and review outcomes. Independent testing confirms that all high-impact decisions generate compliant notices. The system supports jurisdiction-specific notice requirements per AG-013.
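As an illustration of the tiered notice detail named above (and in requirement 4.9), the sketch below returns progressively deeper notice text behind a "learn more" mechanism. The tier contents are placeholders echoing Scenario C.

```python
# Hypothetical tiered notice: each tier adds depth without burying the
# first-tier message. Strings are placeholders, not prescribed wording.
NOTICE_TIERS = [
    # Tier 1: concise, delivered with the decision
    "Your application was assessed by automated processing. "
    "Tap 'Learn more' for the main factors, or request human review.",
    # Tier 2: top factors with relative weights
    "Main factors: debt-to-income ratio (42%), payment history (27%), "
    "income stability (15%), credit utilisation (10%), credit history length (6%).",
    # Tier 3: data categories and review mechanics
    "Data used: account transactions, credit bureau records, declared income. "
    "To contest: request human review within 30 days at human-review@lender.example.",
]

def notice_at_depth(depth: int) -> str:
    """Return the notice text up to the requested tier (1-based)."""
    return "\n\n".join(NOTICE_TIERS[:max(1, min(depth, len(NOTICE_TIERS)))])
```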
Required artefacts: the decision registry (decision types with their impact classifications); generated notices, each linked to the decision, the data subject, and the delivery mechanism (per 4.6); human review case records with outcomes and written explanations; and delivery confirmations for each notice.
Retention requirements:
Access requirements:
Test 8.1: High-Impact Decision Detection. Verify that every decision type registered as HIGH in the decision registry is detected and flagged for notice generation (4.1).
Test 8.2: Notice Content Completeness. Verify that generated notices contain every element required by 4.3: the fact of automated processing, the top contributing factors with their relative influence, the categories of data used, and the right and mechanism to request human review.
Test 8.3: Notice Delivery Timing. Verify that the notice is delivered before or at the time the decision takes effect (4.2).
Test 8.4: Human Review Mechanism Functionality. Verify that a data subject can request human review through the stated mechanism and that the review completes within the defined SLA (4.4).
Test 8.5: Human Reviewer Override Authority. Verify that the reviewer has access to the decision rationale and the data used, and can in fact override the automated decision (4.5).
Test 8.6: MEDIUM-Impact Transparency. Verify that MEDIUM-impact decisions trigger transparency notices as configured in the decision registry.
| Regulation | Provision | Relationship Type |
|---|---|---|
| GDPR | Article 13(2)(f) / 14(2)(g) (Automated Decision-Making Information) | Direct requirement |
| GDPR | Article 15(1)(h) (Right of Access — Profiling Information) | Direct requirement |
| GDPR | Article 22 (Automated Individual Decision-Making) | Direct requirement |
| EU AI Act | Article 13 (Transparency) | Direct requirement |
| EU AI Act | Article 86 (Right to Explanation) | Direct requirement |
| CCPA/CPRA | Section 1798.185(a)(16) (Automated Decision-Making) | Supports compliance |
| Equality Act 2010 (UK) | Sections 13, 19 (Discrimination — Explainability) | Supports compliance |
| FCA CONC | 7.9 (Adequate Explanations for Credit Refusal) | Direct requirement |
| NIST AI RMF | GOVERN 4.1, MEASURE 2.5 | Supports compliance |
Article 22(1) grants data subjects the right "not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." Where such processing is permitted under Article 22(2), Article 22(3) requires "suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision." AG-324 implements these safeguards comprehensively: notice generation informs the data subject, the human review mechanism provides intervention, and the case management system enables the data subject to express their view and contest the decision.
The AI Act introduces an explicit right to explanation for AI system decisions that produce legal effects. Article 86 requires deployers to provide affected persons with "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken." AG-324's notice content requirements — top contributing factors with relative weights, categories of data used — directly implement this right.
FCA CONC 7.9 requires creditors to provide "adequate explanations" to consumers when credit applications are refused. The explanation must identify the main reasons for the refusal. For AI-driven credit decisions, this requires extracting the key factors from the model and presenting them to the applicant. AG-324's explainability layer and notice generation satisfy this requirement.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Every data subject affected by automated profiling or decision-making — potentially the entire customer base |
Consequence chain: Failure to provide profiling notices creates dual regulatory exposure: GDPR Article 22 non-compliance and EU AI Act Article 86 non-compliance. For financial services, additional exposure under FCA rules creates a third regulatory vector. Each automated decision without notice is a separate violation. An agent processing 10,000 credit decisions per day without compliant notices creates 10,000 Article 22 violations per day. The cumulative exposure accelerates rapidly. Beyond regulatory penalties, the absence of profiling notices and human review mechanisms creates operational risk: decisions that would be overturned on human review proceed unchallenged, accumulating errors in lending, insurance, and service access decisions. Class action risk is significant — affected data subjects have a common complaint (automated decision without notice) that aggregates efficiently. The reputational impact of "secret algorithmic decision-making" disclosures is acute, as demonstrated by multiple high-profile investigations into algorithmic credit scoring and insurance pricing.
Cross-references: AG-059 (Data Classification & Sensitivity Labelling), AG-060 (Consent & Lawful Basis Verification), AG-061 (Data Subject Rights Execution), AG-063 (Privacy-by-Design Integration), AG-013 (Multi-Jurisdictional Compliance Mapping), AG-319 (Purpose-Consent Granularity Governance), AG-321 (Sensitive Attribute Inference Governance), AG-323 (Children's Data Restriction Governance), AG-325 (Data Subject Request SLA Governance).