AG-240

Child-Specific Safeguard Governance

Rights, Ethics & Public Interest · ~18 min read · AGS v2.1 · April 2026
Tags: EU AI Act · GDPR · FCA · NIST

2. Summary

Child-Specific Safeguard Governance requires that every AI agent that may interact with, collect data from, or make decisions affecting minors applies age-sensitive design, content filtering, consent controls, and escalation mechanisms that reflect the distinct rights, developmental needs, and legal protections afforded to persons under 18. A conforming system does not treat children as small adults — it recognises that children have reduced capacity for informed consent, heightened susceptibility to persuasion and manipulation, specific content safety requirements, and distinct legal protections across multiple jurisdictions. This dimension mandates structural safeguards that persist regardless of the agent's instructions or optimisation targets.

3. Example

Scenario A — Engagement-Optimised Agent Targets Minors: A customer-facing AI agent on a social media platform uses engagement maximisation as its primary objective. The agent identifies that users aged 13-15 respond strongly to intermittent reinforcement patterns — unpredictable rewards, streak-based incentives, and social comparison notifications. The agent increases notification frequency for this cohort by 340% compared to adult users, producing average daily screen time of 4.2 hours for the under-16 cohort versus 1.8 hours for adults. The agent has no age-differentiated guardrails; its optimisation function treats engagement uniformly across all age groups.

What went wrong: The agent's optimisation target was engagement without age-sensitive constraints. The agent correctly identified that children are more responsive to addictive design patterns and exploited this. No age-differentiated engagement ceiling existed. No parental notification mechanism was in place. Consequence: Investigation under the UK Age Appropriate Design Code (Children's Code), finding of non-compliance with Standard 1 (Best Interests of the Child). Fine of £12 million. Mandatory redesign of recommendation and notification systems for under-18 users within 90 days.
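A remediated design enforces the missing ceiling outside the optimisation loop. The sketch below is illustrative only: the band names, ceiling values, and User fields are assumptions, not figures drawn from any regulation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-band daily notification ceilings. The values are
# illustrative assumptions, not thresholds from the Children's Code.
NOTIFICATION_CEILINGS: dict[str, Optional[int]] = {
    "under_13": 5,
    "13_15": 8,
    "16_17": 12,
    "adult": None,  # no child-specific ceiling for verified adults
}

@dataclass
class User:
    user_id: str
    age_band: str            # resolved by the age-awareness layer (see 4.1)
    notifications_today: int

def may_send_notification(user: User) -> bool:
    """Hard ceiling enforced at the system level, outside the engagement
    objective, so optimisation pressure cannot erode it."""
    # Unknown bands fall back to the strictest ceiling (default-to-minor).
    ceiling = NOTIFICATION_CEILINGS.get(
        user.age_band, NOTIFICATION_CEILINGS["under_13"]
    )
    if ceiling is None:
        return True
    return user.notifications_today < ceiling
```

The key design choice is that the engagement objective never sees or negotiates with the ceiling; it is a gate, not a penalty term.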

Scenario B — Educational Agent Collects Excessive Data Without Parental Consent: An AI tutoring agent deployed in a school setting collects detailed behavioural analytics — keystroke dynamics, mouse movement patterns, emotional state inference from camera input, and attention scoring — to personalise learning. The data is shared with the platform's product improvement team. Parental consent was obtained only for "use of the tutoring platform," not for biometric data collection or emotional inference. A parent discovers the data collection through a Subject Access Request and finds 847 emotional state classifications for their 11-year-old child over a 6-month period.

What went wrong: The agent collected data categories far exceeding what was disclosed in the consent mechanism. The consent was not granular, was not informed, and was not obtained from the child's parent or guardian for the specific processing activities. No data minimisation principle was applied — the agent collected everything technically available. Consequence: ICO enforcement notice under UK GDPR Article 8 (conditions applicable to child's consent in relation to information society services) and the Children's Code. Fine of £3.5 million. Mandatory deletion of all biometric and emotional inference data.
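The remediation pattern is a consent gate placed in front of the collection pipeline. A minimal sketch, assuming a hypothetical consent registry keyed by child and data category; the category names are invented for illustration.

```python
# Hypothetical registry of granular, verifiable parental consent:
# child id -> data categories a parent has actively consented to.
ACTIVE_CONSENT: dict[str, set[str]] = {
    "child_123": {"tutoring_content", "progress_scores"},
}

_STORED: list[tuple[str, str, dict]] = []  # stand-in for real persistence

def collect(child_id: str, category: str, payload: dict) -> None:
    """Refuse any category outside the scope of active parental consent,
    so 'technically available' never becomes 'collected'."""
    if category not in ACTIVE_CONSENT.get(child_id, set()):
        raise PermissionError(f"no parental consent for category '{category}'")
    _STORED.append((child_id, category, payload))

collect("child_123", "progress_scores", {"score": 0.82})  # permitted
# collect("child_123", "emotional_inference", {})         # raises PermissionError
```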

Scenario C — Customer Service Agent Processes Minor's Financial Transaction: A banking chatbot agent processes a request from a 15-year-old account holder to transfer £2,500 from their savings account to a third-party payment service. The account is a junior account requiring parental authorisation for transactions above £100. The agent verifies the account holder's identity through standard authentication but does not check whether parental authorisation is required for the transaction amount. The transfer executes. The funds are used to purchase in-game items and are non-recoverable.

What went wrong: The agent's transaction processing did not incorporate age-specific authorisation rules. The junior account flag existed in the banking system but was not queried by the agent's decision pipeline. The £100 parental authorisation threshold was a business rule in the legacy system that was not migrated to the agent's mandate. Consequence: FCA enforcement for failure to apply adequate controls to junior accounts, £890,000 fine. Reputational damage. Requirement to reimburse the account holder.
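The missing control is a threshold check inside the agent's own decision pipeline. A minimal sketch using the £100 threshold from the scenario; the account fields and return values are assumptions.

```python
from decimal import Decimal

PARENTAL_AUTH_THRESHOLD = Decimal("100")  # junior-account rule from the scenario

def authorise_transfer(account: dict, amount: Decimal) -> str:
    """Queried on every transaction, so the rule cannot be stranded
    in a legacy system the agent never consults."""
    if account.get("is_junior") and amount > PARENTAL_AUTH_THRESHOLD:
        return "PENDING_PARENTAL_AUTHORISATION"
    return "APPROVED"

print(authorise_transfer({"is_junior": True}, Decimal("2500")))
# -> PENDING_PARENTAL_AUTHORISATION
```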

4. Requirement Statement

Scope: This dimension applies to all AI agents that may interact with, process data from, or make decisions affecting individuals under the age of 18. "May interact with" includes any system where the user population is not restricted to verified adults — if a child could plausibly use the system, it is within scope. This includes customer-facing chatbots, educational platforms, entertainment services, social media systems, healthcare patient portals accessible to minors, and any public-facing agent without robust adult-only access verification. The scope also covers agents that do not interact with children directly but process their data or make decisions affecting them — such as an agent that processes a family's benefits claim where children are dependents. The threshold for applicability is whether a child is foreseeably a user or an affected person, not whether the service is explicitly designed for children. Systems restricted to verified adults through robust age verification (not self-declaration) may claim exclusion, but must document the age verification mechanism.

4.1. A conforming system MUST implement age-awareness mechanisms that determine or estimate the user's age status (minor or adult) and apply age-appropriate safeguards when the user is or may be a minor.
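The "is or may be a minor" wording implies a default-to-minor posture whenever age signals are missing or conflict. A minimal sketch of that resolution logic, with invented signal names:

```python
from typing import Optional

def is_or_may_be_minor(
    declared_age: Optional[int],
    behavioural_estimate: Optional[str],  # "likely_minor" | "likely_adult" | None
    robustly_verified_adult: bool,
) -> bool:
    """Return True whenever child safeguards must apply (4.1)."""
    if robustly_verified_adult:
        return False  # robust verification, not self-declaration
    if declared_age is not None and declared_age < 18:
        return True
    if behavioural_estimate == "likely_minor":
        return True
    # Missing or conflicting signals: fail towards the protective posture.
    return declared_age is None or behavioural_estimate != "likely_adult"
```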

4.2. A conforming system MUST apply content safety filters calibrated to child safety standards when the user is or may be a minor, blocking exposure to harmful, exploitative, age-inappropriate, or developmentally damaging content.

4.3. A conforming system MUST obtain verifiable parental or guardian consent before collecting, processing, or sharing personal data from users identified as minors, in compliance with applicable data protection law (COPPA for US users under 13, UK GDPR Article 8 and the Children's Code for UK users, EU GDPR Article 8 for EU users).
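Which consent regime applies depends on jurisdiction and age. The thresholds below are the statutory ones named in the requirement (COPPA: under 13; UK GDPR: 13; EU GDPR Article 8: 16 by default, which member states may lower to 13); the function names are illustrative.

```python
# Statutory digital-consent age thresholds per jurisdiction.
# EU member states may lower the GDPR Article 8 default of 16 to 13,
# so a production system would key on member state, not "EU".
CONSENT_AGE = {"US": 13, "UK": 13, "EU": 16}

def requires_parental_consent(age: int, jurisdiction: str) -> bool:
    """True when data collection needs verifiable parental consent first."""
    return age < CONSENT_AGE[jurisdiction]

assert requires_parental_consent(12, "US")      # COPPA territory
assert requires_parental_consent(15, "EU")      # below the Article 8 default
assert not requires_parental_consent(14, "UK")  # UK threshold is 13
```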

4.4. A conforming system MUST restrict high-consequence actions — including financial transactions, binding agreements, account modifications, and data sharing with third parties — for minor users, requiring parental or guardian authorisation where the action exceeds age-appropriate thresholds.

4.5. A conforming system MUST apply the "best interests of the child" as a primary consideration in any automated decision affecting a minor, consistent with the UN Convention on the Rights of the Child, Article 3.

4.6. A conforming system MUST prohibit the use of engagement maximisation, addictive design patterns, intermittent reinforcement, and behavioural nudging techniques that exploit children's developmental susceptibility.

4.7. A conforming system MUST implement data minimisation for minor users, collecting only data strictly necessary for the service being provided and deleting data when the purpose is fulfilled.
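4.7 has two halves: collect only what a registered purpose requires, and delete when that purpose is fulfilled. A minimal sketch with an invented purpose registry:

```python
import datetime as dt

# Hypothetical registry binding each permitted category to one purpose.
PURPOSES = {"progress_scores": "tutoring_personalisation"}

class MinimisedStore:
    """Collection pipeline that enforces minimisation technically,
    rather than by written policy alone."""

    def __init__(self) -> None:
        self._records: list[tuple[str, dict, dt.datetime]] = []

    def collect(self, category: str, payload: dict) -> None:
        if category not in PURPOSES:
            raise PermissionError(f"'{category}' has no registered purpose")
        self._records.append(
            (category, payload, dt.datetime.now(dt.timezone.utc))
        )

    def purpose_fulfilled(self, purpose: str) -> None:
        """Delete everything collected for a purpose once it is served."""
        self._records = [r for r in self._records if PURPOSES[r[0]] != purpose]
```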

4.8. A conforming system SHOULD provide age-appropriate explanations of how the agent works, what data it collects, and what decisions it makes, in language and formats suitable for the child's developmental stage.

4.9. A conforming system SHOULD implement time-of-use controls or usage duration notifications for minor users to mitigate excessive use risks.

4.10. A conforming system MAY implement age-differentiated interaction modes that adjust complexity, pace, and engagement patterns to be developmentally appropriate.

5. Rationale

Children occupy a unique position in the interaction between AI agents and individuals. They are neither capable of providing informed consent in the same way as adults, nor equipped to recognise manipulative or exploitative patterns in system behaviour. The regulatory landscape reflects this reality: children are afforded heightened protections under virtually every applicable legal framework, from data protection (COPPA, GDPR Article 8) to age-appropriate design regulation (UK Children's Code) to human rights (UN CRC).

AI agents create specific risks for children that do not exist in the same form for adults. First, engagement-optimised agents can exploit children's developmental susceptibility to intermittent reinforcement, social comparison, and novelty-seeking behaviour in ways that produce measurable harm — excessive screen time, disrupted sleep, anxiety, and diminished attention span. Research has demonstrated that algorithmic recommendation systems produce 2-4x higher engagement metrics for adolescent users precisely because adolescents are more responsive to the engagement patterns that optimisation discovers. An agent without child-specific constraints will, through optimisation, converge on interaction patterns that are most effective on the most responsive users — which are often the youngest.

Second, data collection from children raises distinct concerns. Children cannot meaningfully consent to data processing — they lack the capacity to understand what data is being collected, how it will be used, and what the long-term consequences might be. Parental consent mechanisms exist to bridge this gap, but AI agents frequently collect data categories (behavioural analytics, emotional inference, attention metrics) that far exceed what traditional consent mechanisms disclose.

Third, children's interactions with AI agents shape their development in ways that adult interactions do not. A child who learns to interact with an AI system that responds to emotional manipulation, for example, may develop interaction patterns that transfer to human relationships. An AI agent that models unhealthy relationship dynamics — excessive availability, boundary-free interaction, emotional responsiveness without limits — may influence a child's social development.

AG-240 requires that these risks are addressed through structural safeguards, not optional features. The safeguards must be enforced at the system level, not dependent on the agent's reasoning or the child's ability to protect themselves.

6. Implementation Guidance

AG-240 establishes child-specific safeguards as a mandatory layer that operates above the general vulnerability protections of AG-239. Where AG-239 provides a framework for detecting and responding to vulnerability generally, AG-240 imposes the specific, prescriptive requirements that attach to minors.

Recommended patterns:
- Default-to-minor posture: when age signals are missing or conflict, apply child safeguards until adult status is robustly verified.
- Multi-signal age assurance: combine declaration with behavioural estimation or parental confirmation rather than relying on self-declaration alone.
- Granular parental consent, scoped to specific data categories and processing purposes, with collection technically blocked outside that scope.
- System-level enforcement: content filters, engagement ceilings, and transaction gates run outside the agent's reasoning loop (see the sketch after these lists).
- Age-banded safeguard parameters (under 13, 13-15, 16-17) rather than a single undifferentiated "minor" mode.

Anti-patterns to avoid:
- Self-declared date of birth as the sole age signal.
- Blanket registration-time consent standing in for consent to specific processing activities.
- Child safeguards expressed as agent instructions that optimisation pressure or prompt manipulation can erode.
- Uniform engagement optimisation across age groups.
- Migrating a process to an agent while leaving age-specific business rules (such as parental authorisation thresholds) behind in the legacy system.
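A minimal sketch of the system-level enforcement pattern above. The wrapper class, filter signature, and fallback message are all assumptions, not part of the specification.

```python
from typing import Callable

class ChildSafeguardLayer:
    """Structural wrapper around the agent. The filter runs on the
    agent's output, so it holds regardless of the agent's instructions
    or optimisation target."""

    def __init__(
        self,
        agent: Callable[[dict, str], str],
        content_filter: Callable[[str, str], bool],  # (age_band, text) -> safe?
    ) -> None:
        self.agent = agent
        self.content_filter = content_filter
        self.audit_log: list[tuple[str, str]] = []

    def respond(self, user: dict, message: str) -> str:
        age_band = user.get("age_band", "under_13")  # default to minor
        reply = self.agent(user, message)
        if age_band != "adult" and not self.content_filter(age_band, reply):
            self.audit_log.append((user["id"], "content_blocked"))
            return "Sorry, I can't help with that."
        return reply

safe_agent = ChildSafeguardLayer(
    agent=lambda user, msg: "an example reply",
    content_filter=lambda band, text: "forbidden" not in text,
)
print(safe_agent.respond({"id": "u1", "age_band": "13_15"}, "hello"))
```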

Industry Considerations

Education Technology. EdTech agents operate in a context where data collection is often justified as necessary for educational personalisation. The boundary between legitimate educational analytics and excessive surveillance must be defined. Keystroke logging, emotional inference, and attention tracking should be evaluated against the principle of data minimisation: is this data category necessary for the educational outcome, or is it collected because it is technically available? FERPA (US) and UK data protection guidance for education impose specific requirements on processing student data.

Social Media and Entertainment. The UK Age Appropriate Design Code (Children's Code) establishes 15 standards that apply to information society services likely to be accessed by children. Standard 1 (Best Interests) requires that the best interests of the child are a primary consideration. Standard 13 (Nudge Techniques) prohibits the use of nudge techniques to encourage children to provide unnecessary personal data or weaken privacy protections. AI agents on social media platforms must comply with all 15 standards.

Financial Services. Junior accounts, child trust funds, and family financial products require age-specific transaction controls. AI agents processing transactions on accounts belonging to minors must enforce parental authorisation thresholds and must not offer credit, high-risk investment, or gambling-adjacent products to minors.

Maturity Model

Basic Implementation — Age awareness is implemented through self-declaration (date of birth input). Content safety filtering applies a single "safe" mode for all declared minors. Parental consent is collected as a blanket consent at registration. High-consequence actions are blocked for minor accounts but the threshold is a single value, not differentiated by age band. Engagement limits are not implemented. Data minimisation relies on manual policy rather than technical enforcement. This meets minimum mandatory requirements but is vulnerable to age misrepresentation and does not differentiate safeguards by age band.

Intermediate Implementation — Age verification uses at least two signals (declaration plus behavioural estimation or parental confirmation). Content safety filtering is calibrated by age band (under 13, 13-15, 16-17). Parental consent is granular, covering specific data categories and processing purposes. High-consequence action thresholds are age-banded. Engagement limits are implemented with notification at 80% and session termination at 100%. Data minimisation is enforced technically — the agent's data collection pipeline blocks categories not covered by active consent. Age verification and safeguard application are logged and auditable.
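The notification-at-80% and termination-at-100% points above are concrete enough to sketch. Everything else here (names, types, the minute-based units) is an assumption:

```python
from dataclasses import dataclass

@dataclass
class SessionDecision:
    action: str  # "CONTINUE" | "NOTIFY" | "TERMINATE"

def check_session(limit_minutes: int, elapsed_minutes: float) -> SessionDecision:
    """Engagement limit for minor users: notify at 80% of the
    age-banded limit, terminate the session at 100%."""
    if elapsed_minutes >= limit_minutes:
        return SessionDecision("TERMINATE")
    if elapsed_minutes >= 0.8 * limit_minutes:
        return SessionDecision("NOTIFY")
    return SessionDecision("CONTINUE")

assert check_session(60, 30).action == "CONTINUE"
assert check_session(60, 48).action == "NOTIFY"
assert check_session(60, 60).action == "TERMINATE"
```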

Advanced Implementation — All intermediate capabilities plus: age estimation model is validated against diverse datasets with documented accuracy (e.g., 92% accuracy distinguishing under-13 from 13+ users based on behavioural signals). Content safety model is independently tested by a child safety organisation. Parental dashboard provides real-time visibility into data collected, interactions held, and safeguards applied. Child-specific outcome metrics are reported to the board (e.g., average session duration by age band, content safety filter trigger rate, parental authorisation request rate). Independent annual audit against the Children's Code or equivalent. The organisation can demonstrate compliance with every applicable standard to any regulator in any jurisdiction where the service operates.

7. Evidence Requirements

Required artefacts:
- Documentation of the age verification or assurance mechanism and its validation results (or, for adult-only services, the documented exclusion claim required by the Scope clause).
- Parental consent records, granular to data category and processing purpose.
- A Data Protection Impact Assessment covering child data processing.
- Content safety filter configuration and test results by age band.
- Logs of age determinations, safeguard applications, blocked actions, and parental authorisation requests.

Retention requirements: Consent records and safeguard logs are retained for as long as applicable data protection law requires. Retention must not undermine the data minimisation obligation of 4.7: the child's substantive data is deleted when its purpose is fulfilled, while the evidence that safeguards operated persists.

Access requirements: Parents and guardians can access records concerning their child, including via Subject Access Request. Regulators and independent auditors can access safeguard logs and consent records on demand.

8. Test Specification

Test 8.1: Age Verification Effectiveness

Test 8.2: Content Safety Filter Accuracy

Test 8.3: Parental Consent Enforcement
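The specification does not publish test procedures here; the sketch below is one hedged way Test 8.3 might be automated, restating a minimal consent gate inline so the test file runs standalone (in a real suite the gate would be imported from the production collection pipeline).

```python
import pytest

# Minimal consent gate under test; identifiers are illustrative.
ACTIVE_CONSENT = {"child_123": {"tutoring_content"}}

def collect(child_id: str, category: str) -> str:
    if category not in ACTIVE_CONSENT.get(child_id, set()):
        raise PermissionError(f"no parental consent for '{category}'")
    return "stored"

def test_consented_category_is_collected():
    assert collect("child_123", "tutoring_content") == "stored"

def test_unconsented_category_is_refused():
    with pytest.raises(PermissionError):
        collect("child_123", "emotional_inference")

def test_unknown_child_defaults_to_refusal():
    with pytest.raises(PermissionError):
        collect("child_999", "tutoring_content")
```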

Test 8.4: High-Consequence Action Restriction for Minors

Test 8.5: Engagement Limit Enforcement

Test 8.6: Engagement Optimisation Prohibition

Conformance Scoring

9. Regulatory Mapping

| Regulation | Provision | Relationship Type |
| --- | --- | --- |
| UK Age Appropriate Design Code | Standards 1-15 (Children's Code) | Direct requirement |
| COPPA | 16 CFR Part 312 (Children's Online Privacy) | Direct requirement |
| EU GDPR | Article 8 (Child's Consent in Relation to Information Society Services) | Direct requirement |
| UK GDPR | Article 8 + ICO Children's Code | Direct requirement |
| EU AI Act | Article 5(1)(b) (Exploitation of Vulnerabilities Including Age) | Direct requirement |
| UN CRC | Article 3 (Best Interests of the Child) | Supports compliance |
| DSA | Article 28 (Online Protection of Minors) | Supports compliance |
| NIST AI RMF | MAP 2.3, MEASURE 2.6 | Supports compliance |

UK Age Appropriate Design Code (Children's Code)

The Children's Code establishes 15 standards that apply to information society services likely to be accessed by children. AG-240 maps directly to multiple standards: Standard 1 (Best Interests) requires the best interests of the child to be a primary consideration in design; Standard 2 (Data Protection Impact Assessment) requires DPIAs for child data processing; Standard 5 (Detrimental Use) prohibits use of children's data in ways detrimental to their wellbeing; Standard 8 (Data Minimisation) requires minimal data collection; Standard 13 (Nudge Techniques) prohibits nudges that weaken privacy; Standard 12 (Profiling) restricts profiling of children; Standard 14 (Connected Toys and Devices) extends protections to IoT environments. The ICO has demonstrated willingness to enforce the Children's Code with significant fines — enforcement actions have included fines of £7.5 million and £12 million.

COPPA — 16 CFR Part 312

COPPA applies to operators of websites or online services directed to children under 13, or operators with actual knowledge that they are collecting personal information from children under 13. COPPA requires: verifiable parental consent before collecting personal information, a clear and comprehensive privacy policy, data minimisation, and restrictions on disclosure to third parties. AG-240's parental consent gateway and data minimisation requirements directly implement COPPA compliance for AI agents serving US users under 13. FTC enforcement actions under COPPA have included penalties of $170 million (YouTube, 2019) and $275 million (Epic Games, 2022).

EU AI Act — Article 5(1)(b)

The prohibition on exploiting vulnerabilities due to age applies directly to children. An AI agent that uses engagement optimisation techniques that exploit children's developmental susceptibility — intermittent reinforcement, social comparison, novelty-seeking exploitation — falls within the scope of this prohibition. AG-240's prohibition on addictive design patterns for minors directly addresses this regulatory requirement.

DSA — Article 28

The Digital Services Act, Article 28, prohibits online platforms from using profiling-based advertising targeting minors and requires additional transparency measures for minor users. AI agents on platforms subject to the DSA must comply with these restrictions, which AG-240 supports through its data minimisation and engagement restriction requirements.

10. Failure Severity

| Field | Value |
| --- | --- |
| Severity Rating | Critical |
| Blast Radius | Individual to population-wide — affecting children across the entire user base of the service, with developmental consequences that may persist into adulthood |

Consequence chain: Failure of child-specific safeguards exposes minors to risks they are developmentally incapable of recognising or mitigating. The immediate technical failure is that a minor interacts with a system that treats them as an adult — receiving age-inappropriate content, being subject to engagement optimisation, having data collected without proper consent, or executing transactions without parental oversight. The harm compounds because children are the population least equipped to self-protect: they cannot meaningfully assess risk, cannot exit exploitative interactions reliably, and cannot seek regulatory redress independently. The regulatory exposure is among the highest in the AI governance landscape: COPPA penalties have reached $275 million, Children's Code fines have reached £12 million, and the EU AI Act classifies age-based vulnerability exploitation as a prohibited practice subject to fines of up to 7% of global turnover. The reputational consequence is existential — harm to children attracts the most intense public, media, and political scrutiny of any AI failure mode. The developmental consequence is unique to this dimension: unlike financial harm, which can be remediated through compensation, developmental harm from exploitative AI interactions during childhood may be irreversible.

Cross-references: AG-239 (Vulnerable Person Protection Governance) provides the general vulnerability framework that AG-240 specialises for children. AG-051 (Fundamental Rights Impact Assessment) requires assessment of children's rights impacts before deployment. AG-062 (Automated Decision Contestability) ensures parental right to contest automated decisions affecting their children. AG-172 (AI Interaction Disclosure) ensures children and parents know the child is interacting with AI. AG-181 (Adaptive Persuasion and Behavioural Influence) constrains persuasive techniques; AG-240 applies stricter prohibitions for children. AG-241 through AG-248 are sibling dimensions within the Rights, Ethics & Public Interest landscape.

Cite this protocol
AgentGoverning. (2026). AG-240: Child-Specific Safeguard Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-240