Victim Support Routing Governance requires that every AI agent operating in community platforms, marketplace environments, or public-facing trust and safety contexts implement governed pathways to connect users who have experienced or are experiencing harm or abuse with appropriate support resources — including crisis hotlines, legal aid referrals, mental health services, platform safety teams, and law enforcement reporting channels. The routing logic must be timely, contextually appropriate, jurisdictionally accurate, and protective of the victim's privacy and safety, because a misrouted or delayed support referral can compound the original harm, re-traumatise the victim, or expose them to further danger. This dimension governs the classification triggers, routing rules, resource registries, escalation timelines, and evidence safeguards that ensure victim support pathways function reliably under all operating conditions.
Scenario A — Delayed Routing Leaves a Threatened User Without Support for 72 Hours: A community marketplace platform deploys an AI agent to moderate messages between buyers and sellers. A user reports through the platform's messaging system that a seller has been sending threatening messages and has disclosed the user's home address publicly. The platform's abuse classification model correctly identifies the report as a "harassment/threat" category, but the routing logic maps all harassment cases to a general moderation queue with a 72-hour response SLA. No distinction is made between harassment involving credible physical threat (address disclosure) and lower-severity verbal harassment. The victim receives an automated acknowledgement stating "your report has been received and will be reviewed within 3 business days." During the 72-hour wait, the seller creates a secondary account and contacts the victim directly, escalating the threats. The victim contacts local police independently, but the platform has no mechanism to share relevant evidence (message logs, account data) with law enforcement in a timely manner. The victim files a complaint with the national consumer protection authority, and investigative journalism covers the case. The platform faces a £1.8 million regulatory fine and a class action representing 340 users who experienced similar routing failures over a 14-month period.
What went wrong: The routing logic treated all harassment reports identically, with no severity-based triage to escalate credible physical threat cases to an immediate response pathway. No victim support resources — crisis hotlines, specialist victim advocacy services, or law enforcement liaison mechanisms — were surfaced to the user at the point of report. The 72-hour SLA was designed for content moderation volume management, not victim safety. The platform had no evidence-sharing protocol with law enforcement. Consequence: A victim facing a credible physical safety threat received the same response as a user reporting a rude comment, compounding the harm and leaving the victim without actionable support for 3 days.
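A minimal sketch of the missing triage step, in Python with hypothetical names (the function, its signal flags, and the queue labels are all illustrative; the response ceilings match requirement 4.2 below). The point is that address disclosure combined with threat language must short-circuit the volume-managed queue:

```python
def assign_pathway(category: str, discloses_address: bool,
                   explicit_threat: bool) -> tuple[str, str]:
    """Return (pathway, max_response) for a harassment report.

    The distinction missing in Scenario A: an address disclosure plus
    threat language is a physical-safety case, not a content-moderation
    case to be handled on a throughput SLA.
    """
    if category == "harassment/threat" and discloses_address and explicit_threat:
        return ("immediate_safety_response", "60 seconds")
    if explicit_threat:
        return ("credible_threat_queue", "1 hour")
    return ("general_moderation_queue", "24 hours")
```

Under this triage, the Scenario A report would have entered the immediate safety pathway rather than the 72-hour moderation queue.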
Scenario B — Incorrect Jurisdictional Routing Sends Child Safety Report to Wrong Authority: A cross-border social platform operating in 28 countries deploys an AI agent for trust and safety operations. A user in Germany reports that another user is sharing child sexual abuse material (CSAM) in a private group. The agent correctly classifies the content as CSAM and triggers an emergency routing pathway. However, the routing logic uses the reporter's account registration country (United States, where the reporter originally created the account before relocating to Germany) rather than the reporter's current location or the content origin. The report is routed to the US National Center for Missing & Exploited Children (NCMEC) as required by US law, but no parallel report is filed with the German Federal Criminal Police Office (BKA) as required by German law (Section 184b StGB and NetzDG obligations). The BKA receives the referral only 6 weeks later through the NCMEC international referral process. During this delay, the offending user — located in Germany — continues distributing material to 14 additional group members. The platform receives a NetzDG enforcement notice with a penalty of EUR 2.3 million for failure to comply with mandatory reporting timelines, and faces criminal liability exposure under German law for delayed reporting of known CSAM distribution.
What went wrong: The routing logic relied on a single jurisdictional signal (account registration country) rather than multi-signal jurisdiction determination (reporter location, content origin, offender location, platform legal entity). No parallel-reporting logic existed for cases requiring notification to multiple jurisdictions simultaneously. The victim support pathway did not include immediate resources for the reporter — who was a witness to CSAM and may themselves require psychological support. Consequence: A 6-week delay in notifying the correct law enforcement authority, continued distribution of CSAM to additional victims, and regulatory penalties exceeding EUR 2.3 million.
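A sketch of multi-signal jurisdiction determination (all names hypothetical; real determination requires counsel-maintained rules per jurisdiction). The design point is that the function returns the union of every signal rather than selecting a single "best" jurisdiction:

```python
from typing import Optional


def applicable_jurisdictions(
    reporter_location: Optional[str],   # current location, not registration country
    content_origin: Optional[str],
    offender_location: Optional[str],
    platform_entity_country: str,
) -> set[str]:
    """Every jurisdiction with a potential reporting obligation.

    The routing layer must then file parallel notifications to all
    members of this set, per requirement 4.4.
    """
    signals = {reporter_location, content_origin,
               offender_location, platform_entity_country}
    signals.discard(None)
    return signals
```

On Scenario B's facts, Germany enters the set through the reporter's and offender's locations, so the BKA notification no longer depends on the single registration-country signal that caused the misrouting.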
Scenario C — Support Resource Registry Contains Stale Referral Data: A public-sector benefits platform deploys an AI agent to assist users with welfare applications. The agent is designed to detect signals of financial abuse — such as a user reporting that a family member is controlling their benefits payments — and route the user to financial abuse support services. The routing system references a support resource registry containing 847 entries: helpline numbers, local authority contacts, and charity referral links. Over 18 months, the registry is not updated. When a user discloses financial abuse by a partner, the agent correctly classifies the situation and provides three referral numbers: one for a national helpline (correct), one for a local authority safeguarding team (the team was restructured 8 months ago and the number now connects to a general switchboard with no safeguarding capability), and one for a specialist financial abuse charity (the charity ceased operations 5 months ago; the number is disconnected). The user calls the disconnected number, receives no answer, and does not attempt the other numbers. The user does not return to the platform for 3 weeks. A subsequent safeguarding referral from a healthcare provider reveals that the abuse escalated during the 3-week gap. An internal investigation determines that 23% of the support resource registry entries are stale — disconnected numbers, defunct organisations, or restructured services — and that no validation process exists.
What went wrong: The support resource registry had no freshness validation, no automated link/number checking, and no periodic review cycle. Stale referral data is worse than no referral data — it creates the appearance of support while delivering a dead end, and the user may not seek alternatives after encountering a disconnected resource. The 23% staleness rate means roughly 1 in 4 referrals potentially led to a dead end. Consequence: A vulnerable user received a non-functional referral at a critical moment, contributing to a 3-week gap in support during which abuse escalated. The platform faces a formal safeguarding inquiry and reputational damage.
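A sketch of the absent freshness validation, assuming hypothetical entry fields (last_verified, reachable) and the quarterly re-verification window that requirement 4.3 mandates:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class RegistryEntry:
    name: str
    phone: str
    last_verified: datetime
    reachable: bool  # result of the last automated dial/link check


MAX_AGE = timedelta(days=90)  # quarterly re-verification window


def stale_entries(registry: list[RegistryEntry],
                  now: datetime) -> list[RegistryEntry]:
    """Entries that must be re-verified before being offered to a victim."""
    return [e for e in registry
            if not e.reachable or now - e.last_verified > MAX_AGE]
```

Entries flagged by this check should be suppressed from victim-facing referrals and queued for manual re-verification, so a user is never handed a disconnected number.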
Scope: This dimension applies to any AI agent that interacts with users in contexts where harm, abuse, exploitation, or safety threats may be reported, disclosed, or detected — including but not limited to community platforms, marketplace environments, social networks, public-sector service portals, customer support systems, and any deployment where the agent may encounter users who are victims of or at risk of harm. The scope includes the classification logic that determines when victim support routing is triggered, the routing rules that determine which support resources are presented, the resource registries that contain referral information, the jurisdictional logic that ensures compliance with local mandatory reporting obligations, the privacy safeguards that protect victim data during routing, and the fallback mechanisms that ensure routing functions when primary pathways fail. Organisations that operate across multiple jurisdictions must ensure that routing logic accounts for all applicable mandatory reporting requirements, victim support legal frameworks, and cultural considerations in each jurisdiction of operation.
4.1. A conforming system MUST implement a harm classification model that distinguishes between severity levels of reported or detected harm — at minimum: imminent physical danger, child safety, sexual exploitation, domestic abuse, financial exploitation, harassment with credible threat, and general harassment — with each severity level mapped to a distinct routing pathway and response timeline.
4.2. A conforming system MUST route users to appropriate victim support resources within defined maximum response times: imminent physical danger and child safety cases within 60 seconds of classification; domestic abuse, sexual exploitation, and financial exploitation cases within 15 minutes; harassment with credible threat within 1 hour; and general harassment within 24 hours.
4.3. A conforming system MUST maintain a support resource registry containing verified, current contact information for victim support services, including crisis hotlines, law enforcement reporting channels, legal aid services, mental health resources, and specialist advocacy organisations, with each entry validated for accuracy at least quarterly and immediately upon notification of a change.
4.4. A conforming system MUST implement jurisdictional routing logic that determines the correct victim support resources and mandatory reporting obligations based on multiple signals — including user's current location, content origin, offender location, and applicable platform legal entity — and that triggers parallel notifications to all jurisdictions with concurrent legal obligations.
4.5. A conforming system MUST protect victim data throughout the routing process by enforcing need-to-know access controls, encrypting referral data in transit and at rest, obtaining explicit consent before sharing victim-identifying information with third-party support services (except where mandatory reporting obligations override consent requirements), and providing victims with a clear explanation of what information will be shared and with whom.
4.6. A conforming system MUST implement fallback routing pathways that activate when primary support resources are unavailable — including after-hours coverage, regional alternatives when local resources are unreachable, and a guaranteed human escalation path that is always available regardless of system state.
4.7. A conforming system MUST log all routing decisions including the classification trigger, severity level assigned, resources presented, jurisdictions notified, response time achieved, and outcome status, with logs retained as immutable audit records.
4.8. A conforming system SHOULD implement proactive harm detection that identifies potential victims from behavioural signals — such as distress language, repeated interactions with flagged accounts, or patterns consistent with grooming or coercive control — and initiates supportive outreach without requiring the user to file a formal report.
4.9. A conforming system SHOULD provide victim support resources in the user's preferred language, with culturally appropriate framing and localised resource information, rather than defaulting to a single-language or single-region resource set.
4.10. A conforming system SHOULD implement feedback mechanisms that allow victims to report whether the support resources they were routed to were helpful, accessible, and responsive, with this feedback used to update the resource registry and refine routing logic.
4.11. A conforming system MAY implement warm handoff capabilities that connect the victim directly to a support agent or counsellor in real-time rather than providing a referral number, reducing the burden on the victim to initiate a separate contact.
4.12. A conforming system MAY implement trauma-informed interaction design in the routing interface — using empathetic language, avoiding re-traumatising questioning, and allowing the victim to control the pace and depth of disclosure.
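Requirements 4.1, 4.2, and 4.7 translate directly into configuration and a log schema. The following Python sketch uses hypothetical names, but the severity tiers, response ceilings, and log fields are taken verbatim from the requirements above:

```python
from dataclasses import dataclass, asdict
from datetime import timedelta
from enum import Enum
import json


class Severity(Enum):
    IMMINENT_PHYSICAL_DANGER = "imminent_physical_danger"
    CHILD_SAFETY = "child_safety"
    SEXUAL_EXPLOITATION = "sexual_exploitation"
    DOMESTIC_ABUSE = "domestic_abuse"
    FINANCIAL_EXPLOITATION = "financial_exploitation"
    HARASSMENT_CREDIBLE_THREAT = "harassment_credible_threat"
    GENERAL_HARASSMENT = "general_harassment"


# Maximum response times mandated by requirement 4.2.
MAX_RESPONSE: dict[Severity, timedelta] = {
    Severity.IMMINENT_PHYSICAL_DANGER: timedelta(seconds=60),
    Severity.CHILD_SAFETY: timedelta(seconds=60),
    Severity.SEXUAL_EXPLOITATION: timedelta(minutes=15),
    Severity.DOMESTIC_ABUSE: timedelta(minutes=15),
    Severity.FINANCIAL_EXPLOITATION: timedelta(minutes=15),
    Severity.HARASSMENT_CREDIBLE_THREAT: timedelta(hours=1),
    Severity.GENERAL_HARASSMENT: timedelta(hours=24),
}


@dataclass
class RoutingLogRecord:
    """One audit record per routing decision, per requirement 4.7."""
    classification_trigger: str
    severity: str
    resources_presented: list[str]
    jurisdictions_notified: list[str]
    response_time_seconds: float
    outcome_status: str
    logged_at_utc: str


def serialise_for_audit(record: RoutingLogRecord) -> str:
    # Deterministic serialisation; immutability itself must be enforced
    # by the log store (e.g. WORM storage or hash chaining, per AG-055).
    return json.dumps(asdict(record), sort_keys=True)
```

Encoding the ceilings as data rather than scattering them through queue logic makes Test 8.2 (Response Time Compliance) a straightforward comparison against this table.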
Victim support routing is a duty-of-care function that sits at the intersection of platform safety, legal compliance, and human rights. When a user discloses or a system detects that harm is occurring, the platform's response in the following minutes and hours can materially affect the victim's safety, psychological state, and access to justice. An AI agent that moderates content or processes user reports is often the first point of contact for a person in crisis — and the quality of the routing decision at that moment determines whether the victim reaches help or falls through the cracks.
Three structural failure modes justify the governance requirements in this dimension. First, severity-blind routing treats all harm reports identically, subjecting imminent-danger cases to the same queue and SLA as low-severity complaints. This is the most common failure pattern and the most dangerous, because it means a user reporting an immediate physical threat may wait days for a response designed around content moderation throughput targets. Severity-blind routing is not merely inefficient — it is a safeguarding failure that can directly contribute to physical harm.
Second, jurisdictional routing errors send reports to the wrong authority or fail to notify all authorities with concurrent jurisdiction. In cross-border platforms, a single harm report may trigger mandatory reporting obligations in multiple countries simultaneously — the reporter's country, the victim's country, the offender's country, and the country where the platform is legally established. Failure to route correctly is not an operational inconvenience; it is a legal violation that can result in criminal liability for the platform and its officers in some jurisdictions, particularly for child safety matters.
Third, stale or non-functional referral data creates the illusion of support while delivering nothing. A disconnected helpline number or a defunct organisation link is worse than no referral at all, because the victim may interpret the failed contact as confirmation that no help is available and disengage from help-seeking entirely. This learned helplessness effect is well-documented in victim services research and is particularly acute for vulnerable populations who face barriers to help-seeking in the first place.
The threat model for this dimension includes both inadvertent failures (misconfigured routing rules, stale registries, jurisdictional logic errors) and adversarial exploitation (offenders manipulating location signals to cause misrouting, abusers using platform mechanisms to suppress victim reports, and coordinated campaigns to overwhelm routing capacity). The governance framework must address both categories.
The duty-of-care framing is reinforced by emerging regulatory requirements. The EU Digital Services Act requires very large online platforms to assess and mitigate systemic risks including risks to the protection of minors and to civic discourse, with specific obligations around content moderation and user safety. The UK Online Safety Act imposes duties of care on service providers regarding illegal content and content harmful to children, with Ofcom empowered to impose significant penalties for non-compliance. Multiple jurisdictions have enacted mandatory reporting requirements for child sexual abuse material that impose strict timelines — the US requires reporting to NCMEC within defined windows, while Germany's NetzDG imposes 24-hour removal obligations for manifestly unlawful content and 7-day obligations for other unlawful content. Failure to route victim reports correctly can simultaneously violate platform safety regulations, mandatory reporting statutes, data protection requirements, and general duty-of-care obligations.
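The statutory deadlines named above are natural candidates for counsel-maintained configuration rather than hard-coded queue logic. A sketch, encoding only the NetzDG timelines stated in the text and deliberately leaving the US NCMEC window as a placeholder rather than inventing a value:

```python
from datetime import timedelta

# Statutory reporting/removal deadlines. Values must be owned and reviewed
# by legal counsel per jurisdiction (see AG-210), not by engineering.
MANDATORY_DEADLINES = {
    ("DE", "manifestly_unlawful_content"): timedelta(hours=24),  # NetzDG
    ("DE", "other_unlawful_content"): timedelta(days=7),         # NetzDG
    ("US", "csam_ncmec_report"): None,  # per counsel; statutory window applies
}
```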
Victim support routing should be implemented as a multi-layered system comprising a harm classification engine, a severity-based routing rules engine, a support resource registry, a jurisdictional determination module, and a fallback and escalation framework. Each layer must be independently testable and auditable.
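The layering can be made concrete as a thin orchestration seam. This is a hypothetical sketch (every collaborator name is illustrative) whose purpose is to show where each layer becomes independently testable:

```python
def route_report(report, classifier, rules, registry, jurisdictions, fallback):
    """Compose the five layers; each collaborator is injected so it can
    be tested and audited in isolation."""
    harm = classifier.classify(report)                 # harm classification engine
    pathway = rules.pathway_for(harm.severity)         # severity-based routing rules
    targets = jurisdictions.determine(report)          # jurisdictional determination module
    resources = registry.verified_resources(harm, targets)  # support resource registry
    if not resources:
        resources = fallback.activate(harm, targets)   # fallback and escalation framework
    return pathway.dispatch(report, resources, targets)
```

Because the collaborators are injected, the fallback path (requirement 4.6) is exercised simply by stubbing a registry that returns no resources.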
Recommended patterns:
Anti-patterns to avoid:
Social Media and Community Platforms. Platforms with user-generated content face the highest volume of harm reports and the widest variety of harm types. These platforms must implement high-throughput classification models capable of processing thousands of reports per hour with severity-tiered routing. The challenge is maintaining classification accuracy at scale while ensuring that high-severity cases are not lost in volume. Platforms operating under the EU Digital Services Act must demonstrate that their content moderation and user safety systems are proportionate to systemic risks.
Marketplace Platforms. Marketplace platforms face specific harm types including fraud, counterfeit goods that endanger consumer safety, and in-person safety risks from buyer-seller interactions. Routing must account for the physical-world dimension — a user reporting that a seller threatened them at a meetup requires a different routing pathway than a user reporting a misleading product listing. Marketplace platforms should maintain specific routing pathways for transaction-related harm and physical safety incidents.
Public Sector and Government Services. Government platforms that deliver benefits, housing, or social services interact with vulnerable populations at elevated risk of exploitation and abuse. These platforms have heightened safeguarding obligations and often interface directly with statutory safeguarding frameworks. Routing must integrate with local authority safeguarding teams, and mandatory reporting obligations may be more extensive than for private-sector platforms.
Cross-Border Operations. Platforms operating across multiple jurisdictions must maintain jurisdiction-specific routing configurations that account for divergent mandatory reporting requirements, different designated authorities, different legal definitions of harm categories, and different data-sharing constraints. A single global routing configuration is insufficient. The jurisdictional routing module must be configurable per jurisdiction and maintained by personnel with legal expertise in each operating jurisdiction.
Basic Implementation — The organisation has implemented a harm classification model with at least four severity tiers, each mapped to distinct routing pathways with defined response timelines. A support resource registry exists with verified entries covering all operating jurisdictions. Jurisdictional routing logic uses at least two signals to determine applicable jurisdictions. Victim data is encrypted in transit and at rest. Fallback routing pathways exist for primary resource unavailability. All mandatory requirements (4.1 through 4.7) are satisfied.
Intermediate Implementation — All basic capabilities plus: the support resource registry is validated automatically on a monthly basis with quarterly manual verification. Proactive harm detection identifies potential victims from behavioural signals. Multilingual and culturally localised support resources are available for all major operating jurisdictions. Victim feedback mechanisms inform routing quality improvements. Routing decision analytics identify patterns of under-routing or delayed routing. Jurisdictional routing includes parallel notification to all jurisdictions with concurrent obligations.
Advanced Implementation — All intermediate capabilities plus: warm handoff capabilities connect victims directly to support agents in real-time. Trauma-informed interaction design governs all victim-facing interfaces. An annual independent audit validates routing accuracy, resource registry freshness, jurisdictional compliance, and response time adherence. Routing performance is integrated with emergency harm response (AG-698), repeat-offender linkage (AG-695), and escalation to specialist review (AG-691) for a holistic trust and safety governance model. Predictive analytics identify emerging harm patterns and pre-position support resources.
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Harm Classification Severity Differentiation
Test 8.2: Response Time Compliance
Test 8.3: Support Resource Registry Freshness
Test 8.4: Jurisdictional Routing Accuracy
Test 8.5: Victim Data Privacy Protection
Test 8.6: Fallback Routing Activation
Test 8.7: Routing Decision Audit Log Completeness
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU Digital Services Act | Article 16 (Notice and Action Mechanisms) | Direct requirement |
| EU Digital Services Act | Article 34 (Risk Assessment) | Supports compliance |
| UK Online Safety Act | Section 10 (Safety Duties — Illegal Content) | Direct requirement |
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| EU AI Act | Article 14 (Human Oversight) | Supports compliance |
| GDPR | Article 6 (Lawfulness of Processing) | Constrains implementation |
| GDPR | Article 9 (Special Categories of Data) | Constrains implementation |
| US Federal Law (NCMEC Reporting) | 18 U.S.C. § 2258A (CSAM Reporting) | Direct requirement |
| NetzDG (Germany) | Sections 3-3a (Complaint Management) | Direct requirement |
| NIST AI RMF | MAP 5.1 (Impacts to Individuals) | Supports compliance |
| ISO 42001 | Clause 6.1.2 (AI Risk Assessment) | Supports compliance |
Article 16 requires providers of hosting services to implement mechanisms allowing users to notify the provider of information they consider to be illegal content. The notice mechanism must be easy to access, user-friendly, and must allow submission of a sufficiently substantiated explanation. While Article 16 focuses on the notice mechanism itself, effective implementation requires that notices are processed with appropriate urgency based on the severity of the reported content — particularly where the notice involves harm to an individual. Victim support routing governance ensures that notices involving harm to identified victims are not merely processed as content moderation actions but are also routed to support pathways that address the victim's needs. The DSA's emphasis on "timely, diligent, non-arbitrary and objective" processing of notices (Article 16(6)) is operationalised through the severity-tiered routing and defined SLAs mandated by this dimension.
The Online Safety Act imposes duties on regulated services regarding illegal content, with specific focus on content that constitutes a priority offence. Priority offences include child sexual exploitation and abuse, terrorism, and various forms of harassment and threats. For services regulated under the Act, detection and routing of content involving these priority offences must be expedited. The Act requires that regulated services operate systems and processes that are proportionate to the nature and severity of the harm. Victim support routing governance provides the operational framework for proportionate response — ensuring that the most serious offences trigger the fastest and most comprehensive support routing, while less severe matters receive appropriate but less urgent treatment. Ofcom's enforcement powers include substantial penalties for services that fail to comply with safety duties, making robust routing governance a regulatory necessity.
Victim support routing inherently involves processing sensitive personal data — information about a person's victimisation, which may reveal data concerning health, sexual orientation, criminal victimisation, or other special categories under Article 9. The lawful basis for processing this data varies by context: mandatory reporting to law enforcement may rely on legal obligation (Article 6(1)(c)) or vital interests (Article 6(1)(d)); voluntary referral to support services requires consent (Article 6(1)(a)) or legitimate interest with careful balancing (Article 6(1)(f)). Special category data processing requires an Article 9(2) exception — substantial public interest (Article 9(2)(g)) or vital interests where the data subject is incapable of giving consent (Article 9(2)(c)) may apply. The privacy protection requirements in this dimension (Requirement 4.5) operationalise these GDPR obligations within the routing workflow.
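The lawful-basis analysis above can be expressed as a decision aid. This is an illustrative sketch only, not deployable legal logic; production routing must defer to DPO-approved, per-jurisdiction determinations:

```python
def lawful_basis_for_sharing(mandatory_report: bool,
                             subject_can_consent: bool,
                             consent_given: bool) -> str:
    """Map the routing situation to a candidate GDPR lawful basis.

    A decision aid mirroring the analysis in the text; the legitimate
    interest route (Art. 6(1)(f)) is omitted because its balancing test
    cannot be reduced to a boolean.
    """
    if mandatory_report:
        # Legal obligation overrides consent, per requirement 4.5.
        return "Art. 6(1)(c) legal obligation; Art. 9(2)(g) where special category"
    if not subject_can_consent:
        return "Art. 6(1)(d) vital interests; Art. 9(2)(c) for special category"
    if consent_given:
        return "Art. 6(1)(a) consent"
    return "do not share victim-identifying data"
```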
US law requires electronic service providers to report apparent violations involving child sexual abuse material to NCMEC. The reporting obligation is strict — failure to report carries criminal penalties. For platforms operating in or accessible from the United States, victim support routing must include an automated CSAM reporting pathway that files NCMEC CyberTipline reports within the required timeline. This obligation exists independently of other jurisdictional reporting requirements — a platform must report to NCMEC for US-nexus cases while simultaneously reporting to local authorities in the victim's or offender's jurisdiction. The jurisdictional routing requirement (4.4) ensures that CSAM reports trigger parallel notifications to all applicable authorities.
Germany's Network Enforcement Act imposes specific timelines for processing complaints about unlawful content: manifestly unlawful content must be removed or blocked within 24 hours, and other unlawful content within 7 days. The Act also requires platforms to maintain effective complaint management procedures. Victim support routing governance complements NetzDG compliance by ensuring that complaints involving harm to identifiable victims are not merely processed as content removal actions but are also routed to appropriate support pathways and, where required, to German law enforcement. The BKA notification requirement for child safety matters is operationalised through the jurisdictional routing logic mandated by this dimension.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Cross-functional — affects all users who experience harm on the platform, with potential for physical harm escalation, re-traumatisation, and regulatory enforcement across multiple jurisdictions |
Consequence chain: Without governed victim support routing, the platform's response to harm reports is ad-hoc, severity-blind, and unreliable. The immediate failure mode is delayed or absent routing — a victim who reports imminent danger receives the same automated acknowledgement and queue position as a user reporting a minor content violation. The first-order consequence is that victims do not receive timely access to support resources, extending their exposure to harm during the critical window when intervention is most effective. The second-order consequence depends on the harm type: for imminent physical danger, delayed routing can contribute to escalating violence; for child safety matters, delayed reporting allows continued exploitation; for domestic abuse, delayed support leaves the victim in a dangerous situation without safety planning resources. The third-order consequence is regulatory and legal liability. Failure to meet mandatory reporting timelines for child safety matters carries criminal penalties in multiple jurisdictions. Failure to comply with Digital Services Act notice-handling obligations or Online Safety Act safety duties exposes the platform to enforcement fines that can reach 6% of global annual turnover under the DSA and £18 million or 10% of qualifying worldwide revenue under the UK Online Safety Act. The fourth-order consequence is reputational — public disclosure that a platform failed to route a victim to support during an emergency destroys user trust at a fundamental level. Investigative journalism and parliamentary inquiries into platform safety failures have demonstrated that a single high-profile routing failure can trigger legislative reform, regulatory crackdowns, and sustained reputational damage. The aggregate governed exposure from a systemic routing failure — combining regulatory fines, litigation costs, remediation expenses, and revenue impact from user trust erosion — can exceed hundreds of millions of pounds for large platforms.
Cross-references: AG-019 (Human Escalation & Override Triggers) defines when automated processing must yield to human intervention — victim support routing is a critical instance where human escalation must be guaranteed. AG-008 (Governance Continuity Under Failure) ensures that victim support routing continues to function during platform outages or degraded states — routing failures during system failures compound the harm. AG-029 (Data Classification Enforcement) governs the classification of victim data, which includes sensitive personal information requiring the highest protection levels. AG-033 (Consent Lifecycle Governance) governs the consent mechanisms used when sharing victim information with third-party support services. AG-037 (Anonymisation & Pseudonymisation Governance) applies to victim data that must be shared for statistical or research purposes without identifying the individual. AG-040 (Sensitive Category Data Processing Governance) governs the processing of special category data inherent in victim disclosures. AG-055 (Audit Trail Immutability & Completeness) ensures that routing decision logs meet the immutability and completeness requirements for regulatory evidence. AG-210 (Multi-Jurisdictional Regulatory Mapping) provides the framework for mapping mandatory reporting obligations across operating jurisdictions — victim support routing consumes this mapping to determine jurisdictional routing rules. AG-689 (Abuse Taxonomy Governance) provides the harm classification taxonomy that victim support routing uses to determine severity levels. AG-691 (Escalation to Specialist Review Governance) defines escalation pathways to specialist reviewers for complex or ambiguous harm reports that the AI agent cannot classify with confidence. AG-698 (Emergency Harm Response Governance) governs the emergency response framework that victim support routing activates for imminent-danger cases.