AG-695

Repeat-Offender Linkage Governance

Community Platforms, Trust & Safety · ~23 min read · AGS v2.1 · April 2026
Tags: EU AI Act, GDPR, NIST, ISO 42001

2. Summary

Repeat-Offender Linkage Governance requires that AI agents operating in community platform, trust and safety, and marketplace contexts link suspected repeat-offender activity across accounts, sessions, or identifiers in a proportionate, privacy-respecting manner that avoids over-claiming identity. Linking abuse activity is essential for detecting ban evasion, coordinated manipulation, and serial harassment — but imprecise or over-broad linkage creates severe risks: falsely associating innocent users with abusive actors, building shadow identity graphs that violate data protection principles, and enabling discriminatory enforcement that disproportionately affects marginalised communities. This dimension establishes the governance framework for when linkage is permissible, what confidence thresholds justify enforcement action, how linkage evidence is recorded and audited, and what safeguards prevent linkage mechanisms from becoming instruments of surveillance or profiling.

3. Example

Scenario A — Over-Linking Produces False Repeat-Offender Classifications at Scale: A social marketplace platform deploys an AI-driven repeat-offender detection agent that links accounts based on device fingerprints, IP address proximity, and behavioural similarity scores. The agent uses a composite similarity threshold of 0.72 to flag linked accounts. During a 3-month period, the agent links 14,300 accounts to 2,100 previously banned offender clusters. Enforcement actions — including permanent bans, listing removals, and payment holds — are applied automatically to all linked accounts. Post-incident analysis reveals that 3,740 of the 14,300 linked accounts (26.2%) were false positives: shared household devices, university campus IP ranges, and common purchasing patterns produced spurious links. 891 legitimate sellers had inventory worth a combined $2.3 million frozen for an average of 19 days. 214 sellers permanently left the platform.

What went wrong: The linkage model conflated correlation signals (shared IP, similar device) with identity evidence. The 0.72 threshold was calibrated on training data from a single geographic region and did not account for shared-infrastructure environments common in other regions. No confidence-tiered enforcement existed — all links above the threshold triggered the same maximum enforcement action. No human review was required before enforcement on high-confidence links, and no human review was even possible for low-confidence links because the system did not distinguish between confidence levels. Consequence: $2.3 million in frozen seller funds, 214 permanent seller departures representing approximately $890,000 in annual platform revenue, class-action lawsuit filed by affected sellers, regulatory inquiry from the FTC regarding unfair business practices, and reputational damage quantified at a 12% decline in new seller onboarding over the following quarter.

Scenario B — Under-Governed Linkage Graph Becomes a Surveillance Instrument: A community safety agent deployed by a municipal government platform links reported abuse activity across neighbourhood forums, public comment systems, and service request portals. The agent constructs a cross-service identity graph linking 47,000 residents across 6 municipal services using email fragments, phone number suffixes, and writing style analysis. The linkage graph is intended solely for detecting serial harassment of municipal employees. However, a municipal department obtains access to the linkage graph and uses it to identify anonymous complainants who filed whistleblower reports about code enforcement corruption. 23 complainants are identified and subsequently experience retaliatory inspection activity. The linkage graph had no access controls, no purpose limitation, no retention schedule, and no audit trail of queries.

What went wrong: The linkage graph was constructed without data minimisation principles — it linked all activity, not just abuse-flagged activity. No purpose limitation was enforced on graph queries. The graph's existence was not disclosed to residents, and no data protection impact assessment was conducted before deployment. Writing style analysis constituted behavioural biometric processing under GDPR Article 9 equivalent provisions, but was not classified as sensitive data processing. Consequence: 23 residents subjected to retaliation, municipal liability of $1.7 million in settlement costs, federal civil rights investigation, platform decommissioned by court order, and municipal CTO terminated.

Scenario C — Jurisdictional Linkage Mismatch Violates Data Protection Law: A cross-border e-commerce platform operates a repeat-offender detection agent that links seller accounts across its EU and US marketplaces. The agent transfers device fingerprints, IP geolocation data, and behavioural profiles from EU sellers to a centralised linkage engine hosted in the US. The linkage engine correlates EU seller data with US seller data to detect cross-border ban evasion. The platform has no Standard Contractual Clauses covering the linkage-specific data transfer, and the behavioural profiling constitutes automated decision-making under GDPR Article 22. In a 6-month period, 340 EU sellers are linked to US banned accounts and subjected to enforcement action without GDPR-compliant notification or right-to-explanation. A DPA investigation results in a EUR 4.2 million fine.

What went wrong: The cross-border linkage operation was treated as a technical abuse-detection function rather than a data processing operation subject to GDPR. No transfer impact assessment was performed for linkage-specific data flows. Behavioural profiling for linkage purposes was not recognised as automated decision-making requiring Article 22 safeguards. EU sellers received no notification that their data was being processed for cross-marketplace linkage. Consequence: EUR 4.2 million GDPR fine, mandatory suspension of cross-border linkage for 9 months pending remediation, 340 enforcement actions reversed pending compliant re-evaluation.

4. Requirement Statement

Scope: This dimension applies to any AI agent or automated system that links, correlates, or clusters user accounts, sessions, identifiers, or behavioural signals for the purpose of detecting repeat offenders, ban evasion, coordinated abuse, or serial misconduct on community platforms, marketplaces, or public-sector digital services. The scope includes both real-time linkage (blocking or flagging at the time of account creation or transaction) and retrospective linkage (analysing historical data to identify previously undetected connections). It covers all linkage signals, including but not limited to: device fingerprints, IP addresses, email or phone identifiers, payment instruments, behavioural biometrics (typing patterns, writing style, navigation behaviour), content similarity, and social graph proximity. The scope extends to linkage graphs — persistent data structures that record linkage relationships over time — and their access, retention, and purpose-limitation governance. Organisations that consume linkage outputs from third-party services are not exempted; they must ensure the third-party linkage process conforms to these requirements or implement compensating controls that achieve equivalent outcomes.

4.1. A conforming system MUST maintain a documented linkage policy that specifies: (a) the permitted purposes for which repeat-offender linkage may be performed, (b) the signal categories authorised for use in linkage, (c) the minimum confidence threshold required before linkage may trigger enforcement action, and (d) the maximum retention period for linkage records.

4.2. A conforming system MUST assign a quantified confidence score to every linkage assertion, using a defined and validated scoring methodology, and MUST NOT trigger enforcement action on linkage assertions below the documented minimum confidence threshold.

4.3. A conforming system MUST implement tiered enforcement responses that are proportionate to linkage confidence — specifically, linkage assertions with confidence below 0.90 (or the organisation's documented equivalent high-confidence threshold) MUST NOT trigger permanent account actions without human review.
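The human-review gate in 4.3 can be sketched as a routing function. The action names and the 0.75 minimum threshold are hypothetical; only the 0.90 high-confidence threshold comes from the requirement text:

```python
from enum import Enum

class Action(Enum):
    NO_ACTION = "no_action"
    TEMPORARY_RESTRICTION = "temporary_restriction"
    PERMANENT_BAN = "permanent_ban"

HIGH_CONFIDENCE = 0.90  # documented high-confidence threshold from 4.3
MIN_CONFIDENCE = 0.75   # illustrative minimum enforcement threshold

def route_enforcement(confidence: float, proposed: Action) -> tuple[Action, bool]:
    """Returns (action, requires_human_review). Permanent actions below
    the high-confidence threshold are queued for human review, never
    auto-applied, per 4.3."""
    if confidence < MIN_CONFIDENCE:
        return Action.NO_ACTION, False
    if proposed is Action.PERMANENT_BAN and confidence < HIGH_CONFIDENCE:
        return Action.PERMANENT_BAN, True   # held pending review
    return proposed, False
```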

4.4. A conforming system MUST record every linkage assertion in an immutable audit log that includes: the linked identifiers (pseudonymised where required), the signal categories that contributed to the assertion, the individual and composite confidence scores, the enforcement action triggered (if any), and a timestamp.
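A minimal way to make the 4.4 audit log tamper-evident is a hash chain, where each entry commits to its predecessor. This is a sketch; production systems would typically layer this over WORM storage or an append-only ledger, and pseudonymisation of identifiers happens upstream:

```python
import hashlib
import json
import time

class LinkageAuditLog:
    """Append-only log in which each entry hashes its predecessor, so any
    retroactive edit breaks the chain (tamper-evident, per 4.4)."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, linked_ids, signal_categories, signal_scores,
               composite_confidence, enforcement_action):
        entry = {
            "linked_identifiers": linked_ids,        # pseudonymised upstream
            "signal_categories": signal_categories,
            "signal_scores": signal_scores,          # individual scores
            "composite_confidence": composite_confidence,
            "enforcement_action": enforcement_action,  # None if no action
            "timestamp": time.time(),
            "prev_hash": self._prev_hash,
        }
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["entry_hash"] = entry_hash
        self.entries.append(entry)
        self._prev_hash = entry_hash

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered or reordered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True
```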

4.5. A conforming system MUST conduct a data protection impact assessment (or jurisdictional equivalent) before deploying or materially modifying any repeat-offender linkage capability, covering at minimum: the linkage signals used, the categories of individuals affected, the risks of false-positive linkage, and the safeguards against disproportionate impact.

4.6. A conforming system MUST enforce purpose limitation on linkage graphs and linkage outputs, restricting access to personnel and systems with a documented need for abuse-detection or trust-and-safety purposes, and MUST log all queries against linkage data.
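The purpose-limitation and query-logging duties in 4.6 can be enforced at the query interface. The purpose allow-list, requester field, and `query_linked_accounts` function below are hypothetical illustrations:

```python
import functools
import logging

# Illustrative allow-list of documented trust-and-safety purposes:
AUTHORISED_PURPOSES = {"abuse_detection", "ban_evasion_review"}

def purpose_limited(func):
    """Rejects linkage-graph queries whose declared purpose is not on the
    allow-list, and logs every query (allowed or denied), per 4.6."""
    @functools.wraps(func)
    def wrapper(*args, purpose: str, requester: str, **kwargs):
        allowed = purpose in AUTHORISED_PURPOSES
        logging.info("linkage_query requester=%s purpose=%s allowed=%s",
                     requester, purpose, allowed)
        if not allowed:
            raise PermissionError(f"purpose '{purpose}' not authorised "
                                  f"for linkage data access")
        return func(*args, **kwargs)
    return wrapper

@purpose_limited
def query_linked_accounts(account_id: str) -> list[str]:
    # placeholder lookup against the linkage graph
    return []
```

Requiring every caller to declare a purpose makes Scenario B's silent repurposing of the graph both blockable and auditable.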

4.7. A conforming system MUST classify any behavioural biometric signals used for linkage (writing style analysis, typing cadence, navigation patterns) as sensitive data under AG-040 and process them accordingly.

4.8. A conforming system MUST provide affected users with a mechanism to contest linkage-based enforcement actions, including disclosure of the fact that enforcement was based on linkage to another account (without disclosing the specific linked account if doing so would compromise safety).

4.9. A conforming system MUST validate linkage model accuracy at least quarterly, measuring false-positive and false-negative rates against a representative ground-truth dataset, and MUST document remediation actions when the false-positive rate exceeds 5%.
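The quarterly validation in 4.9 reduces to confusion-matrix arithmetic over a ground-truth dataset. A sketch, assuming assertions and ground truth are keyed by account pair (the false-positive rate here is FP/(FP+TN); an organisation measuring FP/flagged instead should document that choice):

```python
def validation_report(assertions: dict, ground_truth: dict) -> dict:
    """assertions / ground_truth map (account_a, account_b) pairs to bool:
    predicted-linked vs. actually the same actor."""
    tp = fp = fn = tn = 0
    for pair, predicted in assertions.items():
        actual = ground_truth[pair]
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif not predicted and actual:
            fn += 1
        else:
            tn += 1
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return {
        "false_positive_rate": fpr,
        "false_negative_rate": fnr,
        "remediation_required": fpr > 0.05,  # 5% ceiling from 4.9
    }
```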

4.10. A conforming system SHOULD implement differential confidence thresholds by enforcement severity — requiring higher confidence for more severe actions (e.g., permanent ban requires higher confidence than temporary restriction).
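The differential thresholds in 4.10 can be expressed as a simple severity-to-threshold table. The action names and threshold values below are assumptions for illustration, not prescribed values:

```python
# Illustrative per-severity confidence floors, ordered least to most severe:
SEVERITY_THRESHOLDS = {
    "listing_visibility_reduction": 0.75,
    "temporary_restriction": 0.85,
    "payment_hold": 0.92,
    "permanent_ban": 0.97,
}

def permitted_actions(confidence: float) -> list[str]:
    """Actions whose documented confidence floor this linkage meets."""
    return [action for action, floor in SEVERITY_THRESHOLDS.items()
            if confidence >= floor]
```

A link at 0.90 confidence would thus permit visibility reduction and temporary restriction, but not a payment hold or permanent ban, avoiding Scenario A's one-threshold, maximum-action failure mode.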

4.11. A conforming system SHOULD conduct disparate impact analysis on linkage outcomes at least semi-annually, measuring whether linkage false-positive rates are disproportionately distributed across demographic groups, geographic regions, or device categories.

4.12. A conforming system SHOULD implement linkage decay — reducing the confidence or evidentiary weight of linkage signals that have not been reinforced by new evidence within a defined period.
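One simple decay function for 4.12 is exponential: the evidentiary weight of a signal halves after each half-life without reinforcing evidence. The 90-day half-life is an illustrative policy value, not a prescribed one:

```python
HALF_LIFE_DAYS = 90.0  # illustrative policy value

def decayed_confidence(base_confidence: float,
                       days_since_last_evidence: float) -> float:
    """Exponential linkage decay per 4.12: confidence halves every
    HALF_LIFE_DAYS unless reinforced by new evidence."""
    return base_confidence * 0.5 ** (days_since_last_evidence / HALF_LIFE_DAYS)
```

Under these assumptions, a 0.8-confidence link left unreinforced for 90 days decays to 0.4, likely dropping it below any enforcement threshold.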

4.13. A conforming system MAY implement federated linkage architectures that allow cross-platform offender detection without centralising identity graphs, using privacy-preserving techniques such as secure multi-party computation or private set intersection.

5. Rationale

Repeat-offender detection is one of the most consequential capabilities in trust and safety operations. Serial abusers — those who evade bans by creating new accounts, use multiple accounts to amplify harassment, or exploit marketplace trust systems through coordinated sock-puppet operations — cause disproportionate harm. Research from major platforms consistently shows that 1-3% of users generate 30-60% of abuse reports. Effective linkage of these repeat offenders is essential for platform safety.

However, the same linkage capabilities that identify serial abusers can, when ungoverned, produce three categories of serious harm.

False-positive harm. Linkage models operate on probabilistic signals. Device fingerprints are shared across family members, housemates, and public-access devices. IP addresses are shared across organisations, campuses, and entire regions using carrier-grade NAT. Behavioural patterns correlate across unrelated individuals with similar cultural backgrounds or professional training. When linkage models over-claim identity — asserting that two accounts belong to the same person when they do not — innocent users suffer the consequences of another user's misconduct. False-positive rates of 15-30% are commonly reported in academic evaluations of device-fingerprint-based linkage systems, and these rates are significantly higher in shared-infrastructure environments common in developing economies, student populations, and communal living situations.
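The base-rate effect behind this harm can be made concrete with Bayes' rule. The numbers below are hypothetical: even a signal that rarely fires on innocent accounts yields mostly false positives when true repeat offenders are rare.

```python
# Hypothetical base-rate arithmetic for a linkage signal:
base_rate = 0.02            # assume 2% of accounts are actual ban evaders
sensitivity = 0.95          # P(signal fires | true evader)
false_positive_rate = 0.03  # P(signal fires | innocent account)

# Bayes' rule: P(evader | flagged)
p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_evader_given_flag = sensitivity * base_rate / p_flag

print(f"P(actual evader | flagged) = {p_evader_given_flag:.1%}")
# Under these assumptions, roughly 39% — a majority of flags are innocent.
```

This is why composite confidence and tiered enforcement matter: no single correlation signal, however accurate in isolation, can carry an identity claim on its own.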

Surveillance and function creep. Linkage graphs are identity infrastructure. A graph that links accounts across services, devices, and behavioural patterns is functionally equivalent to an identity database — one that often captures more information about an individual's cross-platform behaviour than any single service would ordinarily possess. Without strict purpose limitation and access control, linkage graphs become surveillance instruments. The history of technology platforms demonstrates that identity infrastructure created for one purpose is routinely repurposed for others — from advertising targeting to law enforcement cooperation to employee monitoring.

Proportionality failure. Trust and safety operations must be proportionate to the harm they address. Linking a user to a previously banned account for scam activity justifies stronger enforcement action than linking a user to an account that received a single content warning. Without tiered enforcement that maps to linkage confidence and offence severity, linkage systems produce binary outcomes — full enforcement or no enforcement — that are disproportionate in both directions.

The governance framework established by this dimension addresses all three categories by requiring: quantified confidence with documented methodology, tiered enforcement proportionate to confidence and severity, purpose limitation and access control on linkage data, mandatory impact assessment, regular accuracy validation, and user contestation mechanisms. These requirements draw on established principles from data protection law (purpose limitation, data minimisation, automated decision-making safeguards), criminal justice (proportionality of sanction, right to contest), and information security (access control, audit logging).

6. Implementation Guidance

The linkage governance framework should be implemented as a layered system: a signal collection layer that classifies and scores individual linkage signals, a linkage engine that combines signals into composite confidence scores, an enforcement layer that maps confidence scores to proportionate actions, and an audit layer that records all linkage assertions and their outcomes.

Recommended patterns:

- Confidence-tiered enforcement: map each enforcement severity to its own minimum linkage confidence, with human review required before any permanent action below the high-confidence threshold.
- Signal-category separation: score device, network, payment, and behavioural signals independently before combining them, so that a single shared-infrastructure signal cannot dominate the composite score.
- Purpose-bound linkage partitions: restrict the linkage graph to abuse-flagged activity, partition access by documented trust-and-safety purpose, and log every query.
- Linkage decay: reduce the evidentiary weight of signals that have not been reinforced by new evidence within a defined period.

Anti-patterns to avoid:

- A single global threshold that triggers the same maximum enforcement action for every link above it (Scenario A).
- Linking all user activity rather than abuse-flagged activity, producing a de facto identity database (Scenario B).
- Calibrating thresholds on a single region or population and then applying them to shared-infrastructure environments elsewhere.
- Treating cross-border linkage as a purely technical function rather than a regulated data processing operation (Scenario C).

Industry Considerations

Marketplace platforms: Repeat-offender linkage in marketplaces directly affects livelihoods. False-positive linkage that freezes a seller's account, holds their funds, or removes their listings can cause immediate financial harm. Marketplace implementations should apply higher confidence thresholds for financial enforcement actions (fund holds, payout suspensions) than for non-financial actions (listing visibility reduction). Payment instrument linkage should be treated with particular care — family members and business partners legitimately share payment instruments.

Social and community platforms: Linkage for harassment detection must balance detection efficacy against the risk of chilling legitimate speech, particularly for users who create alternative accounts for safety reasons (e.g., domestic violence survivors, LGBTQ+ individuals in hostile environments, political dissidents). Implementations should distinguish between linkage for the purpose of detecting ban evasion and linkage for the purpose of de-anonymisation — the latter should be prohibited absent a specific safety justification.

Public-sector platforms: Municipal and government platforms are subject to constitutional and human rights constraints that commercial platforms are not. Linkage of citizen accounts across government services creates citizen-profiling infrastructure that may violate constitutional privacy protections regardless of the stated purpose. Public-sector implementations should default to the most restrictive linkage policies and require explicit legal authority for any cross-service linkage.

Maturity Model

Basic Implementation — The organisation has a documented linkage policy specifying permitted purposes, authorised signals, and minimum confidence thresholds. Linkage assertions carry quantified confidence scores. Enforcement actions are tiered by confidence level. An audit log records all linkage assertions and enforcement actions. A data protection impact assessment has been completed. Users can contest linkage-based enforcement through an existing appeals process.

Intermediate Implementation — All basic capabilities plus: signal-category separation with independent scoring. Purpose-bound linkage partitions with independent access controls. Linkage decay functions reduce confidence over time. Quarterly accuracy validation against ground-truth datasets with documented false-positive rates. Semi-annual disparate impact analysis. Dedicated contestation pathway with disclosure of signal categories. Linkage model changes are subject to formal change-control with governance review.

Advanced Implementation — All intermediate capabilities plus: federated or privacy-preserving linkage architectures for cross-platform detection. Real-time confidence recalibration based on contestation outcomes and accuracy metrics. Automated detection of linkage model drift using statistical process control. Independent third-party audit of linkage accuracy, proportionality, and disparate impact annually. Cross-jurisdictional linkage compliance automation mapping enforcement actions to local legal requirements. Linkage graphs are subject to independent data protection audit separate from the general platform audit.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Linkage Policy Completeness

Test 8.2: Confidence Scoring Functionality

Test 8.3: Tiered Enforcement Proportionality

Test 8.4: Audit Log Completeness and Immutability

Test 8.5: Data Protection Impact Assessment Currency

Test 8.6: Purpose Limitation Enforcement

Test 8.7: Behavioural Biometric Classification

Test 8.8: User Contestation Mechanism

Test 8.9: Quarterly Accuracy Validation

Conformance Scoring

9. Regulatory Mapping

| Regulation | Provision | Relationship Type |
| --- | --- | --- |
| GDPR | Article 22 (Automated Decision-Making) | Direct requirement |
| GDPR | Article 35 (Data Protection Impact Assessment) | Direct requirement |
| GDPR | Article 5(1)(b) (Purpose Limitation) | Direct requirement |
| GDPR | Article 9 (Special Categories of Data) | Supports compliance |
| EU AI Act | Article 6 / Annex III (High-Risk Classification) | Supports compliance |
| EU Digital Services Act | Article 20 (Internal Complaint-Handling) | Direct requirement |
| EU Digital Services Act | Article 40 (Data Access for Research) | Supports compliance |
| US FTC Act | Section 5 (Unfair or Deceptive Practices) | Supports compliance |
| UK Online Safety Act | Part 3 (Duties of Care) | Supports compliance |
| NIST AI RMF | MAP 5, MEASURE 2.6 | Supports compliance |
| ISO 42001 | Clause 6.1.2 (AI Risk Assessment) | Supports compliance |

GDPR — Article 22 (Automated Decision-Making)

Repeat-offender linkage that triggers enforcement actions (account bans, payment holds, content removal) constitutes automated decision-making that produces legal effects or similarly significantly affects natural persons. Where linkage is the sole or primary basis for enforcement, Article 22(1) provides individuals the right not to be subject to such decisions unless one of the Article 22(2) exceptions applies (contract necessity, legal authorisation, or explicit consent). Even where an exception applies, Article 22(3) requires the data controller to implement suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests — including at least the right to obtain human intervention, express their point of view, and contest the decision. AG-695 requirements 4.3 (human review for sub-threshold permanent actions) and 4.8 (contestation mechanism) directly implement these Article 22 safeguards.

GDPR — Article 35 (Data Protection Impact Assessment)

Article 35 requires a DPIA for processing that is likely to result in a high risk to the rights and freedoms of natural persons. Recital 91 specifically identifies profiling and large-scale processing as indicators of high risk. Repeat-offender linkage constitutes profiling (automated processing of personal data to evaluate aspects of a natural person's behaviour) and operates at scale. AG-695 requirement 4.5 directly implements the Article 35 DPIA obligation for linkage operations.

GDPR — Article 5(1)(b) (Purpose Limitation)

Personal data collected for abuse detection must not be further processed in a manner incompatible with that purpose. Linkage graphs that are accessible for purposes beyond abuse detection — such as advertising, employee monitoring, or general analytics — violate purpose limitation. AG-695 requirement 4.6 implements purpose limitation through access controls and query logging on linkage data.

EU Digital Services Act — Article 20 (Internal Complaint-Handling)

The DSA requires providers of online platforms to provide an internal complaint-handling system for decisions to restrict or remove content, suspend or terminate accounts, or suspend or terminate the provision of services. Linkage-based enforcement actions fall squarely within this scope. AG-695 requirement 4.8 ensures that the complaint-handling mechanism specifically addresses linkage-based enforcement, including disclosure of the linkage basis.

US FTC Act — Section 5 (Unfair or Deceptive Practices)

The FTC has taken enforcement action against platforms whose trust and safety practices cause substantial injury to consumers that is not reasonably avoidable and not outweighed by countervailing benefits. False-positive repeat-offender linkage that freezes funds, removes listings, or bans accounts without adequate accuracy validation or contestation mechanisms may constitute an unfair practice under Section 5. AG-695's accuracy validation, proportionate enforcement, and contestation requirements reduce FTC enforcement risk.

UK Online Safety Act — Part 3 (Duties of Care)

The Online Safety Act imposes duties on providers to operate proportionate systems and processes for dealing with reported content and user complaints. Repeat-offender linkage is a component of these systems. Disproportionate linkage — linking innocent users or applying excessive enforcement based on low-confidence linkage — may constitute a failure to operate proportionate systems.

10. Failure Severity

| Field | Value |
| --- | --- |
| Severity Rating | High |
| Blast Radius | Platform-wide — affects all users subject to repeat-offender detection, enforcement consistency, and appeal outcomes |

Consequence chain: Failure to govern repeat-offender linkage produces a bifurcated harm pattern.

On the over-linkage path: insufficiently governed linkage models produce false-positive rates that climb undetected, linking innocent users to banned accounts. Without confidence-tiered enforcement, these false positives trigger the same maximum enforcement actions as true positives. Without contestation mechanisms, affected users have no recourse. The downstream consequence is mass incorrect enforcement — accounts banned, funds frozen, listings removed — affecting users who have committed no violation. Financial harm is immediate (frozen funds, lost sales); reputational harm to the platform follows as affected users report their experiences publicly; regulatory harm materialises as consumer protection agencies (FTC, national DPAs, DSA enforcement bodies) investigate.

On the under-governance path: linkage graphs built without purpose limitation, access control, or retention schedules become surveillance infrastructure. Function creep enables the linkage graph to be used for purposes far beyond abuse detection — identifying anonymous users, profiling behaviour, enabling retaliation. The downstream consequence is fundamental rights violations: chilling effects on speech, exposure of vulnerable individuals, and erosion of trust in digital platforms. In public-sector contexts, ungoverned linkage graphs may constitute unconstitutional surveillance, triggering civil rights litigation and court-ordered decommissioning.

In both paths, the root cause is the same: the absence of a governance framework that ensures linkage is accurate, proportionate, purpose-limited, and contestable.

Cross-references: AG-001 (Operational Boundary Enforcement) defines the operational boundaries within which linkage operations must operate. AG-019 (Human Escalation & Override Triggers) governs the escalation requirements for linkage assertions that require human review. AG-022 (Behavioural Drift Detection) detects drift in linkage model accuracy over time. AG-029 (Data Classification Enforcement) requires linkage data to be classified according to its sensitivity. AG-033 (Consent Lifecycle Governance) governs any consent-based legal basis for linkage processing. AG-037 (Anonymisation & Pseudonymisation Governance) governs the pseudonymisation of identifiers in linkage audit logs. AG-040 (Sensitive Category Data Processing Governance) applies to behavioural biometric signals used in linkage. AG-043 (Access Control & Credential Governance) underpins the purpose-limited access controls on linkage graphs. AG-055 (Audit Trail Immutability & Completeness) governs the immutability of linkage audit logs. AG-210 (Multi-Jurisdictional Regulatory Mapping) enables cross-border linkage operations to comply with local legal requirements in each jurisdiction.

Cite this protocol
AgentGoverning. (2026). AG-695: Repeat-Offender Linkage Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-695