AG-697

Cross-Platform Threat Intelligence Governance

Community Platforms · Trust & Safety | AGS v2.1 · April 2026
Regulatory mappings: EU AI Act · GDPR · NIST AI RMF · ISO 42001

2. Summary

Cross-Platform Threat Intelligence Governance requires that AI agent systems which share or consume safety signals — such as abuse indicators, known-bad content hashes, threat-actor behavioural fingerprints, and coordinated inauthentic behaviour patterns — do so under formally governed safeguards that protect against data leakage, misattribution, re-identification of victims, competitive intelligence extraction, and jurisdictional non-compliance. Safety signal sharing is a critical force multiplier for trust and safety operations because threat actors operate across platforms. Ungoverned sharing, however, creates severe risks: privacy violations when personal data is embedded in threat signals, wrongful enforcement when signals are consumed without context, and regulatory exposure when signals cross borders without appropriate data transfer mechanisms. This dimension therefore mandates end-to-end governance of the signal lifecycle — from classification and anonymisation through transmission, ingestion, validation, and downstream enforcement action — so that the safety benefits of intelligence sharing are realised without creating new categories of harm.

3. Example

Scenario A — Unredacted Threat Signals Expose Victim Identity Across Platforms: A large social media platform operates an AI agent that detects child sexual abuse material (CSAM) and generates perceptual hashes for distribution to an industry threat-sharing consortium. The agent is configured to include contextual metadata alongside hashes to improve detection accuracy on recipient platforms — upload timestamps, associated account creation dates, geographic IP clusters, and content description tags. Over a 4-month period, the agent shares 23,400 hash records with full contextual metadata. A security researcher at a recipient platform discovers that 1,847 records contain sufficient metadata to re-identify the uploading account and, through correlation with publicly available data, to identify the associated real-world individual. In 312 of these cases, the uploading account belonged to a victim or mandatory reporter, not a perpetrator. The metadata — originally intended to improve detection — has been distributed to 14 consortium members across 9 jurisdictions, with no mechanism to recall or redact the data. Regulatory authorities in two EU member states open investigations under GDPR Article 32 for failure to implement appropriate technical and organisational measures to prevent personal data leakage through threat-sharing channels.

What went wrong: The AI agent generated threat signals without a data classification step that would have identified personal data embedded in contextual metadata. No anonymisation or pseudonymisation layer existed between the detection system and the sharing channel. The sharing protocol did not define minimum and maximum metadata fields, leaving the agent to include all available context. No recipient-side validation checked incoming signals for re-identification risk. Consequence: 312 victim identities exposed across 14 organisations in 9 jurisdictions, GDPR investigations in 2 member states, consortium trust collapse requiring 8-month renegotiation of sharing agreements, £1.2 million in legal and remediation costs, and reputational damage that reduced voluntary participation in the consortium by 40%.

Scenario B — Consumed Threat Signals Applied Without Contextual Validation Cause Mass False Enforcement: An e-commerce marketplace operates an AI agent that consumes threat signals from a cross-platform fraud intelligence network. The network distributes behavioural fingerprints — device configurations, transaction velocity patterns, and network characteristics associated with fraud rings. The marketplace agent ingests these signals and automatically restricts accounts matching the fingerprints. Over a 6-week period, the agent restricts 4,200 seller accounts. Sellers receive notices stating their accounts are "under review for policy violations" with no further detail. After 3 weeks, a pattern emerges: 2,870 of the restricted accounts (68%) are legitimate sellers whose device and network characteristics matched the fraud fingerprints because they operate from shared coworking spaces, use common mobile carriers with carrier-grade NAT, or operate in geographic regions with limited ISP diversity. The marketplace loses an estimated £3.8 million in transaction volume during the restriction period. 410 sellers permanently migrate to competing platforms. A class-action lawsuit is filed by 1,200 affected sellers alleging wrongful restriction of trade.

What went wrong: The marketplace agent consumed external threat signals and applied them as enforcement inputs without contextual validation — the signals were designed to indicate risk, not to serve as deterministic enforcement triggers. No confidence scoring or false-positive tolerance was applied to consumed signals. The agent treated all matching signals identically regardless of the signal source's known accuracy rate. No human review threshold existed for bulk enforcement actions triggered by external signals. The marketplace had no feedback mechanism to report false positives back to the signal source. Consequence: 2,870 legitimate sellers wrongfully restricted, £3.8 million in lost transaction volume, class-action litigation, and permanent loss of 410 sellers to competitors.

Scenario C — Cross-Border Signal Sharing Violates Data Transfer Restrictions: A messaging platform headquartered in the EU operates an AI agent that detects coordinated inauthentic behaviour (CIB) — networks of accounts engaging in coordinated manipulation campaigns. When the agent identifies a CIB network, it generates threat intelligence packages containing account behavioural graphs, message timing patterns, content similarity scores, and network topology maps. These packages are shared in near-real-time with partner platforms to enable rapid detection of the same campaign on other services. One partner platform is headquartered in a country without an EU adequacy decision. Over 11 months, the EU platform shares 890 threat intelligence packages containing behavioural data derived from EU data subjects. A regulatory audit reveals that 340 of these packages contain behavioural graph data that qualifies as personal data under GDPR — the graph structure, combined with timing and content data, allows identification of specific accounts and, in some cases, the natural persons behind them. No Standard Contractual Clauses, Binding Corporate Rules, or other GDPR Chapter V transfer mechanism covers the sharing arrangement. The Data Protection Authority issues an enforcement notice requiring immediate cessation of sharing and initiates proceedings that result in a €2.6 million fine under GDPR Article 83.

What went wrong: The platform classified threat intelligence as "operational security data" rather than "personal data," bypassing the data transfer governance framework. No data protection impact assessment was conducted for the sharing arrangement. The AI agent's output was not assessed for personal data content before transmission. No cross-border transfer mechanism was established because the data was not recognised as falling within GDPR scope. The partner platform received and processed the data for 11 months without either party identifying the compliance gap. Consequence: €2.6 million GDPR fine, mandatory cessation of a critical threat intelligence channel, 11 months of non-compliant data transfers requiring retrospective assessment, and a 6-month delay in CIB detection capability while compliant sharing mechanisms were established.

4. Requirement Statement

Scope: This dimension applies to any AI agent system that shares safety signals, threat indicators, abuse intelligence, or enforcement metadata with external platforms, industry consortia, or government entities, or that consumes such signals from external sources and uses them to inform enforcement, restriction, or risk-scoring decisions. The scope includes all forms of threat intelligence exchange — hash-based content identification signals, behavioural fingerprints, account reputation scores, network topology indicators, coordinated activity patterns, and enforcement action metadata. The scope extends to both automated signal exchange (API-to-API sharing) and semi-automated exchange (agent-generated reports shared through structured platforms). Organisations that participate in industry sharing consortia (such as content hash-sharing databases, fraud intelligence networks, or coordinated abuse reporting platforms) are within scope regardless of whether they are signal producers, signal consumers, or both. The scope also includes internal signal sharing between separately governed business units or subsidiaries operating in different jurisdictions where cross-border data transfer obligations apply.

4.1. A conforming system MUST classify all outbound threat intelligence signals according to a defined data classification schema before sharing, determining whether each signal contains personal data, pseudonymous data, or data that could enable re-identification when combined with data available to the recipient.

4.2. A conforming system MUST apply anonymisation, pseudonymisation, or redaction to outbound threat intelligence signals to remove or protect any personal data or re-identifiable data, with the specific technique determined by the data classification outcome, and must document the technique applied and its assessed re-identification risk.
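To make 4.2 concrete, the following minimal sketch shows one way an anonymisation layer might pseudonymise identifier fields with keyed hashing before transmission. The field names, key handling, and signal shape are illustrative assumptions, not prescribed by this protocol; a production system would draw its key from a managed KMS and select techniques according to the classification outcome under 4.1.

```python
import hmac
import hashlib

# Hypothetical per-channel key; in practice this would come from a managed
# KMS and be rotated per sharing agreement.
CHANNEL_KEY = b"example-only-key-material"

def pseudonymise(value: str, key: bytes = CHANNEL_KEY) -> str:
    # Keyed HMAC rather than a bare hash, so recipients without the key
    # cannot confirm guesses by hashing candidate identifiers themselves.
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

def redact_signal(signal: dict, identifier_fields: set) -> dict:
    # Returns a copy with identifier fields replaced by stable tokens, so the
    # same account yields the same token across signals without being named.
    return {k: (pseudonymise(v) if k in identifier_fields else v)
            for k, v in signal.items()}

# Example: the account identifier is tokenised; the perceptual hash passes through.
outbound = redact_signal(
    {"content_hash": "f3a91c07", "account_id": "user-8841", "confidence": 0.97},
    identifier_fields={"account_id"},
)
```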

4.3. A conforming system MUST define and enforce a signal schema for each sharing channel that specifies the minimum required fields, maximum permitted fields, data types, and retention constraints for shared signals, preventing agents from including ad hoc contextual metadata that has not been classified and approved.
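A schema gate of the kind 4.3 requires can be small. The sketch below, with assumed field names and an assumed hash-sharing channel, rejects signals that are missing required fields or that carry unapproved ad hoc metadata (the failure in Scenario A).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChannelSchema:
    required: frozenset   # minimum fields every signal must carry
    permitted: frozenset  # maximum field set; anything else is rejected

# Hypothetical schema for a hash-sharing channel: note that free-form context
# (timestamps, geo clusters, description tags) is simply not permitted.
HASH_CHANNEL = ChannelSchema(
    required=frozenset({"content_hash", "abuse_category", "classification"}),
    permitted=frozenset({"content_hash", "abuse_category", "classification",
                         "confidence", "signal_version"}),
)

def enforce_schema(signal: dict, schema: ChannelSchema) -> dict:
    missing = schema.required - signal.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    extra = signal.keys() - schema.permitted
    if extra:
        # Reject rather than silently drop, so ad hoc metadata surfaces to the
        # classification process instead of leaking through the channel.
        raise ValueError(f"unapproved fields: {sorted(extra)}")
    return signal
```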

4.4. A conforming system MUST validate all inbound threat intelligence signals before using them to inform enforcement, restriction, or risk-scoring decisions, including verification of signal source authenticity, assessment of the source's historical accuracy rate, application of confidence scoring, and determination of whether the signal alone is sufficient for automated action or requires human review.
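One possible shape for inbound validation is sketched below: a signal's self-reported confidence is discounted by the source's measured precision, and low-scoring signals are routed to human review. The source profile fields and the 0.95 threshold are placeholders, not values this protocol prescribes.

```python
from dataclasses import dataclass

@dataclass
class SourceProfile:
    """Governance metadata held about each signal source (assumed fields)."""
    verified: bool               # source authenticity established, e.g. via mTLS identity
    historical_precision: float  # share of past signals confirmed true, in [0, 1]

def effective_confidence(signal_confidence: float, source: SourceProfile) -> float:
    """Discount a signal's self-reported confidence by the source's track record."""
    if not source.verified:
        raise PermissionError("unverified source: signal must not inform enforcement")
    return signal_confidence * source.historical_precision

def route(signal_confidence: float, source: SourceProfile,
          auto_threshold: float = 0.95) -> str:
    """Decide whether a consumed signal may act alone or needs human review.

    The 0.95 cut-off is illustrative; per 4.4 each platform sets its own.
    """
    score = effective_confidence(signal_confidence, source)
    return "automated_action" if score >= auto_threshold else "human_review"
```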

4.5. A conforming system MUST establish and enforce human review thresholds for enforcement actions triggered by consumed threat intelligence signals, requiring human review when: the enforcement action affects more than a defined number of accounts within a defined time window, the signal source's historical false-positive rate exceeds a defined threshold, or the signal type has not been previously validated against the platform's own detection systems.
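The volume trigger in 4.5 amounts to a sliding-window counter in front of the enforcement pipeline. A minimal sketch follows; the limit of 100 actions per hour is an arbitrary illustration, since each platform must define its own thresholds.

```python
import time
from collections import deque

class BulkActionGate:
    """Force human review once external-signal enforcement volume exceeds a
    limit within a sliding window (one of the three triggers in 4.5)."""

    def __init__(self, max_actions: int = 100, window_seconds: int = 3600):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self._timestamps: deque = deque()

    def requires_human_review(self, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        # Drop action timestamps that have aged out of the window.
        while self._timestamps and now - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()
        self._timestamps.append(now)
        return len(self._timestamps) > self.max_actions
```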

4.6. A conforming system MUST ensure that all cross-border threat intelligence transfers comply with applicable data transfer regulations, including establishing appropriate transfer mechanisms (Standard Contractual Clauses, adequacy decisions, Binding Corporate Rules, or other lawful basis) before initiating transfers that include personal data or data classified as potentially re-identifiable.
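At transmission time this becomes a simple gate: look up whether a lawful transfer mechanism exists for the origin-destination pair, and block otherwise. The registry below is an illustrative stub; a real deployment would source it from the organisation's transfer-governance system (see AG-030).

```python
# Jurisdiction codes and mechanism names here are assumptions for illustration.
TRANSFER_MECHANISMS = {
    ("EU", "US"): "SCC",       # Standard Contractual Clauses in place
    ("EU", "JP"): "adequacy",  # adequacy decision
    # an absent pair means no lawful mechanism has been established
}

def check_transfer(signal_classification: str, origin: str, destination: str) -> None:
    """Block cross-border transfers of (potentially) personal data without a mechanism."""
    if origin == destination:
        return
    if signal_classification not in {"personal", "potentially_reidentifiable"}:
        return  # anonymous data: transfer rules such as GDPR Chapter V do not apply
    if (origin, destination) not in TRANSFER_MECHANISMS:
        raise PermissionError(
            f"no transfer mechanism for {origin}->{destination}; transfer blocked"
        )
```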

4.7. A conforming system MUST maintain a complete, immutable audit trail of all threat intelligence signals shared and consumed, including the signal content (or a non-reversible reference), the sharing channel, the sender and recipient identities, timestamps, the data classification applied, the anonymisation technique used, and any enforcement actions taken based on consumed signals.
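Immutability can be approximated at the application layer with a hash-chained, append-only log, as in this sketch: each record commits to its predecessor, so retrospective edits are detectable. Production systems would typically layer this on WORM storage or an external anchoring service.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained log of shared and consumed signals (4.7)."""

    def __init__(self):
        self._records: list[dict] = []
        self._head = "0" * 64  # genesis hash

    def append(self, event: dict) -> str:
        record = {"timestamp": time.time(), "event": event, "prev_hash": self._head}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._records.append(record)
        self._head = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampered record breaks verification."""
        prev = "0" * 64
        for rec in self._records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```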

4.8. A conforming system MUST implement a signal recall or retraction mechanism enabling the originating platform to notify recipients when a previously shared signal is determined to be erroneous, and recipients must process recall notices within a defined SLA and reverse any enforcement actions taken solely on the basis of the recalled signal.
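Recipient-side recall handling might look like the sketch below, which reverses only those actions whose sole trigger was the recalled signal and reports whether the recall SLA was met. The record shapes and the 24-hour SLA are assumptions for illustration.

```python
import time

RECALL_SLA_SECONDS = 24 * 3600  # placeholder; the SLA is defined per agreement

def process_recall(recall: dict, enforcement_log: list) -> dict:
    """Reverse enforcement actions taken solely on the recalled signal (4.8)."""
    received_at = time.time()
    reversed_ids = []
    for action in enforcement_log:
        # Actions with independent corroborating signals are kept; only actions
        # based solely on the recalled signal are reversed automatically.
        if action["triggering_signals"] == {recall["signal_id"]}:
            action["status"] = "reversed"
            reversed_ids.append(action["action_id"])
    return {
        "recall_id": recall["recall_id"],
        "reversed_actions": reversed_ids,
        "sla_met": received_at - recall["issued_at"] <= RECALL_SLA_SECONDS,
    }

log = [{"action_id": "a1", "triggering_signals": {"sig-42"}, "status": "active"},
       {"action_id": "a2", "triggering_signals": {"sig-42", "sig-77"}, "status": "active"}]
result = process_recall(
    {"recall_id": "r1", "signal_id": "sig-42", "issued_at": time.time() - 60}, log)
# Only a1 is reversed: a2 had independent corroboration from sig-77.
```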

4.9. A conforming system SHOULD implement a feedback loop enabling signal consumers to report false positives, false negatives, and contextual mismatches back to signal producers, with aggregated feedback used to improve signal quality and calibrate confidence scores.

4.10. A conforming system SHOULD conduct periodic joint validation exercises with sharing partners — testing a sample of shared signals against ground-truth outcomes to measure signal accuracy, false-positive rates, and downstream enforcement accuracy across the sharing ecosystem.

4.11. A conforming system SHOULD implement differential access tiers for shared threat intelligence, granting recipients access to signal detail levels commensurate with their data protection maturity, jurisdictional alignment, and demonstrated need-to-know.

4.12. A conforming system MAY implement cryptographic signal-sharing protocols (such as private set intersection or secure multi-party computation) that enable platforms to compare threat indicators without either party revealing their full signal set to the other.
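As a flavour of such protocols, the toy sketch below follows the classic Diffie-Hellman-style private set intersection: each party blinds its hashed indicators with a private exponent, the other party re-blinds them, and only doubly blinded values are compared. This is strictly illustrative; the toy modulus, naive hash-to-group step, and absence of authentication make it unsuitable for production, where a vetted PSI library over elliptic-curve groups would be used.

```python
import hashlib
import secrets

P = 2**127 - 1  # toy prime modulus; real deployments use vetted EC groups

def h(element: str) -> int:
    """Hash an indicator (e.g. a content hash) to a group element."""
    return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

def blind(elements: set, secret: int) -> set:
    return {pow(h(e), secret, P) for e in elements}

def reblind(blinded: set, secret: int) -> set:
    return {pow(v, secret, P) for v in blinded}

# Each platform holds its own indicator set and a private exponent.
a_set, b_set = {"hash1", "hash2", "hash3"}, {"hash2", "hash3", "hash4"}
a_key = secrets.randbelow(P - 2) + 1
b_key = secrets.randbelow(P - 2) + 1

a_blinded = blind(a_set, a_key)       # A -> B: B sees only blinded values
a_double = reblind(a_blinded, b_key)  # B -> A
b_blinded = blind(b_set, b_key)       # B -> A
b_double = reblind(b_blinded, a_key)  # computed by A

# Both double-blindings equal H(x)^(a*b), so comparing them reveals only the
# overlap (here, its size), never either party's full indicator set.
print(len(a_double & b_double))  # -> 2
```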

4.13. A conforming system MAY participate in standardised threat intelligence exchange formats (such as STIX/TAXII adapted for trust and safety contexts) to improve interoperability and reduce the risk of signal misinterpretation across platforms.

5. Rationale

Cross-platform threat intelligence sharing is one of the most powerful tools available for trust and safety operations. Threat actors — whether producing child exploitation material, conducting fraud operations, running coordinated manipulation campaigns, or trafficking in counterfeit goods — do not confine their activities to a single platform. A fraud ring detected on one marketplace is likely operating on others. A CSAM distribution network disrupted on one hosting service will migrate to alternatives. A coordinated inauthentic behaviour campaign targeting one social platform will simultaneously target others. Without cross-platform intelligence sharing, each platform fights these threats in isolation, detecting and responding to the same actors independently, often after significant harm has already occurred on each platform.

However, the very properties that make threat intelligence valuable — specificity, context, and timeliness — also make it dangerous when shared without governance. Three categories of risk demand formal governance.

First, privacy and re-identification risk. Threat intelligence signals are derived from user behaviour, content, and account characteristics. Even when signals are intended to be abstract (hashes, behavioural fingerprints, network patterns), they frequently contain enough contextual information to re-identify individuals, including victims. The re-identification risk compounds with each recipient, because each recipient holds different auxiliary data that can be combined with the shared signal. A signal that is anonymous in isolation may become identifiable when combined with recipient-side data. Governing the classification, anonymisation, and schema of shared signals is a necessary precondition for preventing this compounding re-identification risk.

Second, enforcement accuracy risk. Consuming external threat signals and applying them to enforcement decisions introduces error propagation across platforms. A false-positive signal from one platform — flagging a legitimate user as a threat actor — becomes a false enforcement action on every platform that consumes the signal without independent validation. The blast radius of a single erroneous signal is multiplied by the number of consuming platforms. This creates a systemic risk where cross-platform sharing, intended to improve safety, instead propagates errors at scale. Governance must require validation, confidence scoring, and human review thresholds that prevent unconditional trust in external signals.

Third, regulatory and jurisdictional risk. Threat intelligence crosses borders. A signal generated from user data in the EU, shared with a platform in the US, and consumed by a platform in Southeast Asia traverses multiple regulatory regimes with different data protection, law enforcement, and content regulation requirements. Without governance of cross-border transfers, sharing arrangements may violate data protection regulations (GDPR, LGPD, PIPA), exceed the scope of law enforcement cooperation frameworks, or create liability under local content regulation laws. Competitive sensitivity adds further complexity — sharing threat intelligence between commercial competitors requires safeguards against the extraction of commercial intelligence from safety signals.

The threat model for this dimension encompasses both external adversaries and internal governance failures. External adversaries may attempt to poison the sharing ecosystem by injecting false signals, weaponising the signal recall mechanism to disrupt enforcement, or exploiting the sharing channel itself as an intelligence-gathering vector. Internal governance failures include classification errors that expose personal data, validation gaps that propagate false positives, and transfer compliance gaps that create regulatory exposure. Both categories require preventive controls implemented before signals are shared or consumed, which is why this dimension is classified as preventive rather than detective.

6. Implementation Guidance

Cross-Platform Threat Intelligence Governance requires a layered architecture that governs the full signal lifecycle: classification of outbound signals, enforcement of signal schemas, anonymisation before transmission, validation of inbound signals, confidence scoring before enforcement, human review for high-impact actions, audit trail capture, and feedback mechanisms. The architecture must be designed for a multi-party ecosystem where the organisation controls only its own sharing behaviour and must establish trust relationships with partners who may have different governance maturities.
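A minimal, runnable skeleton of the outbound half of this pipeline is sketched below. Every stage is a stub standing in for the fuller mechanisms described under Section 4, and all names and field choices are illustrative assumptions rather than a prescribed API.

```python
def classify(signal: dict) -> str:                         # Requirement 4.1
    id_fields = {"account_id", "ip_cluster", "upload_time"}
    return "personal" if id_fields & signal.keys() else "non_personal"

def anonymise(signal: dict, classification: str) -> dict:  # Requirement 4.2
    if classification != "personal":
        return dict(signal)
    # Stub: drop identifying fields; a real layer would pseudonymise per policy.
    return {k: v for k, v in signal.items()
            if k not in {"account_id", "ip_cluster", "upload_time"}}

def enforce_schema(signal: dict, permitted: set) -> dict:  # Requirement 4.3
    extra = signal.keys() - permitted
    if extra:
        raise ValueError(f"unapproved fields: {sorted(extra)}")
    return signal

def share(raw: dict, permitted: set, audit: list) -> dict:
    classification = classify(raw)
    safe = enforce_schema(anonymise(raw, classification), permitted)
    audit.append({"classification": classification,        # Requirement 4.7
                  "fields": sorted(safe)})
    return safe  # handed to the transmission layer

audit_log: list = []
out = share({"content_hash": "abc123", "account_id": "u-9"},
            permitted={"content_hash", "abuse_category"}, audit=audit_log)
```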

Recommended patterns:

- Signal classification gateway: route every outbound signal through a classification step that tags it as personal, pseudonymous, or non-personal before it can reach any sharing channel (Requirements 4.1-4.2).
- Schema-first sharing: define and approve the per-channel schema before the channel goes live, and reject, rather than silently truncate, signals that carry unapproved fields (Requirement 4.3).
- Source-weighted consumption: discount each inbound signal's confidence by the source's measured historical accuracy, and route low-scoring signals to human review (Requirements 4.4-4.5).
- Recall-ready enforcement: record which signals triggered each enforcement action, so that a recalled signal can be traced to, and reversed from, the actions it caused (Requirement 4.8).
- Transfer-aware routing: verify the cross-border transfer mechanism for each recipient jurisdiction at transmission time, not only at partner onboarding (Requirement 4.6).

Anti-patterns to avoid:

- Treating inbound signals as deterministic enforcement triggers rather than risk indicators (the failure in Scenario B).
- Sharing all available contextual metadata "to improve detection" without classification or minimisation (the failure in Scenario A).
- Classifying threat intelligence as "operational security data" to place it outside the data protection and transfer governance frameworks (the failure in Scenario C).
- Operating sharing channels with no recall mechanism, so that erroneous or over-shared signals cannot be withdrawn.
- Relying on partner platforms to catch compliance gaps that the originating platform failed to identify.

Industry Considerations

Social Media and Content Platforms. Platforms sharing content hashes (CSAM, terrorist content, non-consensual intimate imagery) must comply with specific regulatory frameworks including the EU CSAM Regulation proposals, the Christchurch Call, and national mandatory reporting obligations. Hash-sharing databases such as those operated by NCMEC, the IWF, and GIFCT have established protocols that should serve as baseline references. However, these protocols focus on content identification and may not address the broader signal types (behavioural fingerprints, network topology, coordination patterns) that AI agents generate.

E-Commerce and Marketplaces. Fraud intelligence sharing between competing marketplaces raises antitrust concerns in addition to data protection requirements. Sharing arrangements must be structured to prevent the extraction of competitive intelligence (pricing data, seller performance metrics, customer behaviour patterns) from safety signals. Antitrust-compliant sharing typically requires a neutral intermediary, signal schemas limited to safety-relevant fields, and prohibitions on sharing data that could inform competitive strategy.

Financial Services. Financial institutions sharing threat intelligence through networks such as FS-ISAC must comply with banking secrecy laws, Suspicious Activity Report confidentiality requirements, and sector-specific data protection regulations. AI agents generating threat signals from transaction data must ensure that shared signals do not reveal protected financial information or enable the reconstruction of individual transaction histories.

Public Sector. Government agencies consuming threat intelligence from private sector platforms face additional constraints including freedom of information obligations, due process requirements for enforcement actions based on external intelligence, and restrictions on government surveillance that may be circumvented through private-sector intelligence sharing arrangements.

Maturity Model

Basic Implementation — The organisation has implemented a signal classification pipeline for outbound threat intelligence with documented data classification criteria. Signal schemas are defined and enforced for each sharing channel. Inbound signals are validated for source authenticity before use. Cross-border transfers are assessed for data transfer compliance. A complete audit trail of shared and consumed signals is maintained. Human review is required for enforcement actions triggered by external signals that affect more than a defined threshold of accounts. All mandatory requirements (4.1 through 4.8) are satisfied.

Intermediate Implementation — All basic capabilities plus: confidence tiers are assigned to signal sources and signal types, with differentiated enforcement policies per tier. Feedback loops report false positives and false negatives to signal producers. Signal recall is tested periodically and recall SLAs are measured. Anonymisation techniques are assessed for re-identification risk using formal methodologies (k-anonymity, l-diversity, or differential privacy analysis). Joint validation exercises with key partners are conducted annually. Cross-border transfer mechanisms are integrated with the sharing pipeline and automatically verified before each transfer.
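Of the formal methodologies named above, k-anonymity is the simplest to operationalise: compute the smallest equivalence class over the quasi-identifier fields in a candidate batch and compare it against the release policy. A minimal sketch, with assumed field names, follows.

```python
from collections import Counter

def k_anonymity(records: list, quasi_identifiers: list) -> int:
    """Smallest equivalence-class size over the quasi-identifier columns.

    A batch satisfies k-anonymity for the returned k: every record is
    indistinguishable from at least k-1 others on those fields.
    """
    classes = Counter(
        tuple(r.get(q) for q in quasi_identifiers) for r in records
    )
    return min(classes.values()) if classes else 0

batch = [
    {"region": "EU-W", "device_class": "mobile", "hash": "h1"},
    {"region": "EU-W", "device_class": "mobile", "hash": "h2"},
    {"region": "EU-W", "device_class": "desktop", "hash": "h3"},
]
# k = 1 here: the desktop record is unique on the quasi-identifiers, so this
# batch would fail a hypothetical k >= 2 release policy and need generalisation.
print(k_anonymity(batch, ["region", "device_class"]))
```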

Advanced Implementation — All intermediate capabilities plus: cryptographic signal-sharing protocols (private set intersection, secure multi-party computation) are deployed for sensitive signal types, enabling comparison without full signal disclosure. Signal quality metrics are published to sharing partners with transparency reports. Differential access tiers grant signal detail levels based on recipient governance maturity. The signal-sharing architecture is independently audited annually, covering classification accuracy, anonymisation effectiveness, validation rigour, and cross-border compliance. The organisation contributes to standardised threat intelligence exchange format development and participates in ecosystem-level governance initiatives.

7. Evidence Requirements

Required artefacts:

- Data classification records for all outbound signals, including the classification schema version applied (4.1).
- Documentation of the anonymisation or pseudonymisation technique applied per channel and its assessed re-identification risk (4.2).
- Signal schema definitions per sharing channel, with change history (4.3).
- Inbound validation and confidence-scoring records, including source accuracy assessments (4.4).
- Human review decision logs for threshold-triggered enforcement actions (4.5).
- Cross-border transfer mechanism documentation per recipient jurisdiction (4.6).
- The immutable audit trail of shared and consumed signals (4.7).
- Recall notices issued and received, with processing timestamps and reversal records (4.8).

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Outbound Signal Classification Completeness

Test 8.2: Anonymisation Effectiveness Verification

Test 8.3: Signal Schema Enforcement

Test 8.4: Inbound Signal Validation and Confidence Scoring

Test 8.5: Human Review Threshold Enforcement

Test 8.6: Cross-Border Transfer Compliance Verification

Test 8.7: Audit Trail Completeness and Immutability

Test 8.8: Signal Recall Processing

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
GDPR | Article 5(1)(c) (Data Minimisation) | Direct requirement
GDPR | Articles 44-49 (International Transfers) | Direct requirement
GDPR | Article 25 (Data Protection by Design) | Supports compliance
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU Digital Services Act | Article 45 (Codes of Conduct for Systemic Risks) | Supports compliance
NIST AI RMF | GOVERN 1.7 (Third-Party Risk Management) | Supports compliance
ISO 42001 | Clause 8.4 (Externally Provided Processes) | Supports compliance
DORA | Article 19 (ICT-Related Incident Reporting) | Supports compliance
UK Online Safety Act | Section 85 (Information Sharing) | Supports compliance

GDPR — Article 5(1)(c) (Data Minimisation) and Articles 44-49 (International Transfers)

The data minimisation principle directly governs the content of shared threat intelligence signals. Signals must contain only the data necessary for the safety purpose — not all available contextual data that the detection system can produce. AG-697's signal schema enforcement (Requirement 4.3) is the technical mechanism for implementing data minimisation in the sharing context. Articles 44-49 govern the cross-border transfer of personal data, which applies to threat intelligence signals when they contain personal data or data that enables re-identification. The cross-border transfer compliance requirement (4.6) directly implements Articles 44-49 by ensuring that no personal data crosses borders without an appropriate transfer mechanism. The compounding risk in threat intelligence sharing — where individually non-personal data points combine to create personal data at the recipient — requires a conservative approach to classification that accounts for the recipient's auxiliary data.

EU Digital Services Act — Article 45 (Codes of Conduct for Systemic Risks)

The DSA encourages platforms to adopt codes of conduct that address systemic risks, including through cooperation and information sharing. Cross-platform threat intelligence sharing is a form of such cooperation. However, the DSA also imposes due diligence obligations that require platforms to implement sharing with appropriate safeguards. AG-697 provides the governance framework that enables platforms to participate in DSA-aligned sharing arrangements while maintaining compliance with data protection and due process obligations. The DSA's emphasis on transparency supports the publication of signal quality metrics and sharing transparency reports at the advanced maturity level.

NIST AI RMF — GOVERN 1.7 (Third-Party Risk Management)

GOVERN 1.7 addresses the management of risks introduced through third-party AI components and data. Cross-platform threat intelligence consumed by an AI agent is a form of third-party data input that introduces third-party risk — the risk that the external signal is inaccurate, biased, or non-compliant. AG-697's inbound signal validation requirements (4.4, 4.5) implement third-party risk management for this specific data category, requiring validation, confidence scoring, and human review thresholds that prevent blind trust in external inputs.

UK Online Safety Act — Section 85 (Information Sharing)

The Online Safety Act creates frameworks for information sharing between platforms and between platforms and regulators for online safety purposes. Section 85 and related provisions enable and in some cases require sharing of safety-relevant information. AG-697 provides the governance framework that ensures such sharing complies with data protection requirements (UK GDPR), maintains proportionality, and includes safeguards against misuse of shared intelligence. The Act's recognition that information sharing is essential for online safety validates the need for governance rather than prohibition of cross-platform intelligence sharing.

DORA — Article 19 (ICT-Related Incident Reporting)

For financial institutions, DORA's incident reporting and threat intelligence sharing requirements intersect with AG-697 when AI agents are involved in detecting and sharing financial threat intelligence. DORA establishes frameworks for ICT-related threat intelligence sharing among financial entities, which must be conducted with appropriate governance. AG-697's requirements for classification, anonymisation, and cross-border compliance apply in addition to DORA-specific requirements for financial threat intelligence.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Cross-organisational — ungoverned signal sharing propagates errors and compliance failures across every platform in the sharing ecosystem

Consequence chain: Without cross-platform threat intelligence governance, the organisation faces three compounding failure cascades.

The first cascade is privacy failure: unclassified signals containing personal data are shared with multiple external parties, each of whom may further share or combine the data, creating an expanding and irrecoverable exposure. The privacy harm is multiplied by the number of recipients and cannot be remediated through recall once re-identification has occurred.

The second cascade is enforcement accuracy failure: consumed signals without validation or confidence scoring trigger enforcement actions against legitimate users at scale. Because the signals originate externally, the organisation's own detection accuracy metrics do not capture the error — the false positive appears to be a correct enforcement action because the signal said so. The resulting wrongful enforcement erodes user trust, generates legal liability, and may not be discovered until affected users challenge the actions or an audit reveals the gap.

The third cascade is regulatory failure: cross-border signal transfers without appropriate transfer mechanisms violate data protection regulations in every jurisdiction where the transferred data constitutes personal data. Because threat intelligence sharing is typically ongoing and automated, the volume of non-compliant transfers accumulates rapidly — 11 months of daily sharing can generate thousands of non-compliant transfers before the gap is identified. Regulatory fines under GDPR can reach 4% of global annual turnover.

The compounding effect is that all three cascades may occur simultaneously: a single ungoverned sharing channel can expose personal data, propagate enforcement errors, and violate transfer regulations at once, with the total organisational impact exceeding the sum of the individual failure modes, because remediation requires suspending the sharing channel entirely and degrading safety capabilities during the remediation period.

Cross-references:

- AG-001 (Operational Boundary Enforcement) constrains the scope within which an agent may share or consume external data, including threat intelligence.
- AG-005 (Instruction Integrity Verification) ensures that sharing instructions and signal schemas have not been tampered with.
- AG-007 (Governance Configuration Control) governs the configuration of sharing channels and validation rules as governance artefacts.
- AG-029 (Data Classification Enforcement) provides the classification framework that AG-697 applies to threat intelligence signals before sharing.
- AG-030 (Cross-Border Data Transfer Governance) provides the transfer compliance framework referenced by Requirement 4.6.
- AG-033 (Consent Lifecycle Governance) addresses consent considerations when user data is processed for threat intelligence generation.
- AG-037 (Anonymisation & Pseudonymisation Governance) governs the anonymisation techniques applied to outbound signals under Requirement 4.2.
- AG-042 (Encryption & Cryptographic Control Governance) provides the cryptographic controls for securing sharing channels and implementing advanced sharing protocols.
- AG-043 (Access Control & Credential Governance) governs access to the sharing infrastructure and signal repositories.
- AG-055 (Audit Trail Immutability & Completeness) provides the audit trail framework referenced by Requirement 4.7.
- AG-210 (Multi-Jurisdictional Regulatory Mapping) provides the jurisdictional registry used by the cross-border transfer compliance layer.
- AG-689 (Abuse Taxonomy Governance) defines the abuse categories that determine signal classification and sharing priorities.
- AG-690 (Marketplace Integrity Governance) addresses marketplace-specific signal types, including fraud indicators and seller reputation signals.
- AG-695 (Repeat-Offender Linkage Governance) generates cross-platform offender linkage signals that must be governed under this dimension.
- AG-698 (Emergency Harm Response Governance) defines emergency conditions under which expedited sharing with reduced review may be justified, subject to post-hoc governance review.

Cite this protocol
AgentGoverning. (2026). AG-697: Cross-Platform Threat Intelligence Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-697