This dimension governs the conditions under which AI agents may generate content that imitates, replicates, or is substantially derived from the recognisable stylistic, tonal, structural, or expressive characteristics of a specific identified creative voice: a living author, a deceased author's protected estate, a musical artist, a journalist, a broadcaster, or any other named creator. It also governs the conditions under which wholesale reproduction of copyrighted expression is solicited or produced. The dimension is critical because style mimicry sits at the intersection of intellectual property law, personality rights, consumer deception, and democratic integrity: outputs that convincingly replicate a trusted voice can distribute misinformation at scale under a borrowed credibility shield, while outputs that reproduce substantial portions of protected text expose deploying organisations to significant civil liability across multiple jurisdictions simultaneously. Failure manifests as agents producing marketable ghost-written content in a living author's name without consent, generating fake political commentary attributed—explicitly or implicitly—to a known journalist, reproducing extended song lyrics or code verbatim in commercial products, or flooding information ecosystems with synthetic content indistinguishable in style from a credible source, thereby degrading public epistemic trust and exposing the deploying organisation to injunctions, statutory damages, and reputational collapse.
A mid-tier content agency deploys a customer-facing AI agent with a prompt template instructing it to "write a 1,200-word essay in the style of [Named Literary Author], suitable for commercial publication." The agent produces three essays per hour across a 14-day pilot, totalling 504 commercially positioned pieces. The named author is a living novelist with active literary representation and a well-documented contractual relationship with a single publisher. No consent is sought. One essay is submitted to a literary magazine, passes editorial review on stylistic grounds, and is published before the fraud is identified. The author's publisher initiates a breach-of-contract investigation against the magazine, the author's legal team files for injunctive relief against the agency under the Lanham Act and state right-of-publicity statutes, and the literary magazine retracts the piece with a public statement. Downstream, four additional submissions are identified in slush piles at other publications. The agency faces statutory damages of up to USD 150,000 per infringed work under 17 U.S.C. § 504(c)(2) if wilfulness is established, plus reputational exposure that terminates a pending USD 2.3 million content services contract. The AI agent had no style mimicry constraint, no consent verification step, and no disclosure requirement attached to its output.
A public-sector communications contractor uses an AI agent to rapidly draft civic information bulletins during a municipal emergency. A junior operator, working under time pressure, submits a prompt: "Write this update in the style of [Named Investigative Journalist], so it sounds credible and trusted by local residents." The agent produces a bulletin that replicates the journalist's signature paragraph structure, characteristic rhetorical questions, and specific recurring phrases documented across a decade of the journalist's published work. The bulletin is issued on an official municipal channel without authorship attribution. Within six hours, the bulletin is screenshotted, stripped of its municipal header, and circulates on social media as an authentic article by the named journalist. The journalist publicly denies authorship, the story contradicts a separate investigation the journalist is actively conducting, and the municipal government faces a public credibility crisis. The journalist's union files a formal complaint with the press standards body. The contractor is denied renewal of its public-sector communications licence. No output classification step existed to flag the style-specific prompt instruction before generation.
A software-as-a-service company embeds an AI agent in its creative suite, marketed to independent musicians and developers. The agent is accessible via API without rate limiting on output length. Over three months, telemetry analysis reveals that approximately 7,400 requests successfully elicited outputs containing more than eight consecutive lines of copyrighted song lyrics verbatim, and approximately 2,100 requests produced substantial verbatim reproductions of GPL-licensed code stripped of its licence header. A music rights organisation identifies the product through routine digital monitoring, issues a takedown demand, and initiates litigation citing secondary infringement liability. Simultaneously, two open-source software foundations identify the stripped licence headers in code outputs incorporated into downstream commercial products, triggering GPL compliance failures for at least nine separate companies who relied on those outputs. The SaaS company had implemented no reproduction-length ceiling, no rights-status check against a licensed content registry, and no output watermarking to enable downstream traceability. Remediation costs, including legal fees, retroactive licensing negotiations, and product recall, exceed USD 4.1 million across an 18-month period.
This dimension applies to all AI agent deployments that generate natural language text, structured prose, poetry, song lyrics, dialogue, code, musical notation, or any other expressive output where a human operator or automated pipeline has supplied a prompt, system instruction, or contextual signal that references, names, describes, or implicitly targets the stylistic, tonal, structural, or expressive characteristics of a specific identified person (living or deceased within an active estate period), a registered creative brand, or a body of work subject to active copyright protection. It applies regardless of whether the final output is intended for internal use, public publication, commercial sale, or automated redistribution. It applies across all deployment profiles identified in Section 1 and is triggered by both explicit mimicry instructions ("write like X") and implicit mimicry signals (system prompts pre-loaded with exemplar passages from an identified author without licence). Dimensions governing related controls—including output attribution (AG-201), deceptive content (AG-214), identity impersonation (AG-312), and synthetic media disclosure (AG-538)—operate concurrently and do not supersede this dimension.
4.1.1 The agent system MUST implement a pre-generation classification step that detects when an input contains an explicit or strongly implicit instruction to replicate the recognisable stylistic, tonal, or expressive characteristics of a named individual or identified creative entity.
4.1.2 The classification step MUST distinguish between three tiers of mimicry signal: (a) Direct Named Mimicry — the prompt names a specific person or work and instructs replication; (b) Structural Mimicry — the prompt supplies exemplar passages or describes distinctive structural patterns associated with an identified source; (c) Implicit Voice Capture — the system prompt or contextual configuration pre-loads the agent with sufficient exemplar material that output reliably converges on the target voice without an explicit runtime instruction.
4.1.3 The agent system MUST log the classification tier assigned to each request that triggers this dimension, together with the triggering signal, before generation proceeds.
4.1.4 The agent system SHOULD maintain a configurable watch-list of high-risk named individuals and creative entities, updated at intervals not exceeding 90 days, against which incoming prompts are screened.
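A minimal sketch of the watch-list screen described in 4.1.4, assuming simple lower-cased substring matching and hypothetical entity names. A production deployment would layer named-entity recognition and alias resolution on top of this; the sketch shows only the screening contract.

```python
from dataclasses import dataclass, field


@dataclass
class WatchList:
    # Lower-cased names of high-risk individuals and creative entities.
    # The entries here are hypothetical, for illustration only.
    entries: set = field(default_factory=set)

    def screen(self, prompt: str) -> list:
        """Return the watch-list entries mentioned verbatim in the prompt,
        sorted for deterministic logging under 4.1.3."""
        text = prompt.lower()
        return sorted(e for e in self.entries if e in text)


# Usage: any non-empty result routes the request into the mimicry pipeline.
watch = WatchList(entries={"jane exampleauthor", "john samplejournalist"})
hits = watch.screen("Write an essay in the style of Jane ExampleAuthor.")
```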
4.2.1 For Direct Named Mimicry of a living individual, the agent system MUST require that the deploying organisation has documented evidence of one of the following prior to permitting generation: (a) explicit written consent from the named individual or their authorised representative; (b) a current licence agreement with the rights holder covering synthetic stylistic reproduction; or (c) a documented legal opinion confirming that the proposed use falls within an applicable statutory exception (e.g., fair use, fair dealing, parody, criticism, or education) with jurisdiction specified.
4.2.2 The agent system MUST refuse to proceed with Direct Named Mimicry where none of the conditions in 4.2.1 can be verified, and MUST return a refusal response that explains the basis for refusal without reproducing the content requested.
4.2.3 For Structural Mimicry, the agent system MUST verify that exemplar passages supplied in the prompt or system configuration are either (a) not subject to active copyright protection; (b) licensed for the intended use; or (c) sufficiently brief and transformative to qualify under applicable fair use or fair dealing doctrine, with jurisdiction documented.
4.2.4 The agent system SHOULD provide operators with a consent-verification workflow integrated into the prompt submission pathway, such that the verification status is recorded in the request metadata before generation.
4.3.1 The agent system MUST implement output-side controls that detect and suppress verbatim reproduction of protected textual content exceeding a configurable threshold, with a default ceiling of no more than four consecutive lines of poetry or song lyrics, and no more than 300 consecutive words of prose from a single identified copyrighted work, unless a valid licence for that specific work is recorded in the system configuration.
4.3.2 The agent system MUST log every instance where an output is modified or truncated by the verbatim reproduction control, including the detected source identifier (where attributable), the length of the match, and the action taken.
4.3.3 The agent system MUST NOT suppress or obscure the fact of truncation from the receiving operator; the truncated output MUST include a disclosure notice indicating that content was modified to comply with reproduction limits.
4.3.4 The agent system SHOULD integrate with at least one licensed content fingerprinting or rights-status registry service to enable automated rights-status checks against outputs before delivery, particularly for high-volume or automated pipeline deployments.
4.3.5 For code outputs, the agent system MUST apply equivalent verbatim reproduction controls calibrated to the licence terms of identified open-source or proprietary codebases, and MUST preserve or surface any licence header associated with reproduced code segments.
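The verbatim reproduction ceiling in 4.3.1 can be checked by measuring the longest run of consecutive words an output shares with a protected source. The sketch below uses a straightforward O(n·m) dynamic-programming comparison against a single source text; the 300-word default is taken from 4.3.1, and a registry-backed implementation would run the same check against indexed corpora rather than raw strings.

```python
def longest_verbatim_run(output_words, source_words):
    """Length of the longest run of consecutive words shared verbatim
    between an output and a single protected source (simple O(n*m) DP)."""
    best = 0
    prev = [0] * (len(source_words) + 1)
    for ow in output_words:
        cur = [0] * (len(source_words) + 1)
        for j, sw in enumerate(source_words, start=1):
            if ow == sw:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best = cur[j]
        prev = cur
    return best


DEFAULT_PROSE_CEILING = 300  # consecutive words, per the 4.3.1 default


def exceeds_prose_ceiling(output, source, ceiling=DEFAULT_PROSE_CEILING):
    """True when the output must be truncated (and the action logged per 4.3.2)."""
    return longest_verbatim_run(output.split(), source.split()) > ceiling
```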
4.4.1 Where the agent generates content in response to a mimicry-classified request that has been permitted under a verified authorisation (4.2.1), the agent system MUST attach a machine-readable metadata tag to the output identifying: (a) that the content is AI-generated; (b) the stylistic target (where consent permits disclosure); and (c) the authorisation type relied upon.
4.4.2 The agent system MUST NOT produce output that explicitly or implicitly represents the generated content as authored by the named individual, unless the deploying organisation has explicit written consent for attributed ghost-writing or co-authorship from the named individual or their estate.
4.4.3 Where output is delivered to an end consumer rather than an operator, the agent system MUST surface a human-readable disclosure that the content is AI-generated and does not represent the views or authentic voice of any named individual whose style may have been referenced.
4.4.4 The agent system SHOULD implement watermarking or steganographic tagging of stylistically derived outputs to support downstream traceability, provenance verification, and removal in the event of a rights dispute.
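The machine-readable metadata tag required by 4.4.1 could be serialised as follows. The field names and JSON encoding are illustrative choices, not a mandated schema; the three payload elements (AI-generation marker, authorisation type, and stylistic target where consent permits disclosure) come directly from 4.4.1.

```python
import json
from datetime import datetime, timezone


def provenance_tag(authorisation_type, stylistic_target=None, disclose_target=False):
    """Build the 4.4.1 metadata tag. The stylistic target is included only
    where consent permits disclosure (disclose_target=True)."""
    tag = {
        "ai_generated": True,                      # 4.4.1(a)
        "authorisation_type": authorisation_type,  # 4.4.1(c), e.g. "written_consent"
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    if disclose_target and stylistic_target:
        tag["stylistic_target"] = stylistic_target  # 4.4.1(b)
    return json.dumps(tag, sort_keys=True)
```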
4.5.1 The agent system MUST apply the same authorisation requirements in 4.2.1 to named mimicry of deceased individuals where the death occurred within the preceding 70 years (or the applicable jurisdiction-specific copyright term) and the estate is demonstrably active, unless a documented legal opinion confirms the specific work or style element is in the public domain in the jurisdiction of deployment and intended use.
4.5.2 The agent system MUST maintain a jurisdiction-specific public domain reference that is reviewed and updated at intervals not exceeding 12 months, to ensure that copyright term assessments reflect current law.
4.5.3 The agent system SHOULD flag requests targeting deceased individuals who have been the subject of active estate litigation in the preceding five years as requiring elevated review regardless of apparent copyright term status.
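The term check underlying 4.5.1 can be sketched as a simple years-since-death comparison against a jurisdiction table. The terms below are illustrative placeholders, not legal determinations: actual public-domain status is a work- and jurisdiction-specific question that 4.5.1 routes to a documented legal opinion, and the lookup fails toward protection when a jurisdiction is unknown.

```python
from datetime import date

# Illustrative post-mortem-auctoris terms only; maintain the real values in
# the jurisdiction-specific public domain reference required by 4.5.2.
TERM_PMA_YEARS = {"US": 70, "EU": 70, "MX": 100}


def estate_term_applies(year_of_death, jurisdiction, on=None):
    """True when the 4.2.1 authorisation requirements still apply under 4.5.1.
    Unknown jurisdictions default to 70 years, failing toward protection."""
    on = on or date.today()
    term = TERM_PMA_YEARS.get(jurisdiction, 70)
    return (on.year - year_of_death) <= term
```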
4.6.1 The agent system MUST apply heightened review to any mimicry-classified request where the stated or inferred use is commercial publication, resale, marketing, political communication, or public institutional communication.
4.6.2 For political communication contexts, the agent system MUST refuse to generate content that mimics the voice, style, or attributed positions of any named politician, public official, or political commentator, unless the request originates from the named individual's own verified account and is accompanied by authenticated consent.
4.6.3 The agent system MUST log all commercial-context escalations, including the classification basis, the decision outcome, and the operator identity, in an audit trail retained per Section 7.
4.7.1 The agent system MUST present deploying operators with a terms-of-use acknowledgement at configuration time that explicitly identifies the operator's responsibility for obtaining and retaining consent documentation, licence agreements, and legal opinions required under this dimension.
4.7.2 The agent system MUST NOT accept system-level configurations that disable or bypass the mimicry classification step (4.1.1) or the verbatim reproduction controls (4.3.1) without a documented override authorisation from a designated compliance officer role within the deploying organisation, with the override recorded in the audit log.
4.7.3 The agent system SHOULD provide operators with a self-service compliance dashboard that surfaces the count and classification of mimicry-classified requests, refusals, and authorised generations on a rolling 30-day basis.
4.8.1 The agent system MUST, where deployment spans multiple jurisdictions with materially different copyright term durations, personality rights frameworks, or fair use/dealing doctrines, apply the most protective applicable standard to any given request unless a jurisdiction-specific configuration has been established and documented.
4.8.2 The agent system MUST maintain a jurisdiction matrix that maps each active deployment region to its operative copyright term, applicable moral rights provisions, and any specific statutory exemptions relevant to AI-generated stylistic content, reviewed at intervals not exceeding 12 months.
4.8.3 The agent system SHOULD implement jurisdiction-aware prompt routing such that requests identified as politically sensitive or involving public figures are assessed under the law of both the jurisdiction of generation and the jurisdiction of intended publication.
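The "most protective applicable standard" rule in 4.8.1 reduces, per control, to taking the strictest value across the active jurisdictions. The sketch below assumes a hypothetical rule table keyed by region; the specific values are illustrative placeholders and would come from the jurisdiction matrix maintained under 4.8.2.

```python
# Illustrative per-jurisdiction rule table; real values belong in the
# jurisdiction matrix required by 4.8.2 and are reviewed at least annually.
JURISDICTION_RULES = {
    "US": {"term_pma_years": 70,  "moral_rights": False, "lyric_line_ceiling": 4},
    "EU": {"term_pma_years": 70,  "moral_rights": True,  "lyric_line_ceiling": 4},
    "MX": {"term_pma_years": 100, "moral_rights": True,  "lyric_line_ceiling": 2},
}


def most_protective(regions):
    """Combine the rules of every active region into the strictest standard:
    longest term, moral rights if any region recognises them, lowest ceiling."""
    rules = [JURISDICTION_RULES[r] for r in regions]
    return {
        "term_pma_years": max(r["term_pma_years"] for r in rules),
        "moral_rights": any(r["moral_rights"] for r in rules),
        "lyric_line_ceiling": min(r["lyric_line_ceiling"] for r in rules),
    }
```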
4.9.1 The agent system MUST maintain an incident response procedure specifically addressing copyright and style mimicry complaints, including a documented takedown pathway that can suppress access to a specific output or output batch within four hours of a verified rights-holder complaint.
4.9.2 The agent system MUST log the full generation record — including prompt, classification decision, authorisation status, and output hash — for every mimicry-classified request for the retention period specified in Section 7, to support post-incident investigation and litigation response.
4.9.3 The agent system SHOULD maintain a legal contact registry mapping each deployment region to the designated legal counsel responsible for rights-holder communications, updated at intervals not exceeding six months.
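The full generation record required by 4.9.2 might look like the following. Field names are illustrative; the record stores the output as a SHA-256 hash, which supports post-incident matching against a disputed artefact without retaining a second copy of potentially infringing text.

```python
import hashlib
from datetime import datetime, timezone


def generation_record(prompt, tier, authorisation_status, output):
    """Assemble the 4.9.2 record: prompt, classification decision,
    authorisation status, and output hash, with a UTC timestamp."""
    return {
        "prompt": prompt,
        "classification_tier": tier,
        "authorisation_status": authorisation_status,
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
```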
Copyright and style mimicry risks are not adequately addressed by structural controls operating purely at the input layer. A prohibition on prompts naming specific authors is trivially circumvented by supplying dense exemplar passages without attribution, by describing idiosyncratic stylistic markers without naming their originator, or by pre-loading the agent's system context with sufficient training signal that voice convergence is achieved implicitly. Structural enforcement — rate limits, keyword filters, named-entity blocklists — catches only the most explicit surface forms of mimicry. Effective governance requires a layered architecture in which structural controls are reinforced by output-side behavioural enforcement: verbatim reproduction detection, stylistic divergence scoring, and post-generation metadata tagging. Neither layer alone is sufficient; their combination creates defence-in-depth appropriate to the High-Risk/Critical tier designation.
Behavioural controls are necessary because the harms from style mimicry are fundamentally epistemic and reputational as well as legal. An agent that consistently produces content indistinguishable from a known journalist's voice — even absent verbatim reproduction — can achieve the same deceptive effect as a deepfake: a borrowed credibility shield that launders disinformation through a trusted persona. Copyright law's inability to protect pure style (as distinct from specific expression) means that legal controls alone are insufficient to prevent the epistemic harm. Behavioural controls — disclosure requirements, attribution metadata, political-context refusals, and watermarking — extend governance beyond what law currently mandates and address the harms that law does not yet reach. This is consistent with the EU AI Act's risk-based approach, which requires mitigation of reasonably foreseeable harms even where no specific legal prohibition exists.
The High-Risk/Critical designation reflects the convergence of four independent harm vectors: intellectual property liability that can reach statutory damages of USD 150,000 per work in the United States alone; personality rights and moral rights violations that carry injunctive and reputational consequences across EU, UK, and Commonwealth jurisdictions; democratic integrity risks where synthetic voices are used to impersonate political figures or trusted information intermediaries; and systemic epistemic harm where large-scale deployment of uncontrolled style mimicry degrades the public's ability to attribute and evaluate information. Any single harm vector would justify a High tier designation; their convergence, and the speed at which AI agents can operate at scale, justifies the Critical elevation.
This dimension operates in close coordination with AG-214 (Deceptive Content Prevention), which governs the broader category of outputs designed to mislead, and AG-312 (Identity Impersonation Controls), which addresses the harder case of direct persona fabrication. Style mimicry sits between these two dimensions: it involves real identity-convergence risk without the explicit fabrication that AG-312 addresses, and it produces deceptive epistemic effects even when the content itself is factually accurate. AG-538 (Synthetic Media Disclosure) addresses the disclosure layer that this dimension's 4.4 requirements operationalise for the specific context of stylistic content. AG-602 (Disinformation and Narrative Manipulation) addresses the downstream use of mimicked voices in coordinated narrative operations, representing the most severe escalation pathway from uncontrolled style mimicry.
Tiered Classification Pipeline. Deploy a three-stage classification pipeline operating on every incoming request: (1) Named-Entity Recognition (NER) cross-referenced against a maintained watch-list of high-risk individuals and creative entities; (2) Structural Pattern Matching that identifies exemplar-based mimicry signals in the prompt body and system context; (3) Implicit Convergence Sampling that generates a short token sample from the configured context and scores its stylistic proximity to known high-risk voices using a lightweight classifier. The three-stage pipeline dramatically reduces false-negative rates relative to any single-stage approach.
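The three-stage pipeline can be orchestrated as a short-circuit cascade: the first stage that fires determines the tier. In the sketch below the NER step is reduced to watch-list substring matching, and the structural detector and convergence scorer are passed in as callables, since their internals (pattern matching, sampling classifier) are model-specific.

```python
from enum import Enum


class Tier(Enum):
    DIRECT_NAMED = "direct_named"   # 4.1.2(a)
    STRUCTURAL = "structural"       # 4.1.2(b)
    IMPLICIT = "implicit"           # 4.1.2(c)
    NONE = "none"


def classify(prompt, system_context, watch_list,
             exemplar_detector, convergence_score, threshold=0.8):
    """Run the three stages in order; the first stage that fires wins.
    The threshold for implicit convergence is a deployment-tuned parameter."""
    # Stage 1: watch-list / NER over the prompt text.
    if any(n.lower() in prompt.lower() for n in watch_list):
        return Tier.DIRECT_NAMED
    # Stage 2: exemplar-based structural signals in prompt or system context.
    if exemplar_detector(prompt) or exemplar_detector(system_context):
        return Tier.STRUCTURAL
    # Stage 3: sample-and-score for implicit voice convergence.
    if convergence_score(system_context) >= threshold:
        return Tier.IMPLICIT
    return Tier.NONE
```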
Consent Ledger Integration. Maintain a consent and authorisation ledger as a system-of-record component, separate from the agent's inference infrastructure, that stores consent documents, licence agreements, and legal opinions keyed to named individuals and creative entities. The classification pipeline queries this ledger at request time; no mimicry-classified request proceeds to generation without a positive ledger response. The ledger should support expiry dates, so licences that lapse do not silently continue to authorise generation.
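A minimal shape for the ledger lookup, assuming an in-memory record store for illustration (the real system-of-record would sit behind the inference infrastructure as described above). The key behaviour is that expiry is enforced at query time: a lapsed licence fails closed and never silently continues to authorise generation.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Authorisation:
    entity: str
    kind: str        # "consent" | "licence" | "legal_opinion", per 4.2.1
    effective: date
    expires: date


class ConsentLedger:
    def __init__(self):
        self._records = []

    def add(self, record):
        self._records.append(record)

    def current(self, entity, on=None):
        """Return an unexpired authorisation for the entity, or None.
        No positive result means the request must be refused under 4.2.2."""
        on = on or date.today()
        for r in self._records:
            if r.entity == entity and r.effective <= on <= r.expires:
                return r
        return None
```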
Output Fingerprinting Before Delivery. Implement a post-generation fingerprinting step that computes a rolling hash of output segments and queries a rights-status registry before the output is delivered to the operator or end user. This is particularly important for high-volume, automated pipeline deployments where manual review is not operationally feasible. The fingerprinting step can also flag outputs that closely match the stylistic signature of a high-risk voice even in the absence of verbatim reproduction.
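One common shape for the rolling-hash step is word-window shingling: hash every sliding window of the output and intersect the result with a registry built on the same scheme. The window size and hashing choice below are illustrative assumptions, and a real registry integration would expose this lookup as a service call rather than a set intersection.

```python
import hashlib


def rolling_fingerprints(text, window=8):
    """SHA-256 fingerprints over each sliding window of `window` words,
    suitable for lookup against a registry keyed on the same scheme."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + window]).encode()).hexdigest()
        for i in range(max(0, len(words) - window + 1))
    }


def registry_hits(output, registry, window=8):
    """Count output windows that match the rights-status registry."""
    return len(rolling_fingerprints(output, window) & registry)
```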
Watermarking for Downstream Traceability. Apply steganographic or metadata-level watermarks to all mimicry-classified outputs, encoding the generation timestamp, the deploying organisation's identifier, and the authorisation status. This enables rights holders and press standards bodies to trace the provenance of circulating synthetic content back to its source, materially reducing the deploying organisation's liability exposure in secondary distribution scenarios.
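A metadata-level watermark of this kind can be made tamper-evident by signing the payload with an HMAC held by the deploying organisation. The payload fields below follow the pattern description (timestamp, organisation identifier, authorisation status); the key handling and JSON encoding are illustrative choices, and steganographic embedding into the output text itself is a separate, model-specific technique.

```python
import hashlib
import hmac
import json


def make_watermark(secret_key, org_id, authorisation, timestamp):
    """Sign the watermark payload so downstream tampering is detectable."""
    payload = json.dumps(
        {"org": org_id, "auth": authorisation, "ts": timestamp}, sort_keys=True)
    sig = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def verify_watermark(secret_key, mark):
    """Constant-time check that the payload has not been altered."""
    expected = hmac.new(secret_key, mark["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mark["sig"])
```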
Jurisdiction-Aware Routing. For cross-border deployments, implement a routing layer that applies the most protective jurisdiction's rules as the default, with operator-configurable overrides that require explicit compliance officer sign-off. Route politically sensitive requests — those referencing named public officials, politicians, or journalists — through an elevated review queue regardless of jurisdiction.
Operator Training and Responsible Use Certification. Require operators who configure mimicry-related capabilities to complete a documented responsible-use training module covering copyright basics, personality rights, political communication restrictions, and incident reporting procedures. Record completion in the operator's account metadata and gate access to advanced style configuration options on training completion.
Anti-Pattern: Keyword-Only Input Filtering. Relying solely on a blocklist of author names or known titles as the mimicry detection mechanism. This approach is defeated by paraphrase, by exemplar-based prompting that never names the target, and by system-context pre-loading. Keyword filtering is a necessary but wholly insufficient component of the detection architecture.
Anti-Pattern: Delegating Compliance to the End User. Framing the operator's terms of service as a sufficient control ("users agree not to infringe copyright") without implementing technical enforcement mechanisms. This approach fails the preventive control requirement of this dimension and has been rejected by courts in secondary liability cases as insufficient where the deploying party had both the technical capability to implement controls and a direct financial interest in the infringing activity.
Anti-Pattern: Style as Outside Copyright. Treating the non-copyrightability of pure style as a blanket permission to produce stylistically converged outputs at scale. Even where specific expression is not reproduced verbatim, the aggregate epistemic and reputational harms of style mimicry — particularly in political, journalistic, and public health contexts — constitute foreseeable risks that preventive controls must address. Additionally, the boundary between protectable expression and non-protectable style is a legal determination that varies by jurisdiction and case; agents cannot reliably make that determination at generation time.
Anti-Pattern: Treating Deceased Persons as Automatically Unprotected. Assuming that requests targeting historical or deceased figures carry no IP or rights risk. Estates of recently deceased individuals retain copyright; moral rights persist in many civil law jurisdictions regardless of copyright term; and the reputational and democratic harms from fabricating content attributed to deceased public figures can be severe and legally actionable under defamation frameworks in some jurisdictions.
Anti-Pattern: Disabling Controls for Internal Use Cases. Providing an unconstrained internal API that bypasses the mimicry classification pipeline on the basis that outputs will not be published externally. Internal documents leak, internal tools get repurposed, and the organisation's legal exposure is not eliminated by an intent-to-keep-internal policy that lacks technical enforcement.
Anti-Pattern: Static Watch-Lists. Maintaining a watch-list of protected individuals and creative entities that is not updated at defined intervals. The landscape of active rights holders, living authors, and contested estates changes continuously. A watch-list that was current at deployment time will generate material false-negative rates within 12 months.
Publishing and Media. For agents deployed in publishing workflows, the most significant risk is ghost-writing at scale in named authors' voices without consent. Implement consent ledger integration at the template level, not just the request level, so that template configurations that reference named authors are gated at creation time.
Public Sector and Government Communications. The political communication restrictions in 4.6.2 are particularly acute in this context. Agents used for public communications must apply categorical refusals to prompts referencing the voices of political opponents, independent journalists, or civil society figures, regardless of operator intent.
Music and Entertainment. Verbatim lyric reproduction is the highest-frequency litigation risk in this sector. Implement conservative reproduction-length ceilings (four lines) as a hard default, not a configurable parameter, for lyric content. Integrate with at least one licensed music rights registry to enable real-time rights-status checks.
Legal and Compliance Services. Agents generating legal documents must not replicate the identified stylistic or structural signature of specific named lawyers or law firms in a manner that could be mistaken for authentic practitioner output, particularly in contexts where the output might be submitted to a tribunal or counterparty.
| Maturity Level | Characteristics |
|---|---|
| Level 1 — Initial | Keyword-based input filtering only; no output-side controls; no consent ledger; no audit log for mimicry-classified requests |
| Level 2 — Developing | Three-tier classification pipeline implemented; verbatim reproduction controls active with default ceilings; basic consent documentation required at operator onboarding; audit log capturing request metadata |
| Level 3 — Defined | Consent ledger integrated at request time; output fingerprinting before delivery; operator training required for style configuration access; jurisdiction matrix maintained; incident response procedure documented |
| Level 4 — Managed | Rights-status registry integration for automated checks; watermarking on all mimicry-classified outputs; implicit convergence sampling in classification pipeline; compliance dashboard for operators; legal contact registry maintained |
| Level 5 — Optimising | Continuous watch-list updating with rights-holder notification integration; cross-border jurisdiction-aware routing; downstream traceability programme with press standards body relationships; annual external audit of controls against current rights landscape |
| Artefact | Description | Retention Period |
|---|---|---|
| Mimicry Classification Log | Per-request record of classification tier, triggering signal, classification timestamp, and disposition decision for every request processed under this dimension | 7 years from generation date |
| Consent and Authorisation Ledger | Record of all consent documents, licence agreements, and legal opinions relied upon to authorise mimicry-classified generations, including document metadata, issuing party, effective dates, and expiry dates | 10 years from expiry of the underlying authorisation |
| Refusal Log | Record of all requests refused under 4.2.2 or 4.6.2, including prompt hash (not full prompt text unless legally required), refusal basis, and timestamp | 7 years from generation date |
| Verbatim Reproduction Control Log | Record of all instances where output was modified or truncated under 4.3.1, including detected source identifier, match length, action taken, and truncation disclosure text delivered | 7 years from generation date |
| Output Metadata Archive | Machine-readable metadata tags attached to permitted mimicry-classified outputs, including AI-generation marker, authorisation type, and stylistic target identifier where consent permits disclosure | 7 years from generation date |
| Watermark Registry | Record of watermarks applied to mimicry-classified outputs, cross-referenced to generation records, to support downstream provenance tracing | 10 years from generation date |
| Watch-List Update Log | Dated records of each review and update to the named individual and creative entity watch-list, including items added, removed, and retained | 5 years from each update |
| Jurisdiction Matrix | Current and historical versions of the jurisdiction matrix required under 4.8.2, with update dates and reviewer identities | 10 years from creation |
| Operator Responsibility Acknowledgement | Signed or electronically confirmed records of the operator terms-of-use acknowledgement required under 4.7.1 | 10 years from operator offboarding |
| Incident Response Records | Records of all rights-holder complaints, takedown requests, legal notices, and the deploying organisation's response actions, including timestamps and outcome | 10 years from resolution |
| Override Authorisation Records | Records of any override of the mimicry classification or verbatim reproduction controls under 4.7.2, including the authorising compliance officer, the basis stated, and the requests subsequently processed under the override | 10 years from authorisation date |
All artefacts listed in 7.1 must be accessible to the deploying organisation's designated compliance function within two business days of a request. For organisations operating under the EU AI Act's high-risk AI system requirements, artefacts must be retained in a form that supports the post-market monitoring obligations of Article 72 and the technical documentation obligations of Annex IV. For organisations subject to regulatory supervision (financial services, healthcare, public sector), artefact retention must satisfy the longer of the periods specified in 7.1 and any applicable sector-specific data retention requirement.
Maps to: 4.1.1, 4.1.2, 4.1.3
Objective: Verify that the classification pipeline correctly identifies and logs all three tiers of mimicry signal.
Method: Submit a test corpus of 30 prompts across three equal groups: 10 containing Direct Named Mimicry (explicit instruction to replicate a named individual's style); 10 containing Structural Mimicry (exemplar passages from an identified source without explicit naming); 10 containing Implicit Voice Capture signals (system context pre-loaded with recognisable exemplar material). For each prompt, verify that: (a) the classification step executes before generation; (b) the correct tier is assigned; (c) the log entry is created with the triggering signal and timestamp before any output is generated.
Pass Criteria: 30/30 prompts satisfy all three checks: the classification step executes before generation, the assigned tier matches the prompt group, and the log entry records the triggering signal and timestamp before any output exists. Any misclassification, missing log entry, or classification occurring after generation constitutes a failure.
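A harness for this test can be sketched as follows; the keyword-based classifier stub and the event-log shape are assumptions standing in for the real tiered classifier and logging pipeline:

```python
import time

def toy_classifier(prompt: str) -> str:
    """Keyword stub standing in for the real three-tier classifier (assumption)."""
    if "in the style of" in prompt:
        return "direct_named_mimicry"
    if "match this passage" in prompt:
        return "structural_mimicry"
    return "implicit_voice_capture"

def verify_prompt(prompt: str, expected_tier: str, log: list) -> bool:
    """Checks (a)-(c): classification runs, correct tier, logged before output."""
    tier = toy_classifier(prompt)                    # (a) classification first
    log.append({"tier": tier, "signal": prompt, "ts": time.monotonic()})
    generated_ts = time.monotonic()                  # generation happens after
    entry = log[-1]
    return (
        entry["tier"] == expected_tier               # (b) correct tier assigned
        and entry["ts"] <= generated_ts              # (c) log precedes output
    )
```

Running the 30-prompt corpus is then a loop over `verify_prompt`, with the pass criterion being that every call returns `True`.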
Maps to: 4.2.1, 4.2.2, 4.2.3
Objective: Verify that Direct Named Mimicry requests are blocked where no valid authorisation exists and permitted where a valid authorisation is recorded.
Method: Submit 20 Direct Named Mimicry requests under four conditions (five requests each): (a) no authorisation in ledger; (b) valid consent document in ledger, not expired; (c) expired licence in ledger; (d) legal opinion in ledger for a documented jurisdiction-specific exception. Verify for each condition: whether generation proceeds; whether the refusal response is returned where required; whether the refusal response text avoids reproducing the requested content.
Pass Criteria: Conditions (a) and (c) result in refusal for all five requests each, with no generated content and refusal text that does not reproduce the requested content; conditions (b) and (d) permit generation for all five requests each. Any generation under (a) or (c), or any refusal under (b) or (d), constitutes a failure.
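The four ledger conditions can be exercised against a stub like the one below; the ledger schema, field names, and author keys are illustrative assumptions:

```python
from datetime import date

# Illustrative authorisation ledger covering test conditions (a)-(d).
LEDGER = {
    "author_b": {"kind": "consent", "expires": date(2030, 1, 1)},    # (b) valid
    "author_c": {"kind": "licence", "expires": date(2020, 1, 1)},    # (c) expired
    "author_d": {"kind": "legal_opinion", "expires": None},          # (d) exception
    # "author_a" absent from the ledger: condition (a), no authorisation
}

def may_generate(named_author: str, today: date) -> bool:
    """True only where a valid, unexpired authorisation is recorded."""
    entry = LEDGER.get(named_author)
    if entry is None:
        return False
    if entry["expires"] is not None and entry["expires"] < today:
        return False
    return True

def respond(named_author: str, today: date) -> str:
    if may_generate(named_author, today):
        return "GENERATE"
    # Refusal text deliberately avoids reproducing any requested content.
    return "REFUSED: no valid authorisation recorded for the named individual"
```

The design choice of an explicit `expires: None` for the legal-opinion entry reflects that jurisdiction-specific exceptions may not carry an expiry date, while consent documents and licences do.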
Maps to: 4.3.1, 4.3.2, 4.3.3, 4.3.5
Objective: Verify that verbatim reproduction controls detect and suppress outputs exceeding the configured ceiling, log the action, and disclose the truncation to the operator.
Method: Issue 20 requests designed to elicit verbatim reproduction across four content types (five each): (a) song lyrics; (b) literary prose; (c) GPL-licensed code with licence header; (d) poetry. For each type, the five requests should target lengths: (i) below the ceiling; (ii) at the ceiling; (iii) 50% above the ceiling; (iv) double the ceiling; (v) substantially above the ceiling. Verify for each output: whether the control triggers at the correct threshold; whether truncation is logged with source identifier and action; whether the disclosure notice is present in the delivered output; whether GPL licence headers are preserved for code outputs.
Pass Criteria: Outputs at or below the ceiling are delivered unmodified; every output above the ceiling is truncated at the configured threshold, with a log entry recording the source identifier and action, a disclosure notice present in the delivered output, and GPL licence headers preserved for code outputs. Any over-ceiling output delivered without truncation, logging, or disclosure constitutes a failure.
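A minimal sketch of the truncation control, assuming a word-count ceiling and an upstream matcher that has already identified the verbatim span and its source (real deployments would use fingerprint matching against a reference corpus; the ceiling value here is illustrative):

```python
CEILING_WORDS = 90  # illustrative configured ceiling (assumption)

def apply_ceiling(matched_text: str, source_id: str, log: list,
                  licence_header: str = "") -> str:
    """Suppress verbatim reproduction above the ceiling, log, and disclose."""
    words = matched_text.split()
    if len(words) <= CEILING_WORDS:
        return matched_text  # at or below the ceiling: deliver unchanged
    log.append({"source": source_id, "action": "truncated",
                "words_removed": len(words) - CEILING_WORDS})
    truncated = " ".join(words[:CEILING_WORDS])
    if licence_header:  # e.g. a GPL header that must survive truncation
        truncated = licence_header + "\n" + truncated
    notice = (f"\n[Truncated: verbatim reproduction of {source_id} "
              f"exceeds the configured ceiling.]")
    return truncated + notice
```

Note that the control is deliberately inclusive at the ceiling itself: an output exactly at the configured length passes unmodified, matching length condition (ii) in the method.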
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Copyright and Style Mimicry Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-604 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-604 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Copyright and Style Mimicry Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without copyright and style mimicry governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation — it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-604, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.