This dimension governs the obligation of deploying organisations to register, in mandated regulatory databases, public-facing registries, or sector-specific listing systems, AI agents that meet or exceed defined high-risk classification thresholds, and to maintain those registrations with accurate, current, and materially complete information throughout the agent's operational lifecycle. Registration is not a one-time administrative act: it is an ongoing control obligation that binds deployment identity, capability declaration, risk classification, operator accountability, and change management into a publicly auditable record. Failure manifests when an agent subject to mandatory registration either operates without any valid registration entry, or operates under a stale or materially inaccurate registration that misrepresents the agent's capabilities, scope, risk class, or responsible party — conditions that simultaneously obstruct regulatory oversight, frustrate affected persons' rights to seek redress, and expose deploying organisations to enforcement liability, potentially across multiple jurisdictions.
A financial institution deploys an automated agent to perform real-time creditworthiness scoring for consumer loan applications. The agent combines a large language model with structured feature extraction from bank transaction histories, social metadata, and employment verification services. The institution's legal team determines, incorrectly, that because the agent is framed as a "decision-support tool" for human loan officers, it falls below the EU AI Act Annex III high-risk threshold for AI systems in credit and insurance. The agent is not submitted to the EU database established under Article 71 of the AI Act. Over fourteen months, the agent processes approximately 340,000 loan applications across six EU member states. A consumer advocacy group files a coordinated complaint with three national market surveillance authorities after statistical analysis of approved versus declined outcomes demonstrates a 23-percentage-point disparity in decline rates correlated with postcode-level demographic proxies. Regulators initiate an enforcement investigation and immediately request the AI Act registration record. No record exists. The institution faces parallel enforcement proceedings in three jurisdictions simultaneously, with fines calculated under the AI Act's Article 99 framework (up to €15 million or 3% of total worldwide annual turnover for non-compliance with provider obligations, which include registration), plus national consumer credit law violations. The absence of registration has also prevented affected consumers from exercising their Article 86 right to an explanation, because the system was never disclosed in the required registry: neither the consumers nor their legal representatives knew that a registrable AI system existed.
A medical device manufacturer deploys a diagnostic assistance agent registered with the relevant national competent authority under the EU Medical Device Regulation (MDR) and simultaneously listed in the EU AI Act high-risk database. At registration, the agent uses a convolutional model trained on 180,000 radiology images and is scoped to chest X-ray triage for adult patients. Eighteen months after go-live, the manufacturer retrains the model on an expanded dataset that now includes paediatric imaging and extends the agent's scope to CT scan analysis. An internal ML engineering team classifies this as a "model refresh" rather than a "material change" and does not trigger the change management process that would require a registration update or a new conformity assessment. The registration record continues to show the original adult-only, X-ray-only scope. A 9-year-old patient at a partner hospital receives an incorrect CT triage recommendation; clinical staff, who have been trained to trust the system's registered scope, do not apply the elevated human oversight they would apply to an off-label tool. A serious incident is reported to the manufacturer. The national competent authority cross-references the incident against the EU AI Act registry and discovers the scope mismatch. The manufacturer is found to have violated both its MDR Article 83 post-market surveillance obligations and its AI Act obligations for substantially modified systems, which require a new conformity assessment under Article 43(4). The stale registration directly impaired the incident investigation timeline by 47 days, during which the system remained operational.
A multinational systems integrator deploys a benefits-eligibility determination agent across welfare administration systems in four jurisdictions: two EU member states, one UK devolved authority, and one Canadian federal agency. Each jurisdiction has a different legal framework for algorithmic decision-making in public services — the EU AI Act, the UK Equality Act's public sector equality duty, and the Canadian Directive on Automated Decision-Making. The integrator registers the agent once, under the contracting entity's name in the primary EU jurisdiction, and treats that registration as satisfying obligations in all four jurisdictions. The registered description names the agent as "v1.0 Benefits Advisory Tool" and lists the integrator as sole operator. In practice, each deploying government authority has customised the agent's eligibility rule weights, thresholds, and output formatting, meaning each jurisdiction is running a materially distinct configuration. The Canadian Privacy Commissioner initiates an investigation following a discrimination complaint and requests the agent's registration documentation. The provided EU registration record names the wrong operator (the integrator rather than the deploying federal agency), describes different capability parameters, is in a language not accepted by the Canadian authority, and references conformity standards applicable under EU law rather than the Impact Level 3 requirements of the Canadian Directive on Automated Decision-Making. The investigation cannot proceed efficiently; affected claimants cannot identify the correct responsible party through any public registry. The integrator ultimately requires 14 months and approximately €2.1 million in legal and remediation costs to establish correct multi-jurisdiction registration records.
This dimension applies to any AI agent that meets one or more of the following conditions: (a) is classified as high-risk under any applicable legal framework in any jurisdiction in which it operates or whose residents it affects; (b) processes personal data at scale in a manner that requires registration under applicable data protection or algorithmic accountability legislation; (c) makes, supports, or materially influences decisions with legal or similarly significant effects on natural persons; (d) operates in a sector with mandatory product or system registration requirements that explicitly encompass AI or algorithmic decision-making systems; or (e) is deployed by, on behalf of, or in partnership with a public authority exercising statutory functions. The scope applies regardless of whether the agent is framed internally as a "tool," "assistant," "copilot," or "support system" — the legal classification is determined by functional characteristics and outcome effects, not internal product naming.
Exclusions: Pure research prototypes operating entirely within sandboxed environments with no access to real personal data and no production operational authority are excluded, subject to the condition that their transition to any form of production or pilot deployment immediately triggers this dimension.
4.1.1 The deploying organisation MUST conduct a formal regulatory registration threshold assessment prior to any production deployment of an AI agent, covering all jurisdictions in which the agent will operate or whose residents' data or decisions the agent will affect.
4.1.2 The threshold assessment MUST evaluate the agent against each applicable legal framework's high-risk criteria independently — a determination that an agent does not meet thresholds in one jurisdiction does not constitute or imply clearance in another.
4.1.3 The threshold assessment MUST be documented in a Registration Threshold Assessment Record (RTAR), which MUST include: the legal frameworks assessed; the criteria applied under each; the determination reached and the reasoning; the identity of the qualified person(s) who conducted the assessment; and the date of completion.
4.1.4 Where the threshold assessment outcome is uncertain or contested internally, the deploying organisation MUST apply the precautionary principle and proceed as though registration is required until a definitive legal determination is obtained.
4.1.5 The RTAR MUST be reviewed and revalidated whenever a material change to the agent's scope, capabilities, data processing activities, deployment jurisdictions, or responsible operator occurs.
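The records and determinations required across 4.1.1–4.1.5 lend themselves to structured capture. The following is a minimal, non-normative sketch in Python; the class and field names are illustrative assumptions, not a mandated schema, and the normative content remains the requirement text above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FrameworkDetermination:
    """Outcome for one legal framework, assessed independently per 4.1.2."""
    framework: str                 # e.g. "EU AI Act" (illustrative)
    jurisdiction: str
    criteria_applied: list[str]
    registration_required: bool    # per 4.1.4, uncertain or contested outcomes are recorded as True
    reasoning: str

@dataclass
class RegistrationThresholdAssessmentRecord:
    """RTAR contents per 4.1.3; field names are hypothetical."""
    agent_name: str
    agent_version: str
    determinations: list[FrameworkDetermination]
    assessors: list[str]           # named qualified person(s) who conducted the assessment
    completed_on: date

    def requires_registration_anywhere(self) -> bool:
        """True if any framework in any jurisdiction triggers registration."""
        return any(d.registration_required for d in self.determinations)
```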
4.2.1 Where a threshold assessment determines that registration is required, the deploying organisation MUST complete and submit a valid registration to every mandated database, registry, or competent authority listing system before the agent commences production operation in the relevant jurisdiction.
4.2.2 The registration submission MUST accurately describe: the agent's name and version identifier; the deploying organisation's legal name, registered address, and primary regulatory contact; the agent's functional scope and the categories of decisions or outputs it produces; the risk classification assigned and the basis for that classification; the applicable conformity assessment or impact assessment references; the categories of personal data processed; and the agent's intended user base and affected population.
4.2.3 Where a mandated registry requires information that the deploying organisation is unable to provide accurately at time of submission (for example, because a component is developed by a third-party provider that has not disclosed required technical details), the deploying organisation MUST NOT submit placeholder or estimated values as factual; instead, the deploying organisation MUST note the information gap in the submission, document the steps being taken to obtain accurate information, and set a binding internal deadline for completion.
4.2.4 Registration reference numbers and submission confirmation artefacts MUST be retained in the organisation's centralised registration record system.
4.3.1 Registered information MUST be kept accurate and current at all times throughout the agent's operational life.
4.3.2 The deploying organisation MUST establish and maintain a Registration Accuracy Review Schedule (RARS) that mandates at minimum an annual full review of all active registration records, regardless of whether any changes have occurred, to confirm that registered information remains complete and accurate.
4.3.3 Registration records MUST be updated within 30 calendar days of any of the following trigger events: change of responsible operator or deploying entity; change of the agent's functional scope, including addition of new capability categories or removal of previously declared capabilities; change of the jurisdictions in which the agent operates; material change to the underlying model or algorithmic logic; material change to data processing activities, including new data categories or new data sources; significant change to the affected population; or completion of a new conformity or impact assessment that supersedes a previously registered assessment reference.
4.3.4 The 30-day update obligation applies independently in each jurisdiction — satisfying the update requirement for one jurisdiction's registry does not satisfy it for another.
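The 30-day clock in 4.3.3 runs independently for each jurisdiction's registry (4.3.4). A minimal sketch of deadline tracking, in Python with assumed inputs:

```python
from datetime import date, timedelta

UPDATE_WINDOW = timedelta(days=30)  # 4.3.3: calendar days from the trigger event

def update_deadlines(trigger_date: date, jurisdictions: list[str]) -> dict[str, date]:
    """One independent deadline per jurisdiction's registry, per 4.3.4."""
    return {j: trigger_date + UPDATE_WINDOW for j in jurisdictions}

def overdue_registries(deadlines: dict[str, date], updated: set[str], today: date) -> list[str]:
    """Registries not yet updated whose 30-day window has lapsed."""
    return [j for j, deadline in deadlines.items() if j not in updated and today > deadline]

# Example: a scope change on 1 March, with only the EU registry updated by mid-April.
deadlines = update_deadlines(date(2025, 3, 1), ["EU", "UK", "CA-federal"])
print(overdue_registries(deadlines, updated={"EU"}, today=date(2025, 4, 15)))  # ['UK', 'CA-federal']
```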
4.4.1 The registration record MUST correctly identify the entity that holds primary legal accountability for the agent's operation in each jurisdiction. Where that entity differs from the technical developer, deploying partner, or contracting party, all relevant roles MUST be disclosed in the registration with their respective legal responsibilities clearly delineated.
4.4.2 Where an agent is deployed through a reseller, integrator, or white-labelling arrangement, the deploying organisation MUST NOT register the upstream provider as the accountable operator unless that provider has contractually accepted and explicitly confirmed primary regulatory accountability for deployments in that jurisdiction.
4.4.3 Changes in operator identity — including corporate restructuring, acquisition, divestiture, or change of the responsible internal business unit with external-facing regulatory accountability — MUST immediately trigger registration update procedures, with new responsible party information filed within 30 calendar days of the change taking effect.
4.5.1 Where an agent operates across multiple jurisdictions with different mandatory registration frameworks, the deploying organisation MUST maintain a separate, jurisdiction-specific registration record for each framework that requires it, and MUST NOT treat compliance with one jurisdiction's requirements as substituting for another's.
4.5.2 The deploying organisation MUST maintain a Multi-Jurisdiction Registration Map (MJRM) — a master document that lists every jurisdiction in which the agent operates, the applicable registration obligation in each, the status of compliance, and the registration reference numbers where applicable.
4.5.3 The MJRM MUST be updated whenever the agent's operational jurisdiction footprint changes.
4.5.4 Where a jurisdiction's regulatory framework is updated in a way that creates new registration obligations for a previously unregistered agent, the deploying organisation MUST complete registration within the transition period specified in the new framework, or within 90 calendar days if no transition period is specified.
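One plausible representation of an MJRM row per 4.5.2 follows; a sketch with assumed names and status values, not a prescribed format:

```python
from dataclasses import dataclass
from enum import Enum

class RegistrationStatus(Enum):
    NOT_REQUIRED = "not_required"
    PENDING = "pending"            # obligation identified, submission not yet confirmed
    REGISTERED = "registered"
    UPDATE_OVERDUE = "update_overdue"

@dataclass
class MJRMEntry:
    """One jurisdiction's row in the Multi-Jurisdiction Registration Map (4.5.2)."""
    jurisdiction: str
    framework: str                      # e.g. "EU AI Act Art. 49" (illustrative)
    obligation_summary: str
    status: RegistrationStatus
    registration_reference: str | None  # registry-assigned number, where applicable
    accountable_entity: str             # the legally accountable operator here (4.4.1)
```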
4.6.1 The deploying organisation MUST maintain a formal Material Change Classification Policy (MCCP) that defines the criteria by which modifications to the agent are classified as material or non-material for the purpose of triggering registration update, conformity reassessment, and competent authority notification obligations.
4.6.2 The MCCP MUST be reviewed and approved by a qualified legal or compliance function, not solely by technical or engineering staff.
4.6.3 Any update to the agent's underlying model that involves retraining on a materially different dataset, expansion of operational scope, change to the population to which the agent's outputs are applied, or modification of decision thresholds that affects outcomes by more than an operationally defined materiality threshold MUST be treated as a material change for registration purposes.
4.6.4 The deploying organisation MUST NOT reclassify a material change as non-material for the sole or primary purpose of avoiding registration update obligations.
4.6.5 Where a change is evaluated and determined to be non-material, that determination MUST be documented in a Change Classification Record with supporting reasoning, retained in the organisation's compliance records system.
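The mandatory materiality floor in 4.6.3 can be automated as a hard check inside change tooling, with the MCCP supplying the operational threshold. A sketch under those assumptions, with illustrative field names; per 4.6.2, the policy behind the numbers still requires legal or compliance approval:

```python
from dataclasses import dataclass

@dataclass
class ProposedChange:
    """Facts recorded about a proposed change; field names are illustrative."""
    retrained_on_materially_different_data: bool
    scope_expanded: bool
    affected_population_changed: bool
    threshold_outcome_shift: float      # fraction of outcomes altered by threshold edits
    mccp_materiality_threshold: float   # operationally defined in the MCCP

def is_material(change: ProposedChange) -> bool:
    """4.6.3: any one of these conditions forces classification as material."""
    return (
        change.retrained_on_materially_different_data
        or change.scope_expanded
        or change.affected_population_changed
        or change.threshold_outcome_shift > change.mccp_materiality_threshold
    )
```

A True result triggers the registration update, reassessment, and notification obligations of 4.6.1; a False result still requires a documented Change Classification Record per 4.6.5.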
4.7.1 Where the applicable regulatory framework provides for public listing or public search access to registration records, the deploying organisation MUST ensure that the registered information it submits is complete and accurate to the degree required to enable meaningful public access.
4.7.2 The deploying organisation SHOULD maintain an internal summary of its agent's registration status that is accessible to affected persons upon request, formatted in plain language, even where the formal registry is a regulatory-access-only system.
4.7.3 The deploying organisation MUST NOT submit registration records that are technically complete but structured, formatted, or worded in a manner deliberately intended to obscure the agent's nature, capabilities, or risk classification from persons seeking to exercise rights under applicable law.
4.7.4 Where an affected person makes a request for information about whether a specific agent that interacted with them is registered and under what classification, the deploying organisation MUST respond within the timeframe mandated by applicable law, and in the absence of a specific mandated timeframe, MUST respond within 30 calendar days.
4.8.1 When an agent is permanently decommissioned, the deploying organisation MUST file a decommissioning notification with every registry in which the agent is registered, within 30 calendar days of cessation of operations.
4.8.2 The decommissioning notification MUST accurately state the date of cessation, the reason for decommissioning, and the disposition of any residual data processing activities that may continue under retention or legal hold obligations.
4.8.3 Registration records for decommissioned agents MUST be retained in the organisation's internal compliance records for a minimum of 7 years following decommissioning, or for the duration of any applicable limitation period for regulatory or civil action, whichever is longer.
4.8.4 Where decommissioning is partial — for example, the agent continues to operate in some jurisdictions but not others — the deploying organisation MUST file partial decommissioning notifications in the relevant registries and update the MJRM to reflect the reduced operational footprint.
4.9.1 The deploying organisation MUST designate a named Registration Compliance Owner (RCO) who holds formal accountability for the organisation's compliance with this dimension across all agents in scope.
4.9.2 The RCO MUST have documented authority to halt or delay deployment of an agent that has not completed required registration, or whose registration record is materially inaccurate, until compliance is achieved.
4.9.3 The organisation SHOULD establish a Registration Compliance Committee or equivalent governance forum that meets at minimum quarterly to review the status of all active registrations, pending updates, and emerging registration obligations arising from new regulatory frameworks or new agent deployments.
4.9.4 Registration compliance status MUST be included as a standing agenda item in the organisation's AI governance review cycle and MUST be reported to senior leadership or the board-level AI governance function at least annually.
Regulatory registration is not simply an administrative formality. It is the structural mechanism through which supervisory authorities are able to exercise meaningful oversight of deployed AI systems at scale. Without a populated, accurate registry, the entire architecture of risk-proportionate AI governance collapses: regulators cannot identify the population of high-risk systems in operation; they cannot prioritise inspection or audit resources; they cannot respond efficiently to complaints or incidents; and they cannot enforce conformity requirements because the existence of a non-conforming system may never come to their attention. Registration is therefore the foundational act that makes every downstream oversight mechanism functional. An agent that operates without registration is not merely non-compliant on a procedural technicality — it is structurally invisible to the governance framework designed to protect persons affected by it.
The same logic applies to the currency and accuracy obligation. A registration record that existed at some prior point but no longer reflects the agent's actual capabilities, scope, or accountable operator is operationally equivalent to no registration at all for purposes of oversight: it misleads rather than informs, and it frustrates rather than enables regulatory response. The EU AI Act's design, for example, explicitly anticipates that registration records will be living documents updated through the agent's lifecycle, not static filings made once at deployment. The same expectation is embedded in the periodic algorithmic impact assessment review requirements of the Canadian Directive on Automated Decision-Making and in comparable lifecycle expectations emerging in UK regulatory proposals.
The behavioural dimension of this control addresses a well-documented organisational failure mode: the tendency for legal classification exercises to be driven by the desired outcome — avoiding registration — rather than by genuine analysis of the applicable criteria. This is not a theoretical risk. The EU AI Act's own preparatory documentation and the responses to its consultation noted concern that providers and deployers would structurally reframe systems to evade Annex III classification. AG-719 counters this by requiring a documented threshold assessment with named responsible assessors, by mandating the precautionary principle where classification is uncertain, and by prohibiting the deliberate reclassification of material changes as non-material to avoid update obligations. These requirements impose friction on motivated reasoning by creating an audit trail that regulators can inspect.
The multi-jurisdiction harmonisation requirements address a second behavioural failure mode: jurisdictional arbitrage, whereby an organisation performs registration in the most permissive or convenient jurisdiction and treats that as a cross-border solution. This fails both as a matter of law — most frameworks explicitly assert jurisdiction based on where the agent's effects are felt rather than where the operator is incorporated — and as a matter of governance integrity.
The Supplementary Core & Adversarial Model Resistance landscape addresses governance gaps that are either structural omissions in the core framework or adversarial pressures that require specific countermeasures. Regulatory registration governance belongs in this landscape because the primary adversarial pressure it must resist is internal: the incentive structure within deploying organisations favours speed to deployment and operational continuity over compliance overhead. This control places formal accountability, documented process, and named ownership requirements between that incentive structure and the consequence of unregistered operation, making non-compliance a visible and attributable failure rather than a default outcome of organisational inertia.
Pattern 1 — Centralised Registration Intelligence Function. Organisations deploying multiple agents should establish a centralised Registration Intelligence Function (RIF) that owns the organisation-wide view of all registration obligations, maintains the MJRM as a live document, tracks emerging regulatory frameworks that may create new obligations, and provides threshold assessment support to individual agent teams. A decentralised model, in which each product team independently manages its own registration obligations without a coordinating function, consistently underperforms on accuracy and currency of records.
Pattern 2 — Registration as a Deployment Gate. The most reliable implementation pattern is to embed registration completion as a mandatory gate in the deployment pipeline. No agent progresses from a staging or pre-production environment to a production environment without a registration completion confirmation signed by the RCO. This does not require registration to be completed before testing begins; it requires it to be completed before production access is enabled. This pattern integrates AG-719 compliance into the engineering delivery process rather than treating it as a parallel legal track that may be managed on a different timeline.
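A sketch of the gate itself, assuming Python and a hypothetical lookup against the centralised registration record system of 4.2.4; the essential property is the hard stop before production, not this particular interface:

```python
import sys
from dataclasses import dataclass

@dataclass
class RegistrationConfirmation:
    rco_signed: bool   # signed off by the Registration Compliance Owner (4.9.1)
    stale: bool        # flagged by accuracy review or automated staleness detection

def fetch_registration_record(agent_id: str) -> RegistrationConfirmation | None:
    """Hypothetical lookup against the organisation's registration record system."""
    raise NotImplementedError("wire this to the centralised record store (4.2.4)")

def registration_gate(agent_id: str, target_env: str) -> None:
    """Hard deployment gate: no production access without a valid, signed registration."""
    if target_env != "production":
        return  # staging and test environments are not gated by this pattern
    record = fetch_registration_record(agent_id)
    if record is None or not record.rco_signed or record.stale:
        sys.exit(f"DEPLOY BLOCKED: {agent_id} has no valid RCO-signed registration")
```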
Pattern 3 — Machine-Readable Registration Records. Organisations should maintain internal registration records in a structured, machine-readable format (for example, a standardised JSON schema aligned with the EU AI Act database's data model) in addition to the human-readable summaries filed with registries. Machine-readable records enable automated detection of staleness by comparing the registered state against the live agent's configuration metadata, and they enable rapid compliance reporting without manual research.
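A sketch of automated staleness detection, under the assumption that both the registered record and the live configuration metadata are available as flat key-value mappings; the keys shown are illustrative only:

```python
def staleness_report(registered: dict[str, str], live: dict[str, str]) -> dict[str, tuple[str | None, str | None]]:
    """Map each drifted field to its (registered, live) pair of values."""
    return {
        key: (registered.get(key), live.get(key))
        for key in registered.keys() | live.keys()
        if registered.get(key) != live.get(key)
    }

# Illustrative values: the scope drift below mirrors the stale-registration scenario above.
registered = {"version": "1.0", "scope": "chest-xray-adult", "operator": "ExampleMed BV"}
live       = {"version": "2.3", "scope": "ct-and-xray-all-ages", "operator": "ExampleMed BV"}
print(staleness_report(registered, live))  # any non-empty report should open a 4.3.3 review
```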
Pattern 4 — Change Management Integration. The MCCP should be integrated directly into the organisation's model and system change management process, so that every proposed change to an in-scope agent automatically passes through a registration impact assessment step before approval. This prevents material changes from reaching production without the registration update question having been answered.
Pattern 5 — Jurisdiction Monitoring Programme. For agents operating across multiple jurisdictions, a standing monitoring programme should track regulatory developments in each operational jurisdiction and flag emerging registration requirements with sufficient lead time for compliance. This is particularly important for the Cross-Border / Multi-Jurisdiction Agent profile, where the population of applicable frameworks can change materially as new AI legislation comes into force.
Pattern 6 — Legal Entity Mapping. For organisations using complex corporate structures, reseller arrangements, or white-labelling, a formal legal entity mapping exercise should be conducted for each deployment to determine which entity bears primary regulatory accountability as "deployer" or "operator" under each applicable framework. This mapping should be reviewed whenever the commercial structure changes and should be reflected directly in registration records.
Anti-Pattern 1 — "Support Tool" Framing to Evade High-Risk Classification. Describing a system as a "decision-support tool," "advisory assistant," or "recommendation engine" when its outputs are routinely adopted without meaningful human deliberation does not change the system's legal classification under frameworks that assess actual function and effect rather than product labelling. Under the EU AI Act, a provider that invokes the Article 6(3) derogation for an Annex III system must document that assessment and still register the system, and mislabelling is a named enforcement risk in multiple national competent authority communications. The functional test must be applied honestly.
Anti-Pattern 2 — Single-Jurisdiction Registration as Global Coverage. Registering an agent in one jurisdiction and treating that as satisfying obligations in all others is legally incorrect and operationally dangerous. Each jurisdiction's registration framework has its own legal basis, authority, and requirements. Cross-border agents require jurisdiction-specific analysis and, in most cases, jurisdiction-specific registration.
Anti-Pattern 3 — Static Registration Mindset. Treating registration as a one-time event completed at initial deployment and never revisited is the most common failure mode. It produces registration records that are accurate at one point in time and progressively misleading thereafter. Registration is a lifecycle obligation, not a launch task.
Anti-Pattern 4 — Engineering-Only Change Classification. Allowing engineering or ML teams to unilaterally classify changes as non-material for registration purposes, without involvement of a legal or compliance function, consistently results in under-reporting of material changes. The criteria for materiality under legal frameworks are not identical to the criteria engineering teams use to assess technical significance. Both perspectives are necessary.
Anti-Pattern 5 — Placeholder Submissions. Submitting registration records with placeholder values, estimated figures, or generic boilerplate for fields requiring accurate technical description creates a false record. It may satisfy a system validation check at the registry but fails the substantive accuracy requirement and exposes the organisation to enforcement action when the inaccuracy is discovered.
Anti-Pattern 6 — Treating Decommissioning as Silent Expiry. Allowing registrations for decommissioned agents to persist without closure creates registry pollution and risks future confusion about what systems are currently operational. Every decommissioning event requires explicit registry closure action.
Financial Services. Financial agents (AG profile: Financial-Value Agent) face layered registration obligations — AI Act registration where applicable, plus sector-specific requirements under MiFID II algorithmic trading notifications, PRA/FCA algorithmic system disclosures, or equivalent. These are not interchangeable; each serves a different supervisory purpose. Financial organisations should map both layers explicitly.
Healthcare and Medical Devices. Agents used in clinical or diagnostic contexts (Safety-Critical / CPS Agent profile) must navigate the intersection of AI Act registration and medical device registration under MDR or equivalent national frameworks. The conformity assessment obligations under MDR and the AI Act have partial but not complete overlap; separate registration acts are required.
Public Sector. Public Sector / Rights-Sensitive Agent deployments often involve the deploying government authority as both the operator and the entity subject to registration obligations. Internal governance in public sector organisations must ensure that AI governance functions have the authority and resources to manage registration obligations — a non-trivial organisational design question in large agencies.
Crypto/Web3. Agents operating in crypto or Web3 contexts (Crypto/Web3 Agent profile) face a rapidly evolving regulatory registration landscape. MiCA's requirements for crypto-asset service providers increasingly intersect with AI Act obligations where algorithmic systems are used in service provision. Registration strategies for this profile should be treated as requiring quarterly review given the pace of regulatory development.
| Level | Description |
|---|---|
| Level 1 — Initial | Registration obligations are addressed on an ad-hoc basis by individual legal or product teams; no centralised tracking; registration records maintained in unstructured files. |
| Level 2 — Developing | Threshold assessment process exists but is not consistently applied; registration records are maintained but accuracy reviews are infrequent; no formal change management integration. |
| Level 3 — Defined | Formal RTAR, RARS, MJRM, and MCCP are in place and consistently applied; designated RCO; deployment gate exists but is occasionally bypassed for expedience. |
| Level 4 — Managed | All controls are fully operational and monitored; machine-readable registration records enable automated staleness detection; no deployment gate bypasses; regular reporting to senior governance. |
| Level 5 — Optimising | Registration intelligence function actively monitors emerging frameworks; machine-readable records are integrated with change management and CI/CD pipeline; metrics on registration accuracy and timeliness are tracked and reported to the board. |
| Artefact | Description | Retention Period |
|---|---|---|
| Registration Threshold Assessment Record (RTAR) | Documented assessment of whether an agent meets registration thresholds under each applicable framework in each applicable jurisdiction, with reasoning and named assessors. | 7 years after decommissioning, or applicable limitation period, whichever is longer. |
| Registration Submission Confirmation | Official confirmation receipt or acknowledgement from each registry to which a registration has been submitted, including submission timestamps and assigned reference numbers. | 7 years after decommissioning. |
| Registration Record (Current Version) | The current version of the registration filing as submitted to each applicable registry, reflecting all updates. | Maintained as a live document throughout operational life; historical versions retained for 7 years after each update. |
| Registration Accuracy Review Schedule (RARS) | Documentation of the scheduled review programme, including completion records for each annual review and any interim reviews triggered by changes. | 7 years after decommissioning. |
| Multi-Jurisdiction Registration Map (MJRM) | The master document mapping all operational jurisdictions to their registration obligations, status, and reference numbers. | Maintained as a live document throughout operational life; historical versions retained for 7 years after each update. |
| Material Change Classification Policy (MCCP) | The approved policy document defining materiality criteria for change classification, including version history and approval records. | Current version plus all historical versions retained for 7 years. |
| Change Classification Records | Individual records documenting the materiality classification of each change to an in-scope agent, with reasoning. | 7 years after the date of the classified change. |
| Registration Update Records | Records documenting each update made to a registration, including the trigger event, the date of update, the specific fields updated, and the identity of the person who made the update. | 7 years after decommissioning. |
| Decommissioning Notifications | Filed notifications of decommissioning submitted to each applicable registry, with confirmation receipts. | 7 years after decommissioning. |
| RCO Designation Record | Documentation of the current and historical designations of the Registration Compliance Owner, including formal authority delegations. | 7 years after each holder's tenure ends. |
The following supplementary evidence is recommended but not strictly mandated by this dimension:
Each test maps to one or more MUST requirements from Section 4. Conformance scores: 0 = not met; 1 = partially met with significant gaps; 2 = substantially met with minor gaps; 3 = fully met.
Maps to: 4.1.1, 4.1.2, 4.1.3, 4.1.4, 4.1.5
Objective: Verify that a Registration Threshold Assessment Record exists for the agent under review, covers all applicable jurisdictions and frameworks, and meets the quality requirements of Section 4.1.
Method:
Conformance Scoring:
Maps to: 4.2.1, 4.2.2, 4.2.3, 4.2.4
Objective: Verify that valid registrations have been completed in all mandated registries before the agent commenced production operation, and that the submitted information is accurate and complete.
Method:
Conformance Scoring:
Maps to: 4.3.1, 4.3.2, 4.
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 49 (Registration) | Direct requirement |
| EU AI Act | Article 71 (EU Database for High-Risk AI Systems) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
Article 49 requires providers of high-risk AI systems (and, for Annex III systems, deployers that are public authorities or bodies acting on their behalf) to register in the EU database before the system is placed on the market or put into service; Article 49(2) extends a registration obligation even to Annex III systems for which the provider claims the Article 6(3) derogation. Regulatory Registration and Public Listing Governance directly implements this obligation: the threshold assessment (Section 4.1) determines whether Article 49 applies, and the submission and accuracy controls (Sections 4.2 and 4.3) operationalise the filing and maintenance duties it imposes.
Article 71 establishes the EU database for high-risk AI systems and specifies the information to be entered, which must be kept up to date throughout the system's lifecycle. The currency and accuracy obligations of this dimension (Sections 4.3 and 4.6) directly implement that expectation, and the decommissioning notifications of Section 4.8 close the record when the system is withdrawn.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-719 supports compliance by making the registration obligations identified under these functions documented, owned, and auditable across the agent portfolio.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Regulatory Registration and Public Listing Governance implements a risk treatment control within the AI management system, supporting the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without regulatory registration and public listing governance, in-scope agents operate outside the visibility of the supervisory framework designed to oversee them. The failure mode is not gradual degradation — it is a structural absence of oversight: regulators cannot identify the system, affected persons cannot discover it or exercise rights in relation to it, and accountability for its registration status defaults to no one. The immediate consequence is unregistered or misregistered operation within the scope of AG-719, cascading to impaired incident investigation, frustrated redress, and parallel enforcement exposure in every jurisdiction where the agent operates. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.