AG-159

Agent Accountability and Named Ownership Governance

Execution Integrity, Accountability & Approval Quality · AGS v2.1 · April 2026
Regulatory mappings: EU AI Act · GDPR · SOX · FCA · NIST · ISO 42001

2. Summary

Agent Accountability and Named Ownership Governance requires that every AI agent action is attributable to a named, identifiable owner — a human person or a legally accountable organisational entity — and that this attribution chain is immutable, cryptographically verifiable, and complete from the moment of agent instantiation through the full lifecycle of every action the agent performs. No agent may operate as an orphan process without a traceable ownership chain. No action may exist in an audit record without a resolvable path to the accountable party. This dimension enforces the principle that autonomy does not extinguish accountability: the more autonomous the agent, the more rigorous the ownership attribution must be.

3. Example

Scenario A — Orphan Agent Executes High-Value Transactions: A financial services firm deploys an AI trading agent during a product launch. The agent is instantiated by a junior developer using a shared service account. Six months later, the developer leaves the organisation. The agent continues operating, executing an average of 4,200 trades per day with a cumulative notional exposure of £38 million per month. When the FCA queries an anomalous trade pattern, the firm cannot identify who is accountable for the agent's behaviour. The service account maps to a generic "deployer" role. The developer's departure was not linked to any agent decommissioning or ownership transfer process.

What went wrong: The agent was instantiated without binding it to a named accountable owner. The service account provided authentication but not accountability. No ownership transfer process existed. No periodic attestation confirmed that the owner was still in role and still accepted accountability. Consequence: FCA enforcement action under the Senior Managers Regime for failure to maintain clear accountability for automated trading systems. £2.3 million fine. Personal regulatory sanctions for the senior manager whose function encompassed algorithmic trading oversight.

Scenario B — Multi-Agent Pipeline Obscures Accountability: An insurance company deploys a claims processing pipeline consisting of four AI agents: an intake agent, an assessment agent, a pricing agent, and a settlement agent. Each agent was deployed by a different team. When a customer disputes a settlement of £47,000, alleging the AI undervalued their claim by 60%, the firm cannot determine which agent made the material decision and who owns that decision. The intake agent classified the claim. The assessment agent estimated damage at £47,000. The pricing agent applied depreciation. The settlement agent issued the payment. Each team points to the others. No single ownership record traces the decision chain.

What went wrong: Accountability was assumed to follow team boundaries, but no explicit ownership record linked each agent's decisions to a named owner. The pipeline architecture distributed accountability to the point of dissolution. Consequence: Financial Ombudsman Service ruling against the firm for inability to explain and account for automated decision-making. £47,000 redress plus £12,000 compensation. Regulatory requirement to implement decision traceability before resuming automated settlements.

Scenario C — Ownership Attestation Prevents Orphan Drift: A healthcare organisation implements AG-159 with quarterly ownership attestation. Each agent has a named clinical owner recorded in an immutable registry. When Dr. Sarah Chen transfers departments, the attestation system flags her three agent ownership records within 24 hours. The ownership transfer workflow requires Dr. Chen to nominate successors, the successors to accept, and the clinical governance board to approve. One agent — a medication interaction checker — has no willing successor because the incoming clinician is not comfortable owning an AI system. The agent is suspended pending resolution. A human pharmacist reviews interactions manually for 11 days until a qualified owner is confirmed.

What went right: The attestation cycle detected the ownership gap before it became an orphan. The system enforced the principle that no agent operates without a named owner. The temporary suspension was operationally costly but governance-sound.

4. Requirement Statement

Scope: This dimension applies to every AI agent that performs any action — whether affecting external state or internal state — in any environment. An agent that reads data, generates recommendations, writes to databases, sends communications, invokes APIs, or orchestrates other agents is within scope. The test is whether the agent's output could influence any decision, transaction, or system state. If yes, the agent requires a named, accountable owner. The scope extends to agents operating within multi-agent pipelines: each agent in the pipeline requires its own ownership record, and the pipeline as a whole requires a named owner accountable for the end-to-end outcome. Agents deployed in development, staging, and testing environments are within scope if those environments process real data or connect to production systems.

4.1. A conforming system MUST bind every agent instance to a named, identifiable owner — a natural person or a legally accountable organisational role — at the point of instantiation, and MUST record this binding in an immutable, tamper-evident registry.

4.2. A conforming system MUST ensure that the ownership record includes: the owner's verified identity, the date of ownership assumption, the scope of accountability (which agent, which actions, which environments), and a cryptographic signature from the owner confirming acceptance of accountability.

4.3. A conforming system MUST prevent any agent from executing actions when its ownership record is missing, expired, or unresolvable.
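The execution gate in 4.3 can be sketched as a pre-execution check against the registry. This is a minimal illustration under stated assumptions: the in-memory `REGISTRY`, the record fields, and the expiry semantics are stand-ins, not a prescribed design.

```python
# Sketch of the 4.3 execution gate: an agent may act only when its
# ownership record exists and its attestation has not expired.
# The in-memory registry and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OwnershipRecord:
    agent_id: str
    owner_id: str
    attestation_expires: datetime  # owner must re-attest before this instant

REGISTRY: dict[str, OwnershipRecord] = {}

def may_execute(agent_id: str, now: datetime) -> bool:
    """Return True only if the agent has a current, resolvable owner."""
    record = REGISTRY.get(agent_id)
    if record is None:                       # missing record: orphan agent
        return False
    return now < record.attestation_expires  # expired attestation: blocked

REGISTRY["agent-claims-01"] = OwnershipRecord(
    "agent-claims-01", "s.chen", datetime(2026, 7, 1, tzinfo=timezone.utc))

now = datetime(2026, 4, 15, tzinfo=timezone.utc)
assert may_execute("agent-claims-01", now)    # owned, attestation current
assert not may_execute("agent-unknown", now)  # no record, execution denied
```

In a real deployment this check would sit in the agent runtime itself, so that an unresolvable record blocks the action rather than merely logging a warning.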

4.4. A conforming system MUST implement a periodic ownership attestation cycle not exceeding 90 days, during which the named owner confirms continued acceptance of accountability.
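The 90-day cycle in 4.4 reduces to a due-date computation over attestation records. The function names and the flat dictionary of last-attested dates are assumptions for illustration only.

```python
# Illustrative attestation scheduler for the 90-day cycle in 4.4.
from datetime import date, timedelta

ATTESTATION_PERIOD = timedelta(days=90)  # cycle MUST NOT exceed 90 days

def attestation_due(last_attested: date) -> date:
    return last_attested + ATTESTATION_PERIOD

def overdue_agents(attestations: dict[str, date], today: date) -> list[str]:
    """Agents whose owners have not re-attested within the cycle."""
    return sorted(agent for agent, last in attestations.items()
                  if today > attestation_due(last))

attestations = {
    "agent-intake": date(2026, 1, 10),   # due 2026-04-10
    "agent-pricing": date(2026, 3, 1),   # due 2026-05-30
}
assert overdue_agents(attestations, date(2026, 4, 15)) == ["agent-intake"]
```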

4.5. A conforming system MUST implement an ownership transfer process that requires the outgoing owner to initiate transfer, the incoming owner to accept, and an authorised approver to confirm, with all three steps recorded immutably.
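A minimal sketch of the three-step transfer in 4.5, assuming an in-memory event list in place of the immutable store; the class and identifiers are hypothetical.

```python
# Three-step ownership transfer per 4.5: initiate, accept, approve,
# in order, each by the correct party. The in-memory log stands in
# for the immutable record.
class OwnershipTransfer:
    STEPS = ("initiated", "accepted", "approved")

    def __init__(self, agent_id, outgoing, incoming, approver):
        self.agent_id = agent_id
        self.parties = {"initiated": outgoing, "accepted": incoming,
                        "approved": approver}
        self.log = []  # immutable in a real system; a list in this sketch

    def record(self, step: str, actor: str) -> None:
        expected = self.STEPS[len(self.log)]
        if step != expected:
            raise ValueError(f"out of order: expected {expected!r}")
        if actor != self.parties[step]:
            raise PermissionError(f"{actor} may not perform {step!r}")
        self.log.append((step, actor))

    @property
    def complete(self) -> bool:
        return len(self.log) == len(self.STEPS)

t = OwnershipTransfer("agent-meds-check", "s.chen", "a.patel", "gov-board")
t.record("initiated", "s.chen")
t.record("accepted", "a.patel")
t.record("approved", "gov-board")
assert t.complete
```

An approval recorded before acceptance, or a step performed by the wrong party, raises an error rather than silently completing the transfer.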

4.6. A conforming system MUST ensure that every action log entry includes a resolvable reference to the agent's current owner at the time the action was executed.
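Requirement 4.6 can be illustrated as follows; the entry fields (`owner_ref`, `ts`) and the stand-in `OWNERS` lookup are assumptions, not a prescribed schema.

```python
# Sketch for 4.6: every action log entry embeds a resolvable owner
# reference captured at execution time, not joined retrospectively.
import json
from datetime import datetime, timezone

OWNERS = {"agent-claims-01": "s.chen"}  # stand-in for the registry

def log_action(agent_id: str, action: str) -> str:
    owner = OWNERS.get(agent_id)
    if owner is None:
        raise LookupError(f"no owner resolvable for {agent_id}")  # 4.3 gate
    entry = {
        "agent_id": agent_id,
        "owner_ref": owner,   # owner at the time the action executed
        "action": action,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)

entry = json.loads(log_action("agent-claims-01", "settle_claim"))
assert entry["owner_ref"] == "s.chen"
```

Embedding the reference at write time means the log remains resolvable even after a later ownership transfer, which a join against the current registry state would not guarantee.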

4.7. A conforming system SHOULD implement automated detection of ownership gaps — for example, when an owner leaves the organisation, changes role, or becomes unavailable — and suspend agent operations within 48 hours of a detected gap.
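Orphan detection per 4.7 reduces to a diff between the registry's recorded owners and the identity provider's active population; the data shapes here are illustrative assumptions.

```python
# Sketch of 4.7 orphan detection: compare recorded owners against the
# identity provider's active people and flag agents whose owner has
# left or changed role. Suspension is left to the caller.
def detect_orphans(registry: dict[str, str], active_people: set[str]) -> set[str]:
    """Agent IDs whose recorded owner is no longer an active person."""
    return {agent for agent, owner in registry.items()
            if owner not in active_people}

registry = {"agent-intake": "s.chen", "agent-pricing": "j.doe"}
orphans = detect_orphans(registry, active_people={"s.chen"})  # j.doe has left
assert orphans == {"agent-pricing"}  # to be suspended within 48 hours
```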

4.8. A conforming system SHOULD bind ownership at the organisational role level in addition to the individual level, so that role succession provides continuity while individual accountability is maintained.

4.9. A conforming system MAY implement tiered ownership with distinct operational owners and executive sponsors, provided both are named and the accountability boundary between them is formally documented.

5. Rationale

Accountability is the governance primitive that connects autonomous action to human responsibility. Without named ownership, AI agents create an accountability vacuum — actions occur, consequences follow, but no identifiable party bears responsibility. This vacuum is not merely an organisational inconvenience; it is a structural governance failure that undermines every other control in this standard.

The challenge is distinctive to AI agents because of their persistence, autonomy, and speed. A human employee is inherently accountable — their actions trace to their identity. A software system traditionally has a product owner or system administrator. But AI agents occupy a novel position: they make decisions that were previously made by accountable humans, they persist across personnel changes, and they operate at speeds that make real-time human oversight impractical. The governance response must be equally novel: a formal, verifiable, persistent binding between the agent and its accountable owner that survives personnel changes, organisational restructuring, and system evolution.

Named ownership also serves as a forcing function for governance quality. When a specific person knows they are accountable for an agent's actions — and that accountability is verifiable and persistent — the quality of governance decisions improves. Agents are more carefully scoped, more thoroughly tested, and more promptly decommissioned when no longer needed. The absence of named ownership correlates with governance decay: agents that nobody owns are agents that nobody monitors, nobody updates, and nobody decommissions.

Regulatory frameworks universally require identifiable accountability for automated decision-making. The EU AI Act requires providers and deployers to be identifiable. The FCA Senior Managers Regime requires specific individuals to be accountable for algorithmic trading and automated processes. SOX requires that internal controls have identifiable owners. AG-159 provides the structural mechanism to satisfy these requirements for AI agent deployments.

6. Implementation Guidance

The ownership registry is the central artefact. It must be immutable (append-only), tamper-evident (cryptographically chained or hash-linked), and queryable by agent identifier, owner identifier, and time range. Each record contains: agent instance identifier, owner identity (verified against an identity provider per AG-012), ownership start timestamp, ownership scope definition, owner's cryptographic acceptance signature, and — when applicable — ownership end timestamp with transfer or termination reference.
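The registry properties described above (append-only, tamper-evident, hash-linked) can be sketched as follows. This is a minimal illustration under stated assumptions: the acceptance signature is a placeholder string, and persistence, identity verification (AG-012), and concurrency control are omitted.

```python
# Hash-linked, append-only registry sketch: each record's hash covers
# its content plus the previous record's hash, so any later mutation
# of a record breaks the chain and is detectable.
import hashlib
import json

class OwnershipRegistry:
    def __init__(self):
        self._chain = []  # append-only list of (record, record_hash)

    def append(self, record: dict) -> str:
        prev_hash = self._chain[-1][1] if self._chain else "genesis"
        linked = {**record, "prev": prev_hash}
        payload = json.dumps(linked, sort_keys=True)
        record_hash = hashlib.sha256(payload.encode()).hexdigest()
        self._chain.append((linked, record_hash))
        return record_hash

    def verify(self) -> bool:
        """Recompute every hash; False if any record was altered."""
        prev = "genesis"
        for record, stored_hash in self._chain:
            if record["prev"] != prev:
                return False
            payload = json.dumps(record, sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != stored_hash:
                return False
            prev = stored_hash
        return True

reg = OwnershipRegistry()
reg.append({"agent_id": "agent-claims-01", "owner": "s.chen",
            "start": "2026-01-10", "scope": "claims assessment",
            "acceptance_sig": "<owner signature>"})
assert reg.verify()
reg._chain[0][0]["owner"] = "mallory"  # tamper with a historical record
assert not reg.verify()                # chain verification catches it
```

A production registry would back this with durable storage and replace the placeholder signature with the owner's cryptographic acceptance signature from requirement 4.2.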

Recommended patterns:

- Bind ownership to a named organisational role with a named incumbent, so that role succession preserves continuity while the individual remains identifiable (per 4.8).
- Integrate the registry with the identity provider so that leaver and mover events automatically trigger gap detection and suspension workflows (per 4.7).
- Record the three transfer steps (initiate, accept, approve) as first-class registry events rather than out-of-band approvals (per 4.5).
- Embed the owner reference directly in each action log entry at write time rather than resolving it retrospectively (per 4.6).

Anti-patterns to avoid:

- Using shared service accounts as the accountable "owner" — authentication without accountability (Scenario A).
- Team-level ownership without a named individual, which dissolves accountability across team boundaries (Scenario B).
- Attestation by silence, where ownership is assumed to continue unless disclaimed; attestation must be an affirmative, recorded act.
- Offboarding processes that are not linked to agent decommissioning or ownership transfer.

Industry Considerations

Financial Services. Under the FCA Senior Managers Regime, a Senior Manager Function (SMF) holder must be identifiable as accountable for each AI agent operating within their function. The ownership registry should map each agent to the relevant SMF holder and the responsible Senior Manager should be notified when agents are instantiated within their function. For MiFID II algorithmic trading, the natural person responsible for each algorithm must be registered with the competent authority — AG-159 ownership records should align with this registration.

Healthcare. Clinical AI agents must have clinically qualified owners who can accept accountability for the agent's clinical outputs. A software engineer cannot be the accountable owner of an agent that makes or influences clinical decisions. Ownership records should include the owner's clinical registration number and the scope should align with their clinical competence.

Public Sector. AI agents making decisions that affect citizens' rights must have ownership records that are disclosable under freedom of information requests. The named owner must be identifiable to the citizen who is affected by the decision, per the requirements of administrative law and the EU AI Act's transparency provisions.

Maturity Model

Basic Implementation — Every deployed agent has a named owner recorded in a structured registry. The registry is queryable by agent identifier and owner name. Ownership is assigned at deployment. No automated attestation cycle exists — ownership verification is manual and periodic (e.g., annual review). Ownership transfer is documented but not formally workflow-controlled. Action logs reference the agent identifier but do not directly embed the owner reference — a join query is required to resolve accountability.

Intermediate Implementation — The ownership registry is append-only and tamper-evident. Ownership records include cryptographic signatures from owners confirming acceptance. An automated 90-day attestation cycle is operational, with escalation and suspension workflows. The identity provider integration detects personnel changes and flags ownership gaps within 24 hours. Every action log entry includes a direct, resolvable reference to the current owner. Pipeline ownership is implemented for multi-agent workflows.

Advanced Implementation — All intermediate capabilities plus: ownership records are cryptographically chained, enabling end-to-end integrity verification of the entire ownership history. Attestation is integrated with the organisation's risk management system, so that changes in agent risk profile trigger out-of-cycle attestation requests. Real-time dashboards show ownership coverage across all agent deployments. Regulatory reporting is automated — ownership records are exported in formats required by relevant regulators. Independent audit has verified the integrity of the ownership registry and the effectiveness of the attestation cycle.

7. Evidence Requirements

Required artefacts:

- The ownership registry, with complete records per 4.2 (verified identity, assumption date, accountability scope, acceptance signature).
- Attestation records for each cycle, including escalations and any resulting suspensions.
- Transfer records showing all three workflow steps (initiate, accept, approve) per 4.5.
- Action logs with resolvable owner references per 4.6.
- Orphan detection alerts and the corresponding suspension or remediation records.

Retention requirements:

Access requirements:

8. Test Specification

Testing AG-159 compliance requires verification that the ownership binding is complete, persistent, and operationally enforced.

Test 8.1: Ownership Binding at Instantiation
Attempt to instantiate an agent without a bound owner. Conformant behaviour: instantiation is rejected and no agent process starts (per 4.1).

Test 8.2: Ownership Record Completeness
Sample ownership records and verify that each contains the verified identity, ownership assumption date, accountability scope, and the owner's acceptance signature (per 4.2).

Test 8.3: Attestation Expiry Enforcement
Allow an attestation to lapse beyond the 90-day cycle. Conformant behaviour: the agent is blocked from executing actions until re-attestation completes (per 4.3, 4.4).

Test 8.4: Ownership Transfer Integrity
Attempt a transfer with a missing or out-of-order step, for example approval before acceptance. Conformant behaviour: the transfer is rejected, and a completed transfer shows all three steps recorded immutably (per 4.5).

Test 8.5: Orphan Detection on Personnel Change
Simulate an owner's departure in the identity provider. Conformant behaviour: the ownership gap is flagged and the agent is suspended within 48 hours (per 4.7).

Test 8.6: Action Log Owner Resolution
Sample action log entries and resolve each owner reference. Conformant behaviour: every entry resolves to the named owner current at the time the action was executed (per 4.6).

Test 8.7: Tamper Evidence on Ownership Registry
Modify a historical registry record out of band. Conformant behaviour: integrity verification detects the alteration (per 4.1).
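Test 8.1 might be automated along these lines, with `instantiate_agent`, `OWNER_REGISTRY`, and `OrphanAgentError` as hypothetical stand-ins for the system under test:

```python
# Illustrative automation of Test 8.1: instantiation without a bound
# owner must be rejected outright.
OWNER_REGISTRY = {"agent-claims-01": "s.chen"}

class OrphanAgentError(Exception):
    """Raised when instantiation is attempted without an ownership record."""

def instantiate_agent(agent_id: str) -> dict:
    owner = OWNER_REGISTRY.get(agent_id)
    if owner is None:                  # 4.1: no binding, no instantiation
        raise OrphanAgentError(agent_id)
    return {"agent_id": agent_id, "owner": owner}

# Positive case: a bound agent instantiates and carries its owner.
assert instantiate_agent("agent-claims-01")["owner"] == "s.chen"

# Negative case: an unbound agent must be refused.
try:
    instantiate_agent("agent-orphan")
except OrphanAgentError:
    pass
else:
    raise AssertionError("orphan instantiation was not rejected")
```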

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU AI Act | Article 13 (Transparency and Provision of Information) | Direct requirement
EU AI Act | Article 26 (Obligations of Deployers) | Direct requirement
FCA SYSC | Senior Managers Regime (SMR) | Direct requirement
SOX | Section 302 (Corporate Responsibility for Financial Reports) | Supports compliance
GDPR | Article 22 (Automated Decision-Making) | Supports compliance
NIST AI RMF | GOVERN 1.2, GOVERN 2.1 | Supports compliance
ISO 42001 | Clause 5.3 (Roles, Responsibilities, and Authorities) | Direct requirement

EU AI Act — Article 13 (Transparency and Provision of Information)

Article 13 requires that high-risk AI systems are designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Named ownership is a prerequisite for meaningful transparency — without a named, accountable party, there is no one to whom transparency obligations attach. AG-159 ensures that for every AI agent action, a responsible party can be identified who is obligated to explain the action and its basis.

EU AI Act — Article 26 (Obligations of Deployers)

Article 26 places specific obligations on deployers of high-risk AI systems, including monitoring the operation of the system and ensuring that natural persons assigned to exercise human oversight are competent, properly trained, and have the necessary authority. AG-159 implements the structural mechanism for identifying which natural persons are assigned oversight responsibilities for each agent and verifying that this assignment is current and accepted.

FCA Senior Managers Regime

The Senior Managers Regime requires that specific individuals are accountable for specific functions within regulated firms. For AI agents operating within a regulated function, AG-159 provides the governance mechanism to map each agent to its accountable Senior Manager. The regime's Duty of Responsibility (Section 66B FSMA 2000) requires that a senior manager took reasonable steps to prevent a regulatory contravention in their area of responsibility — an orphan agent operating without a mapped senior manager creates an accountability gap that the regime is designed to prevent.

SOX — Section 302

Section 302 requires that the CEO and CFO certify the effectiveness of internal controls. For AI agents participating in financial reporting processes, named ownership provides the traceability chain from the agent's actions to the certifying officers. Without AG-159, an organisation cannot demonstrate to auditors that every automated decision in the financial reporting chain has an identifiable accountable party.

GDPR — Article 22

Article 22 gives data subjects the right not to be subject to a decision based solely on automated processing that produces legal effects or similarly significantly affects them. When a data subject exercises this right, the organisation must identify who is accountable for the automated decision. AG-159 ensures that this identification is always possible.

NIST AI RMF — GOVERN 1.2, GOVERN 2.1

GOVERN 1.2 addresses roles and responsibilities within the AI governance structure. GOVERN 2.1 addresses accountability mechanisms. AG-159 provides the operational implementation of both by binding each AI agent to a named, verified, and periodically attested accountable owner.

10. Failure Severity

Severity Rating: High
Blast Radius: Organisation-wide — every agent without a named owner represents an accountability gap that compounds across the agent portfolio

Consequence chain: Without named ownership governance, agents accumulate as orphan processes — operational but unaccountable. The immediate technical failure is the inability to resolve an agent's actions to a responsible party. The operational impact is that when an agent causes harm — a wrongful denial of service, an erroneous financial transaction, a data breach — the organisation cannot identify who was responsible for the agent's configuration, oversight, and governance. The regulatory impact is severe: under the FCA Senior Managers Regime, failure to maintain clear accountability for automated systems can result in personal regulatory sanctions; under the EU AI Act, deployers who cannot identify the responsible parties for high-risk AI systems face administrative fines of up to EUR 15 million or 3% of annual worldwide turnover. The compounding effect is critical: as organisations scale from 5 agents to 500, the probability of at least one orphan agent approaches certainty unless structural ownership controls are in place. A single orphan agent involved in a material incident can expose the organisation to enforcement action that questions the accountability framework for all agents.

Cite this protocol
AgentGoverning. (2026). AG-159: Agent Accountability and Named Ownership Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-159