Shared Account Prohibition Governance requires that every governance action on an AI agent system be performed through an individually assigned, non-shared account that is uniquely attributable to a single human or a single service. Shared accounts — generic credentials used by multiple people, such as "admin@company.com," "ops-team," or "finance-approver" — destroy individual accountability, defeat non-repudiation, undermine audit trails, and make access revocation impossible without disrupting every user of the shared credential. In AI agent governance, where a single mandate approval can authorise millions of pounds in financial transactions or affect the safety of physical systems, the inability to attribute a governance action to a specific person is not an operational inconvenience; it is a governance failure with regulatory, legal, and safety consequences.
Scenario A — Shared Admin Account Prevents Attribution of Malicious Change: An organisation uses a shared "agent-admin" account for day-to-day AI agent configuration management. The account is used by 8 team members who share the password through a team password manager. A malicious configuration change disables rate limiting on a customer-facing agent, which is subsequently exploited to exfiltrate 28,000 customer records. Investigation reveals the change was made using the "agent-admin" account but cannot determine which of the 8 team members performed it. All 8 must be investigated, disrupting operations. None can be held individually accountable. The FCA investigation finds that the organisation failed to maintain individual accountability for governance actions.
What went wrong: The shared account made individual attribution impossible. Even the team's password manager logs only that the shared password was retrieved — not which team member used it to authenticate. The organisation cannot meet its regulatory obligation to demonstrate who performed the governance action. Consequence: 28,000-record data breach, FCA enforcement action for inadequate controls and lack of individual accountability, ICO investigation for GDPR breach, 8-person investigation consuming 3 weeks of security team capacity.
Scenario B — Shared Service Account Creates Revocation Dilemma: A shared service account "data-pipeline" is used by 4 different microservices to submit data to AI agents. One of the microservices is compromised and begins submitting manipulated data. The security team identifies the shared account as the source but cannot revoke it without disconnecting all 4 microservices — including the 3 that are operating legitimately. The compromised service continues to operate for 6 hours while the team provisions individual service accounts for the legitimate services. During this window, the compromised service submits 2,400 manipulated data points.
What went wrong: The shared service account prevented granular revocation. Revoking the compromised service required disrupting legitimate services. Individual service accounts would have allowed the compromised service to be revoked in minutes without affecting the others. Consequence: 6-hour remediation delay, 2,400 manipulated data points processed by agents, potential downstream impact on agent decisions.
Scenario C — Shared Account Defeats MFA and Device Binding: An organisation implements FIDO2 MFA for agent governance access. A team lead registers their FIDO2 key for the shared "finance-ops" account used by their team, then leaves the key plugged into a shared workstation so that any team member can authenticate with it. The MFA is technically enforced but provides no individual accountability. When a disputed mandate approval occurs, the FIDO2 attestation proves the key was used but not which person used it.
What went wrong: The shared account neutralised the MFA control. FIDO2 binds authentication to a device and key, not to a person. When the account is shared, the key authenticates the account, not the individual. The organisation has invested in FIDO2 infrastructure but gained no individual accountability because the underlying account is shared. Consequence: FIDO2 investment undermined, disputed mandate approval unresolvable, audit finding for shared account use with governance controls.
Scope: This dimension applies to every account — human or service — used to access any AI agent governance system. "Governance system" includes: agent management platforms, mandate approval workflows, configuration management systems, override authorisation systems, monitoring dashboards with write access, and any system through which agent behaviour, configuration, or authority can be viewed or modified. The scope extends to administrative accounts, service accounts, API keys, SSH keys, and any other authentication credential used for governance access. It applies regardless of the governance system's hosting model (on-premises, cloud, hybrid, SaaS).
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
4.1. A conforming system MUST assign every human user a unique, individual account for agent governance access. Human account credentials (usernames, passwords, keys, certificates, and tokens) MUST NOT be shared between individuals.
4.2. A conforming system MUST assign every automated service a unique, individual service identity (per AG-280) for agent governance interaction. Service credentials MUST NOT be shared between distinct services.
4.3. A conforming system MUST implement technical controls that detect and prevent shared account usage, including: monitoring for concurrent sessions from different devices or locations on the same account, detecting credential sharing through password manager sharing logs where available, and alerting on authentication patterns inconsistent with individual use (e.g., an account authenticating from 3 cities within 1 hour).
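The detection controls in 4.3 can be sketched as a batch check over authentication events. This is an illustrative sketch only: the event schema, the thresholds, and the `flag_sharing_indicators` helper are assumptions of this example, not part of the requirement, and a production deployment would run equivalent logic in a SIEM over live identity-provider logs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AuthEvent:
    account: str
    device_id: str
    city: str
    timestamp: datetime

def flag_sharing_indicators(events: list[AuthEvent],
                            window: timedelta = timedelta(hours=1),
                            max_distinct: int = 2) -> list[str]:
    """Return accounts whose recent authentications are inconsistent with
    individual use: more than `max_distinct` cities or devices within
    `window` (e.g. three cities inside one hour, per 4.3)."""
    by_account: dict[str, list[AuthEvent]] = {}
    for e in events:
        by_account.setdefault(e.account, []).append(e)

    flagged = []
    for account, evs in by_account.items():
        evs.sort(key=lambda e: e.timestamp)
        for i, start in enumerate(evs):
            # All events within `window` of this one, including itself.
            recent = [x for x in evs[i:] if x.timestamp - start.timestamp <= window]
            cities = {x.city for x in recent}
            devices = {x.device_id for x in recent}
            if len(cities) > max_distinct or len(devices) > max_distinct:
                flagged.append(account)
                break
    return flagged
```

In practice the same account authenticating from three cities within the window would be routed to an alert queue for the revocation workflow in 4.4.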
4.4. A conforming system MUST revoke governance access for any account determined to be shared, pending re-provisioning of individual accounts for each user.
4.5. A conforming system MUST prohibit the creation of generic, role-based, or group accounts (e.g., "admin," "operator," "team-finance") for agent governance access. Where a role-based access model is used, roles are assigned to individual accounts: the role may be held by many people, but each account belongs to exactly one.
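A provisioning-time guard for 4.5 might look like the following sketch. The denylist patterns and the `validate_account_name` helper are illustrative assumptions; a real policy engine would also verify that the requested account maps to exactly one proofed identity (per AG-279) before creation.

```python
import re

# Illustrative denylist of generic/role/group name patterns (an assumption
# of this sketch; real deployments would maintain this in policy config).
GENERIC_PATTERNS = [
    r"^admin", r"^root$", r"^operator$", r"^ops", r"^team[-_.]",
    r"[-_.]team$", r"^svc[-_.]?shared", r"^finance[-_.]ops$", r"^support$",
]

def validate_account_name(name: str) -> None:
    """Reject generic, role, or group account names at provisioning time.
    Roles are then granted to the individual account through RBAC."""
    lowered = name.lower()
    for pattern in GENERIC_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError(f"Generic/group account name rejected: {name!r}")
```

A name check alone cannot prove an account is individual, so this guard complements, rather than replaces, the usage-pattern detection required by 4.3.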
4.6. A conforming system SHOULD implement just-in-time (JIT) privileged access provisioning for high-risk governance actions, granting individual users temporary elevated privileges tied to a specific request, rather than maintaining standing privileged shared accounts.
4.7. A conforming system SHOULD audit all governance accounts at least quarterly to identify and remediate: orphaned accounts (no longer associated with an active individual or service), dormant accounts (no activity in 90 days), and accounts with excessive privileges.
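The quarterly review in 4.7 reduces to a classification pass over the account inventory. The `Account` record and the 90-day dormancy threshold below follow the wording of 4.7; the field names and the `audit_accounts` helper are otherwise assumptions of this sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    name: str
    owner_active: bool        # still mapped to an active person or service?
    last_activity: datetime

def audit_accounts(accounts: list[Account], now: datetime,
                   dormant_after: timedelta = timedelta(days=90)) -> dict:
    """Classify governance accounts for quarterly review (4.7):
    orphaned (no active owner) and dormant (no activity in 90 days)."""
    findings = {"orphaned": [], "dormant": []}
    for a in accounts:
        if not a.owner_active:
            findings["orphaned"].append(a.name)
        elif now - a.last_activity > dormant_after:
            findings["dormant"].append(a.name)
    return findings
```

Excess-privilege review is deliberately omitted here, since it requires comparing granted roles against role definitions rather than inspecting the account record alone.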
4.8. A conforming system SHOULD integrate account lifecycle with HR systems (for human accounts) and service registries (for service accounts) to automate provisioning and de-provisioning based on role changes and employment/service status.
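The lifecycle automation in 4.8 is typically event-driven: the HR system emits termination and role-change events, and a handler updates the identity directory. The event schema, the `InMemoryDirectory` stand-in, and the `on_hr_event` handler below are all illustrative assumptions; a real integration would call an IdP or SCIM API.

```python
class InMemoryDirectory:
    """Stand-in for the identity directory used by this sketch."""

    def __init__(self) -> None:
        self.disabled: set[str] = set()
        self.roles: dict[str, list[str]] = {}

    def disable_account(self, employee_id: str) -> None:
        self.disabled.add(employee_id)

    def set_roles(self, employee_id: str, roles: list[str]) -> None:
        self.roles[employee_id] = roles

def on_hr_event(event: dict, directory: InMemoryDirectory) -> None:
    """Apply an HR lifecycle event to the governance directory:
    terminations de-provision the individual account immediately;
    role changes re-scope that account rather than moving the user
    onto any group credential (4.8)."""
    if event["type"] == "termination":
        directory.disable_account(event["employee_id"])
    elif event["type"] == "role_change":
        directory.set_roles(event["employee_id"], event["new_roles"])
```

Because each event targets one individual account, de-provisioning never has the collateral impact described in Scenario B, where revoking a shared credential would have disconnected legitimate users.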
4.9. A conforming system MAY implement break-glass emergency access through individually attributed emergency accounts (one per authorised emergency responder) rather than shared emergency credentials.
Individual accountability is a foundational principle of governance, audit, and security. Every governance framework — from ISO 27001 to FCA SYSC to SOX — assumes that actions can be attributed to specific individuals. Shared accounts violate this assumption at the most basic level.
The problems with shared accounts in AI agent governance are both operational and structural:
Attribution failure. When a shared account is used for a governance action, the organisation knows which account was used but not which person used it. For regulatory accountability frameworks — FCA Senior Managers Regime, SOX officer certifications, GDPR controller obligations — this is a control failure. The regulator asks "who approved this mandate?" and the organisation answers "one of eight people." This is not compliance.
Non-repudiation defeat. AG-287 requires cryptographic non-repudiation evidence for material governance actions. Non-repudiation proves that a specific person performed a specific action. If the account is shared, the cryptographic proof demonstrates that the shared credential was used — it does not prove which individual used it. Non-repudiation is meaningless without individual accounts.
Access revocation inability. When a team member leaves the organisation, is reassigned, or is compromised, their access must be revoked. With individual accounts, revocation is immediate and affects only the departing individual. With shared accounts, the options are: revoke the shared credential (disrupting all users), change the shared password (requiring coordination with all remaining users), or do nothing (leaving the departed individual with continued access). In practice, organisations do nothing, and former employees retain access indefinitely.
Audit trail corruption. Audit logs that attribute actions to shared accounts cannot be used to reconstruct individual decision chains. An auditor tracing the approval history for a financial agent's mandate finds "finance-ops approved at 14:22" — an entry that identifies the credential but not the decision-maker. That is not audit evidence; it is an absence of audit data.
MFA degradation. As Scenario C illustrates, MFA applied to a shared account authenticates the credential, not the person. The MFA investment is wasted because the individual identity dimension is missing.
The prohibition on shared accounts is not new — it is required by ISO 27001 (A.5.16), PCI DSS (Requirement 8.5), and most regulatory frameworks. AG-288 applies this established principle specifically to AI agent governance, where the consequences of unattributable actions are amplified by the speed and scale at which agents operate.
Eliminating shared accounts requires both technical controls (preventing shared use) and organisational processes (provisioning individual accounts, managing their lifecycle).
Recommended patterns:
Anti-patterns to avoid:
Financial Services. FCA SYSC and the Senior Managers Regime require individual accountability for financial decisions. Shared governance accounts for AI agents performing financial operations are a prima facie control deficiency. PCI DSS Requirement 8.5 explicitly prohibits shared credentials for systems processing cardholder data.
Healthcare. HIPAA requires unique user identification (Section 164.312(a)(2)(i)). Shared accounts for governance of clinical AI agents violate this requirement.
Critical Infrastructure. IEC 62443 requires unique identification of all users of industrial control systems. Shared accounts in OT governance environments are a Security Level violation.
Basic Implementation — Every human has an individual governance account. No generic or team accounts exist for governance access. Individual accounts are assigned roles through RBAC. Account provisioning and de-provisioning are manual processes with defined SLAs (provisioning within 2 business days, de-provisioning within 1 business day of departure). Quarterly account audits identify orphaned and dormant accounts. This meets minimum mandatory requirements but relies on manual processes.
Intermediate Implementation — All basic capabilities plus: service accounts are individually assigned (per AG-280). Concurrent session detection alerts on potential credential sharing. JIT privileged access replaces standing privileged accounts for high-risk governance actions. Account lifecycle is automated through HR system integration, with de-provisioning within 4 hours of employment termination. Individual break-glass accounts replace shared emergency credentials.
Advanced Implementation — All intermediate capabilities plus: real-time credential sharing detection using behavioural analytics (authentication velocity, device diversity, temporal patterns). Zero standing privileges — all governance access is JIT provisioned. Independent adversarial testing confirms that shared account use is detectable and preventable. The organisation can demonstrate that every governance action in the audit log is attributable to a single, identified individual or service.
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Shared Account Detection — Concurrent Sessions
Test 8.2: Generic Account Creation Prevention
Test 8.3: De-Provisioning Timeliness
Test 8.4: JIT Privilege Expiration
Test 8.5: Individual Attribution in Audit Log
Test 8.6: Break-Glass Individual Attribution
Test 8.7: Quarterly Audit Effectiveness
| Regulation | Provision | Relationship Type |
|---|---|---|
| ISO 27001 | A.5.16 (Identity Management) | Direct requirement |
| PCI DSS | Requirement 8.5 (v3.2.1) / 8.2.2 (v4.0) (Shared Credentials) | Direct requirement |
| FCA SYSC | 6.1.1R (Systems and Controls) | Direct requirement |
| HIPAA | Section 164.312(a)(2)(i) (Unique User Identification) | Direct requirement |
| SOX | Section 404 (Internal Controls Over Financial Reporting) | Supports compliance |
| GDPR | Article 5(2) (Accountability Principle) | Supports compliance |
| NIS2 Directive | Article 21 (Cybersecurity Risk Management Measures) | Supports compliance |
| IEC 62443 | SR 1.1 (Human User Identification and Authentication) | Direct requirement |
ISO 27001 Annex A control A.5.16 requires that unique identities are assigned to individuals and that sharing of identities is controlled (with a strong expectation of prohibition for administrative access). AG-288 directly implements this control for AI agent governance, extending it to service identities and prohibiting shared accounts without exception for governance actions.
PCI DSS restricts group, shared, and generic accounts: Requirement 8.5 of v3.2.1 prohibited shared IDs outright, and Requirement 8.2.2 of v4.0 permits them only as documented, time-limited exceptions. AG-288 takes the stronger position of prohibiting shared accounts entirely for agent governance. Where AI agents process cardholder data, shared governance accounts violate both PCI DSS and AG-288.
The FCA expects firms to demonstrate individual accountability for governance decisions. Under the Senior Managers Regime, specific individuals are accountable for specific business areas. Shared accounts that prevent individual attribution are inconsistent with the regime's accountability model.
HIPAA requires that entities "assign a unique name and/or number for identifying and tracking user identity." Shared accounts violate this requirement for governance of AI agents that process protected health information.
IEC 62443 requires unique identification and authentication for all human users of industrial automation and control systems. Shared accounts for governance of AI agents in OT environments violate this security requirement.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Organisation-wide — shared accounts undermine individual accountability for every governance action performed through the shared credential |
Consequence chain: Shared accounts create a systemic governance failure that compounds over time. Every governance action performed through a shared account is unattributable — the organisation knows what was done but not who did it. For AI agent governance, this means that mandate approvals, configuration changes, and override authorisations cannot be attributed to individuals. Regulatory frameworks that require personal accountability (FCA Senior Managers Regime, SOX) cannot be satisfied. Non-repudiation (AG-287) is defeated because the cryptographic proof demonstrates the shared credential was used, not which individual used it. Access revocation is impractical — revoking a shared account affects all users. Incident investigation is impeded — when something goes wrong, the investigation cannot narrow beyond "one of N people who have the shared credential." The severity is rated High rather than Critical because the shared account is an enabler of other failures rather than a direct cause of harm, but its pervasive impact on attribution and accountability justifies the High rating.
Cross-references: AG-279 (Human Identity Proofing Governance) is meaningful only if each proofed identity has a unique account — shared accounts nullify the proofing investment. AG-287 (Non-Repudiation Evidence Governance) requires individual attribution; shared accounts make non-repudiation impossible. AG-016 (Cryptographic Action Attribution) signs actions with credential-bound keys — shared credentials attribute to the credential, not the person. AG-280 (Service Identity Proofing Governance) extends the individual account requirement to services. AG-281 (Device Identity Binding Governance) binds sessions to devices — but device binding on a shared account proves which device, not which person. AG-285 (Session Binding Governance) binds sessions to authentication context — shared accounts corrupt the "who" dimension of that context. AG-012 (Agent Identity Assurance) ensures agents themselves are individually identified — AG-288 ensures the humans governing them are too.