AG-279

Human Identity Proofing Governance

Identity, Authentication & Non-Repudiation · AGS v2.1 · April 2026
Regulatory context: EU AI Act, GDPR, SOX, FCA, NIST, HIPAA, eIDAS

2. Summary

Human Identity Proofing Governance requires that every human operator, approver, or administrator who exercises control rights over an AI agent system has been verified to be who they claim to be through a formal identity proofing process before those rights are granted. The identity proofing process must be commensurate with the risk level of the control rights being conferred — a supervisor who can approve £500,000 payment mandates requires stronger identity proofing than a viewer who can only read audit logs. Without robust human identity proofing, the entire chain of accountability collapses: if the person who approved an agent's mandate cannot be reliably identified, then the mandate itself lacks a verifiable authorisation chain, and regulatory accountability frameworks such as the FCA Senior Managers Regime or SOX officer certifications become unenforceable.

3. Example

Scenario A — Social Engineering Bypasses Weak Identity Proofing: An organisation deploys an AI procurement agent governed by AG-001 mandates. The mandate approval workflow requires a "finance director" to digitally approve mandate changes. The onboarding process for the finance director role requires only a corporate email address and a self-asserted name. An attacker compromises a corporate email account, registers as "Jane Smith, Finance Director," and approves a mandate change raising the agent's per-transaction limit from £25,000 to £750,000. The agent subsequently processes a fraudulent invoice for £680,000.

What went wrong: The identity proofing process accepted self-asserted identity without independent verification. The email account compromise was sufficient to impersonate a senior approver. No document verification, no in-person or video-call verification, and no cross-reference against the organisation's HR records occurred. Consequence: £680,000 in fraudulent payments, regulatory investigation under FCA SYSC 6.1.1R for inadequate systems and controls, personal liability risk for the actual finance director under the Senior Managers Regime, and insurance claim contested on grounds of negligent access provisioning.

Scenario B — Stale Identity Proofing After Role Change: A compliance officer who was identity-proofed at NIST IAL2 (document verification plus selfie match) leaves the compliance function and moves to a marketing role. The organisation's access management system does not trigger re-proofing or privilege revocation. The former compliance officer retains approval rights over agent governance configurations. Six months later, the account is used — whether by the original person or by someone who has obtained the credentials — to approve an agent mandate that relaxes data-handling restrictions, leading to a GDPR breach affecting 45,000 customer records.

What went wrong: Identity proofing was treated as a one-time event rather than a lifecycle process. The proofing was valid at the time of the original role assignment but became stale when the role changed. No re-verification trigger existed for role transitions. Consequence: ICO enforcement action, €2.1 million GDPR fine, mandatory breach notification to 45,000 data subjects, reputational damage.

Scenario C — Contractor Identity Gap: An organisation hires a third-party contractor to manage AI agent configurations. The contractor's employees are onboarded through a streamlined process that skips identity document verification — the contracting firm vouches for them verbally. A contractor employee with unverified identity configures an agent's financial mandate incorrectly, setting a daily aggregate limit of £5,000,000 instead of £50,000. The error is discovered only after the agent has processed £3.2 million in a single day.

What went wrong: The identity proofing for the contractor was delegated to the contracting firm without verification standards. The organisation could not confirm who actually made the configuration change because the contractor's identity was never independently verified. Consequence: £3.15 million in excess exposure, inability to pursue personal accountability, audit finding for inadequate third-party identity controls.

4. Requirement Statement

Scope: This dimension applies to every human who is granted control rights over any AI agent system — including but not limited to: mandate approvers, configuration administrators, governance auditors with write access, emergency override operators, and any person whose identity is referenced in an agent's authorisation chain. It extends to contractors, third-party service providers, and any external party granted access to agent governance functions. It does not apply to end-users who merely interact with a customer-facing agent without exercising control over the agent's governance configuration, unless that interaction involves identity-dependent authorisation decisions (e.g., a customer approving a transaction through an agent). The scope includes both initial identity proofing at onboarding and ongoing re-verification throughout the identity lifecycle.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

4.1. A conforming system MUST verify the identity of every human granted control rights over agent governance before those rights become active, using an identity proofing process that includes at minimum verification of a government-issued identity document and correlation with an authoritative source (e.g., HR system of record, corporate directory with verified onboarding).

4.2. A conforming system MUST assign an Identity Assurance Level (IAL) to each human identity commensurate with the risk of the control rights granted, following NIST SP 800-63A or equivalent: IAL1 for read-only audit access, IAL2 for standard administrative access, IAL3 for high-value mandate approval and emergency override rights.

4.3. A conforming system MUST reject any governance action — including mandate approval, configuration change, or override — where the acting human's identity proofing level is below the minimum required for that action type.
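
Requirements 4.2 and 4.3 together define an enforcement rule: every governance action has a minimum IAL, and an actor below that level is rejected. A minimal sketch of that check follows; the action names and the `MIN_IAL_FOR_ACTION` mapping are illustrative assumptions, not part of this specification.

```python
from enum import IntEnum

class IAL(IntEnum):
    """NIST SP 800-63A Identity Assurance Levels."""
    IAL1 = 1  # self-asserted identity
    IAL2 = 2  # remote or in-person evidence verification
    IAL3 = 3  # supervised remote or in-person, enhanced evidence

# Hypothetical mapping of governance action types to minimum IALs,
# following the risk tiers named in requirement 4.2.
MIN_IAL_FOR_ACTION = {
    "read_audit_log": IAL.IAL1,
    "configuration_change": IAL.IAL2,
    "mandate_approval": IAL.IAL3,
    "emergency_override": IAL.IAL3,
}

def authorise_governance_action(actor_ial: IAL, action: str) -> bool:
    """Reject any action whose required IAL exceeds the actor's
    proofed level, per requirement 4.3."""
    required = MIN_IAL_FOR_ACTION.get(action)
    if required is None:
        raise ValueError(f"unknown governance action: {action}")
    return actor_ial >= required
```

An IAL2-proofed administrator would pass for a configuration change but be rejected for a mandate approval, which requires IAL3 under this illustrative mapping.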

4.4. A conforming system MUST re-verify human identity when control rights are escalated, when the individual's organisational role changes, or at a maximum interval of 12 months, whichever comes first.
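
The three re-verification triggers in 4.4 (escalation, role change, 12-month interval) reduce to a single predicate. A sketch, assuming the caller supplies the role-change and escalation flags from its own access-management events:

```python
from datetime import date, timedelta

# 12-month maximum proofing age, per requirement 4.4.
MAX_PROOFING_AGE = timedelta(days=365)

def reverification_due(last_proofed: date, role_changed: bool,
                       rights_escalated: bool, today: date) -> bool:
    """Return True when any re-proofing trigger in requirement 4.4 fires:
    role change, privilege escalation, or proofing older than 12 months."""
    if role_changed or rights_escalated:
        return True
    return today - last_proofed > MAX_PROOFING_AGE
```

Scenario B above is exactly the case where `role_changed` should have fired this predicate and revoked the stale proofing.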

4.5. A conforming system MUST maintain a tamper-evident log linking every governance action to the proofed identity that performed it, with sufficient detail to support regulatory attribution.
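
One common way to make the log in 4.5 tamper-evident is hash chaining: each entry commits to the digest of its predecessor, so deleting or rewriting any record invalidates every subsequent digest. This is a minimal in-memory sketch, not a prescribed implementation; production systems would typically add signatures and durable storage.

```python
import hashlib
import json

class TamperEvidentLog:
    """Hash-chained governance log: each entry embeds the previous
    entry's digest, so any retroactive edit breaks verification."""

    def __init__(self) -> None:
        self.entries: list[tuple[dict, str]] = []
        self._prev = "0" * 64  # genesis hash

    def append(self, actor_id: str, action: str, detail: dict) -> str:
        record = {"actor": actor_id, "action": action,
                  "detail": detail, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append((record, digest))
        self._prev = digest
        return digest

    def verify(self) -> bool:
        """Recompute every digest and check the chain links."""
        prev = "0" * 64
        for record, digest in self.entries:
            if record["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True
```

The `actor_id` recorded here would be the proofed identity required by 4.1, giving the "sufficient detail to support regulatory attribution" that 4.5 demands.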

4.6. A conforming system SHOULD implement real-time identity verification at the point of high-risk governance actions (e.g., step-up authentication with liveness detection before approving a mandate change exceeding £100,000).
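
The step-up trigger in 4.6 is a simple threshold test gating the high-risk path. A sketch under the assumption that the system classifies actions by name and denominates amounts in pounds; both names are illustrative:

```python
# Step-up threshold from the example in requirement 4.6.
STEP_UP_THRESHOLD_GBP = 100_000

def requires_step_up(action: str, amount_gbp: int) -> bool:
    """Return True when requirement 4.6's real-time re-verification
    (e.g. liveness check) should gate the governance action."""
    return action == "mandate_approval" and amount_gbp > STEP_UP_THRESHOLD_GBP
```

When this returns True, the approver completes a fresh liveness check before the mandate change proceeds; below the threshold, standard session authentication suffices.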

4.7. A conforming system SHOULD cross-reference identity proofing results against sanctions lists, PEP databases, and adverse media where the control rights involve financial agent governance.

4.8. A conforming system SHOULD support federated identity proofing where third-party organisations provide identity assertions, provided the relying party validates the assertion against a documented trust framework with minimum proofing standards.

4.9. A conforming system MAY accept identity proofing performed by a trusted third party (e.g., a regulated identity provider or a government digital identity scheme) provided the proofing level meets or exceeds the required IAL for the control rights being granted.
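
Requirements 4.8 and 4.9 allow third-party proofing only when the assertion is validated against a documented trust framework. A sketch of that relying-party check, where the trust framework is modelled as a per-provider cap on the IAL the relying party will accept; the provider names are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical trust framework: provider -> maximum IAL the relying
# party will accept from that provider (requirements 4.8 / 4.9).
TRUSTED_PROVIDERS = {
    "gov-digital-id": 3,   # e.g. a government digital identity scheme
    "regulated-idp": 2,    # e.g. a regulated commercial identity provider
}

@dataclass
class IdentityAssertion:
    provider: str
    subject: str
    asserted_ial: int

def accept_assertion(assertion: IdentityAssertion, required_ial: int) -> bool:
    """Accept a third-party proofing assertion only if the provider is in
    the trust framework and its capped IAL meets the required level."""
    cap = TRUSTED_PROVIDERS.get(assertion.provider)
    if cap is None:
        return False  # no documented trust framework for this provider
    effective_ial = min(assertion.asserted_ial, cap)
    return effective_ial >= required_ial
```

Capping the asserted level at the provider's framework maximum prevents a provider from vouching for a higher assurance level than the relying party has agreed to trust, which is the failure mode in Scenario C's verbal vouching.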

5. Rationale

Human Identity Proofing Governance addresses a foundational vulnerability in AI agent governance: if the humans who control agent behaviour cannot be reliably identified, then every downstream governance control is built on an unverified foundation. AG-001 requires mandates. AG-007 requires configuration control. AG-016 requires cryptographic action attribution. All of these assume that the human identity behind the governance action is genuine. AG-279 ensures that assumption holds.

The distinction between authentication and identity proofing is critical. Authentication verifies that a returning user is the same person who previously enrolled. Identity proofing verifies that the person who enrolled is who they claim to be. An organisation can have flawless authentication — multi-factor, phishing-resistant, hardware-token-based — and still have a catastrophic identity proofing gap if the original enrolment accepted an unverified identity. The attacker who registers with a stolen email and a false name will subsequently authenticate perfectly with their own credentials. The authentication system works correctly; the identity proofing failure is upstream.

In the context of AI agent governance, this gap is amplified by two factors. First, AI agents can act at machine speed — a fraudulently authorised mandate can cause damage in seconds. Second, regulatory accountability frameworks require traceable attribution to named individuals. The FCA Senior Managers Regime, SOX officer certifications, and GDPR controller obligations all depend on knowing who authorised what. If the identity behind the authorisation is unverified, the accountability chain breaks.

The risk scales with the control rights. A viewer who can read agent logs poses limited risk even with unverified identity. An administrator who can modify agent mandates, change enforcement thresholds, or approve override actions poses existential risk if their identity is not genuine. AG-279 therefore requires risk-proportionate proofing: higher-risk control rights require higher identity assurance levels.

6. Implementation Guidance

Human identity proofing for agent governance control rights should follow established identity proofing standards, adapted to the specific requirements of AI agent governance. The implementation should address the full identity lifecycle: initial proofing, credential issuance, ongoing re-verification, and de-provisioning.

Recommended patterns:

Anti-patterns to avoid:

Industry Considerations

Financial Services. FCA-regulated firms must ensure identity proofing for agent governance roles meets the same standards as for human employees performing equivalent functions. Where an agent's mandate approver is subject to the Senior Managers Regime, their identity must be proofed to a level that supports regulatory attribution. Integration with existing KYC/KYB infrastructure is recommended — the same document verification and sanctions screening processes used for customer onboarding can be adapted for internal governance roles.

Healthcare. HIPAA requires that access to protected health information be granted only to verified individuals. Where AI agents handle PHI under human governance, the humans exercising that governance must be identity-proofed to standards consistent with HIPAA's "reasonable safeguards" requirement. State medical board verification may be required for clinicians governing clinical AI agents.

Public Sector. Government digital identity schemes (e.g., GOV.UK Verify successor, EU eIDAS) provide standardised identity proofing at defined assurance levels. Public sector organisations should leverage these where available, ensuring the assurance level meets the control-right risk tier.

Critical Infrastructure. Personnel with governance authority over AI agents in critical infrastructure (energy, transport, water) should be subject to background checks and security clearance requirements in addition to identity proofing, consistent with sector-specific regulations (e.g., NIS2 Directive, NERC CIP).

Maturity Model

Basic Implementation — The organisation requires identity document submission for all personnel granted agent governance control rights. Documents are manually reviewed by an HR or security team member. The proofing result is recorded but not linked to the access control system — role assignment is a separate manual process. Re-verification occurs only at annual access reviews. This meets the minimum mandatory requirements but is vulnerable to human error in document verification, lacks automated enforcement of proofing-level-to-role mapping, and has a 12-month window where stale identities may retain control rights after role changes.

Intermediate Implementation — Identity proofing uses automated document verification with NFC chip reading (for ePassports) and real-time selfie matching with liveness detection, achieving consistent IAL2. The proofing result is cryptographically linked to the user's governance credential. The access control system enforces a proofing-level-to-role mapping — users cannot be assigned control rights exceeding their proofed assurance level. Role changes and privilege escalations trigger automated re-verification workflows. Third-party identity assertions are validated against a documented trust framework. Proofing events are logged in a tamper-evident audit trail.

Advanced Implementation — All intermediate capabilities plus: IAL3 proofing (supervised remote or in-person with trained verifiers) is implemented for high-value mandate approvers and emergency override operators. Real-time step-up identity verification occurs at the point of high-risk governance actions — the approver must complete a fresh liveness check before the action proceeds. Identity proofing results are cross-referenced against sanctions lists, PEP databases, and adverse media feeds with continuous monitoring. The proofing infrastructure is independently audited annually. Hardware security modules protect identity proofing keys. The organisation can demonstrate to regulators a complete, unbroken identity chain from government-issued document to every governance action in the audit log.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Proofing-Level Enforcement

Test 8.2: Governance Action Rejection for Under-Proofed Identity

Test 8.3: Re-Verification Trigger on Role Change

Test 8.4: Document Verification Integrity

Test 8.5: Liveness Detection Resistance

Test 8.6: Tamper-Evident Audit Trail

Test 8.7: Federated Identity Assertion Validation

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU AI Act | Article 14 (Human Oversight) | Supports compliance
eIDAS 2.0 | Article 6a (European Digital Identity Wallets) | Direct requirement
NIST SP 800-63A | Identity Assurance Levels (IAL1–IAL3) | Direct requirement
FCA SYSC | 6.1.1R (Systems and Controls) | Direct requirement
GDPR | Article 5(1)(f) (Integrity and Confidentiality) | Supports compliance
SOX | Section 404 (Internal Controls Over Financial Reporting) | Supports compliance
NIS2 Directive | Article 21 (Cybersecurity Risk Management Measures) | Supports compliance

EU AI Act — Article 9 (Risk Management System)

Article 9 requires providers of high-risk AI systems to establish risk management systems that include measures to mitigate identified risks. Unverified human identities in the governance chain represent a risk to the integrity of the entire governance framework. AG-279 mitigates this by ensuring that every human exercising control rights has been proofed to a level commensurate with the risk of those rights. Without this, the human oversight required by Article 14 rests on unverified identity claims.

eIDAS 2.0 — Article 6a (European Digital Identity Wallets)

eIDAS 2.0 establishes a framework for European Digital Identity Wallets that provide standardised identity proofing at defined assurance levels. Organisations operating in the EU should prepare to accept eIDAS-compliant identity assertions for agent governance control rights, ensuring the assurance level of the wallet-based assertion meets the required IAL for the control rights being granted.

NIST SP 800-63A — Identity Assurance Levels

NIST SP 800-63A defines the identity proofing requirements for IAL1 (self-asserted), IAL2 (remote or in-person with evidence verification), and IAL3 (in-person or supervised remote with enhanced evidence verification). AG-279 directly maps its proofing requirements to these levels, providing a standardised framework for risk-proportionate identity proofing.

FCA SYSC — 6.1.1R (Systems and Controls)

SYSC 6.1.1R requires firms to establish adequate systems and controls. Where AI agents are governed by human approvers subject to the Senior Managers Regime, the identity of those approvers must be proofed to a standard that supports regulatory attribution. A Senior Manager who cannot be reliably identified cannot be held accountable under the regime.

GDPR — Article 5(1)(f) (Integrity and Confidentiality)

Where AI agents process personal data under human governance, the integrity of that governance depends on the verified identity of the governors. Unverified identities in the governance chain undermine the integrity principle. AG-279 supports GDPR compliance by ensuring that governance actions affecting personal data are attributable to verified individuals.

SOX — Section 404 (Internal Controls Over Financial Reporting)

SOX Section 404 requires effective internal controls. For AI agents performing financial operations, the identity of the human who approved the agent's mandate is a critical control element. If that identity is unverified, the control is deficient. Auditors will require evidence that mandate approvers were identity-proofed to an appropriate standard.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Organisation-wide — affects the integrity of every governance action performed by the unverified identity, potentially cascading to all agents whose mandates they approved

Consequence chain: Failure of human identity proofing allows an unverified or fraudulent identity to exercise governance control rights over AI agents. The immediate technical consequence is that governance actions — mandate approvals, configuration changes, override authorisations — are attributed to an identity that may not correspond to a real, accountable person. The operational consequence is that the entire governance chain for any agent affected by that identity's actions becomes unverifiable. If the identity was fraudulent, the actions they authorised may have been deliberately malicious — raising agent limits, relaxing data-handling restrictions, or approving overrides that would otherwise have been blocked. If the identity was merely unverified, the organisation cannot demonstrate to regulators who authorised what. The regulatory consequence includes enforcement action for inadequate controls (FCA SYSC), control deficiency findings (SOX 404), and potential GDPR violations if personal data was processed under governance authorised by an unverified identity. The financial consequence scales with the control rights of the unverified identity: an unverified mandate approver for a financial agent with a £5,000,000 daily limit creates £5,000,000 in daily unattributable exposure.

Cross-references: AG-012 (Agent Identity Assurance) establishes the parallel requirement for agent identities; AG-279 addresses the human side of the identity chain. AG-016 (Cryptographic Action Attribution) depends on AG-279 for the integrity of the human identity behind cryptographic signatures. AG-029 (Credential Integrity Verification) ensures the credentials issued after identity proofing remain valid. AG-161 (Requester Authentication and Anti-Impersonation) builds on the proofed identity to prevent impersonation at the authentication layer. AG-115 (Strong Authentication for Agent-Initiated Value Transfer) requires authenticated human approval for high-value transfers, which is only meaningful if the human's identity has been proofed. Within this landscape, AG-280 extends similar proofing requirements to service identities, AG-282 and AG-283 address biometric and deepfake threats to the proofing process itself, and AG-288 prohibits shared accounts that undermine individual identity proofing.

Cite this protocol
AgentGoverning. (2026). AG-279: Human Identity Proofing Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-279