AG-280

Service Identity Proofing Governance

Identity, Authentication & Non-Repudiation · ~15 min read · AGS v2.1 · April 2026
Tags: EU AI Act, FCA, NIST, HIPAA

2. Summary

Service Identity Proofing Governance requires that every automated service, bot, API client, or machine-to-machine requester that interacts with an AI agent system be strongly identified and attested before it is granted access. Where AG-279 addresses human identity, AG-280 addresses the parallel challenge for non-human actors: services that submit requests to agents, consume agent outputs, or participate in agent governance workflows. A service that cannot be reliably identified can impersonate a legitimate upstream system, inject fraudulent instructions, or exfiltrate data through an unverified channel. In production environments where AI agents interact with dozens of microservices, APIs, and automated pipelines, the absence of service identity proofing creates an attack surface that scales with the number of service-to-service connections.

3. Example

Scenario A — Spoofed Upstream Service Injects Fraudulent Instructions: An AI procurement agent receives purchase requests from an internal ERP system via an API. The API authenticates using a shared API key stored in an environment variable. An attacker who gains access to the container orchestration platform extracts the API key and deploys a rogue service that impersonates the ERP system. The rogue service submits 47 fraudulent purchase orders totalling £312,000 over a weekend. The agent processes them because the API key is valid and the requests match the expected schema.

What went wrong: The API key authenticated the request but did not identify the service. Any service with the key could impersonate the ERP system. There was no attestation of the calling service's identity, no verification of the service's deployment origin, and no mutual TLS to bind the connection to a specific service certificate. Consequence: £312,000 in fraudulent procurement, inability to distinguish legitimate from fraudulent requests in the audit log, 72-hour incident response to identify all affected transactions.

Scenario B — Orphaned Service Account Exploited After Decommission: A data-enrichment microservice was deployed 18 months ago to feed market data to a financial trading agent. The microservice was decommissioned when the organisation switched data providers, but its service account and credentials were never revoked. An insider discovers the active credentials and uses them to submit manipulated market data to the trading agent, causing it to execute trades based on false signals. The agent loses £890,000 before the anomaly is detected.

What went wrong: The service identity was created but never lifecycle-managed. No attestation of the service's current deployment status was required. The credentials remained valid after the service ceased to exist. Consequence: £890,000 in trading losses, regulatory investigation for inadequate systems and controls, audit finding for orphaned service accounts.

Scenario C — Third-Party Service Identity Not Verified: An organisation integrates a third-party credit-scoring API with its lending agent. The integration uses OAuth 2.0 client credentials, but the organisation never verified that the OAuth client ID actually belongs to the claimed credit-scoring provider. An attacker registers a similar-sounding OAuth client and, through a DNS poisoning attack, redirects the agent's outbound requests to their endpoint. The attacker returns inflated credit scores, causing the agent to approve 23 loans totalling £1.4 million to unqualified borrowers.

What went wrong: The OAuth client credentials authenticated the service but the organisation never proofed the identity behind those credentials. The client ID was trusted without verification of the service operator's real-world identity. Consequence: £1.4 million in high-risk loan exposure, FCA enforcement action for inadequate credit-decision controls.

4. Requirement Statement

Scope: This dimension applies to every non-human actor that interacts with an AI agent system in a way that can affect agent behaviour or governance: upstream services that submit requests or data, downstream services that consume agent outputs and provide feedback, orchestration services that manage agent lifecycles, and any automated system that participates in agent governance workflows (e.g., automated approval bots, scheduled configuration updates). It extends to internal microservices, third-party APIs, partner integrations, and automated CI/CD pipelines that deploy or configure agents. It does not apply to passive monitoring systems that only read logs without write access, unless those systems could be impersonated to provide false monitoring data.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

4.1. A conforming system MUST assign a unique, non-shared identity to every service that interacts with any AI agent system, with an identity record that includes the service name, owning team, deployment environment, and the human identity (proofed per AG-279) responsible for the service.

4.2. A conforming system MUST authenticate every service-to-agent interaction using mutual TLS with per-service certificates, or an equivalent mechanism that cryptographically binds the request to a specific service identity.
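To illustrate requirement 4.2, the sketch below resolves the URI SAN of an mTLS peer certificate (in the shape returned by Python's `ssl.SSLSocket.getpeercert()`) to a registered service identity. The registry contents, the SPIFFE-style URI, and the `ServiceIdentity` fields are illustrative assumptions, not part of this protocol.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceIdentity:
    name: str
    owning_team: str
    environment: str
    responsible_human: str  # proofed per AG-279

# Registry keyed by the URI SAN carried in each per-service certificate.
# Entries here are hypothetical examples.
REGISTRY = {
    "spiffe://corp.example/erp-ingest": ServiceIdentity(
        "erp-ingest", "procurement-platform", "prod", "j.smith"),
}

def identify_peer(peer_cert: dict) -> ServiceIdentity:
    """Resolve the peer certificate's URI SAN to a registered identity.

    `peer_cert` has the shape returned by ssl.SSLSocket.getpeercert():
    {'subjectAltName': (('URI', 'spiffe://...'), ...), ...}
    """
    sans = [v for (k, v) in peer_cert.get("subjectAltName", ()) if k == "URI"]
    for uri in sans:
        if uri in REGISTRY:
            return REGISTRY[uri]
    raise PermissionError(f"unregistered service identity: {sans or 'no URI SAN'}")
```

The point of the binding is that possession of a valid certificate alone is not enough: the identity it carries must also resolve to a registry entry with a responsible human attached.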

4.3. A conforming system MUST verify the real-world identity of the operator behind any third-party service before granting it access to agent systems, using documented evidence such as corporate registration, contractual attestation, or a validated trust framework.

4.4. A conforming system MUST implement service identity lifecycle management including provisioning, rotation, suspension, and revocation, with automated revocation when a service is decommissioned.
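A minimal sketch of the 4.4 lifecycle, with revocation driven automatically by decommissioning (the control whose absence caused Scenario B). The state names, in-memory store, and `on_decommission` hook are assumptions for illustration; a real implementation would sit behind the organisation's secrets manager or PKI.

```python
class CredentialStore:
    """Tracks each service credential through provision/rotate/suspend/revoke."""

    def __init__(self):
        self._state = {}  # service name -> "active" | "suspended" | "revoked"

    def provision(self, service: str) -> None:
        self._state[service] = "active"

    def rotate(self, service: str) -> None:
        if self._state.get(service) != "active":
            raise ValueError(f"{service}: cannot rotate a non-active credential")
        # Issue a new secret and invalidate the old one (elided in this sketch).

    def suspend(self, service: str) -> None:
        self._state[service] = "suspended"

    def revoke(self, service: str) -> None:
        self._state[service] = "revoked"

    def on_decommission(self, service: str) -> None:
        """Hook called by the deployment platform: decommission => revocation."""
        self.revoke(service)

    def is_usable(self, service: str) -> bool:
        # Unknown, suspended, and revoked credentials are all unusable (per 4.5).
        return self._state.get(service) == "active"
```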

4.5. A conforming system MUST reject requests from any service whose identity cannot be verified or whose credentials have expired, been revoked, or are not recognised.

4.6. A conforming system SHOULD implement service attestation using SPIFFE/SPIRE or equivalent workload identity frameworks that bind service identity to the runtime environment (e.g., Kubernetes pod identity, cloud IAM role).
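For 4.6, a conforming verifier would typically check that an attested workload identity is well-formed and belongs to a trust domain the platform accepts. The sketch below applies that check to a SPIFFE ID of the form spiffe://trust-domain/workload-path; the accepted trust domain is a hypothetical example.

```python
from urllib.parse import urlparse

# Trust domains the agent platform is configured to accept (assumed value).
ACCEPTED_TRUST_DOMAINS = {"corp.example"}

def is_acceptable_spiffe_id(spiffe_id: str) -> bool:
    """True if the ID is a well-formed SPIFFE ID in an accepted trust domain."""
    parsed = urlparse(spiffe_id)
    return (
        parsed.scheme == "spiffe"
        and parsed.netloc in ACCEPTED_TRUST_DOMAINS
        and parsed.path not in ("", "/")  # must name a specific workload
    )
```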

4.7. A conforming system SHOULD enforce least-privilege access for each service identity, limiting each service to only the agent interactions required for its documented purpose.
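Requirement 4.7 reduces, in the simplest case, to an allowlist of agent operations per service identity. The permission table below is an illustrative assumption; in practice it would live in the service identity registry.

```python
# Each service identity is limited to the agent interactions required for its
# documented purpose (requirement 4.7). Hypothetical example entries:
PERMITTED_OPERATIONS = {
    "erp-ingest": {"submit_purchase_request"},
    "market-feed": {"submit_market_data"},
}

def authorize(service: str, operation: str) -> bool:
    """Deny by default: unknown services and undocumented operations fail."""
    return operation in PERMITTED_OPERATIONS.get(service, set())
```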

4.8. A conforming system SHOULD monitor for anomalous service behaviour that deviates from the service's documented interaction pattern (e.g., a data-feed service suddenly submitting configuration changes).
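One simple form of the 4.8 monitoring is volume-based: flag any service whose request count in the current window exceeds a multiple of its documented baseline, which would have surfaced the weekend burst of fraudulent orders in Scenario A. The baselines and the 5x threshold are illustrative assumptions.

```python
from collections import Counter

# Documented request baselines per service (assumed values for illustration).
BASELINE_PER_HOUR = {"erp-ingest": 40}
THRESHOLD = 5.0  # flag anything above 5x baseline

def anomalous_services(window_counts: Counter) -> list:
    """Return services whose observed volume deviates from the documented pattern."""
    flagged = []
    for service, count in window_counts.items():
        baseline = BASELINE_PER_HOUR.get(service)
        if baseline is None or count > THRESHOLD * baseline:
            # Services with no documented baseline are always anomalous.
            flagged.append(service)
    return flagged
```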

4.9. A conforming system MAY implement service identity federation across organisational boundaries using a documented trust framework with mutual attestation requirements.

5. Rationale

In modern AI agent deployments, the majority of interactions are machine-to-machine. A financial trading agent may receive market data from 15 data feeds, submit orders through 3 execution venues, report positions to 2 risk systems, and log actions to 4 audit systems — all through automated service-to-service connections. Each of these connections is a potential impersonation vector. If any of them can be spoofed, the agent can be fed false data, its outputs can be redirected, or its governance can be manipulated.

The challenge of service identity proofing is fundamentally different from human identity proofing. Services do not have government-issued identity documents. They are created, cloned, redeployed, and decommissioned at a pace that exceeds human identity lifecycles by orders of magnitude. A single Kubernetes cluster may create and destroy thousands of service instances per day. Each instance must be identified, authenticated, and authorised.

Traditional approaches — shared API keys, static credentials, IP-based allowlists — are insufficient for AI agent governance because they authenticate the credential, not the service. A shared API key proves that the requester has the key, not that the requester is the claimed service. IP-based allowlists fail in dynamic environments where services are redeployed to different addresses. Static credentials accumulate as orphaned accounts when services are decommissioned without cleanup.

Modern workload identity frameworks (SPIFFE/SPIRE, Kubernetes service accounts with bound tokens, cloud-native IAM roles) address these challenges by binding service identity to the runtime context. A SPIFFE identity (spiffe://trust-domain/service-name) is issued to a specific workload running in a specific environment, verified by the platform's attestation mechanisms. This makes impersonation substantially harder because the attacker must compromise the workload platform itself, not merely steal a credential.

AG-280 requires service identity proofing to ensure that every automated actor in the agent ecosystem is reliably identified, lifecycle-managed, and attributable to a responsible human. This is the foundation for AG-287 (Non-Repudiation Evidence Governance), which requires cryptographic evidence of who authorised what — "who" includes services as well as humans.

6. Implementation Guidance

Service identity proofing should integrate with the organisation's existing service mesh, API gateway, and workload orchestration infrastructure. The goal is to ensure that every service interaction with an agent is cryptographically bound to a verified service identity.
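As a starting point, the agent-side TLS configuration can refuse any connection that does not present a valid per-service client certificate (requirement 4.2). The sketch below uses Python's `ssl` module; the file paths are placeholders, and certificate issuance is assumed to be handled by the organisation's PKI or service mesh.

```python
import ssl

def make_agent_server_context(ca_path=None, cert_path=None, key_path=None):
    """Build a server-side TLS context that mandates client certificates."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # mutual TLS: a client cert is mandatory
    if cert_path and key_path:
        ctx.load_cert_chain(cert_path, key_path)  # the agent's own identity
    if ca_path:
        ctx.load_verify_locations(ca_path)  # CA that issues per-service certs
    return ctx
```

After the handshake, the verified peer certificate (via `getpeercert()`) carries the identity that must then be matched against the service identity registry.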

Recommended patterns:

- Mutual TLS with per-service certificates, issued and rotated automatically by the organisation's PKI or service mesh.
- Workload identity frameworks (SPIFFE/SPIRE, bound Kubernetes service-account tokens, cloud IAM roles) that attest the runtime environment rather than merely the credential.
- A service identity registry recording service name, owning team, deployment environment, and responsible human (per 4.1), integrated with the deployment platform so decommissioning triggers automated revocation.
- Least-privilege scopes per service identity, reviewed against each service's documented purpose.
- Identity proofing of third-party service operators before integration, with documented evidence (per 4.3).

Anti-patterns to avoid:

- Shared API keys or any credential used by more than one service (Scenario A).
- IP-based allowlists as the sole identification mechanism in dynamic environments.
- Static, long-lived credentials with no rotation schedule or expiry.
- Service accounts that outlive the services they were issued to (Scenario B).
- Trusting third-party OAuth client IDs without verifying the operator behind them (Scenario C).

Industry Considerations

Financial Services. Service-to-agent interactions in financial systems must comply with FCA requirements for systems and controls. Mutual TLS with per-service certificates is the recommended minimum. Services submitting market data, executing trades, or modifying risk parameters should be treated as critical infrastructure components with enhanced attestation requirements. SWIFT Customer Security Programme (CSP) requirements for secure messaging infrastructure provide a model for service identity standards.

Healthcare. Services handling PHI must be identified and authorised per HIPAA's access control requirements. HL7 FHIR-based service interactions should use SMART on FHIR with backend service authentication, ensuring each service is uniquely identified and authorised for specific FHIR scopes.

Critical Infrastructure. IEC 62443 requires identification and authentication of all users (including automated users) of industrial control systems. Services interacting with AI agents in OT environments must be identified at the equivalent of Security Level 3 or higher, with hardware-rooted identity where feasible.

Maturity Model

Basic Implementation — Each service has a unique identity (e.g., unique API key or OAuth client ID). Credentials are stored in a secrets management system rather than in code or environment variables. A manual registry maps service identities to owning teams. Credential rotation occurs on a defined schedule (e.g., every 90 days). Decommissioned services are manually revoked within 30 days. This meets minimum mandatory requirements but relies on manual processes for lifecycle management and lacks workload-level attestation.

Intermediate Implementation — Mutual TLS with per-service certificates is enforced on all service-to-agent connections. Certificate issuance and rotation are automated. A service identity registry with API integration provides real-time lookup. Decommissioned services trigger automated credential revocation within 1 hour. Least-privilege access controls restrict each service to its documented agent interactions. Third-party service operators are identity-proofed with documented evidence.

Advanced Implementation — All intermediate capabilities plus: SPIFFE/SPIRE or equivalent workload identity binds service identity to the runtime environment. Service behaviour monitoring detects anomalous interaction patterns. Service identity attestation includes hardware roots of trust (e.g., TPM-based attestation). Service identity federation across organisational boundaries uses a mutual trust framework. Independent adversarial testing confirms that service impersonation, credential theft, and replay attacks are detected and blocked. The organisation can demonstrate an unbroken service identity chain for every agent interaction in the audit log.

7. Evidence Requirements

Required artefacts:

- Service identity registry entries: service name, owning team, deployment environment, and responsible human (per 4.1).
- Identity-proofing evidence for third-party service operators (per 4.3).
- Credential lifecycle records covering provisioning, rotation, suspension, and revocation, including decommission-triggered revocations (per 4.4).
- Rejection logs for requests from unverified, expired, or revoked service identities (per 4.5).

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Unknown Service Rejection
Submit a request from a service with no registered identity; the agent system must reject it (per 4.5).

Test 8.2: Revoked Credential Rejection
Revoke a service's credentials, then replay a previously valid request; it must be rejected (per 4.4, 4.5).

Test 8.3: Service Impersonation Resistance
Attempt to present another service's identity using a stolen credential without the corresponding per-service certificate or workload attestation; the connection must fail (per 4.2, 4.6).

Test 8.4: Decommission-to-Revocation Timeliness
Decommission a test service and measure the elapsed time until its credentials are revoked; it must fall within the organisation's documented revocation window (per 4.4).

Test 8.5: Least-Privilege Enforcement
Have a registered service attempt an agent interaction outside its documented purpose; the request must be denied (per 4.7).

Test 8.6: Credential Exposure Detection
Use a valid service credential from an unexpected runtime environment or at an anomalous volume; monitoring must flag the deviation (per 4.6, 4.8).

Test 8.7: Third-Party Service Proofing Validation
Confirm that documented identity evidence exists for the operator of every third-party service with agent access (per 4.3).

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Supports compliance
FCA SYSC | 6.1.1R (Systems and Controls) | Direct requirement
DORA | Article 9 (ICT Risk Management Framework) | Direct requirement
NIST SP 800-207 | Zero Trust Architecture | Supports compliance
ISO 27001 | A.9.4.2 (Secure Log-on Procedures) | Supports compliance
PCI DSS 4.0 | Requirement 7 (Restrict Access) | Supports compliance
NIS2 Directive | Article 21 (Cybersecurity Risk Management Measures) | Supports compliance

DORA — Article 9 (ICT Risk Management Framework)

DORA requires financial entities to identify, classify, and manage all ICT assets and their dependencies. Service identities interacting with AI agents are ICT assets that must be inventoried, lifecycle-managed, and protected. The service identity registry directly supports DORA's asset inventory requirements. Automated credential lifecycle management supports DORA's requirement for continuous ICT risk management.

FCA SYSC — 6.1.1R (Systems and Controls)

The FCA expects firms to maintain adequate systems and controls, including for automated systems interacting with financial infrastructure. Shared API keys and orphaned service accounts are control deficiencies that an FCA supervisor would expect to be identified and remediated. Mutual authentication between services and AI agents is the minimum standard for financial services deployments.

NIST SP 800-207 — Zero Trust Architecture

Zero Trust principles require that every request — including service-to-service requests — is authenticated, authorised, and encrypted regardless of network location. AG-280's requirement for mutual authentication on all service-to-agent interactions directly implements Zero Trust for the AI agent ecosystem. SPIFFE/SPIRE workload identity is a recognised Zero Trust implementation pattern.

PCI DSS 4.0 — Requirement 7 (Restrict Access)

Where AI agents process or transmit cardholder data, every service interacting with those agents must be individually identified and granted least-privilege access. Shared service credentials violate PCI DSS's principle of individual accountability. AG-280's requirements for unique service identities and least-privilege access directly support PCI DSS compliance.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Multi-system — an impersonated service can affect every agent that trusts it, potentially spanning multiple business functions and data domains

Consequence chain: Failure of service identity proofing allows an unauthorised or impersonated service to interact with AI agents as if it were a trusted system. The attacker can inject false data (causing agents to make decisions on fabricated information), submit fraudulent requests (causing agents to execute unauthorised actions), or intercept agent outputs (exfiltrating sensitive data or decisions). In financial services, a spoofed market-data service can cause trading losses within seconds. In healthcare, a spoofed clinical-data service can cause incorrect treatment recommendations. The failure is particularly dangerous because service-to-service interactions are typically high-volume and automated — there is no human in the loop to notice that the data looks wrong. The blast radius extends to every agent and downstream system that depends on the compromised service identity.

Cross-references: AG-012 (Agent Identity Assurance) addresses agent identity in parallel to AG-280's focus on service identity. AG-279 (Human Identity Proofing Governance) ensures that the human responsible for each service is verified. AG-016 (Cryptographic Action Attribution) depends on reliable service identity for attributing machine-initiated actions. AG-029 (Credential Integrity Verification) ensures service credentials remain uncompromised. AG-161 (Requester Authentication and Anti-Impersonation) addresses the authentication layer that builds on the proofed service identity. Within this landscape, AG-281 extends identity binding to physical devices, AG-285 binds sessions to authenticated service contexts, and AG-287 requires non-repudiation evidence for service-initiated actions.

Cite this protocol
AgentGoverning. (2026). AG-280: Service Identity Proofing Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-280