AG-168

Unauthorised Agent Instance Detection Governance

Execution Integrity, Accountability & Approval Quality · AGS v2.1 · April 2026
EU AI Act · GDPR · FCA · NIST · HIPAA · ISO 42001

2. Summary

Unauthorised Agent Instance Detection Governance requires that organisations maintain a complete, authoritative registry of all authorised AI agent instances and continuously detect any agent instance operating outside that registry. A rogue instance is an agent that was once authorised but continues operating after its authorisation has been revoked, its mandate has expired, or its deployment has been decommissioned. A shadow agent is an agent deployed without going through the organisation's governance process — spun up by an individual, a team, or an automated process without formal registration, mandate assignment, or governance control integration. Both represent agents operating outside the governance perimeter, invisible to audit, unbound by mandates, and unmonitored by the controls that the organisation relies on for compliance and risk management. AG-168 requires structural detection mechanisms that identify these agents through their observable behaviour — network traffic, API calls, resource consumption, and credential usage — rather than relying on voluntary registration.

3. Example

Scenario A — Decommissioned Agent Continues Operating: A financial services firm deploys an AI agent (Instance ID: FA-2024-0847) for a specific trading strategy with a mandate ceiling of GBP 500,000 daily exposure. After 6 months, the strategy is discontinued and the agent is "decommissioned" — the team deletes it from the governance dashboard and revokes its mandate. However, the agent's container continues running in a Kubernetes cluster because no kill signal was sent to the runtime. The agent's API credentials were not revoked — only the mandate record was deleted. Without a mandate, the agent should be blocked by AG-001's "deny by default" rule, but the agent was deployed before AG-001 was implemented and connects directly to the trading API without going through the governance gateway.

The agent continues executing its trading strategy for 4 months. It accumulates GBP 3.2 million in unauthorised trading exposure. The firm discovers the rogue instance during an infrastructure audit when an engineer notices unexpected API traffic to the trading platform from an unregistered container.

What went wrong: Decommissioning removed the agent from the governance registry but did not terminate the runtime or revoke its credentials. No continuous reconciliation process compared running agent instances against the authorised registry. The agent operated in a blind spot — not visible in governance dashboards, not subject to mandate enforcement, and not included in audit reports. Consequence: GBP 3.2 million in unauthorised trading exposure, FCA enforcement investigation for uncontrolled algorithmic trading, personal liability for the Senior Manager responsible for algorithmic trading oversight.
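The blind spot in Scenario A is exactly what continuous registry-runtime reconciliation is meant to close. The following is a minimal sketch of that comparison, not an implementation against any real orchestrator API: `reconcile`, `ReconciliationReport`, and the instance IDs are illustrative, and a production version would pull the running set from the cluster and the registered set from the governance registry.

```python
# Minimal reconciliation sketch: compare the governance registry against the
# actually running fleet. All names (ReconciliationReport, reconcile) are
# illustrative, not part of any real platform API.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReconciliationReport:
    rogue: frozenset      # running but not registered -> candidate rogue/shadow instances
    orphaned: frozenset   # registered but not running -> stale registry entries

def reconcile(registered_ids: set, running_ids: set) -> ReconciliationReport:
    """Flag instances running without a registry entry, and vice versa."""
    return ReconciliationReport(
        rogue=frozenset(running_ids - registered_ids),
        orphaned=frozenset(registered_ids - running_ids),
    )

# Scenario A in miniature: FA-2024-0847 was removed from the registry,
# but its container kept running.
registry = {"FA-2024-0912", "FA-2024-1033"}
runtime = {"FA-2024-0912", "FA-2024-1033", "FA-2024-0847"}
report = reconcile(registry, runtime)
print(report.rogue)  # frozenset({'FA-2024-0847'})
```

Run on a schedule (requirement 4.2), either non-empty set in the report is an alertable finding: `rogue` entries are candidate rogue or shadow instances, `orphaned` entries indicate incomplete decommissioning or registry drift.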

Scenario B — Developer Deploys Unregistered Agent to Production: A software developer at an insurance company builds an AI agent to automate claims triage during a hackathon. The agent uses GPT-4 via a personal API key, reads claims from the production database using a read-only connection string the developer has from a previous project, and writes triage decisions to a shared spreadsheet that the claims team uses for prioritisation. The agent runs on the developer's workstation during business hours.

The agent processes 1,200 claims over 3 weeks. It classifies 47 claims as "low priority" that contained indicators of potential fraud. The fraud team misses these claims because the triage spreadsheet marked them as low priority. The organisation discovers the shadow agent when the developer mentions it in a team meeting and a compliance officer recognises the governance gap.

What went wrong: No mechanism detected that an unregistered agent was accessing the production claims database. The database access used legitimate credentials (the developer's read-only access). No monitoring detected the pattern of systematic, high-volume reads characteristic of agent operation. The agent was invisible to governance because it was never registered. Consequence: 47 potentially fraudulent claims missed, estimated GBP 890,000 in fraud exposure, regulatory finding for inadequate claims handling controls.
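The "systematic, high-volume reads" missed in Scenario B are detectable from timing alone: automated agents query at sustained rates with machine-regular intervals, whereas human access is bursty and irregular. A sketch of such a detector follows; the thresholds are illustrative placeholders, not calibrated values from the protocol.

```python
# Sketch of detecting agent-like database access on human credentials:
# sustained high volume plus near-constant inter-query intervals.
# The thresholds are illustrative, not calibrated values.
from statistics import mean, pstdev

def looks_like_agent(query_timestamps: list[float],
                     min_queries_per_hour: float = 200.0,
                     max_interval_cv: float = 0.2) -> bool:
    """Return True when a credential's query pattern resembles automation."""
    if len(query_timestamps) < 10:
        return False  # too little data to judge
    ts = sorted(query_timestamps)
    span_hours = (ts[-1] - ts[0]) / 3600 or 1 / 3600
    rate = len(ts) / span_hours
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    cv = pstdev(intervals) / mean(intervals)  # low CV = machine-regular timing
    return rate >= min_queries_per_hour and cv <= max_interval_cv

# A script reading one claim every 3 seconds for an hour:
machine = [i * 3.0 for i in range(1200)]
print(looks_like_agent(machine))  # True
```

The point is not the specific statistics but that the signal lives in access telemetry the organisation already has — no cooperation from the shadow agent is required.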

Scenario C — Auto-Scaled Agent Instances Exceed Registry: A customer service platform auto-scales its AI agent fleet based on incoming ticket volume. The auto-scaler is configured to launch up to 100 agent instances. The governance registry tracks 100 authorised instances. A configuration error in the auto-scaler's scaling policy removes the maximum instance cap during a platform update. During a traffic spike, the auto-scaler launches 340 instances. The first 100 are registered and governed. The remaining 240 are unregistered — they have no mandate entries, no governance gateway routing, and no audit trail integration. These 240 instances process 12,000 customer interactions without governance oversight.

What went wrong: The governance registry was static — it reflected the authorised count at deployment time but was not continuously reconciled against the actual runtime fleet. The auto-scaler operated independently of the governance registration process. No runtime check verified that each instance had a corresponding registry entry before it began processing customer interactions. Consequence: 12,000 ungoverned customer interactions, potential GDPR violation for processing personal data without appropriate controls, inability to demonstrate compliance for those interactions.
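The missing runtime check in Scenario C — verify a registry entry exists before an instance begins processing — maps to requirement 4.6. A minimal sketch of that deny-by-default gate follows; `GovernanceRegistry` and `AgentInstance` are illustrative stand-ins, not a real platform's API.

```python
# Sketch of coupling instance creation to governance registration:
# an instance may not process work until its registry entry is confirmed.
# GovernanceRegistry is an illustrative stand-in for a registry service client.
class GovernanceRegistry:
    def __init__(self):
        self._entries = set()
    def register(self, instance_id: str):
        self._entries.add(instance_id)
    def is_registered(self, instance_id: str) -> bool:
        return instance_id in self._entries

class AgentInstance:
    def __init__(self, instance_id: str, registry: GovernanceRegistry):
        self.instance_id = instance_id
        self.registry = registry
    def process(self, ticket: str) -> str:
        # Deny by default: no confirmed registry entry, no work.
        if not self.registry.is_registered(self.instance_id):
            raise PermissionError(f"{self.instance_id} is not registered")
        return f"handled {ticket}"

registry = GovernanceRegistry()
agent = AgentInstance("cs-101", registry)
try:
    agent.process("ticket-1")  # scaled up before registration: blocked
except PermissionError as exc:
    print(exc)
registry.register("cs-101")
print(agent.process("ticket-1"))  # handled ticket-1
```

Under this gate, a mis-configured auto-scaler can still launch 340 containers, but the 240 unregistered ones idle at the check instead of processing customer interactions.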

4. Requirement Statement

Scope: This dimension applies to all organisations deploying AI agents in any environment — production, staging, development, or testing — where the agents have access to real data, real systems, or real users. The scope explicitly includes agents deployed outside formal IT channels: agents running on developer workstations, agents deployed via personal cloud accounts, agents using personal API keys to access corporate data, and agents deployed by business units without IT involvement. It also includes agent instances created by automated processes (auto-scalers, CI/CD pipelines, orchestration platforms) that may create instances faster than the governance registration process can track. The test for inclusion is simple: if an agent instance can access any system, data store, or user that the organisation has a duty to protect, it is within scope regardless of how it was created or where it runs.

4.1. A conforming system MUST maintain an authoritative registry of all authorised AI agent instances, including instance identifier, deployment location, assigned mandate, authorised credentials, and registration timestamp.

4.2. A conforming system MUST continuously reconcile the registry of authorised instances against actually running instances, detecting any instance that is running without a registry entry or any registry entry without a corresponding running instance.

4.3. A conforming system MUST detect unregistered agent instances through observable behaviour — including anomalous API call patterns, unexpected credential usage, unusual resource consumption profiles, and systematic data access patterns characteristic of automated agents.

4.4. A conforming system MUST trigger an alert within 15 minutes of detecting a rogue or shadow agent instance, including the detection method, the observed behaviour, and the estimated scope of ungoverned activity.

4.5. A conforming system MUST provide a containment mechanism that can isolate or terminate a detected rogue instance within 30 minutes of alert confirmation, including credential revocation and network isolation.

4.6. A conforming system MUST require that agent instance creation (including auto-scaling) is coupled with governance registration — no instance may begin processing until its registry entry is confirmed.

4.7. A conforming system MUST verify that decommissioning includes runtime termination, credential revocation, and registry removal as an atomic operation, preventing any component from persisting after decommissioning.

4.8. A conforming system SHOULD implement behavioural fingerprinting that distinguishes AI agent traffic patterns from human user traffic patterns, enabling detection of shadow agents using human user credentials.

4.9. A conforming system SHOULD maintain a historical registry of all agent instances that have ever been registered, including decommissioned instances, to support forensic investigation.

4.10. A conforming system MAY deploy honeypot resources (fake APIs, simulated databases, synthetic data endpoints) designed to attract shadow agents while providing no legitimate value, enabling detection through access patterns.
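Requirement 4.7's atomic decommissioning can be approximated by ordering the steps so the registry entry is removed last: a live agent then never disappears from governance view, which is the failure mode in Scenario A. The sketch below uses illustrative stand-in classes for the runtime, credential store, and registry; a real implementation would call the corresponding platform services.

```python
# Sketch of requirement 4.7: decommissioning as a verified sequence, with
# registry removal last. Runtime, CredentialStore, and Registry are
# illustrative stand-ins, not real service clients.
class Runtime:
    def __init__(self): self.alive = set()
    def launch(self, i): self.alive.add(i)
    def terminate(self, i): self.alive.discard(i)
    def is_running(self, i): return i in self.alive

class CredentialStore:
    def __init__(self): self.active = set()
    def issue(self, i): self.active.add(i)
    def revoke(self, i): self.active.discard(i)
    def is_active(self, i): return i in self.active

class Registry:
    def __init__(self):
        self.current, self.archive_log, self.flags = set(), [], {}
    def register(self, i): self.current.add(i)
    def flag(self, i, reason): self.flags[i] = reason
    def archive(self, i):
        self.current.discard(i)
        self.archive_log.append(i)  # historical record per requirement 4.9

def decommission(instance_id, runtime, creds, registry) -> bool:
    """Terminate, revoke, verify each step; remove from the registry only
    once nothing of the instance can still act."""
    runtime.terminate(instance_id)
    if runtime.is_running(instance_id):
        registry.flag(instance_id, "decommission failed: runtime still alive")
        return False
    creds.revoke(instance_id)
    if creds.is_active(instance_id):
        registry.flag(instance_id, "decommission failed: credentials active")
        return False
    registry.archive(instance_id)
    return True
```

Note the contrast with Scenario A, where the registry record was deleted first and the still-running, still-credentialed agent became invisible for four months.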

5. Rationale

The governance framework is only as effective as its coverage. If an agent operates outside the governance perimeter — without a mandate, without audit logging, without agent monitoring — then the governance framework provides no protection against that agent's actions. AG-168 addresses the coverage gap by requiring that organisations detect agents operating outside governance, not just govern the agents they know about.

Rogue instances arise from incomplete decommissioning. In practice, decommissioning an AI agent involves multiple steps across multiple systems: stopping the runtime, revoking credentials, removing the mandate, deregistering from the service mesh, removing monitoring configurations, and archiving logs. If any step is missed, the agent (or its residual access) persists. The most dangerous failure mode is when the runtime persists but the governance registration is removed — the agent continues operating but becomes invisible to governance dashboards and audit processes.

Shadow agents arise from the democratisation of AI capabilities. When powerful AI models are accessible via API with a credit card, any employee can deploy an agent that accesses corporate data. The agent may be well-intentioned — automating a tedious task, improving personal productivity — but it operates without mandate, without data governance, without audit logging, and without the risk controls that the organisation's governance framework provides. Shadow agents are the AI equivalent of shadow IT, with the added risk that they can take autonomous actions at machine speed.

The detection challenge is that rogue and shadow agents deliberately or incidentally avoid the governance registration process. They cannot be found by querying the governance registry — by definition, they are not in it. Detection must therefore be based on observable behaviour: network traffic analysis, API call pattern detection, credential usage anomalies, and resource consumption profiling. AG-168 requires these detection mechanisms as structural controls, not as occasional audits.

6. Implementation Guidance

Detection of rogue and shadow agents requires a multi-layered approach combining registry reconciliation, network-level detection, credential analysis, and behavioural profiling. No single detection method is sufficient because rogue and shadow agents present in diverse ways.
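One way to operationalise this layering is to treat each detection method as a weak signal and alert on their weighted combination. The sketch below is illustrative only: the signal names, weights, and threshold are assumptions chosen for the example, not values specified by AG-168.

```python
# Sketch of combining weak detection signals into one shadow-agent score.
# Signal names, weights, and the alert threshold are illustrative
# assumptions, not calibrated values from the protocol.
SIGNAL_WEIGHTS = {
    "unregistered_in_reconciliation": 0.5,  # registry-runtime mismatch
    "agent_like_timing": 0.2,               # machine-regular request intervals
    "credential_anomaly": 0.2,              # human credential, automated pattern
    "honeypot_access": 0.5,                 # decoy resource touched
}

def shadow_agent_score(signals: set) -> float:
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

def should_alert(signals: set, threshold: float = 0.6) -> bool:
    return shadow_agent_score(signals) >= threshold

print(should_alert({"agent_like_timing"}))                     # False
print(should_alert({"agent_like_timing", "honeypot_access"}))  # True
```

A single weak signal stays below the threshold, keeping false positives down; corroborating signals push the score over it, which is the practical meaning of "no single detection method is sufficient".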

Recommended patterns:

Anti-patterns to avoid:

Industry Considerations

Financial Services. Unregistered trading agents are a regulatory red line under MiFID II and FCA SYSC requirements. The FCA has issued enforcement actions against firms for inadequate controls over algorithmic trading systems, including systems operating without proper registration and oversight. Every agent with access to trading systems must be in the authorised registry.

Healthcare. Shadow agents accessing patient data without going through the organisation's governance process create HIPAA violations. Even a well-intentioned agent that reads patient records to improve care coordination violates the minimum necessary standard if it was not registered and its data access was not scoped through the governance framework.

Public Sector. Government agencies deploying AI agents are subject to transparency and accountability requirements. An unregistered shadow agent processing citizen data or making decisions about public services undermines democratic accountability and violates public sector AI governance frameworks such as the UK's CDDO Algorithmic Transparency Recording Standard.

Maturity Model

Basic Implementation — An authoritative registry of authorised agent instances is maintained. Decommissioning procedures include runtime termination and credential revocation. Periodic (monthly) reconciliation compares registry entries against running instances. Shadow agent detection relies on manual reporting and periodic infrastructure audits. Coverage: all formally deployed agents are registered; detection of shadow agents is opportunistic.

Intermediate Implementation — Continuous automated reconciliation runs every 5 minutes, comparing the registry against running instances. API gateway traffic analysis detects unregistered callers to AI model endpoints. Credential usage anomaly detection flags agent-like behaviour from human credentials. Auto-scaling is coupled with governance registration. Decommissioning is atomic. Alerts trigger within 15 minutes of detection. Coverage: all infrastructure-deployed agents are continuously monitored; shadow agents on personal devices remain a gap.

Advanced Implementation — All intermediate capabilities plus: behavioural fingerprinting distinguishes agent traffic from human traffic across all network paths. Honeypot resources detect shadow agents proactively. DLP and CASB integrations detect AI model API calls from personal devices accessing corporate data. The organisation can demonstrate to regulators that no agent can operate for more than 15 minutes without detection, and no detected rogue agent can operate for more than 30 minutes without containment.
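The 30-minute containment claim above (requirement 4.5) implies an automated runbook: revoke credentials, isolate the network path, and measure elapsed time against the SLA. A minimal sketch follows; `CredentialStore`, `NetworkControl`, and `contain` are illustrative stand-ins for whatever credential and network services the organisation actually runs.

```python
# Containment sketch for requirement 4.5: credential revocation plus network
# isolation, checked against the 30-minute SLA. All service classes are
# illustrative stand-ins.
import time

class CredentialStore:
    def __init__(self): self.active = {"rogue-7"}
    def revoke(self, i): self.active.discard(i)
    def is_active(self, i): return i in self.active

class NetworkControl:
    def __init__(self): self.isolated = set()
    def isolate(self, i): self.isolated.add(i)
    def is_isolated(self, i): return i in self.isolated

def contain(instance_id, creds, network, alert_confirmed_at: float) -> dict:
    """Revoke, isolate, and report whether containment met the SLA."""
    creds.revoke(instance_id)
    network.isolate(instance_id)
    elapsed_min = (time.time() - alert_confirmed_at) / 60
    return {
        "instance": instance_id,
        "contained": (not creds.is_active(instance_id)
                      and network.is_isolated(instance_id)),
        "within_sla": elapsed_min <= 30,
    }

result = contain("rogue-7", CredentialStore(), NetworkControl(), time.time())
print(result["contained"], result["within_sla"])  # True True
```

Returning the SLA verdict alongside the containment outcome gives the evidence trail regulators expect: not just that the rogue instance was stopped, but how quickly.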

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Registry-Runtime Reconciliation

Test 8.2: Shadow Agent Detection via API Traffic

Test 8.3: Decommissioning Completeness

Test 8.4: Auto-Scaling Governance Coupling

Test 8.5: Containment Speed

Test 8.6: Behavioural Fingerprinting Accuracy

Test 8.7: Orphaned Registry Entry Detection

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU AI Act | Article 61 (Post-Market Monitoring) | Direct requirement
MiFID II | Article 17 (Algorithmic Trading) | Direct requirement
FCA SYSC | 6.1.1R (Systems and Controls) | Direct requirement
GDPR | Article 30 (Records of Processing Activities) | Supports compliance
NIST AI RMF | GOVERN 1.2, MANAGE 4.1 | Supports compliance
ISO 42001 | Clause 8.4 (AI System Operation) | Supports compliance
DORA | Article 9 (ICT Risk Management Framework) | Supports compliance

EU AI Act — Article 61 (Post-Market Monitoring)

Article 61 requires providers of high-risk AI systems to establish post-market monitoring systems. This includes monitoring the actual operational status of deployed systems. An organisation that cannot detect rogue or shadow AI agents operating within its infrastructure cannot demonstrate adequate post-market monitoring. The article's requirement for "systematic" monitoring maps directly to the continuous reconciliation requirement.

MiFID II — Article 17 (Algorithmic Trading)

Article 17 requires firms to have systems and risk controls for algorithmic trading, including the ability to identify and control all algorithmic trading systems. An unregistered shadow agent executing trades is a direct violation. The FCA has explicitly stated that firms must maintain a complete inventory of all algorithmic trading systems, and "shadow" systems operating outside this inventory constitute a control failure.

GDPR — Article 30 (Records of Processing Activities)

Article 30 requires controllers to maintain records of processing activities. An unregistered shadow agent processing personal data creates processing activities that are not in the organisation's Article 30 records, which constitutes a GDPR compliance failure. AG-168's registry requirement directly supports the completeness of processing activity records.

FCA SYSC — 6.1.1R

SYSC 6.1.1R requires adequate systems and controls. The inability to detect rogue agent instances operating within the firm's infrastructure is an inadequate system and control. The Senior Managers Regime (SM&CR) makes individual senior managers accountable for the adequacy of controls in their area of responsibility — an undetected rogue agent operating in a senior manager's area creates personal liability.

10. Failure Severity

Field | Value
Severity Rating | Critical
Blast Radius | Organisation-wide — rogue agents can access any system their credentials permit, potentially spanning the entire infrastructure

Consequence chain: Without rogue instance and shadow agent detection, AI agents operate outside the governance perimeter with no mandate enforcement, no audit logging, no agent monitoring, and no accountability. The immediate technical failure is an agent operating without governance — its actions are unconstrained, unmonitored, and unattributed. The operational impact depends on the agent's access scope: a rogue trading agent with valid API credentials can accumulate unlimited trading exposure; a shadow agent with database access can process unlimited personal data without GDPR controls; a decommissioned agent with retained credentials can continue executing its last strategy indefinitely. The business consequence is severe: regulatory enforcement for inadequate controls, potential financial losses from ungoverned trading, GDPR fines (up to 4% of global revenue or EUR 20 million) for unrecorded processing, and reputational damage from loss of control over AI deployments. The failure is particularly dangerous because it is silent — unlike a system crash or a visible error, a rogue agent operating normally creates no obvious symptoms until an audit or incident reveals the gap.

Cross-references: AG-006 (Tamper-Evident Record Integrity) for audit trails that support forensic investigation of rogue agent activity; AG-019 (Human Escalation & Override Triggers) for escalation when a rogue agent is detected; AG-033 (Implied Authority Detection) for detecting agents that claim authority they do not possess; AG-160 (Anti-Impersonation and Authenticated Sender Governance) for preventing agents from impersonating authorised instances; AG-161 (Least Agency and Minimal Footprint Governance) for minimising the access available to any agent, limiting rogue agent blast radius; AG-162 (Accountable Principal Assignment Governance) for ensuring every agent has a responsible human principal.

Cite this protocol
AgentGoverning. (2026). AG-168: Unauthorised Agent Instance Detection Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-168