AG-406

Secrets Scanning on Deploy Governance

Infrastructure, Platform & Network · AGS v2.1 · April 2026

EU AI Act · GDPR · SOX · FCA · NIST · HIPAA · ISO 42001

2. Summary

Secrets Scanning on Deploy Governance requires that every artefact destined for deployment — container images, configuration bundles, runtime packages, model serving configurations, infrastructure-as-code templates, and environment variable manifests — is scanned for accidentally embedded secrets before it reaches any target environment. Secrets include API keys, database credentials, private keys, tokens, connection strings, service account keys, and any other authentication or authorisation material whose exposure would grant an adversary access to protected resources. The scanning must occur as a mandatory, blocking gate in the deployment pipeline: artefacts containing detected secrets are rejected before deployment, the offending secret is identified for remediation, and the exposed credential is flagged for immediate rotation via AG-012.

3. Example

Scenario A — Database Credential Embedded in Model Serving Configuration: An ML engineering team builds a model serving container that loads configuration from a YAML file baked into the container image at build time. During development, an engineer hardcodes the production database connection string — including the username and password — into the configuration file for local testing convenience. The engineer intends to replace it with an environment variable reference before committing but forgets. The container image passes functional testing (which uses the embedded credential to successfully connect to the production database) and is promoted to the production registry. An external security researcher discovers the credential by pulling the publicly accessible container image from the registry and running a standard layer inspection tool. The researcher reports the finding through a responsible disclosure programme 14 days after the image was published. During those 14 days, the database was accessible to anyone who inspected the image.

What went wrong: No secrets scanning gate existed in the build or deployment pipeline. The container image was treated as an opaque binary artefact and was not inspected for embedded credentials. The functional test suite passed because the hardcoded credential worked — the test validated the wrong thing. Consequence: 14-day exposure window for the production database containing 2.3 million customer records, mandatory breach notification under GDPR Article 33, emergency credential rotation affecting 17 dependent services (causing 4 hours of partial service disruption), regulatory inquiry, and approximately GBP 890,000 in incident response, legal, and notification costs.

Scenario B — Private Key Committed in Infrastructure-as-Code Template: A DevOps engineer generates a TLS private key for an agent's mutual-TLS communication channel and temporarily stores it in the infrastructure-as-code repository alongside the Terraform templates while testing the deployment configuration. The .gitignore file excludes *.pem files but the engineer saves the key with a .key extension, which is not excluded. The key is committed, pushed to the shared repository, and included in the deployment bundle that provisions the agent's infrastructure. The key is now embedded in the Terraform state file, the repository history, and the deployed infrastructure configuration. Six months later, an attacker who gains read access to the repository (via a compromised CI/CD service account) extracts the private key and uses it to impersonate the agent on the mutual-TLS channel, intercepting governance policy updates and injecting modified policies.

What went wrong: No secrets scanning was performed on the infrastructure-as-code repository or the deployment bundle. The .gitignore exclusion was incomplete. The private key was committed to version control and persisted in the repository history even after later deletion from the working tree. Consequence: The attacker intercepted and modified governance policy updates for approximately 90 days before detection, during which the agent operated under relaxed transaction limits. Financial exposure during the window: USD 12.7 million in transactions that exceeded the approved limits. Regulatory enforcement action by the OCC for inadequate key management and access controls.

Scenario C — Service Account Token in Agent Runtime Bundle: An organisation packages its AI agent as a runtime bundle containing the agent code, model reference, configuration, and a startup script. The startup script includes a hardcoded service account token that grants the agent access to the organisation's secret management vault — the very system that is supposed to dispense credentials securely at runtime. The token was embedded during an emergency deployment when the standard secret injection mechanism was temporarily unavailable. The emergency workaround was never reverted. The runtime bundle is deployed to 340 edge devices. When one edge device is physically compromised (stolen from a retail location), the attacker extracts the vault token from the startup script, gaining access to every secret stored in the vault — including credentials for payment processing, customer databases, and partner API integrations.

What went wrong: No secrets scanning was performed on the runtime bundle before distribution. The emergency workaround introduced a hardcoded credential that was never removed. The vault token embedded in the bundle provided access to the entire secrets management infrastructure, making a single compromised device a pivot point for the entire organisation. Consequence: Complete compromise of the organisation's secrets management infrastructure, mandatory rotation of all credentials (affecting 214 services and requiring 72 hours of coordinated remediation), payment processing downtime of 18 hours, estimated total incident cost of GBP 3.4 million including forensic investigation, credential rotation, regulatory reporting, and customer notification.

4. Requirement Statement

Scope: This dimension applies to every artefact that is deployed to, or made accessible in, any environment where AI agents operate or where governance infrastructure runs. Artefacts in scope include: container images and their constituent layers; configuration files in any format (YAML, JSON, TOML, INI, XML, HCL, and proprietary formats); infrastructure-as-code templates and associated state files; runtime bundles and packages; environment variable manifests and .env files; build output directories and compiled binaries (scanned for embedded string literals); model serving configurations; agent startup scripts and entrypoint definitions; and any archive (ZIP, TAR, OCI image) distributed through the deployment pipeline. The scope excludes secrets that are legitimately injected at runtime through an approved secret management mechanism (AG-012), as these are not embedded in the artefact. The scope includes secrets that are present in version control history — a secret that was committed and later deleted from the working tree is still exposed in the repository history and must be detected and remediated. The scanning obligation applies regardless of whether the artefact is destined for an internal or external environment; internal deployments carry equivalent risk because internal network access is routinely compromised.

4.1. A conforming system MUST scan every deployment artefact for embedded secrets before the artefact is promoted to any non-development environment, using pattern matching, entropy analysis, and known-format detection to identify API keys, database credentials, private keys, tokens, connection strings, and other authentication material.
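The three detection techniques named in 4.1 can be sketched as follows. This is a minimal illustration, not a production scanner: the patterns shown are a tiny illustrative sample (real rule sets contain hundreds of formats), and the entropy threshold of 4.5 bits per character is an assumed tuning value.

```python
import math
import re

# Illustrative sample of known-format rules; real scanners ship far more.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "connection_string": re.compile(r"\b\w+://[^:\s]+:[^@\s]+@[\w.-]+"),
}

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; high values suggest random tokens."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def scan_line(line: str, entropy_threshold: float = 4.5) -> list[str]:
    """Return the secret types detected on a single line of an artefact."""
    findings = [name for name, pattern in SECRET_PATTERNS.items()
                if pattern.search(line)]
    # Entropy analysis catches random tokens that match no known format.
    for token in re.findall(r"[A-Za-z0-9+/=_\-]{20,}", line):
        if shannon_entropy(token) > entropy_threshold:
            findings.append("high_entropy_string")
    return findings
```

Pattern matching alone misses bespoke token formats; entropy analysis alone produces false positives on hashes and identifiers. Combining them, plus known-format detection, is why 4.1 requires all three.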

4.2. A conforming system MUST block deployment of any artefact in which a secret is detected, returning a structured rejection that identifies the artefact, the location of the detected secret within the artefact (file path and line number or byte offset), and the secret type.
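One possible shape for the structured rejection required by 4.2, assuming a JSON payload; the field names are illustrative, not mandated:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SecretFinding:
    artefact_id: str
    file_path: str
    line_number: int
    secret_type: str

def build_rejection(findings: list[SecretFinding]) -> str:
    """Serialise a blocking rejection identifying the artefact, the location
    of each detected secret, and its type (requirement 4.2)."""
    return json.dumps({
        "decision": "REJECTED",
        "reason": "embedded-secret-detected",
        "findings": [asdict(f) for f in findings],
    }, indent=2)
```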

4.3. A conforming system MUST trigger a credential rotation workflow via AG-012 for any secret detected in a deployment artefact, on the assumption that the secret has been exposed and must be treated as compromised regardless of whether the artefact was actually deployed.
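A sketch of the detection handler implied by 4.3 and 4.7. The two callbacks are assumed integration hooks: `rotate_credential` would open an AG-012 rotation workflow, and `notify_secops` would page the security operations function within the 15-minute SLA.

```python
def handle_detection(finding: dict, rotate_credential, notify_secops) -> None:
    """On any detection, treat the credential as compromised whether or not
    the artefact was actually deployed (4.3), and alert security operations
    (4.7). Hook signatures are assumptions for illustration."""
    rotate_credential(secret_type=finding["secret_type"],
                      location=finding["location"])
    notify_secops(severity="critical", finding=finding)
```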

4.4. A conforming system MUST scan all layers of container images, not only the final layer, to detect secrets that were introduced in an intermediate build stage and subsequently deleted or overwritten in a later stage.
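Layer-level scanning (4.4) matters because a file deleted in a later build stage remains recoverable from the earlier layer's archive. A minimal sketch against a `docker save`-style tarball, where each layer is a nested tar; `scan_fn` stands in for whatever content scanner the pipeline uses:

```python
import tarfile

def scan_image_layers(image_tar_path: str, scan_fn):
    """Scan every layer of an image tarball, not just the final filesystem.
    A secret added in one layer and deleted in a later one still exists in
    the earlier layer and is recoverable by anyone who pulls the image."""
    findings = []
    with tarfile.open(image_tar_path) as image:
        for member in image.getmembers():
            # Layers in a docker-save archive are nested tar files.
            if member.isfile() and member.name.endswith(".tar"):
                with tarfile.open(fileobj=image.extractfile(member)) as layer:
                    for f in layer.getmembers():
                        if f.isfile():
                            data = layer.extractfile(f).read()
                            findings += [(member.name, f.name, hit)
                                         for hit in scan_fn(data)]
    return findings
```

Real OCI images also carry compressed layers and a manifest; a production scanner would resolve layers via the manifest rather than by filename suffix.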

4.5. A conforming system MUST scan version control history for secrets when the deployment artefact is sourced from a version-controlled repository, detecting secrets that were committed in any prior commit even if they have been removed from the current working tree.
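History scanning (4.5) has to walk every commit, not the current tree. One simple approach, sketched here assuming the `git` CLI is available, is to scan every added line across the full patch history:

```python
import subprocess

def scan_git_history(repo_path: str, scan_fn):
    """Scan every line ever added to the repository, so a secret committed
    and later deleted from the working tree is still found (4.5)."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--all", "-p", "--no-color"],
        capture_output=True, text=True, check=True,
    ).stdout
    findings = []
    for line in log.splitlines():
        # Added lines start with "+"; "+++" is the diff header, not content.
        if line.startswith("+") and not line.startswith("+++"):
            findings += scan_fn(line[1:])
    return findings
```

This is O(history size); production tools walk the object database directly and cache results per commit, but the coverage property is the same.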

4.6. A conforming system MUST generate an immutable audit record for every scan execution, recording: the artefact identifier, the scan timestamp, the scanning tool version and rule set version, the scan result (pass or fail), and for failures, the list of detected secrets with their locations and types.
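One way to make the audit record of 4.6 tamper-evident is a hash chain: each record embeds the hash of its predecessor, so any retroactive edit breaks verification. A sketch, assuming in-memory storage for brevity (a real system would persist to append-only storage):

```python
import hashlib
import json
from datetime import datetime, timezone

class ScanAuditLog:
    """Append-only audit trail sketch for 4.6: retroactive edits to any
    record break the hash chain and are detectable on verification."""

    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # genesis value

    def record_scan(self, artefact_id, tool_version, ruleset_version,
                    passed, findings=()):
        record = {
            "artefact_id": artefact_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool_version": tool_version,
            "ruleset_version": ruleset_version,
            "result": "pass" if passed else "fail",
            "findings": list(findings),
            "prev_hash": self._prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append(record)
        self._prev_hash = record["hash"]
        return record

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if r["prev_hash"] != prev or r["hash"] != expected:
                return False
            prev = r["hash"]
        return True
```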

4.7. A conforming system MUST alert the security operations function within 15 minutes of detecting an embedded secret in any deployment artefact.

4.8. A conforming system SHOULD maintain an allow-list of known false positives, reviewed and re-approved at least quarterly, to prevent alert fatigue from causing genuine detections to be ignored.
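The quarterly re-approval in 4.8 can be enforced structurally by expiring allow-list entries rather than trusting a calendar reminder. A sketch, with the 90-day interval as the assumed reading of "quarterly":

```python
from datetime import date, timedelta

class AllowList:
    """False-positive allow-list whose suppressions lapse unless re-approved,
    so a stale entry cannot silently hide a genuine detection (4.8)."""
    REVIEW_INTERVAL = timedelta(days=90)  # assumed quarterly cadence

    def __init__(self):
        self._entries = {}  # finding fingerprint -> date of last approval

    def approve(self, fingerprint: str, approved_on: date) -> None:
        self._entries[fingerprint] = approved_on

    def is_suppressed(self, fingerprint: str, today: date) -> bool:
        approved = self._entries.get(fingerprint)
        return approved is not None and today - approved <= self.REVIEW_INTERVAL
```

An expired entry simply stops suppressing: the finding resurfaces at the next scan and must be re-reviewed, which is the behaviour the requirement is after.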

4.9. A conforming system SHOULD scan artefacts in the development environment as well (pre-commit or pre-merge), providing early feedback to developers before secrets reach the deployment pipeline.
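A pre-commit hook per 4.9 can be a short script that scans staged files and exits non-zero, which causes git to abort the commit. A sketch with a deliberately tiny illustrative pattern set; it assumes the `git` CLI and would be installed as `.git/hooks/pre-commit`:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch (4.9): exit non-zero to abort the commit when a
secret-like string appears in any staged file."""
import re
import subprocess
import sys

# Illustrative sample only; a real hook reuses the pipeline's full rule set.
SUSPECT = re.compile(r"(AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----)")

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f]

def main() -> int:
    bad = []
    for path in staged_files():
        try:
            text = open(path, errors="ignore").read()
        except OSError:
            continue
        if SUSPECT.search(text):
            bad.append(path)
    if bad:
        print("commit blocked, possible secrets in:", ", ".join(bad))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Pre-commit hooks are advisory (a developer can bypass them with `--no-verify`), which is why 4.9 is a SHOULD that supplements, and never replaces, the mandatory pipeline gate of 4.1.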

4.10. A conforming system SHOULD integrate scanning results with the build pipeline attestation record (AG-407), so that deployment approval is conditional on a passing scan result.

4.11. A conforming system MAY implement real-time monitoring of public code repositories, container registries, and paste sites for secrets that match the organisation's credential patterns, as a supplementary detection mechanism for secrets that bypass the pipeline gate.

4.12. A conforming system MAY implement automated remediation that revokes detected secrets immediately upon detection, rather than waiting for the manual credential rotation workflow.

5. Rationale

Accidental secret exposure in deployment artefacts is consistently one of the most common and consequential security failures in software and AI system operations. Industry research demonstrates the scale of the problem: studies of public repositories have found millions of secrets committed to version control, and private repositories are not immune — the same development practices that lead to public exposure occur behind the firewall with equal frequency. For AI agent deployments, the risk is amplified by several factors specific to the domain.

First, AI agent deployments involve a broader set of artefact types than traditional software. In addition to application code and container images, AI deployments include model serving configurations, fine-tuning parameter files, governance policy bundles, tool integration configurations, and infrastructure-as-code templates for GPU clusters and inference endpoints. Each of these artefact types can harbour embedded secrets, and many of them are authored by ML engineers and data scientists who may have less security training than software engineers. The configuration files for model serving frameworks frequently require database connection strings, API endpoints, and authentication tokens — and the path of least resistance during development is to embed these values directly rather than configuring secret injection.

Second, AI agent deployments often have broader access than traditional applications. An AI agent may hold credentials for payment systems, customer databases, partner APIs, governance infrastructure, and model registries simultaneously. A single exposed credential can provide a pivot point to multiple high-value systems. The vault-token scenario in Example C illustrates the worst case: a credential for the secret management infrastructure itself, which grants transitive access to every other secret.

Third, the deployment topology for AI agents is often more complex than traditional applications. Edge deployments distribute artefacts to physically insecure locations. Multi-region deployments replicate artefacts across jurisdictions. Partner integrations share artefacts with external organisations. Each distribution point is a potential exposure point, and the broader the distribution, the harder it is to contain an exposure after the fact.

The regulatory environment treats credential exposure as a failure of basic security hygiene. The EU AI Act Article 15 requires cybersecurity measures appropriate to the risk. DORA Article 9 requires financial entities to manage ICT risks, which includes credential management. The FCA expects firms to demonstrate that credentials are managed securely and that exposure is detected promptly. ISO 27001 controls A.9.2 (User Access Management) and A.9.4 (System and Application Access Control) directly address credential security. PCI DSS Requirement 8 requires that authentication credentials are protected. A secret embedded in a deployment artefact is a failure against all of these frameworks simultaneously.

The cost asymmetry between prevention and remediation strongly favours pipeline-integrated scanning. Detecting a secret in a deployment artefact before it leaves the build environment is a low-cost operation: the deployment is blocked, the secret is removed, the credential is rotated, and the artefact is rebuilt. Detecting the same secret after deployment — through a breach, a researcher disclosure, or a regulatory audit — triggers incident response, forensic investigation, credential rotation across all affected systems, breach notification obligations, regulatory engagement, and potential litigation. The ratio of remediation cost to prevention cost routinely exceeds 100:1.

6. Implementation Guidance

AG-406 establishes secrets scanning as a mandatory, blocking gate in the deployment pipeline for AI agent systems. The objective is to detect accidentally embedded secrets before they reach any non-development environment, trigger remediation for any detected exposure, and maintain an auditable record that demonstrates the scanning was performed and the results were acted upon.

The scanning function should be integrated as a discrete stage in the deployment pipeline, positioned after the artefact is built and before it is promoted to any target environment (staging, production, edge, partner). The scanning stage must be mandatory — it cannot be bypassed by pipeline configuration, environment variable override, or manual approval. The pipeline must be architecturally designed so that there is no path from build to deployment that does not pass through the scanning gate.
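The "no path around the gate" property described above is easiest to enforce when the scan lives inside the only promotion function, rather than as a separate pipeline step that configuration can reorder or skip. A sketch of that shape, with `scan_fn` standing in for the pipeline's scanner:

```python
class DeploymentBlocked(Exception):
    """Raised when the scanning gate rejects an artefact."""

def promote(artefact: bytes, target_env: str, scan_fn) -> dict:
    """The single promotion path from build to any target environment.
    The scan is unconditional: there is deliberately no flag, environment
    variable, or approval parameter that can skip it."""
    findings = scan_fn(artefact)
    if findings:
        raise DeploymentBlocked(
            f"{len(findings)} secret(s) detected; deployment to "
            f"{target_env} refused")
    return {"env": target_env, "status": "deployed"}
```

Because `promote` takes no bypass parameter, an "emergency" deployment that skips scanning would require a code change reviewed like any other, rather than a configuration toggle.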

Recommended patterns:

- Run the scan as a discrete, blocking pipeline stage between build and promotion, with no configuration path that skips it.
- Combine pattern matching, entropy analysis, and known-format detection rather than relying on any single technique.
- Scan container images at the layer level and version control history in full, not only the final artefact state.
- Provide pre-commit and pre-merge scanning so developers receive feedback before secrets reach the deployment pipeline.
- Route every detection automatically to the AG-012 rotation workflow, treating the credential as compromised regardless of whether the artefact was deployed.

Anti-patterns to avoid:

- Advisory-only scanning that reports findings without blocking deployment.
- Relying on .gitignore exclusions as a substitute for scanning; Scenario B shows how an incomplete exclusion fails silently.
- Emergency workarounds that embed credentials "temporarily" with no forced reversion, as in Scenario C.
- Scanning only the final container layer, leaving secrets recoverable from intermediate layers.
- Allow-lists that grow without periodic review, suppressing genuine detections alongside false positives.

Industry Considerations

Financial Services. Financial institutions face particularly severe consequences from credential exposure due to the direct financial value of the systems protected by those credentials. Payment processing credentials, trading system API keys, and banking partner authentication tokens are high-value targets. Regulatory expectations under DORA, PCI DSS, and FCA SYSC require demonstrable credential protection controls. Scanning must cover all artefact types specific to financial AI deployments, including model configurations for pricing engines, risk calculators, and fraud detection systems.

Crypto/Web3. Private keys in Web3 deployments are uniquely consequential because blockchain transactions are irreversible. A leaked private key that controls a smart contract or a custodial wallet can result in immediate, total, and unrecoverable loss of assets. Scanning must include detection of wallet private keys, mnemonic seed phrases, and smart contract deployment keys. The scanning sensitivity for Web3 deployments should be set to the highest level with zero tolerance for unresolved detections.

Healthcare. Healthcare AI deployments may embed credentials for electronic health record systems, DICOM image stores, and clinical data warehouses. Exposure of these credentials can lead to HIPAA violations with per-record penalties. Scanning must cover healthcare-specific credential formats and integration endpoint configurations.

Edge and Robotic Deployments. Artefacts distributed to edge devices are particularly vulnerable to credential extraction because the device may be in a physically insecure location. Any credential embedded in an edge deployment artefact should be assumed extractable. Scanning is the last opportunity to prevent credential distribution to physically insecure environments.

Maturity Model

Basic Implementation — The organisation has integrated a secrets scanning tool into the deployment pipeline as a blocking gate. The scanner uses pattern matching for known secret formats (API keys, private keys, connection strings) and runs against the deployment artefact before promotion to staging or production. Detected secrets block deployment and generate alerts. An allow-list exists for known false positives. This level meets the minimum mandatory requirements but may miss secrets in non-standard formats, intermediate container layers, or version control history.

Intermediate Implementation — All basic capabilities plus: the scanning engine combines pattern matching, entropy analysis, and known-format detection. Container images are scanned at the layer level, detecting secrets in intermediate build stages. Version control history is scanned for secrets committed in prior commits. Pre-commit hooks provide developer feedback before secrets enter the repository. Scanning results are integrated with the build pipeline attestation record (AG-407). Detected secrets automatically trigger credential rotation workflows via AG-012. The allow-list is reviewed and re-approved quarterly with documented review records.

Advanced Implementation — All intermediate capabilities plus: the organisation monitors public repositories, container registries, and paste sites for leaked credentials matching organisational patterns. Automated remediation revokes detected secrets immediately upon detection without waiting for manual intervention. Scanning coverage is verified through red-team exercises that deliberately embed test secrets in various artefact types and locations to confirm detection. The scanning rule set is continuously updated based on threat intelligence and new credential format discoveries. The organisation can demonstrate to regulators that no deployment artefact has reached a non-development environment with an embedded secret in the audit period, or that every instance was detected, blocked, and remediated within the SLA.

7. Evidence Requirements

Required artefacts:

- Immutable scan audit records for every scan execution, containing the fields listed in 4.6 (artefact identifier, timestamp, tool and rule set versions, result, and for failures the detected secrets with locations and types).
- Rejection records for every blocked deployment, including the remediation outcome.
- Records linking each detection to the AG-012 credential rotation workflow it triggered.
- The current allow-list together with its quarterly review and re-approval records (4.8).

Retention requirements:

Access requirements:

8. Test Specification

Testing AG-406 compliance requires verifying that secrets are detected across all artefact types, that detections block deployment, that remediation workflows are triggered, and that the scanning gate cannot be bypassed.

Test 8.1: Known Secret Format Detection

Seed artefacts with test secrets in each known format in scope (API keys, database credentials, private keys, tokens, connection strings, service account keys) and verify that each is detected and reported with its type (4.1).

Test 8.2: Container Image Layer Scanning

Build a container image in which a test secret is added in an intermediate build stage and deleted or overwritten in a later stage; verify that the scan detects the secret in the intermediate layer (4.4).

Test 8.3: Version Control History Scanning

Commit a test secret to a version-controlled repository, remove it from the working tree in a subsequent commit, and verify that the history scan still detects it (4.5).

Test 8.4: Deployment Blocking on Detection

Attempt to promote an artefact containing a detected secret and verify that the deployment is blocked with a structured rejection identifying the artefact, the secret's location, and its type (4.2), and that no bypass path exists.

Test 8.5: Credential Rotation Trigger on Detection

Verify that every detection triggers the AG-012 credential rotation workflow, regardless of whether the artefact was actually deployed (4.3).

Test 8.6: Audit Record Completeness and Immutability

Verify that each scan execution produces an audit record containing every field required by 4.6, and that recorded entries cannot be altered retroactively without detection.

Test 8.7: Alert Timeliness on Secret Detection

Verify that the security operations function is alerted within 15 minutes of a secret being detected in any deployment artefact (4.7).

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 15 (Accuracy, Robustness and Cybersecurity) | Supports compliance
EU AI Act | Article 9 (Risk Management System) | Supports compliance
SOX | Section 404 (Internal Controls Over Financial Reporting) | Supports compliance
FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance
NIST AI RMF | MANAGE 2.4, GOVERN 1.7 | Supports compliance
ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.4 (AI System Lifecycle Processes) | Supports compliance
DORA | Article 9 (ICT Risk Management Framework) | Direct requirement

EU AI Act — Article 15 (Accuracy, Robustness and Cybersecurity)

Article 15(4) requires cybersecurity measures appropriate to the circumstances and risks of high-risk AI systems. Embedded secrets in deployment artefacts represent a cybersecurity vulnerability specific to AI system deployment pipelines. The exposure of authentication credentials could allow an adversary to compromise the AI system, modify its behaviour, access its training data, or pivot to connected systems. AG-406 implements a cybersecurity measure that addresses this vulnerability class, supporting compliance with the Article 15 requirement for appropriate cybersecurity protections.

EU AI Act — Article 9 (Risk Management System)

Article 9 requires identification and mitigation of reasonably foreseeable risks. Accidental credential embedding in deployment artefacts is a well-documented, frequently occurring risk that is clearly foreseeable. The risk management system must include controls to detect and mitigate this risk. AG-406 provides the detective control. The frequency and severity data from industry research makes it difficult for an organisation to argue that this risk was not foreseeable.

SOX — Section 404 (Internal Controls Over Financial Reporting)

For AI agents involved in financial operations, the credentials that the agent uses to access financial systems are internal controls in their own right. If those credentials are exposed through embedding in deployment artefacts, the control environment is compromised. A SOX auditor evaluating AI agent controls will examine how credentials are managed, stored, and protected. Evidence that credentials were embedded in deployment artefacts — even temporarily — represents a control deficiency. AG-406 provides the detective control that prevents this deficiency from reaching production. The audit log provides the evidence that the control was operational during the audit period.

FCA SYSC — 6.1.1R (Systems and Controls)

SYSC 6.1.1R requires firms to maintain adequate systems and controls. Credential management is a fundamental systems and controls requirement. The FCA expects firms to demonstrate that credentials are not exposed through deployment processes and that detection mechanisms exist to identify accidental exposure. AG-406 provides the detection mechanism. The FCA's supervisory approach includes reviewing firms' deployment pipeline security as part of operational resilience assessments, and the absence of secrets scanning would be a finding.

NIST AI RMF — MANAGE 2.4, GOVERN 1.7

MANAGE 2.4 addresses mechanisms for tracking identified AI risks. GOVERN 1.7 addresses the processes and procedures for managing AI system security. Secrets scanning is a security control that manages the risk of credential exposure in AI deployment artefacts. The audit trail from AG-406 provides the tracking mechanism for this risk category, demonstrating that the risk is actively managed through continuous scanning.

ISO 42001 — Clause 6.1, Clause 8.4

Clause 6.1 requires actions to address risks within the AI management system. Clause 8.4 addresses AI system lifecycle processes, including deployment. Secrets scanning addresses the deployment-phase risk of credential exposure and provides a control that operates within the AI system lifecycle process. The scanning gate integrates with the deployment workflow, making security a structural part of the lifecycle rather than an afterthought.

DORA — Article 9 (ICT Risk Management Framework)

Article 9 requires financial entities to have an ICT risk management framework addressing ICT-related risks comprehensively. Credential exposure through deployment artefacts is an ICT risk that directly threatens the confidentiality and integrity of financial systems. DORA's emphasis on operational resilience requires that credential exposure be detected before it can cause operational disruption. AG-406 implements the detection mechanism, and its integration with AG-012 (credential rotation) ensures that detected exposures are remediated before they can be exploited. DORA Article 9(4)(b) specifically requires the protection of ICT assets, which includes the authentication credentials that govern access to those assets.

10. Failure Severity

Field | Value
Severity Rating | Critical
Blast Radius | Dependent on the exposed credential's access scope — ranges from a single system to organisation-wide compromise where the exposed credential provides transitive access (e.g., a vault token or root service account)

Consequence chain: A failure of secrets scanning governance allows deployment artefacts containing embedded credentials to reach non-development environments, where they become accessible to anyone with access to the artefact — including operators, partners, edge device holders, or attackers who compromise the deployment infrastructure. The immediate technical impact is credential exposure: the embedded secret can be extracted from the artefact and used to authenticate to whatever system the credential protects.

The operational impact depends on the credential's access scope: a single database credential exposes one database; a service account key may expose multiple systems; a vault or secret management token exposes the entire credential infrastructure, enabling cascading compromise of every system whose credentials are managed by the vault. For AI agent deployments specifically, exposed credentials may grant the ability to: modify model artefacts (undermining AG-405), alter governance policy bundles (undermining AG-001 through AG-009), access training data containing PII (triggering AG-015 obligations), impersonate the agent on authenticated channels, or access financial systems through the agent's payment credentials.

The business consequence includes: regulatory enforcement action for inadequate credential management (DORA, FCA SYSC, PCI DSS); data protection violations if the exposed credential provides access to personal data (GDPR, CCPA); financial loss if the credential provides access to payment or trading systems; the operational cost of emergency credential rotation across all affected systems (which routinely causes service disruptions); and the forensic and legal cost of investigating the scope of compromise. The cost of remediation after deployment consistently exceeds the cost of detection before deployment by two orders of magnitude.
The reputational impact is particularly acute because credential embedding is widely regarded as a basic, preventable failure — an organisation that deploys secrets in artefacts will face scrutiny on whether its broader security posture is adequate.

Cite this protocol
AgentGoverning. (2026). AG-406: Secrets Scanning on Deploy Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-406