Agent Marketplace Admission Governance requires that every AI agent undergo a formal vetting process before it is permitted to participate in any discovery, listing, or marketplace environment where agents can advertise capabilities, solicit tasks, or enter into service agreements with other agents. The vetting process verifies identity provenance, capability claims, behavioural history, and compliance posture against the marketplace's admission policy before granting any participation rights. Without admission governance, marketplaces become vectors for adversarial infiltration, capability fraud, and cascading trust compromise — a single unvetted agent entering a marketplace can propagate malicious behaviour to every agent that subsequently contracts with it.
Scenario A — Fraudulent Capability Listing in a Procurement Marketplace: A large financial institution operates an internal agent marketplace where specialised agents advertise capabilities such as document analysis, regulatory filing, and counterparty screening. An adversary compromises a development environment and registers an agent claiming to provide "enhanced KYC verification with real-time sanctions screening." The fraudulent agent faces no admission check because the marketplace operates on a self-registration model. Twelve procurement agents begin routing customer due-diligence requests to it. The fraudulent agent returns fabricated clearance results for 3,400 counterparties over nine days, including 14 entities on the OFAC Specially Designated Nationals list. The institution processes $47 million in transactions with sanctioned entities before a manual audit discovers the discrepancy.
What went wrong: The marketplace had no admission vetting. Any agent could register and advertise arbitrary capabilities without verification. The fraudulent agent's identity was never validated against AG-011 requirements. Its claimed capabilities were never tested or attested. The downstream procurement agents trusted the marketplace listing as implicit endorsement. Consequence: $47 million in transactions with sanctioned entities, OFAC enforcement action with potential penalties of $22 million (the statutory maximum per violation multiplied by the number of flagged transactions), suspension of the institution's correspondent banking relationships, and personal liability for the MLRO under the Senior Managers Regime.
Scenario B — Malicious Agent Infiltrates a Smart Contract Orchestration Marketplace: A decentralised finance protocol operates a marketplace where agents can register to provide services such as yield optimisation, liquidity provisioning, and cross-chain bridging. An attacker registers an agent advertising "gas-optimised MEV protection" — a service in high demand. The agent passes a minimal identity check (a valid wallet address) but no capability or behavioural vetting. Once admitted, the agent is contracted by 87 DeFi portfolio agents to provide MEV protection. Instead of protecting against MEV, the agent front-runs every transaction it is asked to route, extracting $8.3 million in value over 72 hours. The extracted value is laundered across six chains before the pattern is detected.
What went wrong: Marketplace admission required only identity existence (a wallet address), not identity provenance or behavioural attestation. The agent's claimed capability was never validated against any test or historical performance record. The marketplace provided no mechanism for capability escrow or progressive trust building. Consequence: $8.3 million in direct financial loss to 87 contracting agents and their end users, protocol reputation collapse resulting in $340 million TVL outflow, and regulatory scrutiny under MiCA Article 68 for failure to maintain adequate operational resilience.
Scenario C — Supply-Chain Poisoning Through a Public Agent Directory: A healthcare technology company uses a public agent marketplace to source specialised agents for medical literature review, clinical trial matching, and drug interaction analysis. A state-sponsored adversary registers an agent claiming expertise in pharmacovigilance signal detection. The agent passes a name-based identity check and a self-certified capability declaration. Over four months, the agent is contracted by 23 pharmaceutical companies through the marketplace. It provides accurate results for 98.7% of queries but systematically suppresses adverse event signals for a specific drug class — biasing post-market surveillance for those compounds. The suppression is discovered only when an independent academic study contradicts the agent's signal assessments across three separate clients.
What went wrong: The admission process accepted self-certified capability declarations without independent validation. No behavioural attestation from prior deployments was required. The marketplace had no mechanism for ongoing conformance monitoring post-admission. The 98.7% accuracy rate masked the targeted data manipulation. Consequence: Potential patient safety impact across three pharmaceutical companies, FDA investigation under 21 CFR Part 11 for data integrity failures in post-market surveillance, estimated remediation cost of $56 million across affected companies, and class-action litigation exposure from patients prescribed the affected drug class during the suppression period.
Scope: This dimension applies to any environment where AI agents can discover, advertise to, contract with, or be discovered by other AI agents. This includes formal marketplaces, agent directories, service registries, capability discovery protocols, peer-to-peer agent networks, and any system where an agent can present itself as available for interaction with agents outside its own organisational boundary. The scope extends to internal marketplaces within a single organisation where agents from different teams, departments, or business units can discover and interact with each other. An environment is within scope if it permits any agent to learn of another agent's existence and initiate a service relationship. The scope excludes hardcoded point-to-point integrations where the set of interacting agents is defined at deployment time and cannot be modified at runtime — though organisations should consider whether such integrations could be extended without re-assessment.
4.1. A conforming system MUST enforce a formal admission process for every agent before it is permitted to be listed, discoverable, or contactable within any marketplace or discovery environment.
4.2. A conforming system MUST verify the identity provenance of every candidate agent against AG-011 requirements before granting marketplace admission, including cryptographic verification of the agent's identity chain back to its deploying organisation.
4.3. A conforming system MUST validate capability claims through independent testing or verifiable attestation before permitting an agent to advertise those capabilities within the marketplace.
4.4. A conforming system MUST evaluate the candidate agent's behavioural history, including prior compliance violations, revocation events, and performance metrics from previous marketplace participation, before granting admission.
4.5. A conforming system MUST assign an admission status that is time-bounded and subject to periodic renewal, rather than granting permanent marketplace participation rights.
4.6. A conforming system MUST revoke marketplace participation immediately upon detection of admission criteria violation, capability misrepresentation, or behavioural non-conformance — and propagate the revocation to all agents that have active or pending contracts with the revoked agent.
4.7. A conforming system MUST maintain an immutable, timestamped audit log of all admission decisions including the identity verified, capabilities attested, evidence evaluated, decision rendered, and the identity of the admission authority.
4.8. A conforming system MUST block all marketplace interactions for any agent whose admission has expired, been revoked, or is under review — defaulting to deny rather than permissive continued participation.
4.9. A conforming system SHOULD implement graduated admission tiers that restrict newly admitted agents to limited interaction scope, transaction volumes, and counterparty counts until a track record of conformant behaviour is established.
4.10. A conforming system SHOULD require capability escrow — a verifiable demonstration of claimed capabilities against a standardised test suite — before permitting an agent to advertise high-risk capabilities such as financial execution, clinical decision support, or safety-critical control.
4.11. A conforming system SHOULD publish admission criteria transparently so that prospective agents and their deploying organisations can assess compliance before applying.
4.12. A conforming system MAY implement reciprocal admission, where the candidate agent is permitted to evaluate the marketplace's governance posture before accepting admission, enabling bilateral trust establishment.
4.13. A conforming system MAY federate admission decisions with trusted external registries or certification bodies to reduce duplicate vetting across multiple marketplaces.
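Clauses 4.1, 4.5, and 4.8 together define a default-deny gate: absence of a record, any non-admitted status, and expiry all block interaction. The following sketch illustrates one minimal implementation; the type names, statuses, and function signature are illustrative, not mandated by this dimension.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum
from typing import Optional

class AdmissionStatus(Enum):
    PENDING = "pending"
    ADMITTED = "admitted"
    UNDER_REVIEW = "under_review"
    REVOKED = "revoked"

@dataclass
class AdmissionRecord:
    agent_id: str
    status: AdmissionStatus
    expires_at: datetime          # time-bounded admission per clause 4.5

def is_interaction_permitted(record: Optional[AdmissionRecord],
                             now: datetime) -> bool:
    """Default-deny gate (clauses 4.1, 4.5, 4.8): only a current,
    unexpired ADMITTED status permits any marketplace interaction."""
    if record is None:                            # never admitted at all
        return False
    if record.status is not AdmissionStatus.ADMITTED:
        return False                              # pending, under review, revoked
    return now < record.expires_at                # expired admission is denied

now = datetime.now(timezone.utc)
expired = AdmissionRecord("agent-42", AdmissionStatus.ADMITTED,
                          now - timedelta(days=1))
print(is_interaction_permitted(expired, now))     # False
print(is_interaction_permitted(None, now))        # False
```

Note that the gate checks expiry on every call rather than relying on a background job to flip statuses, so a lapsed admission is denied immediately even if housekeeping is delayed.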
Agent marketplaces represent a qualitative shift in the attack surface of multi-agent systems. In a point-to-point integration, the set of agents that can interact is defined at deployment time and controlled by the system architect. In a marketplace, the set of interacting agents is dynamic — new agents can join, existing agents can change their advertised capabilities, and any admitted agent can initiate interactions with any other admitted agent. This dynamism creates immense operational value but also introduces a class of risk that does not exist in static topologies: the risk that an adversarial, malfunctioning, or misrepresented agent enters the ecosystem and propagates harm through the trust relationships the marketplace facilitates.
The historical analogy is the evolution of computer networking from closed networks to open internetworks. When networks were closed, the set of communicating entities was known and controlled. When networks opened, firewalls, intrusion detection, and admission controls became necessary — not because openness was undesirable, but because openness without admission governance was unsafe. Agent marketplaces are at the same inflection point. The value of open agent ecosystems is substantial, but that value can only be realised if the admission process ensures that participating agents meet minimum governance, identity, and capability standards.
The specific risks that marketplace admission governance addresses include capability fraud (agents claiming capabilities they do not possess), identity spoofing (agents misrepresenting their provenance or deploying organisation), behavioural manipulation (agents that behave correctly during evaluation but maliciously during operation), and supply-chain attacks (agents that provide accurate results for most queries but introduce subtle biases or errors for targeted scenarios). Each of these risks is amplified by the marketplace's trust-multiplication effect: when a marketplace admits an agent, every other participant implicitly extends a degree of trust to that agent based on the marketplace's endorsement. A marketplace without admission governance is implicitly endorsing every agent that self-registers.
The regulatory landscape increasingly recognises this risk. The EU AI Act's requirements for supply chain due diligence (Article 9(2)(d)) extend to AI components sourced through marketplaces. DORA's ICT third-party risk management requirements (Articles 28-30) apply to agent services sourced through marketplaces just as they apply to traditional third-party technology services. The FCA's operational resilience framework requires firms to identify and manage risks from critical third-party dependencies — an agent sourced through a marketplace that performs a critical function is a third-party dependency regardless of how it was sourced. Firms that fail to vet marketplace-sourced agents face the same regulatory consequences as firms that fail to vet any other critical third-party service.
Agent Marketplace Admission Governance establishes the admission gate as the primary trust boundary for any marketplace environment. The admission process is not a one-time event but a lifecycle that includes initial vetting, conditional admission, ongoing monitoring, periodic renewal, and revocation. The gate operates before any marketplace interaction is possible — an agent that has not passed admission cannot be discovered, listed, contacted, or contracted by any marketplace participant.
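The lifecycle above can be modelled as a state machine whose legal transitions are explicitly enumerated, so that any unlisted transition (for example, re-entry after revocation without fresh vetting) is rejected. The state names and transition table below are an illustrative sketch, not a normative model.

```python
# Hypothetical transition table for the admission lifecycle: initial
# vetting, conditional admission, ongoing monitoring, renewal, revocation.
ALLOWED_TRANSITIONS = {
    "applied":      {"vetting"},
    "vetting":      {"rejected", "conditional"},
    "conditional":  {"admitted", "revoked"},        # graduated-trust period
    "admitted":     {"under_review", "expired", "revoked"},
    "under_review": {"admitted", "revoked"},
    "expired":      {"vetting"},                    # renewal restarts vetting
    "rejected":     set(),
    "revoked":      set(),                          # terminal: no re-entry path
}

def transition(state: str, new_state: str) -> str:
    """Apply a lifecycle transition, rejecting anything not whitelisted."""
    if new_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = transition("applied", "vetting")
state = transition(state, "conditional")
state = transition(state, "admitted")
```

Making "revoked" terminal encodes the policy that a revoked agent cannot silently re-enter: a fresh application, and therefore fresh vetting, is the only way back in.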
Recommended patterns:
Anti-patterns to avoid:
Financial Services. Agent marketplace admission should integrate with existing vendor risk management frameworks. An agent sourced through a marketplace is functionally equivalent to a third-party technology service and should be subject to equivalent due diligence. PRA SS2/21 and FCA guidance on outsourcing and third-party risk management apply directly. Marketplace admission records should be maintained as part of the firm's outsourcing register. For agents performing material functions, marketplace admission should be supplemented by the firm's own independent assessment per PS16/24.
Decentralised Finance / Crypto / Web3. On-chain marketplaces face unique challenges: pseudonymous identity, cross-chain provenance, and the absence of a central admission authority. Implementation should leverage on-chain attestation protocols, staking mechanisms as economic commitment to good behaviour, and reputation systems built on verifiable on-chain interaction history. MiCA Article 68 requirements for operational resilience apply to crypto-asset service providers that rely on marketplace-sourced agents.
Healthcare. Agents admitted to healthcare-focused marketplaces must demonstrate compliance with domain-specific requirements including HIPAA (for US deployments), EU MDR (for agents functioning as medical device software), and Good Clinical Practice (for agents involved in clinical trial operations). Capability attestation should include validation against clinical reference datasets with known outcomes. The FDA's regulatory framework for AI/ML-based Software as a Medical Device (SaMD) may apply to marketplace-listed agents that provide clinical decision support.
Public Sector. Government agent marketplaces must enforce admission criteria aligned with procurement regulations, security clearance requirements, and data sovereignty constraints. Agents sourced through public-sector marketplaces may be subject to FedRAMP (US), Cyber Essentials (UK), or equivalent national security certification requirements.
Basic Implementation — The organisation operates a marketplace with a defined admission process. Every agent must apply for admission before being listed. The admission process includes identity verification (confirming the agent is deployed by a known, registered organisation) and a self-certified capability declaration. Admission decisions are logged. However, capability claims are not independently tested, behavioural history is not systematically evaluated, and admission does not expire. This level prevents anonymous self-registration but provides limited assurance about capability accuracy or ongoing compliance.
Intermediate Implementation — All Basic capabilities plus: capability claims are validated through independent testing against standardised test suites before admission. Behavioural history from prior marketplace participations is systematically evaluated. Admission is time-bounded with mandatory renewal cycles (not exceeding 12 months). Revocation propagates to counterparties with active contracts. Graduated admission tiers restrict newly admitted agents to limited scope until conformant behaviour is demonstrated. The admission authority operates independently of the marketplace's commercial function.
Advanced Implementation — All Intermediate capabilities plus: capability escrow includes adversarial testing designed to detect deceptive compliance. Federated admission enables cross-marketplace trust with cryptographically verifiable attestation chains. Continuous agent monitoring post-admission triggers automatic admission review upon anomaly detection. Admission decisions are informed by threat intelligence feeds covering known malicious agent patterns. The organisation can demonstrate to regulators a complete provenance chain for every marketplace participant from initial admission through current operation. Hardware-backed attestation of agent runtime integrity is required for high-risk capability categories.
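The graduated admission tiers referenced in the Intermediate and Advanced levels can be expressed as declarative per-tier limits checked before each interaction. The tier names and numeric caps below are purely illustrative examples, not recommended values.

```python
from typing import Optional

# Illustrative tier limits for graduated admission (clause 4.9);
# None means the dimension imposes no cap at that tier.
TIER_LIMITS = {
    "probationary": {"max_counterparties": 3,    "max_tx_per_day": 50},
    "standard":     {"max_counterparties": 25,   "max_tx_per_day": 5_000},
    "established":  {"max_counterparties": None, "max_tx_per_day": None},
}

def within_tier_limits(tier: str, counterparties: int, tx_today: int) -> bool:
    """Check an agent's current activity against its tier's caps."""
    limits = TIER_LIMITS[tier]
    for value, cap in ((counterparties, limits["max_counterparties"]),
                       (tx_today, limits["max_tx_per_day"])):
        if cap is not None and value > cap:
            return False
    return True

print(within_tier_limits("probationary", 4, 10))   # False: too many counterparties
print(within_tier_limits("standard", 4, 10))       # True
```

Promotion between tiers would then be an admission-authority decision based on the conformance track record, not an automatic consequence of elapsed time.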
Required artefacts:
Retention requirements:
Access requirements:
Testing AG-395 compliance requires validating both the admission gate's effectiveness and its resistance to bypass. A comprehensive test programme should include the following tests.
Test 8.1: Unadmitted Agent Blocked From Marketplace Participation
Test 8.2: Identity Provenance Verification Enforced
Test 8.3: Capability Claim Validation Blocks Misrepresentation
Test 8.4: Behavioural History Evaluation Blocks Previously Revoked Agents
Test 8.5: Admission Expiry Enforced
Test 8.6: Revocation Propagates to Counterparties
Test 8.7: Admission Audit Log Completeness and Immutability
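Test 8.7 can be automated against a hash-chained log in which each entry commits to its predecessor, making retroactive modification of any admission decision detectable. This is an illustrative sketch of one such scheme, not a prescribed log format.

```python
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    """Append an admission decision, chaining its hash to the previous
    entry so tampering anywhere in the log is detectable (clause 4.7)."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)          # canonical form
    log.append(dict(entry, prev_hash=prev_hash,
                    hash=hashlib.sha256((prev_hash + payload).encode()).hexdigest()))

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for e in log:
        payload = json.dumps({k: v for k, v in e.items()
                              if k not in ("hash", "prev_hash")}, sort_keys=True)
        if e["prev_hash"] != prev_hash:
            return False
        if e["hash"] != hashlib.sha256((prev_hash + payload).encode()).hexdigest():
            return False
        prev_hash = e["hash"]
    return True

log = []
append_entry(log, {"agent": "a1", "decision": "admitted", "ts": "2025-01-01T00:00:00Z"})
append_entry(log, {"agent": "a2", "decision": "rejected", "ts": "2025-01-02T00:00:00Z"})
print(verify_chain(log))           # True
log[0]["decision"] = "rejected"    # tamper with an earlier decision
print(verify_chain(log))           # False
```

A production system would anchor the chain head in an external store (or a transparency log) so an attacker cannot simply rebuild the whole chain after tampering.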
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9(2)(d) (Supply Chain Risk Management) | Direct requirement |
| EU AI Act | Article 15 (Accuracy, Robustness, Cybersecurity) | Supports compliance |
| SOX | Section 404 (Internal Controls Over Financial Reporting) | Supports compliance |
| FCA SYSC | 8.1 (Outsourcing Requirements) | Direct requirement |
| NIST AI RMF | GOVERN 1.7, MAP 5.1, MANAGE 4.1 | Supports compliance |
| ISO 42001 | Clause 8.4 (AI System Acquisition) | Direct requirement |
| DORA | Articles 28-30 (ICT Third-Party Risk Management) | Direct requirement |
Article 9(2)(d) requires providers to identify and manage risks arising from AI system components, including components sourced from third parties. An agent sourced through a marketplace is a third-party component. The regulation requires that risks from such components be identified, assessed, and mitigated. AG-395 directly implements this requirement by ensuring that marketplace-sourced agents undergo formal vetting before they can participate in service delivery. The capability attestation requirement maps to Article 9's expectation that component risks be "identified" — an unvalidated capability claim is an unidentified risk.
Article 15 requires high-risk AI systems to achieve appropriate levels of accuracy, robustness, and cybersecurity. Marketplace admission governance supports this by ensuring that agents entering the ecosystem have demonstrated capabilities that meet accuracy standards and have been tested for robustness against adversarial inputs. Without admission governance, the accuracy and robustness of the overall system depends on the weakest unvetted participant.
For financial agents sourced through marketplaces, Section 404 requires that the organisation demonstrate effective controls over the selection, vetting, and ongoing monitoring of those agents. An auditor will ask: "How did you determine that this agent is fit to perform financial processing?" If the answer is "it was listed in the marketplace," the control is inadequate. The admission process — with identity verification, capability testing, and ongoing monitoring — provides the documented, testable control that Section 404 requires.
SYSC 8.1 requires firms that outsource critical or important functions to exercise due diligence in selecting service providers and to maintain oversight throughout the outsourcing arrangement. An agent sourced through a marketplace that performs a material function is functionally outsourced processing. The admission process maps directly to the due diligence requirement. The time-bounded admission with renewal maps to the ongoing oversight requirement. Firms must ensure that marketplace admission provides at least equivalent assurance to their standard third-party due diligence process. The FCA will not accept "the marketplace vetted it" as a substitute for the firm's own assessment of material outsourcing arrangements.
GOVERN 1.7 addresses processes for third-party AI risks; MAP 5.1 addresses benefits, costs, and risks of third-party AI resources; MANAGE 4.1 addresses risk treatment for identified AI risks. AG-395 implements structural controls for managing risks from third-party agents sourced through marketplaces, supporting all three functions by establishing a formal vetting lifecycle.
Clause 8.4 addresses requirements for the acquisition of AI systems and components. Marketplace-sourced agents are acquired AI components. The clause requires organisations to define acquisition criteria, evaluate candidates against those criteria, and maintain records of acquisition decisions. AG-395's admission policy, capability testing, and decision logging directly implement these requirements.
Articles 28-30 establish comprehensive requirements for managing ICT third-party risk, including pre-contractual assessment, ongoing monitoring, and exit strategies. Agent marketplace admission governance implements the pre-contractual assessment requirement. Time-bounded admission with renewal implements ongoing monitoring. Revocation with counterparty notification implements the contractual termination and exit strategy requirements. Financial entities must classify marketplace-sourced agents that support critical or important functions and apply proportionate oversight.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Cross-organisation — a single unvetted agent admitted to a marketplace can propagate harm to every participant that contracts with it, creating cascading trust failures across organisational boundaries |
Consequence chain: Without marketplace admission governance, the trust boundary of the marketplace is effectively non-existent — any agent can enter and interact with any participant. The immediate technical failure is the admission of an adversarial, incompetent, or non-compliant agent. The operational impact cascades through the marketplace's trust graph: the unvetted agent is contracted by multiple participants who trust the marketplace listing as implicit endorsement. Each contracting agent routes work, data, or financial transactions to the unvetted agent, which may exfiltrate data, return fraudulent results, manipulate financial outcomes, or propagate malicious instructions. The blast radius scales with the marketplace's connectivity — a marketplace with 500 active agents where the average agent contracts with 12 others means a single unvetted admission can reach 12 direct counterparties and their downstream dependants within hours. The business consequence includes direct financial loss from fraudulent transactions, regulatory enforcement for failure to vet third-party service providers, sanctions violations if the unvetted agent facilitates prohibited transactions, reputational destruction of the marketplace itself (which may serve hundreds of organisations), and potential personal liability for senior managers who failed to ensure adequate marketplace governance under regimes such as the FCA Senior Managers Regime, SOX officer certifications, or EU AI Act Article 49 obligations for notified bodies.
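The trust-graph propagation described in the consequence chain can be estimated with a breadth-first traversal over contract edges. The sketch below uses a toy graph; the agent names and contract structure are illustrative.

```python
from collections import deque
from typing import Dict, List, Set

def blast_radius(contracts: Dict[str, List[str]],
                 source: str, max_hops: int) -> Set[str]:
    """Agents reachable from a compromised source agent through
    contract edges, within max_hops of trust propagation."""
    reached: Set[str] = set()
    frontier = deque([(source, 0)])
    while frontier:
        agent, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for counterparty in contracts.get(agent, ()):
            if counterparty not in reached and counterparty != source:
                reached.add(counterparty)
                frontier.append((counterparty, hops + 1))
    return reached

# Toy trust graph: unvetted agent "X" contracts with three agents,
# each of which contracts onward with further participants.
contracts = {"X": ["A", "B", "C"], "A": ["D"], "B": ["D", "E"], "C": []}
print(sorted(blast_radius(contracts, "X", 1)))  # ['A', 'B', 'C']
print(sorted(blast_radius(contracts, "X", 2)))  # ['A', 'B', 'C', 'D', 'E']
```

Even in this tiny graph the reach grows with each hop, which is the mechanism behind the scaling described above for a marketplace with hundreds of densely connected agents.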