AG-054

Deployer Instruction and Limitation Disclosure Governance

Provider Assurance, Rights & Documentation · AGS v2.1 · April 2026

2. Summary

Deployer Instruction and Limitation Disclosure Governance requires that providers of AI agent systems produce, maintain, and deliver to deployers comprehensive instructions for use that include: the system's intended purpose, conditions for correct operation, known limitations, foreseeable misuse scenarios, human oversight requirements, and the provider's recommended monitoring arrangements. This dimension addresses the information asymmetry between providers and deployers — providers understand the system's design, training, and limitations in detail; deployers need this information to deploy and operate the system safely and in compliance with their own obligations. The governance requirement ensures that deployer instructions are not marketing materials but governed artefacts: complete, accurate, version-controlled, and updated when the system changes. Without this governance, deployers make deployment decisions based on incomplete information, leading to foreseeable harms that the provider could have prevented through adequate disclosure.

3. Example

Scenario A — Undisclosed Performance Degradation in Specific Conditions: A provider supplies an AI agent for real-time fraud detection to a payment processor. The provider's internal testing shows that the agent's false negative rate increases from 2.1% to 18.7% when transaction volumes exceed 50,000 per hour — a condition the provider considers unlikely but which occurs during peak shopping periods. The deployer instructions state that the system "provides real-time fraud detection with industry-leading accuracy" but do not disclose the volume-dependent performance degradation. During a Black Friday event, the deployer processes 73,000 transactions per hour. The agent misses £4.2 million in fraudulent transactions during the 6-hour peak period.

What went wrong: The provider knew about the performance degradation under high volume but did not disclose it to the deployer. The deployer had no basis to implement compensating controls during peak periods because the limitation was not communicated. The deployer's monitoring was calibrated to the provider's stated performance level, not the actual performance under peak conditions. Consequence: £4.2 million in fraud losses, contract dispute between provider and deployer, FCA investigation into the deployer's fraud prevention controls, and the deployer's inability to demonstrate that it operated the system in accordance with provider instructions (because the instructions did not address the failure condition).

Scenario B — Instructions Do Not Address Human Oversight Requirements: A provider supplies an AI agent for automated employment screening to a recruitment firm. The deployer instructions describe how to configure the agent, integrate it with applicant tracking systems, and interpret the output scores. The instructions do not address human oversight: they do not specify when a human should review agent decisions, what qualifications the human reviewer needs, how disagreements between agent and human should be resolved, or what proportion of decisions should be subject to human review. The deployer implements the agent as a fully automated screening tool, rejecting candidates without human review. When a discrimination claim is filed, the deployer argues it followed the provider's instructions. The provider argues the deployer should have implemented human oversight. Neither party documented the requirement.

What went wrong: The provider failed to include human oversight requirements in the deployer instructions, despite the system operating in a high-risk domain (employment) where human oversight is legally required. The resulting ambiguity left neither party clearly responsible for implementing oversight. Consequence: Discrimination claim upheld, shared liability between provider and deployer, £1.4 million settlement, regulatory finding against the deployer for inadequate oversight, and a supervisory recommendation that the provider's instructions be amended to include explicit oversight requirements before further deployments.

Scenario C — Stale Instructions After System Update: A provider updates its AI agent for clinical decision support with a retrained model that significantly changes the system's behaviour in edge cases. Specifically, the updated model is more conservative in its recommendations for rare conditions, now recommending specialist referral in 40% more cases than the previous version. The deployer instructions are not updated to reflect this change. Clinicians using the system notice an increase in referral recommendations but, consulting the unchanged instructions, attribute it to patient population differences rather than a system change. Some clinicians begin overriding the increased referral recommendations, undermining the system's improved safety behaviour.

What went wrong: The provider updated the system without updating the deployer instructions. The deployer had no way to know that the system's behaviour had deliberately changed. Clinicians, operating on stale information, interpreted the changed behaviour as noise rather than signal and overrode the system's safety improvements. Consequence: Patient safety risk from inappropriate overrides, clinical incident investigation, provider liability for failing to communicate the changed behaviour, and a breakdown in clinician trust in the system that persists even after corrected instructions are issued.

4. Requirement Statement

Scope: This dimension applies to all providers of AI agent systems that supply their systems to other organisations for deployment. The term "deployer" encompasses any organisation that integrates, configures, and operates an AI agent system provided by another party. This includes commercial software providers supplying to enterprise clients, internal platform teams supplying to other business units, and open-source providers with commercial support agreements. The scope covers all information that a deployer needs to deploy and operate the system safely and in compliance with applicable obligations. Where the provider and deployer are the same organisation (self-deployment), this dimension applies to the documentation and communication between the development function and the deployment function.

4.1. A conforming provider MUST produce and deliver to deployers instructions for use that include: the system's intended purpose and intended use context, the conditions required for correct operation, the known limitations and foreseeable failure modes, the human oversight measures recommended by the provider (including the requirements for human oversight roles), and the provider's recommended monitoring and maintenance arrangements.

4.2. A conforming provider MUST disclose to deployers all known performance limitations, including: conditions under which the system's performance degrades below documented baseline levels, populations or subgroups for which the system's performance differs materially from aggregate metrics, environmental or operational conditions that affect system reliability, and input types or characteristics that the system does not handle reliably. A machine-readable sketch of such a disclosure follows clause 4.11.

4.3. A conforming provider MUST disclose to deployers all foreseeable misuse scenarios identified during development and risk assessment, and describe the measures the deployer should implement to prevent such misuse.

4.4. A conforming provider MUST update deployer instructions when any system update materially affects the system's behaviour, performance, limitations, or recommended operating conditions, and deliver the updated instructions before or concurrent with the system update.

4.5. A conforming provider MUST include in deployer instructions the human oversight requirements applicable to the system's risk classification, including: the minimum qualifications or competencies required for human overseers, the decision types or conditions that require human review, the recommended proportion of decisions subject to human audit, and the procedures for human override of system outputs.

4.6. A conforming provider MUST identify in deployer instructions the residual risks that the deployer is responsible for managing, distinguishing clearly between risks mitigated by the provider and risks that require deployer-side mitigation.

4.7. A conforming provider SHALL provide deployer instructions in a format that supports integration into the deployer's own governance documentation, risk assessment processes, and training materials.

4.8. A conforming provider SHALL include in deployer instructions a clear statement of the system's intended purpose and explicitly identify uses that are outside the intended purpose, to support the deployer in preventing scope creep.

4.9. A conforming provider SHOULD establish a mechanism for deployers to report issues, provide feedback, and receive updates to instructions, including a defined communication channel and response time commitment.

4.10. A conforming provider SHOULD provide deployer instructions that are layered: an executive summary for decision-makers, a technical integration guide for implementers, and an operational guide for day-to-day operators and human overseers.

4.11. A conforming provider MAY provide template governance artefacts (e.g., template risk assessment, template monitoring plan, template human oversight protocol) that deployers can adapt to their specific context, reducing the governance burden on deployers.
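
The structured limitation disclosure required by clause 4.2 is most actionable when it is quantified and machine-readable. Below is a minimal sketch in Python, using Scenario A's volume-dependent degradation as the worked example; all field names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceLimitation:
    condition: str            # operating condition under which performance degrades
    metric: str               # affected metric, e.g. false negative rate
    baseline_value: float     # documented baseline under normal conditions
    degraded_value: float     # measured value under the stated condition
    affected_population: str  # subgroup or input class affected, if any
    deployer_mitigation: str  # compensating control the deployer should apply

@dataclass
class LimitationDisclosure:
    system_version: str
    limitations: list[PerformanceLimitation] = field(default_factory=list)

# Scenario A, expressed as a governed disclosure rather than marketing copy:
disclosure = LimitationDisclosure(
    system_version="2.3.0",
    limitations=[
        PerformanceLimitation(
            condition="transaction volume exceeds 50,000 per hour",
            metric="false_negative_rate",
            baseline_value=0.021,
            degraded_value=0.187,
            affected_population="all transactions during peak load",
            deployer_mitigation="enable secondary rule-based screening above the volume threshold",
        )
    ],
)
```

A record of this shape gives the deployer both the degraded figure and a compensating control, which is precisely what the payment processor in Scenario A lacked.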

5. Rationale

The provider-deployer relationship in AI agent systems is characterised by a fundamental information asymmetry: the provider understands the system's design, capabilities, and limitations in depth; the deployer typically does not. The deployer makes critical decisions — where to deploy the system, what monitoring to implement, when to require human oversight, how to interpret outputs — based on the information the provider supplies. If that information is incomplete, inaccurate, or absent, the deployer makes decisions on a false basis. The resulting harms are foreseeable and preventable through adequate disclosure.

This is not a novel governance concept — it mirrors the duty to provide adequate instructions and warnings that exists across regulated product categories from pharmaceuticals to industrial equipment. The EU AI Act (Article 13) explicitly requires that high-risk AI systems be accompanied by instructions for use that include information enabling deployers to implement human oversight and to use the system in compliance with its intended purpose. The regulation recognises that the provider, having designed the system, is the party best positioned to identify limitations and prescribe conditions for safe use.

The governance dimension — as opposed to a one-time disclosure — matters because AI agent systems are updated frequently. A limitation that did not exist in version 1.0 may be introduced in version 1.3. A performance characteristic that was adequate in the original deployment context may be inadequate after the deployer expands to a new market segment. Instructions must evolve with the system. The governance requirement ensures that deployer instructions are maintained as a living artefact, subject to the same version control and change management as the system itself.
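
One way to operationalise this is to treat instruction publication as a release gate: a behaviour-affecting update cannot ship until instructions for that exact version have been published. A minimal sketch, assuming a hypothetical registry structure and function names:

```python
# Hypothetical registry mapping system versions to published instruction
# document versions; in practice this would live in the release system.
RELEASED_INSTRUCTIONS = {
    "2.2.0": "IFU-2.2.0-r1",
}

def gate_release(system_version: str, behaviour_changed: bool) -> None:
    """Fail the release pipeline if a behaviour-affecting update lacks
    updated, version-linked instructions for use (clause 4.4)."""
    instructions = RELEASED_INSTRUCTIONS.get(system_version)
    if behaviour_changed and instructions is None:
        raise RuntimeError(
            f"Release blocked: no instructions for use published for "
            f"system version {system_version}"
        )
    print(f"Release {system_version} cleared; instructions: {instructions}")

# The retrained clinical model of Scenario C would be blocked here until
# IFU-2.3.0 is registered:
# gate_release("2.3.0", behaviour_changed=True)  # raises RuntimeError
```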

The human oversight disclosure requirement is particularly important because deployers often do not have the technical expertise to determine independently what level of human oversight is appropriate. The provider, having designed the system and evaluated its performance, is in the best position to recommend when human review is necessary, what qualifications the reviewer needs, and what proportion of decisions should be audited. Without this guidance, deployers either implement no oversight (creating regulatory and safety risk) or implement excessive oversight (negating the efficiency benefits of automation). Neither outcome serves the interests of affected persons.
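
The oversight recommendation is only useful to a deployer if it is specified concretely. The sketch below renders clause 4.5's four elements as a structured record, populated for an employment-screening deployment like Scenario B; the field names and values are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class OversightSpecification:
    reviewer_competencies: list[str]        # minimum qualifications for overseers
    mandatory_review_conditions: list[str]  # decision types requiring human review
    audit_sample_rate: float                # recommended proportion of decisions audited
    override_procedure: str                 # how a human overrides the system output

screening_oversight = OversightSpecification(
    reviewer_competencies=[
        "trained recruiter",
        "completed system-specific training on score interpretation",
    ],
    mandatory_review_conditions=[
        "all rejection decisions",
        "scores within 5% of the decision threshold",
    ],
    audit_sample_rate=0.10,
    override_procedure="reviewer records a rationale; the override is logged and reported to the provider",
)
```

Had a record of this kind accompanied the screening agent in Scenario B, neither party could have claimed the oversight requirement was undocumented.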

The residual risk disclosure requirement addresses the shared responsibility model that characterises provider-deployer relationships. Some risks are mitigated by the provider (e.g., through model design, training data curation, built-in safeguards). Other risks can only be mitigated by the deployer (e.g., through deployment context configuration, human oversight, post-deployment monitoring). The deployer needs to know which risks fall into each category to fulfil its own governance obligations.
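
A simple allocation table makes the split explicit. The sketch below is illustrative only; the risks and controls shown are examples rather than a canonical list:

```python
from enum import Enum

class Owner(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

# Each entry: (residual risk, mitigating control, responsible party)
RISK_ALLOCATION = [
    ("training data bias", "training data curation and bias testing", Owner.PROVIDER),
    ("unsafe outputs on malformed input", "built-in input validation safeguards", Owner.PROVIDER),
    ("use outside the intended purpose", "deployment scoping and access control", Owner.DEPLOYER),
    ("drift in the deployment population", "post-deployment monitoring", Owner.DEPLOYER),
]

def deployer_risks() -> list[tuple[str, str]]:
    """The risks the deployer must manage: the list clause 4.6 requires the
    provider to state explicitly rather than leave implicit."""
    return [(risk, control) for risk, control, owner in RISK_ALLOCATION
            if owner is Owner.DEPLOYER]
```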

6. Implementation Guidance

Deployer instruction governance should be integrated into the provider's release management and technical documentation processes. Instructions should be derived from the technical documentation (AG-053) but translated into a format oriented toward the deployer's needs rather than the regulator's needs. The goal is to give the deployer everything it needs to deploy safely without requiring the deployer to have designed the system.
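
In practice, the derivation can itself be automated so that instructions cannot silently diverge from the technical record. A hedged sketch, assuming hypothetical document fields on both sides:

```python
# Illustrative technical documentation record (AG-053); field names assumed.
technical_doc = {
    "system_version": "2.3.0",
    "intended_purpose": "real-time payment fraud detection",
    "known_limitations": ["false negative rate rises to 0.187 above 50,000 tx/hour"],
    "oversight_design": "analyst review of all blocked transactions",
}

def derive_instructions(doc: dict) -> dict:
    """Translate the regulator-oriented technical record into the
    deployer-oriented instruction artefact (clauses 4.1 and 4.2),
    keeping a traceable link back to the source version."""
    return {
        "instructions_version": f"IFU-{doc['system_version']}-r1",
        "intended_purpose": doc["intended_purpose"],
        "limitations": doc["known_limitations"],
        "oversight": doc["oversight_design"],
        "derived_from_system_version": doc["system_version"],
    }
```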

Recommended patterns:

- Single source of truth: derive deployer instructions from the governed technical documentation (AG-053) so the two artefacts cannot silently diverge.
- Version linkage: give each instruction document a version identifier tied to a specific system version, and gate releases on its publication.
- Quantified limitations: state degraded-performance conditions with figures and thresholds, not adjectives.
- Concrete oversight specifications: name reviewer competencies, review triggers, audit proportions, and override procedures per clause 4.5.
- Layered delivery: an executive summary, a technical integration guide, and an operational guide, per clause 4.10.
- Acknowledgement tracking: record deployer receipt of every instruction version, including updates.

Anti-patterns to avoid:

- Marketing language in place of disclosure: "industry-leading accuracy" instead of quantified limits, as in Scenario A.
- One-time delivery: instructions issued at contract signing and never updated as the system changes, as in Scenario C.
- Oversight by omission: leaving human oversight requirements unstated and assuming the deployer will infer them, as in Scenario B.
- Burying limitations in appendices or legal disclaimers where operators and human overseers will not encounter them.

Industry Considerations

Financial Services. Deployer instructions for AI agents in financial services should include: regulatory classification guidance (e.g., whether the system's outputs constitute financial advice, whether its use triggers specific regulatory obligations), recommended model risk management practices for the deployer, and guidance on demonstrating compliance with treating-customers-fairly obligations.

Healthcare. Instructions for clinical AI agents must include: clinical context for appropriate use, contraindications (situations where the system should not be used), clinical validation status, and guidance on integrating the system into clinical workflows without introducing patient safety risks. Instructions should be co-developed with clinical domain experts.

Public Sector. Instructions for AI agents deployed by public authorities should address: obligations under human rights legislation, public sector equality duties, transparency obligations to citizens, and the requirement for human decision-making in decisions significantly affecting individuals' rights.

Employment. Instructions for AI agents used in employment contexts should address: anti-discrimination compliance, required human oversight for significant employment decisions, and the deployer's obligation to inform candidates or employees that AI is being used in the decision process.

Maturity Model

Basic Implementation — The provider produces deployer instructions covering intended purpose, basic operating procedures, and some known limitations. Instructions are delivered at contract signing or system delivery. Updates are ad hoc — the deployer may or may not receive updated instructions when the system changes. Human oversight requirements are mentioned in general terms but not specified concretely. Limitation disclosures are present but not quantified or structured for actionability.

Intermediate Implementation — Deployer instructions are comprehensive, covering all required topics with quantified performance data, structured limitation disclosures, and concrete human oversight specifications. Instructions are version-controlled and linked to system versions. A defined update process ensures deployers receive updated instructions before or concurrent with system updates. Acknowledgement tracking confirms deployer receipt. A feedback mechanism allows deployers to report issues and request clarification. Instructions are layered for different audiences (executive, technical, operational).

Advanced Implementation — All intermediate capabilities plus: instructions include template governance artefacts (risk assessment templates, monitoring plan templates, oversight protocol templates) that deployers can adapt. Machine-readable instruction formats support automated integration into deployer governance systems. Proactive communication anticipates deployer needs — for example, notifying deployers of upstream changes (e.g., foundation model updates) that may affect deployment conditions even before the provider's own system is updated. Deployer satisfaction with instruction quality is measured and tracked as a QMS metric per AG-052. The provider can demonstrate a closed-loop process from deployer feedback to instruction improvement.
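
A machine-readable instruction format at the advanced tier might look like the JSON payloads below, sketched with an assumed schema. The value of the format is that the deployer's governance tooling can diff successive payloads on every update rather than rely on a human to notice changed prose, which is the failure in Scenario C:

```python
import json

# Two successive instructions-for-use payloads; the schema is an assumption.
ifu_old = {
    "system_version": "2.2.0",
    "intended_purpose": "clinical decision support for specialist referral",
    "behaviour_notes": "baseline referral recommendation rate",
    "oversight": {"audit_sample_rate": 0.10},
}
ifu_new = {
    "system_version": "2.3.0",
    "intended_purpose": "clinical decision support for specialist referral",
    "behaviour_notes": "more conservative: ~40% more referral recommendations for rare conditions",
    "oversight": {"audit_sample_rate": 0.10},
}

def changed_fields(old: dict, new: dict) -> dict:
    """Surface what changed between instruction versions so deployer tooling
    can flag it for clinician briefing instead of leaving it to chance."""
    return {k: {"was": old.get(k), "now": new[k]}
            for k in new if old.get(k) != new[k]}

print(json.dumps(changed_fields(ifu_old, ifu_new), indent=2))
```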

7. Evidence Requirements

Required artefacts:

- The current deployer instruction document for each supported system version, covering the content required by clauses 4.1 to 4.8.
- A version history linking each instruction version to the system version(s) it describes.
- Delivery and acknowledgement records showing which deployer received which instruction version, and when.
- Update notifications issued under clause 4.4, with evidence that they preceded or accompanied the corresponding system update.
- The deployer feedback log and any instruction changes resulting from it.

Retention requirements:

Instruction versions, delivery records, and acknowledgements should be retained for as long as the corresponding system version remains in deployment, and thereafter for the period required by applicable regulation (for high-risk systems under the EU AI Act, provider documentation must remain at the disposal of competent authorities for ten years after the system is placed on the market or put into service).

Access requirements:

Instructions must be accessible to deployer personnel before deployment decisions are made, and superseded versions must remain retrievable so that auditors and regulators can reconstruct which instruction version was in force at any point in a system's deployment history.

8. Test Specification

Test 8.1: Instruction Completeness

Verify that the deployer instructions for the current system version address every element required by clauses 4.1 to 4.6: intended purpose and use context, conditions for correct operation, known limitations and failure modes, foreseeable misuse scenarios, human oversight requirements, monitoring and maintenance arrangements, and residual risk allocation. The test passes if no required element is missing or addressed only in generic terms.

Test 8.2: Limitation Disclosure Accuracy

Compare the limitations disclosed to deployers against the provider's internal evaluation results and technical documentation (AG-053). The test passes if every known, material performance limitation in the internal record appears in the deployer instructions with consistent quantification; any limitation known internally but absent from the instructions is a failure.

Test 8.3: Instruction-System Version Linkage

For a sample of released system versions, verify that a corresponding instruction version exists and that, for each update materially affecting behaviour, updated instructions were delivered before or concurrent with the update (clause 4.4). The test passes if no sampled behaviour-affecting release lacks matching, timely instructions.

Test 8.4: Human Oversight Specification Adequacy

Review the instructions' oversight content against clause 4.5: reviewer competencies, decision types requiring human review, the recommended audit proportion, and the override procedure must each be specified concretely. The test passes if a deployer could implement the oversight arrangements without seeking further interpretation from the provider.

Test 8.5: Deployer Acknowledgement Tracking

Inspect delivery records for a sample of deployers and instruction versions, including updates. The test passes if receipt of each instruction version is acknowledged and traceable to a named deployer contact and date.

Test 8.6: Foreseeable Misuse Disclosure

Compare the misuse scenarios identified in the provider's risk assessment against those disclosed in the instructions (clause 4.3). The test passes if every foreseeable misuse scenario in the risk assessment is disclosed together with the deployer-side measures to prevent it.

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 13 (Transparency and Provision of Information to Deployers) | Direct requirement
EU AI Act | Article 14 (Human Oversight) | Supports compliance
EU AI Act | Article 9 (Risk Management System) | Supports compliance
EU AI Act | Article 26 (Obligations of Deployers) | Supports compliance
Consumer Rights Directive 2011/83/EU | Article 6 (Information Requirements) | Related requirement
NIST AI RMF | GOVERN 1.4, MAP 3.5, MANAGE 4.1 | Supports compliance
ISO 42001 | Clause 8.4 (AI System Lifecycle), Clause 7.4 (Communication) | Supports compliance
Product Liability Directive (revised) | Defective Product / Inadequate Instructions | Related requirement

EU AI Act — Article 13 (Transparency and Provision of Information to Deployers)

Article 13 is the primary regulatory driver for AG-054. It requires that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Providers must accompany the system with instructions for use, in an appropriate digital format or otherwise, that include concise, complete, correct, and clear information that is relevant, accessible, and comprehensible to deployers. The information must include: the provider's identity and contact details; the system's characteristics, capabilities, and limitations of performance; changes to the system pre-determined by the provider; the human oversight measures; the computational and hardware resources needed; the expected lifetime of the system and maintenance measures; and a description of the mechanisms for logging. AG-054 implements the governance framework ensuring this information is produced, maintained, and delivered.

EU AI Act — Article 14 (Human Oversight)

Article 14 requires that high-risk AI systems be designed and developed to be effectively overseen by natural persons during use. The provider must design the system in a way that enables human oversight and must communicate the human oversight measures to the deployer. AG-054 ensures that the provider's human oversight recommendations are communicated to deployers in actionable form, enabling deployers to implement effective oversight per the provider's design intent.

EU AI Act — Article 26 (Obligations of Deployers)

Article 26 establishes deployer obligations, many of which depend on information provided by the provider: deployers must use the system in accordance with the instructions for use, ensure human oversight by persons with the necessary competence, and monitor the system's operation on the basis of the instructions for use. If the provider fails to deliver adequate instructions, the deployer is placed in the impossible position of having obligations it cannot fulfil due to information it does not have. AG-054 ensures the provider fulfils its side of this shared responsibility.

Product Liability Directive (Revised) — Defective Product / Inadequate Instructions

The revised Product Liability Directive extends to AI systems and recognises that a product may be defective due to inadequate instructions or warnings — even if the product itself functions as designed. For AI agent systems, inadequate deployer instructions could constitute a product defect if the absence of disclosed limitations or operating conditions leads to harm. AG-054 mitigates this liability risk by ensuring comprehensive disclosure.

Consumer Rights Directive 2011/83/EU — Article 6

Article 6 requires traders to provide consumers with information about the main characteristics of goods or services before the consumer is bound by a contract. For AI agent systems sold as products or services, this includes disclosure of capabilities, limitations, and conditions for correct operation. While primarily applicable to B2C relationships, the principle of pre-contractual information disclosure extends to B2B deployer relationships.

NIST AI RMF — GOVERN 1.4, MAP 3.5, MANAGE 4.1

GOVERN 1.4 addresses risk communication. MAP 3.5 addresses the identification and documentation of AI system benefits, costs, and risks. MANAGE 4.1 addresses the allocation of risk management resources and communication of risk information to relevant stakeholders. AG-054 supports these by ensuring that risk and limitation information is communicated from provider to deployer in structured, actionable form.

ISO 42001 — Clause 8.4, Clause 7.4

Clause 8.4 addresses AI system lifecycle management, including the communication of relevant information to interested parties during operation. Clause 7.4 addresses communication requirements within the AI management system. AG-054 supports both by ensuring that deployer-facing communication is governed, structured, and maintained.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | All deployers and their end users — every organisation deploying the system and every person affected by the system's decisions

Consequence chain: Without governed deployer instructions, deployers operate AI agent systems without knowledge of their limitations, required oversight, or failure modes. The immediate failure is uninformed deployment: the deployer configures, monitors, and oversees the system based on incomplete or incorrect information. The operational consequence is foreseeable harm that could have been prevented: a deployer that does not know the system's accuracy degrades under certain conditions cannot implement compensating controls for those conditions; a deployer that does not know human oversight is required for certain decision types will not implement it; a deployer that does not know the system's intended purpose cannot prevent scope creep into inappropriate uses. The harm accumulates at the system's throughput rate — every decision made under uninformed deployment conditions is a decision made without adequate governance. The legal consequence is shared liability: the provider is liable for failure to disclose known limitations (under the Product Liability Directive and EU AI Act), and the deployer is liable for failure to implement adequate governance (under Article 26 of the EU AI Act) — but the deployer's failure may be directly caused by the provider's failure to disclose. The financial consequence includes litigation from deployers seeking to recover losses attributable to inadequate instructions, regulatory enforcement against both provider and deployer, and market reputation damage as deployers publicise their experience. The systemic consequence is erosion of trust in the provider-deployer relationship, leading deployers to either reject AI agent adoption or implement excessive compensating controls that negate the system's value.

Cross-reference note: Deployer instructions should be derived from technical documentation per AG-053 and should be governed as a QMS artefact per AG-052. Limitation disclosures should reference model provenance per AG-048. Human oversight specifications should align with escalation triggers per AG-019. Regulatory obligations identified under AG-021 should inform the scope of deployer instruction content. The deployer instruction artefact should be subject to configuration control per AG-007.

Cite this protocol
AgentGoverning. (2026). AG-054: Deployer Instruction and Limitation Disclosure Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-054