AG-234

Representation and Warranty Control Governance

Legal, Regulatory & Records ~18 min read AGS v2.1 · April 2026
EU AI Act FCA NIST HIPAA SOC 2

2. Summary

Representation and Warranty Control Governance requires that AI agents are structurally prevented from making unauthorised legal, commercial, or compliance representations — statements of fact, guarantees of performance, warranties of fitness, compliance certifications, or binding commitments — that exceed their authorised scope. A representation made by an agent can create the same legal liability as one made by an authorised human representative. Unlike human representatives who understand (or should understand) the legal significance of their statements, AI agents generate text optimised for helpfulness and coherence, not for legal accuracy. This dimension ensures that the boundary between information the agent may share and representations it must not make is enforced structurally.

3. Example

Scenario A — Unauthorised Performance Guarantee Creates Warranty Liability: A customer-facing agent for a software company responds to a prospect's question: "Can your platform handle 10 million transactions per second?" The agent, trained on marketing materials and technical documentation, responds: "Yes, our platform is guaranteed to handle 10 million transactions per second with 99.99% uptime." The marketing materials say "designed for up to 10 million TPS" (an aspiration, not a guarantee). The technical documentation says "tested at 8.2 million TPS under controlled conditions." The agent's response — "guaranteed" with "99.99% uptime" — creates a warranty that the platform can handle 10 million TPS. The prospect relies on this warranty, signs a GBP 4.2 million annual contract, and discovers that the platform achieves 7.8 million TPS in their production environment. They claim breach of warranty and seek damages including the cost of migrating to an alternative platform.

What went wrong: The agent generated a response that used the word "guaranteed" — a legal term creating a warranty — without any structural check on whether it was authorised to make warranties. The agent's training data contained aspirational marketing language that it presented as a binding commitment. No pre-output filter checked whether the response contained unauthorised representations. Consequence: Breach of warranty claim valued at GBP 6.8 million (contract value plus migration costs), internal investigation, suspension of the agent from customer-facing deployment, and mandatory output filtering implementation.

Scenario B — Compliance Certification Without Authority: A financial-value agent assists with client onboarding for a payment services provider. During the onboarding process, a prospective client asks: "Is your service PCI DSS Level 1 compliant?" The agent responds: "Yes, we are fully PCI DSS Level 1 compliant and our compliance is verified annually by an independent QSA." The statement is factually accurate at the time of the response. However, the agent is not authorised to make compliance certifications — only the Chief Information Security Officer (CISO) is authorised to represent the organisation's compliance status. Six months later, the organisation's PCI DSS certification lapses during a remediation period. The client, relying on the agent's representation, processes a large transaction volume during the lapse period and is subsequently fined by the card networks for processing through a non-compliant provider. The client claims against the provider, citing the agent's compliance representation.

What went wrong: The agent made a compliance certification — a representation of regulatory status — without authority. Even though the representation was factually accurate when made, it created an ongoing reliance that persisted after the compliance status changed. Only designated individuals should make compliance certifications because they understand the temporal limitations and conditions that attach to such representations. Consequence: Client claim for card network fines (USD 340,000), regulatory inquiry from the payment card networks, and requirement to implement representation authority controls.

Scenario C — Price Commitment That Binds the Organisation: A crypto/Web3 agent operating a decentralised exchange interface responds to a user's query about fee structure: "Your trading fees on this platform will never exceed 0.1% per transaction." The agent generated this response based on current fee schedule data showing a maximum fee of 0.08%. The word "never" creates an indefinite commitment — a representation that the fee will not exceed 0.1% regardless of future changes. When the platform raises fees to 0.15% six months later, 2,300 users who received the agent's commitment seek enforcement. Under contract law in several jurisdictions, the agent's representation may constitute an offer that the users accepted by continued use — creating binding fee caps that the platform cannot unilaterally revise.

What went wrong: The agent used absolute language ("never exceed") that created a binding commitment about future pricing. The agent was not authorised to make indefinite pricing commitments. No output filter identified and blocked absolute future-facing representations. Consequence: Potential class action across 2,300 users, estimated exposure of USD 1.2 million in fee difference over the commitment period, and requirement to honour the represented fee cap for all users who received the representation.

4. Requirement Statement

Scope: This dimension applies to every AI agent that communicates with external parties (customers, prospects, counterparties, regulators, the public) or produces outputs that are relied upon by external parties. It also applies to agents that communicate with internal parties where those communications could be disclosed externally (e.g., internal communications that become relevant to litigation, regulatory investigations, or freedom of information requests). The scope covers all forms of representation: statements of fact about the organisation's products, services, or status; warranties and guarantees of performance, capability, or fitness; compliance certifications and regulatory status representations; pricing commitments and commercial terms; forward-looking statements about future capabilities, performance, or intentions; and any statement that a reasonable recipient could interpret as a binding commitment. The test is not whether the agent intended to make a representation — agents do not form legal intent — but whether a reasonable recipient would interpret the agent's output as a representation that creates reliance or obligation.

4.1. A conforming system MUST maintain a structured representation authority matrix defining, for each agent and context, which categories of representation the agent is authorised to make and which are prohibited.

4.2. A conforming system MUST implement pre-output filtering that evaluates agent outputs for unauthorised representations — including warranties, guarantees, compliance certifications, pricing commitments, and forward-looking statements — and blocks or modifies outputs containing them before delivery.

4.3. A conforming system MUST block outputs containing absolute or indefinite language ("guaranteed," "always," "never," "permanently," "certified") in contexts where the agent is not authorised to make binding commitments.

4.4. A conforming system MUST distinguish between authorised informational statements ("our current fee is 0.08%") and unauthorised binding representations ("your fee will never exceed 0.1%"), blocking the latter unless specifically authorised.

4.5. A conforming system MUST append appropriate disclaimers to agent outputs in contexts where the risk of representation is high, stating that the agent's output does not constitute a warranty, guarantee, or binding commitment unless explicitly designated as such.

4.6. A conforming system MUST log all outputs that triggered the representation filter, including the original output, the filter determination, and the modified output (if modification rather than blocking was applied).

4.7. A conforming system SHOULD implement context-aware representation boundaries — the same agent may be authorised to make pricing statements to existing customers (where the contract governs) but not to prospects (where the statement could create pre-contractual reliance).

4.8. A conforming system SHOULD maintain a library of approved representations — pre-cleared statements that the agent may use verbatim for common queries about compliance status, performance capabilities, and pricing.

4.9. A conforming system SHOULD implement escalation pathways for queries that require representations beyond the agent's authority, routing them to authorised human representatives with the context of the query.

4.10. A conforming system MAY implement jurisdictional variation in representation controls, recognising that the legal significance of agent statements varies by jurisdiction (e.g., common law jurisdictions may find contractual intent in informal exchanges more readily than civil law jurisdictions).

5. Rationale

Every statement an AI agent makes to an external party is potentially a legal representation. Under contract law, a representation is a statement of fact that induces the recipient to enter into a contract or take action in reliance. Under consumer protection law, a representation about a product or service's characteristics, performance, or suitability can create statutory liability regardless of contract. Under securities law, a representation about a company's status, performance, or prospects can constitute a misleading statement with regulatory consequences.

The challenge with AI agents is that they generate language optimised for helpfulness and coherence, not for legal precision. An AI agent asked "Can your product do X?" will generate a helpful, affirmative response if its training data suggests the product can do X — even if the accurate answer is "the product is designed for X but performance depends on configuration, environment, and workload, and we do not warrant specific performance levels." The agent's response tends toward certainty ("Yes, absolutely") when the legally correct response includes qualifications and limitations.

This matters because reliance creates liability. A prospect who receives a performance guarantee from an agent and signs a contract in reliance on that guarantee has a warranty claim if the guarantee is not met. A customer who receives a compliance certification from an agent and processes transactions in reliance on that certification has a misrepresentation claim if the certification was inaccurate. The legal analysis is straightforward: did the agent make a representation? Did the recipient rely on it? Was the representation accurate? If the answer to the first two is yes and the third is no, the organisation has liability — regardless of whether the agent was "authorised" to make the representation.

AG-234 does not attempt to make agents legally precise in their language — that would require legal reasoning capability that current AI agents do not possess. Instead, it structurally prevents agents from making categories of statements that create legal liability, routing those queries to authorised humans who can make representations with appropriate qualifications and authority.

6. Implementation Guidance

Representation control requires an output filtering layer that evaluates agent-generated text for unauthorised representations before the text is delivered to the recipient. The filter must be sensitive enough to catch legally significant language but not so aggressive that it blocks routine informational responses.

Recommended patterns:

Anti-patterns to avoid:

Industry Considerations

Financial Services. Financial promotions regulations (FCA COBS 4, SEC Rule 156, MiFID II Article 24) impose specific requirements on representations made to clients and prospects. Performance claims must be fair, clear, and not misleading. Past performance must include appropriate warnings. Forward-looking statements must include adequate risk warnings. AG-234's representation authority matrix must reflect these regulatory requirements — the agent's authority to make performance claims is not just a business decision but a regulatory constraint.

Healthcare. Medical claims are heavily regulated. An AI agent that represents that a product "cures" or "treats" a condition without appropriate regulatory approval (FDA clearance, CE marking) creates both commercial and regulatory liability. The representation filter must be particularly sensitive to medical claims language.

Consumer Markets. Consumer protection regulations (UK Consumer Rights Act 2015, EU Consumer Rights Directive, US FTC Act Section 5) create statutory liability for misleading representations regardless of contractual terms. The threshold for "misleading" is lower than for contractual misrepresentation — a representation that is technically accurate but creates a misleading impression can violate consumer protection law.

Maturity Model

Basic Implementation — The organisation has documented guidelines for what the agent should and should not say. The agent's system prompt includes instructions not to make warranties, guarantees, or compliance certifications. A blanket disclaimer is appended to all customer-facing outputs. No structural output filtering exists — the agent's compliance depends on its instruction-following capability. This level provides minimal protection against inadvertent representations.

Intermediate Implementation — The organisation has a representation authority matrix and a pattern-based output filter. Agent outputs are scanned for representation indicators (absolute language, commitment language, quantified promises) and evaluated against the authority matrix. Prohibited representations are blocked or modified before delivery. An approved response library covers common representation-sensitive queries. Escalation pathways route representation-requiring queries to authorised humans. Disclaimers are context-appropriate rather than blanket.

Advanced Implementation — All intermediate capabilities plus: jurisdictional variation in representation controls reflecting the legal significance of agent statements in each applicable jurisdiction. Adversarial testing of the output filter using prompts designed to elicit unauthorised representations (e.g., social engineering prompts that ask the agent to "confirm" binding terms). Automated monitoring of deployed outputs for representation patterns that bypassed the filter. Legal review of all filter triggers on a sampling basis (minimum 5% of triggered outputs reviewed monthly) to calibrate filter sensitivity. The organisation can demonstrate to any court that it took structural steps to prevent unauthorised representations.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Warranty Language Blocking

Test 8.2: Compliance Certification Blocking

Test 8.3: Indefinite Commitment Blocking

Test 8.4: Context-Aware Differentiation

Test 8.5: Adversarial Representation Elicitation

Test 8.6: Approved Response Library

Conformance Scoring

9. Regulatory Mapping

RegulationProvisionRelationship Type
UK Consumer Rights Act 2015Sections 9-11 (Goods to Match Description, Satisfactory Quality)Direct requirement
UK Misrepresentation Act 1967Section 2 (Damages for Misrepresentation)Direct requirement
FCA COBS4.2 (Fair, Clear and Not Misleading), 4.5-4.6 (Financial Promotions)Direct requirement
EU Consumer Rights DirectiveArticle 6 (Information Requirements)Direct requirement
US FTC ActSection 5 (Unfair or Deceptive Acts or Practices)Direct requirement
EU AI ActArticle 13 (Transparency), Article 52 (Transparency for Certain AI Systems)Supports compliance
SECRule 156 (Investment Company Sales Literature)Supports compliance
NIST AI RMFGOVERN 1.3, MAP 1.5Supports compliance

UK Consumer Rights Act 2015 — Sections 9-11

Sections 9-11 create statutory rights for consumers regarding the quality and description of goods and services. A representation made by an AI agent about a product's characteristics or performance can constitute a "description" under Section 11, creating a statutory obligation that the product matches that description. Unlike contractual warranties (which can be limited by contract terms), statutory rights under the Consumer Rights Act cannot be excluded or restricted for consumers. An agent's representation about product performance therefore creates a statutory obligation that survives any disclaimer.

UK Misrepresentation Act 1967

Section 2 provides that a person who has entered into a contract after a misrepresentation is entitled to damages unless the representor can prove they had reasonable grounds to believe the representation was true. For AI agent representations, the organisation must demonstrate that it had reasonable grounds — which requires demonstrating that the agent's output was based on accurate information and that the output was reviewed for accuracy (either by the output filter or by human review). AG-234's structural filtering and approved response library contribute to the "reasonable grounds" defence.

FCA COBS — Financial Promotions

COBS 4.2 requires that communications with clients are fair, clear, and not misleading. An AI agent's output is a "communication" within the meaning of COBS 4. Performance claims, risk warnings, and product descriptions must all comply with the financial promotions rules. The FCA has indicated that AI-generated communications are subject to the same standards as human-generated communications, and that firms must be able to demonstrate oversight and control of AI-generated content.

US FTC Act — Section 5

Section 5 prohibits unfair or deceptive acts or practices. An AI agent that makes misleading representations about a product or service can constitute a deceptive practice attributable to the organisation. The FTC has signalled (through enforcement actions and guidance) that it holds organisations responsible for AI-generated representations.

10. Failure Severity

FieldValue
Severity RatingHigh
Blast RadiusCounterparty-specific for individual representations; potentially class-wide for systematic representation failures

Consequence chain: An unauthorised representation creates legal liability the moment a recipient relies on it. For individual representations (a warranty to one prospect), the exposure is the damages flowing from the reliance — typically the contract value plus consequential losses. For systematic representation failures (an agent making the same unauthorised warranty to thousands of prospects over months), the exposure scales with the number of recipients and can reach class-action proportions. A customer-facing agent making unauthorised performance warranties to 10,000 prospects per month for 6 months creates 60,000 potential warranty claims. Even if only 5% of those prospects sign contracts, the warranty exposure across 3,000 contracts can be catastrophic. The regulatory dimension compounds the liability: in financial services, misleading representations can trigger FCA enforcement (unlimited fines), FTC enforcement (civil penalties up to USD 50,120 per violation per day), and sector-specific enforcement. The reputational dimension is significant: an organisation whose AI agent is publicly found to have made misleading representations faces trust damage that extends beyond the specific incident to all AI-assisted interactions.

Cross-references: AG-169 (Legal Commitment and Representation Authority) defines the scope of authority within which the agent may make representations — AG-234 enforces that authority at the output layer. AG-233 (Contractual Obligation Binding Governance) governs compliance with existing contractual terms, while AG-234 prevents the creation of new unauthorised terms through agent representations. AG-237 (Competition and Antitrust Safeguard Governance) intersects where representations about competitive positioning could constitute unfair commercial practices. AG-047 (Cross-Jurisdiction Compliance) determines which jurisdiction's representation laws apply. AG-229 (Jurisdictional Applicability Mapping Governance) maps the legal regimes that define what constitutes a binding representation in each jurisdiction.

Cite this protocol
AgentGoverning. (2026). AG-234: Representation and Warranty Control Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-234