Legal Commitment and Representation Authority Governance requires that AI agents be structurally prevented from creating legally binding commitments, making representations of fact, or exercising legal authority unless explicitly authorised to do so within a defined, auditable authority framework. An AI agent that sends an email stating "We confirm your order at a price of GBP 15,000" has potentially created a binding contract. An agent that tells a customer "Your claim has been approved" has made a representation that the organisation may be estopped from denying. An agent that signs a document on behalf of the organisation has exercised authority that may or may not have been validly delegated. AG-169 requires that the boundary between communication and commitment be enforced structurally — the agent's ability to create legal obligations is controlled by the governance infrastructure, not by the agent's own understanding of its authority. Every action that could create a legal commitment must be classified, gated by an authority check, and logged with the specific authority under which it was executed.
Scenario A — Agent Creates Binding Contract via Email: A customer-facing sales agent for an enterprise software company engages in a negotiation with a prospective customer via email. The customer asks: "Can you confirm a 3-year enterprise licence at GBP 450,000 per year with a 15% volume discount?" The agent, trained on historical sales data showing that 15% discounts are common for enterprise deals, responds: "I'm pleased to confirm your 3-year enterprise licence at GBP 382,500 per year (reflecting the 15% volume discount). The total commitment is GBP 1,147,500. We'll send the formal agreement shortly."
The customer's legal team treats this email as a binding commitment. When the formal agreement arrives at GBP 450,000 per year (no discount — the sales director had not approved any discount), the customer cites the agent's email as a prior binding offer. Under English contract law, the email constitutes an offer capable of acceptance, and the customer's reliance on it creates an estoppel argument. The organisation faces a choice between honouring the GBP 1,147,500 deal (a GBP 202,500 discount from the standard price) or litigating — with uncertain outcome and reputational risk.
What went wrong: The agent had no structural control preventing it from making price commitments. Its language ("I'm pleased to confirm") created a binding offer. No authority check verified whether the agent was authorised to offer discounts, and no approval gate required human review of communications that contained pricing commitments. Consequence: GBP 202,500 in potential revenue loss, legal costs estimated at GBP 75,000 if disputed, reputational risk in the enterprise market.
Scenario B — Agent Makes Factual Representation That Creates Liability: A customer service agent for a pharmaceutical company responds to a healthcare professional's inquiry about a drug interaction. The agent states: "Based on our clinical data, there is no contraindication between Product X and warfarin." This statement is a representation of fact about the company's clinical data. In reality, a Phase IV study completed 3 months ago identified a potential interaction in patients over 65. The agent's training data predates this study. The healthcare professional prescribes the combination based on the agent's representation. A patient experiences an adverse event.
The company faces product liability exposure because its agent made a factual representation that was incorrect and relied upon by a healthcare professional. The agent's statement is attributable to the company under agency law principles — the company deployed the agent to answer product inquiries, creating apparent authority.
What went wrong: The agent was not restricted from making clinical representations. No authority classification identified "clinical safety statements" as a category requiring human review. No content filter detected that the response contained a safety-relevant factual claim. Consequence: Patient harm, product liability exposure estimated at GBP 2-5 million, regulatory investigation by the MHRA, potential licence implications.
Scenario C — Agent Signs Agreement Without Valid Delegation: An enterprise workflow agent managing vendor onboarding automatically generates and digitally signs a data processing agreement (DPA) with a new SaaS vendor. The agent uses a digital certificate assigned to the procurement department. The DPA includes an unlimited liability clause for data breaches, a 10-year data retention commitment, and a governing law clause selecting a jurisdiction where the organisation has no legal presence. The organisation's standard DPA template has a GBP 1 million liability cap, 3-year retention, and UK governing law. The agent used a non-standard template provided by the vendor in a previous email attachment.
The DPA is legally binding because it was signed with a valid digital certificate. The unlimited liability clause and foreign jurisdiction create exposure that the organisation's insurance does not cover.
What went wrong: The agent had the technical capability to sign documents (access to the digital certificate) but no structural control verified that the specific document content was within the agent's signing authority. No content analysis compared the document terms against authorised parameters. No approval gate required human review before signing. Consequence: Unlimited liability exposure for vendor data breaches, uninsured risk, 10-year commitment exceeding any approved retention period, GBP 200,000 in legal costs to attempt to renegotiate.
Scope: This dimension applies to all AI agents that can produce communications, documents, or actions that could create legal obligations for the organisation. This includes but is not limited to: agents that communicate with external parties (customers, suppliers, regulators, counterparties), agents that generate or sign documents, agents that make representations about the organisation's products, services, or commitments, and agents that execute transactions that imply contractual terms. The scope is deliberately broad because legal obligations can arise from informal communications (emails, chat messages) as readily as from formal documents. The test for inclusion is: could a reasonable recipient of the agent's output conclude that the organisation has made a commitment, representation, or promise? If yes, the agent is within scope. Internal-only agents that communicate solely with other systems within the organisation are generally excluded unless they trigger external commitments indirectly (e.g., an internal agent that updates a price in a system that feeds a customer-facing portal).
4.1. A conforming system MUST classify all agent outputs that could create legal commitments into defined authority categories (e.g., pricing commitments, contractual terms, safety representations, regulatory submissions, settlement offers) and assign each category a required authority level.
4.2. A conforming system MUST enforce a pre-output authority check that verifies the agent has been granted the specific authority to make the commitment contained in its output, blocking outputs that exceed the agent's granted authority.
4.3. A conforming system MUST require human approval for any agent output that creates a legal commitment exceeding defined thresholds (e.g., value above GBP 10,000, duration exceeding 12 months, liability exposure exceeding GBP 50,000).
4.4. A conforming system MUST prevent agents from using language that creates binding commitments (e.g., "we confirm," "we agree," "we warrant") unless the specific commitment has been authorised and the language use is within the agent's granted authority.
4.5. A conforming system MUST log every output classified as potentially creating a legal commitment, including the authority category, the authority check result, the approver (if human approval was required), and the full content of the output.
4.6. A conforming system MUST restrict agent access to digital signing capabilities (certificates, keys, electronic signature platforms) to outputs that have passed the authority check and, where required, human approval.
4.7. A conforming system MUST ensure that the agent's mandate explicitly enumerates the categories of legal commitments the agent is authorised to make, with per-category value and duration limits (an illustrative registry sketch follows this requirements list).
4.8. A conforming system SHOULD implement content analysis that detects legal commitment language in agent outputs regardless of the agent's intent — catching cases where the agent inadvertently creates commitments through natural language.
4.9. A conforming system SHOULD include a disclaimer framework that agents apply to communications where the organisation intends no legal commitment, clearly stating that the communication is informational and not a binding offer.
4.10. A conforming system MAY maintain a register of all legal commitments made by agents, enabling the organisation to track its total exposure from agent-created obligations.
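To make requirements 4.1, 4.3, and 4.7 concrete, the sketch below shows one way a per-category authority registry might be expressed in code. This is a minimal illustration, not a normative schema: every class name, category name, and threshold value is a hypothetical example.

```python
# Illustrative only: a per-category authority registry of the kind that
# requirements 4.1, 4.3, and 4.7 describe. Names and thresholds are
# hypothetical examples, not values mandated by AG-169.
from dataclasses import dataclass
from enum import Enum


class AuthorityLevel(Enum):
    AGENT_AUTONOMOUS = 1   # agent may emit the output without review
    HUMAN_APPROVAL = 2     # blocked until a named human approves (4.3)
    PROHIBITED = 3         # never permitted from this agent


@dataclass(frozen=True)
class CommitmentCategory:
    name: str
    required_level: AuthorityLevel
    max_value_gbp: int         # per-commitment value ceiling (4.7)
    max_duration_months: int   # per-commitment duration ceiling (4.7)


# Example taxonomy; a real deployment derives this from the agent's
# mandate document and legal review.
AUTHORITY_REGISTRY = {
    "pricing_commitment": CommitmentCategory(
        "pricing_commitment", AuthorityLevel.HUMAN_APPROVAL, 10_000, 12),
    "contractual_term": CommitmentCategory(
        "contractual_term", AuthorityLevel.HUMAN_APPROVAL, 50_000, 12),
    "safety_representation": CommitmentCategory(
        "safety_representation", AuthorityLevel.PROHIBITED, 0, 0),
    "informational_reply": CommitmentCategory(
        "informational_reply", AuthorityLevel.AGENT_AUTONOMOUS, 0, 0),
}
```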
AI agents create a novel legal risk that has no exact precedent in prior technology deployments: the risk that an autonomous system creates binding legal obligations for the organisation through natural language communication. When a human employee sends an email confirming a price, the organisation's exposure is governed by the employee's actual or apparent authority under agency law principles. The same principles apply to AI agents — but with critical differences that amplify the risk.
First, AI agents can communicate at scale. A human employee might negotiate one contract at a time. An AI agent can negotiate hundreds simultaneously, creating aggregate commitment exposure that far exceeds any individual deal. Second, AI agents may not understand the legal significance of their language. A human employee generally knows that saying "we confirm" creates a commitment, while "we propose" does not. An AI agent optimising for helpfulness may use commitment language because it sounds more definitive and satisfying, without understanding the legal distinction. Third, the legal framework for AI agent authority is still evolving. While courts have begun to address the question of whether an AI agent's statements bind the organisation, the legal uncertainty itself creates risk — the organisation cannot predict with confidence whether a court will treat an agent's statement as binding.
The principle underlying AG-169 is that an organisation should be able to answer, at any point in time, the question: "What legal commitments has this agent been authorised to make, and what commitments has it actually made?" If the organisation cannot answer this question, it has lost control of its legal exposure. AG-169 ensures that the answer is always available by requiring structural controls that classify, gate, log, and audit every agent output that could create a legal obligation.
The authority framework must be structural, not instructional. Telling an agent "do not make binding commitments" is an instruction that the agent may misinterpret, ignore under prompt injection, or inadvertently violate through natural language that happens to constitute a commitment. The control must operate at the infrastructure layer — a pre-output gate that classifies the output, checks the authority, and blocks or escalates as required, independent of the agent's reasoning.
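A minimal sketch of such an infrastructure-layer gate follows, assuming a classifier callable and a mapping from categories to required handling. All function and field names are hypothetical; in production the gate would sit in the message egress path, outside the agent process, so the agent cannot reason or be prompted around it.

```python
# Minimal sketch of an infrastructure-layer pre-output gate. All names are
# hypothetical; `classify` is any callable mapping text to a category name
# (keyword-based in the basic tier, NLP-based in the intermediate tier).
import json
import logging
from datetime import datetime, timezone

log = logging.getLogger("commitment_gate")

# category -> required handling: "autonomous", "approval", or "prohibited"
REQUIRED_HANDLING = {
    "informational_reply": "autonomous",
    "pricing_commitment": "approval",
    "safety_representation": "prohibited",
}


def pre_output_gate(agent_id: str, output_text: str, classify,
                    granted: set[str]) -> dict:
    """Classify an output, check authority, and return a gate decision."""
    category = classify(output_text)
    handling = REQUIRED_HANDLING.get(category, "prohibited")

    if handling == "prohibited":
        decision = "block"      # category never permitted from this agent
    elif handling == "autonomous":
        decision = "allow"
    elif category in granted:
        decision = "escalate"   # within mandate, but needs a human (4.3)
    else:
        decision = "block"      # output exceeds granted authority (4.2)

    record = {                  # audit record required by 4.5
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "category": category,
        "decision": decision,
        "output": output_text,
    }
    log.info(json.dumps(record))
    return record


# A sales agent with no pricing authority has a price confirmation
# blocked before it can reach the counterparty.
result = pre_output_gate(
    "sales-agent-7", "We confirm GBP 382,500 per year.",
    classify=lambda t: "pricing_commitment" if "GBP" in t
    else "informational_reply",
    granted=set())
assert result["decision"] == "block"
```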
Legal commitment governance requires a combination of output classification, authority management, content analysis, and approval workflows. The implementation must address both intentional commitments (the agent is designed to make offers) and inadvertent commitments (the agent uses language that creates obligations unintentionally).
Recommended patterns:
- Pre-output classification gateway: every outbound communication passes through an infrastructure-layer classifier and authority check before it reaches the counterparty.
- Tiered authority model: commitment categories map to defined approval levels, with per-category value and duration limits drawn from the agent's mandate.
- Signing ceremony: certificates and signing keys are held outside the agent and released per document only after the authority check and any required human approval have passed.
- Commitment register: every agent-created obligation is recorded centrally so that total exposure is queryable at any time.
- Disclaimer framework: communications intended to be informational carry explicit language stating that they are not binding offers.
Anti-patterns to avoid:
- Instructional-only controls: telling the agent "do not make binding commitments" and relying on its own understanding of its authority rather than on infrastructure-layer enforcement.
- Standing certificate access: granting the agent unconditional access to signing keys without per-document content verification (Scenario C).
- Keyword-only detection without adversarial testing: filters that catch "we confirm" but miss paraphrased, implied, or obfuscated commitments.
- Post-hoc-only review: auditing commitments after they have reached the counterparty, by which point the organisation may already be bound.
Financial Services. Financial commitments — loan offers, insurance quotes, investment recommendations, settlement offers — are heavily regulated. FCA COBS rules require that communications are fair, clear, and not misleading. A binding commitment made by an AI agent is subject to the same regulatory standard as one made by a human adviser. Firms must ensure that agents are not making commitments that the firm cannot honour.
Healthcare. Clinical representations about drug safety, efficacy, or interactions create product liability exposure. Agents interacting with healthcare professionals must be restricted from making clinical claims unless the specific claim is authorised and current. MHRA and FDA advertising and promotion rules apply to AI agent communications about medical products.
Legal Services. AI agents used in legal practice must not provide legal advice or create solicitor-client relationships without appropriate authorisation. The SRA Code of Conduct requires that the client knows they are receiving advice from a qualified solicitor — an AI agent's communication could inadvertently create a duty of care.
Public Sector. Government agents making commitments about benefits, services, or regulatory positions can create legitimate expectations under administrative law. A citizen who relies on an agent's statement that they are eligible for a benefit may have legal recourse if the statement was incorrect.
Basic Implementation — Agent mandates specify the categories of legal commitments the agent is authorised to make. Output classification identifies commitment language using keyword-based rules. Outputs exceeding the agent's authority are blocked. Human approval is required for commitments above defined value thresholds. All potentially commitment-creating outputs are logged. Coverage: all customer-facing agents are subject to commitment classification.
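The keyword-based rules of the basic tier might look like the following sketch. The phrase list is illustrative only; a real deployment would maintain it under legal review and treat it as a floor, not a complete detector.

```python
# Sketch of a basic-tier keyword classifier for commitment language.
# The pattern list is an illustrative starting point, not a complete
# or legally reviewed set.
import re

COMMITMENT_PATTERNS = [
    r"\bwe\s+(confirm|agree|warrant|guarantee|undertake)\b",
    r"\b(is|are)\s+approved\b",
    r"\bbinding\b",
    r"\bGBP\s*[\d,]+",          # explicit monetary amounts
]
_COMPILED = [re.compile(p, re.IGNORECASE) for p in COMMITMENT_PATTERNS]


def contains_commitment_language(text: str) -> bool:
    """Return True if any commitment indicator matches (basic tier only)."""
    return any(p.search(text) for p in _COMPILED)


assert contains_commitment_language(
    "I'm pleased to confirm your licence at GBP 382,500 per year.")
assert not contains_commitment_language(
    "Our standard list price information is available on request.")
```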
Intermediate Implementation — All basic capabilities plus: NLP-based content analysis detects commitment language beyond keyword matching, including paraphrases and contextual commitment indicators. A tiered authority model maps commitment categories to approval levels. Digital signing is gated through a signing ceremony. A commitment register tracks all agent-created obligations. The disclaimer framework is applied to informational communications. Coverage: all agents that communicate with any external party.
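The signing ceremony of the intermediate tier could be structured as a broker that holds the key material and releases a signature only when both gates have passed, along the lines of this hypothetical sketch. All class and method names are assumptions, and HMAC stands in for a real digital-signature call (e.g. to an HSM or e-signature platform).

```python
# Sketch of a "signing ceremony" gate (requirement 4.6): the agent never
# holds the signing key; a broker releases a signature only when the
# document has passed the authority check and carries a human approval.
import hashlib
import hmac
import secrets


class SigningBroker:
    def __init__(self, signing_key: bytes):
        self._key = signing_key                 # held by broker, not agent
        self._approvals: dict[str, str] = {}    # doc hash -> approver id

    def record_approval(self, doc_hash: str, approver: str) -> None:
        """Called by the human-approval workflow, never by the agent."""
        self._approvals[doc_hash] = approver

    def sign(self, document: bytes, authority_ok: bool) -> bytes:
        doc_hash = hashlib.sha256(document).hexdigest()
        if not authority_ok:
            raise PermissionError("authority check failed (4.2/4.6)")
        if doc_hash not in self._approvals:
            raise PermissionError("no human approval on record (4.3/4.6)")
        # HMAC stands in for a real signature call to an HSM or platform.
        return hmac.new(self._key, document, hashlib.sha256).digest()


broker = SigningBroker(secrets.token_bytes(32))
dpa = b"DPA: GBP 1m liability cap, 3-year retention, UK governing law."
broker.record_approval(hashlib.sha256(dpa).hexdigest(), approver="j.smith")
signature = broker.sign(dpa, authority_ok=True)
```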
Advanced Implementation — All intermediate capabilities plus: the output classification has been tested against adversarial scenarios (prompt injection designed to bypass commitment filters, obfuscated commitment language, commitment through implication rather than explicit language). Legal review has validated the authority framework against applicable contract law, agency law, and sector-specific regulations in all operating jurisdictions. The commitment register integrates with the organisation's legal risk management system. The organisation can demonstrate to regulators the total value of commitments made by agents and the authority chain for each.
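Adversarial testing of the classifier can start from a probe set of paraphrased, implied, and obfuscated commitments, as in this illustrative sketch. A basic keyword detector is expected to miss some of these probes; surfacing those misses is exactly what the advanced tier requires.

```python
# Illustrative adversarial probe set for classifier testing. The probes
# paraphrase, imply, or obfuscate commitment language; the names here are
# hypothetical and the list is a starting point, not a benchmark.
ADVERSARIAL_PROBES = [
    "Consider this our formal acceptance of your proposed terms.",
    "You can rely on the figure we discussed as final.",
    "Rest assured, the discount applies to all three years.",
    "W e  c o n f i r m the order.",   # obfuscated spacing
]


def run_probes(detector) -> list[str]:
    """Return the probes the detector failed to flag as commitments."""
    return [p for p in ADVERSARIAL_PROBES if not detector(p)]
```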
Required artefacts:
- Authority classification scheme mapping output categories to required authority levels (4.1).
- Agent mandate documents enumerating authorised commitment categories with per-category value and duration limits (4.7).
- Commitment logs capturing the authority category, authority check result, approver where applicable, and full output content (4.5).
- Commitment register of all agent-created obligations (4.10).
- Human approval records for commitments above defined thresholds (4.3).
Retention requirements:
Access requirements:
Test 8.1: Commitment Language Detection
Test 8.2: Authority Boundary Enforcement
Test 8.3: Human Approval Gate
Test 8.4: Digital Signing Authority
Test 8.5: Inadvertent Commitment via Prompt Injection
Test 8.6: Non-Monetary Commitment Detection
Test 8.7: Commitment Logging Completeness
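As a non-normative illustration, Tests 8.1 and 8.2 might be automated along these lines. The classifier below is a deliberately trivial stand-in; actual conformance tests would exercise the deployed gate and classifier against the test definitions above.

```python
# Non-normative pytest-style sketch of Tests 8.1 and 8.2. All names and
# procedures are hypothetical illustrations.
def classify(text: str) -> str:
    return "pricing_commitment" if "GBP" in text else "informational_reply"


def test_commitment_language_detection():   # Test 8.1
    assert classify("We confirm GBP 15,000.") == "pricing_commitment"


def test_authority_boundary_enforcement():  # Test 8.2
    granted = {"informational_reply"}       # agent holds no pricing authority
    category = classify("We confirm GBP 15,000.")
    decision = "allow" if category in granted else "block"
    assert decision == "block"
```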
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 14 (Human Oversight) | Direct requirement |
| EU AI Act | Article 52 (Transparency Obligations) | Supports compliance |
| FCA COBS | 4.2 (Fair, Clear and Not Misleading Communications) | Direct requirement |
| FCA SYSC | 6.1.1R (Systems and Controls) | Direct requirement |
| GDPR | Article 13, 14 (Information to Data Subjects) | Supports compliance |
| SOX | Section 302 (Corporate Responsibility for Financial Reports) | Supports compliance |
| NIST AI RMF | GOVERN 1.1, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks) | Supports compliance |
Article 14 requires human oversight measures for high-risk AI systems, including the ability to understand the AI system's capacities and limitations and to decide not to use or override the system's output. For AI agents making legal commitments, human oversight means that commitments above defined thresholds require human approval before they reach the counterparty. AG-169's tiered authority model with human approval gates implements this requirement.
COBS 4.2 requires that communications with clients are fair, clear, and not misleading. An AI agent that makes a binding commitment the firm cannot or does not intend to honour violates this rule. The output classification gateway ensures that agent communications are vetted for commitments before they reach clients, supporting compliance with the fairness and accuracy requirements.
SYSC 6.1.1R requires adequate systems and controls. An AI agent with unrestricted ability to create legal commitments on behalf of the firm represents inadequate controls over the firm's legal exposure. The authority framework provides the structural control that SYSC 6.1.1R requires.
Section 302 requires corporate officers to certify that they have evaluated the effectiveness of internal controls. An AI agent creating financial commitments without structural authority controls undermines the officer's ability to certify. AG-169 provides the control framework that supports the officer's certification.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Cross-organisation — legal commitments affect both the organisation and the counterparty, with potential cascade to insurance, regulatory, and judicial systems |
Consequence chain: Without legal commitment and representation authority governance, an AI agent can create binding legal obligations for the organisation without any human knowing or approving. The immediate technical failure is an agent output that constitutes a legal commitment — a confirmed price, an agreed term, a warranted specification, a signed document. The operational impact is uncontrolled legal exposure: the organisation is bound by commitments it did not authorise and may not be able to honour. This ungoverned exposure scales with the agent's reach — an agent communicating with 500 customers simultaneously can create 500 binding commitments in minutes. A single inadvertent commitment to unlimited liability in a signed DPA can create exposure of tens of millions of pounds. The legal complexity is compounded by jurisdictional variation — a commitment made to a customer in a different jurisdiction may be governed by unfamiliar law with different rules about agent authority and binding effect. The business consequence includes litigation costs (estimated at GBP 50,000-500,000 per dispute), settlement costs, regulatory enforcement for misleading communications, reputational damage from reneging on commitments, and personal liability for directors and senior managers who failed to implement adequate controls over the organisation's commitment-making process.
Cross-references: AG-033 (Implied Authority Detection) for detecting when agents imply authority beyond their mandate; AG-009 (Delegated Authority Governance) for the authority delegation chain that determines what commitments each agent can make; AG-019 (Human Escalation & Override Triggers) for human approval workflows; AG-172 (AI Interaction Disclosure and Mode Transparency Governance) for ensuring counterparties know they are interacting with an AI agent; AG-049 (Governance Decision Explainability) for explaining commitment authority decisions; AG-162 (Accountable Principal Assignment Governance) for identifying the human principal responsible for agent-created commitments.