AG-292

Approval Context Completeness Governance

Authority, Delegation & Approval · AGS v2.1 · April 2026
EU AI Act · GDPR · SOX · FCA · NIST · ISO 42001

2. Summary

Approval Context Completeness Governance requires that every approver receives sufficient context, evidence, and downside analysis before making an approval decision on an AI agent action. An approval made without adequate information is not a governance control — it is a procedural formality that creates the false appearance of oversight. This dimension specifies what information must be presented to approvers, how that information must be structured, and how the system prevents approvals from being issued when mandatory context is missing. The goal is to ensure that every approval is an informed decision, not a blind signature.

3. Example

Scenario A — Summary-Only Approval Conceals Material Risk: An AI agent requests approval to enter a software licensing agreement priced at £320,000 per year. The approval request presented to the decision-maker contains: vendor name, annual contract value, contract term (3 years), and the agent's recommendation ("proceed — competitive pricing"). The approver, seeing only the summary, approves in 45 seconds. The full contract contains an auto-renewal clause with a 6-month notice period, a 15% annual escalation, and a limitation of liability capped at one month's fees (£26,667). Over the 3-year term plus one auto-renewal year, the escalating fees commit a total spend of £1,597,880. The liability cap means the vendor's maximum exposure for a total system failure is £26,667.

What went wrong: The approval context included the agent's recommendation but not the material contract terms that would have changed the decision. The approver had no visibility into the auto-renewal clause, the escalation terms, or the liability cap. The approval was procedurally valid but substantively uninformed. Consequence: £1,597,880 in committed spend, inadequate vendor liability protection, and contract renegotiation required at significant cost.
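A minimal sketch of the compounding commitment, assuming (as in the scenario) a £320,000 annual fee escalating 15% each year across the 3-year term plus one auto-renewal year:

```python
def committed_spend(annual_fee: float, years: int, escalation: float) -> float:
    """Total committed spend with compounding annual escalation applied from year 2."""
    return sum(annual_fee * (1 + escalation) ** year for year in range(years))

total = committed_spend(320_000, 4, 0.15)   # 1,597,880.00 over four years
monthly_cap = 320_000 / 12                  # one month's fees, roughly £26,667
```

Under these assumptions the four-year commitment is almost five times the headline figure the approver saw, which is exactly the information the summary-only context omitted.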

Scenario B — Missing Downside Analysis in Financial Approval: An AI trading agent requests approval to execute a portfolio rebalancing that involves selling £2,400,000 in government bonds and purchasing £2,400,000 in corporate bonds to capture a 45-basis-point yield improvement. The approval request shows the expected annual yield increase (£10,800) and the agent's confidence level (87%). The approval does not include: the credit risk differential, the liquidity risk of the corporate bonds, the mark-to-market impact of selling government bonds at current prices versus book value, or the worst-case loss scenario. The approver sees only the upside and approves. Three months later, a credit event causes a £186,000 mark-to-market loss on the corporate bond position.

What went wrong: The approval context presented only the upside case. No downside analysis, stress scenario, or risk metric was included. The approver could not make an informed risk-return decision because only the return was visible. Consequence: £186,000 mark-to-market loss, portfolio risk profile misaligned with investment policy, compliance finding for inadequate pre-trade approval documentation.

Scenario C — Stale Context in Delayed Approval: An AI agent generates a data processing approval request based on a privacy impact assessment completed 14 days earlier. During those 14 days, the data subject has withdrawn consent for one of the processing categories. The approval request still shows the original consent status. The approver, relying on the context presented, approves the processing. The agent processes data for which consent has been withdrawn.

What went wrong: The approval context was generated at the time of the request, not refreshed at the time of the approval decision. The 14-day gap allowed the factual basis to change without updating the context. Consequence: unlawful data processing under UK GDPR, data subject complaint, ICO investigation, potential fine of up to £17.5 million or 4% of global annual turnover, whichever is higher.

4. Requirement Statement

Scope: This dimension applies to all approval decisions for AI agent actions, regardless of the approval tier. Even single-approver Tier 1 decisions require minimum context. The scope includes both human approvers and automated approval systems — an automated policy engine must evaluate the same context a human would require. The scope extends to the information presented by the agent as part of the approval request: if the agent curates, summarises, or selects the information presented, that curation is within scope and must be governed to prevent selective presentation of favourable information.

4.1. A conforming system MUST define a mandatory context package for each approval tier, specifying the minimum information elements that must be present before an approval can be recorded.

4.2. A conforming system MUST include in every approval context package: the complete action specification, the impact assessment across all relevant dimensions, the downside analysis including worst-case scenario, and the source data or evidence supporting the action.

4.3. A conforming system MUST prevent approval submission when any mandatory context element is missing — the approval interface must not allow the approval action until all required fields are populated.
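Requirements 4.1 through 4.3 amount to a tier-keyed schema plus a hard gate. A minimal sketch in Python, where the tier numbers and element names are illustrative assumptions rather than AGS-defined vocabulary:

```python
# Hypothetical mandatory context elements per approval tier (illustrative only).
MANDATORY_ELEMENTS = {
    1: {"action_specification", "impact_assessment"},
    2: {"action_specification", "impact_assessment", "downside_analysis",
        "source_evidence"},
    3: {"action_specification", "impact_assessment", "downside_analysis",
        "source_evidence", "comparative_context"},
}

class IncompleteContextError(Exception):
    """Raised when an approval is attempted without the mandatory context package."""

def validate_context(tier: int, context: dict) -> None:
    """Hard gate (4.3): refuse the approval action while any mandatory element is empty."""
    missing = sorted(
        name for name in MANDATORY_ELEMENTS[tier]
        if context.get(name) in (None, "", [], {})
    )
    if missing:
        raise IncompleteContextError(f"tier {tier} approval blocked; missing: {missing}")

def record_approval(tier: int, context: dict, approver: str) -> dict:
    """Record an approval only after the context package passes the completeness gate."""
    validate_context(tier, context)
    return {"approver": approver, "tier": tier, "status": "approved"}
```

Raising rather than warning is the point of 4.3: the approval action must be impossible, not merely discouraged, while mandatory context is missing.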

4.4. A conforming system MUST refresh time-sensitive context elements at the point of approval decision, not at the point of approval request, when the elapsed time between request and decision exceeds a defined staleness threshold (default: 24 hours).
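The staleness rule in 4.4 can be sketched as a refresh step performed at decision time. The `fetchers` interface below is a hypothetical stand-in for queries against authoritative systems (a consent register, a market data service):

```python
from datetime import datetime, timedelta, timezone

STALENESS_THRESHOLD = timedelta(hours=24)  # default from 4.4; configurable in practice

def refresh_if_stale(context: dict, fetchers: dict, now: datetime) -> dict:
    """Re-fetch time-sensitive elements when the request-to-decision gap exceeds
    the staleness threshold. `fetchers` maps element names to callables that
    return the current value from the authoritative source (illustrative interface).
    """
    if now - context["requested_at"] <= STALENESS_THRESHOLD:
        return context
    refreshed = dict(context)
    for name, fetch in fetchers.items():
        refreshed[name] = fetch()  # e.g. re-query consent status at decision time
    refreshed["refreshed_at"] = now
    return refreshed
```

Under this sketch, the 14-day-old consent status from Scenario C would be re-queried at the point of decision and the withdrawal surfaced before the approval could proceed.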

4.5. A conforming system MUST ensure that context presented to approvers is generated or verified by a system independent of the agent requesting approval — the agent must not be the sole source of information about its own action.
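Independence (4.5) is typically implemented as a cross-check of agent-supplied values against a system of record. A hedged sketch, with illustrative field names:

```python
def verify_against_source(agent_claims: dict, authoritative: dict) -> list:
    """Return discrepancies between agent-supplied context and an independent
    system of record (contract repository, consent register, risk engine).
    Field names here are illustrative, not a defined interface.
    """
    discrepancies = []
    for key, official_value in authoritative.items():
        if agent_claims.get(key) != official_value:
            discrepancies.append({
                "element": key,
                "agent_supplied": agent_claims.get(key),
                "authoritative": official_value,
            })
    return discrepancies
```

A non-empty discrepancy list could then block or escalate the approval rather than merely annotate it, so the agent is never the sole source of information about its own action.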

4.6. A conforming system SHOULD present downside analysis with the same prominence as upside analysis — not buried in footnotes or expandable sections while the recommendation is prominently displayed.

4.7. A conforming system SHOULD include comparative context: how the proposed action compares to historical actions of the same type, industry benchmarks, or policy thresholds.

4.8. A conforming system SHOULD track and report context completeness metrics — the percentage of approvals where all optional context elements were also present, and correlation between context completeness and approval quality outcomes.
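The completeness metric in 4.8 reduces to a simple ratio. A sketch, assuming each approval record lists the optional elements defined for its action type and those actually present (record shape is an assumption):

```python
def context_completeness_rate(approvals: list) -> float:
    """Fraction of approvals whose context package also included every
    optional element defined for the action type."""
    if not approvals:
        return 0.0
    fully_complete = sum(
        1 for a in approvals
        if set(a["optional_elements_present"]) >= set(a["optional_elements_defined"])
    )
    return fully_complete / len(approvals)
```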

4.9. A conforming system MAY implement an approval context challenge mechanism where a designated reviewer can flag context as misleading, incomplete, or selectively presented, triggering a re-evaluation before the approval proceeds.

5. Rationale

The value of an approval is bounded by the information available to the approver. An approval made with complete information is a governance decision. An approval made with incomplete information is a guess in a governance wrapper. The distinction matters because regulators, auditors, and courts evaluate whether the decision-maker had adequate information — not merely whether a decision was made.

AI agents create a specific context completeness risk that does not exist (or exists to a lesser degree) with human-initiated approval requests. When a human employee submits an approval request, the request is typically narrative — the employee explains what they want to do and why. The approver can ask questions, request additional information, or challenge assumptions. When an AI agent submits an approval request, the request is typically structured data — pre-formatted, pre-summarised, and optimised for the agent's objective. The agent, if it is optimising for task completion, has an incentive to present information that favours approval and omit or downplay information that might trigger rejection.

This is not malice; it is optimisation. An agent tasked with "execute this trade for the best return" will naturally emphasise return metrics in its approval request and may not volunteer risk metrics unless required to do so. An agent tasked with "onboard this vendor" will emphasise the vendor's capabilities and may not volunteer negative due diligence findings unless the approval context mandates their inclusion.

The context completeness requirement transforms approval from a procedural gate into an information gate. The approver cannot approve until all mandatory information is present, and the information is verified by a source independent of the requesting agent. This ensures that every approval represents an informed decision.

6. Implementation Guidance

Context completeness implementation requires defining mandatory context elements per action type, building approval interfaces that enforce completeness, and ensuring context independence and freshness.

Recommended patterns:

Anti-patterns to avoid:

Industry Considerations

Financial Services. Pre-trade approval context should include: order details, expected market impact, risk metrics (VaR, stress loss), compliance checks (position limits, restricted list), best execution analysis, and counterparty exposure. Post-trade approval (for settlement) should include: trade confirmation, settlement instructions, and reconciliation status. MiFID II best execution requirements mandate specific pre-trade analysis elements.
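As an illustration, the pre-trade elements listed above could be captured as a required-field template that the approval interface checks before submission (field names are assumptions, not MiFID II terminology):

```python
# Hypothetical pre-trade approval context template for an AI trading agent.
PRE_TRADE_CONTEXT = {
    "order_details": None,           # instrument, side, size, limit
    "expected_market_impact": None,  # estimated slippage in basis points
    "risk_metrics": None,            # VaR, stress loss
    "compliance_checks": None,       # position limits, restricted list
    "best_execution_analysis": None,
    "counterparty_exposure": None,
}

def missing_pre_trade_elements(context: dict) -> list:
    """List template fields that are absent or empty in a submitted context package."""
    return sorted(k for k in PRE_TRADE_CONTEXT if not context.get(k))
```

In Scenario B, a template like this would have surfaced the absent risk metrics and downside analysis before the approver could see only the yield improvement.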

Healthcare. Clinical action approval context should include: patient identity, clinical indication, evidence base for the recommended action, alternative options considered, contraindications, drug interaction analysis, patient consent status, and relevant clinical guidelines. The context must be sufficient for the approving clinician to make an independent clinical judgement, not merely ratify the agent's recommendation.

Public Sector. Decision approval context for actions affecting citizens should include: the legal basis for the action, the impact assessment on the affected individual, the evidence relied upon, any contrary evidence, and the citizen's representations (if applicable). Administrative law requirements for reasoned decision-making map directly to context completeness requirements.

Maturity Model

Basic Implementation — Mandatory context templates are defined for each action type requiring approval. The approval interface prevents approval when mandatory fields are empty. Context includes the action specification and basic impact information. This meets minimum mandatory requirements but context elements may be agent-supplied without independent verification, and staleness detection is not implemented.

Intermediate Implementation — Mandatory context elements are independently sourced from authoritative systems (not solely from the agent). Staleness detection refreshes time-sensitive elements when the approval is delayed beyond the staleness threshold. Downside analysis is presented with equal prominence to upside analysis. Context completeness metrics are tracked and reported. Comparative context (historical benchmarks, policy thresholds) is included.

Advanced Implementation — All intermediate capabilities plus: context challenge mechanism allows designated reviewers to flag misleading or incomplete context. Automated analysis identifies context packages that deviate from norms for the action type (e.g., a payment approval with no anomaly flags when historical payments of this type have a 15% anomaly rate). Approval outcome analysis correlates context completeness with decision quality, enabling evidence-based refinement of context requirements. Independent adversarial testing confirms that selective presentation, stale context, and context manipulation attacks all fail.
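The deviation analysis described for the advanced tier can be approximated with a simple outlier test over historical context packages. An illustrative z-score heuristic, not a production detector:

```python
def context_deviates(package_flags: int, historical_flags: list, tolerance: float = 2.0) -> bool:
    """Flag a context package whose anomaly-flag count is an outlier for its
    action type, e.g. zero flags where history consistently shows some.
    `historical_flags` holds flag counts from prior packages of the same type.
    """
    mean = sum(historical_flags) / len(historical_flags)
    variance = sum((x - mean) ** 2 for x in historical_flags) / len(historical_flags)
    std = variance ** 0.5
    if std == 0:
        return package_flags != mean
    return abs(package_flags - mean) / std > tolerance
```

A package scoring as an outlier would be routed to the context challenge mechanism for review rather than approved as-is.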

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Mandatory Context Enforcement

Test 8.2: Independent Context Verification

Test 8.3: Staleness Detection and Refresh

Test 8.4: Selective Presentation Prevention

Test 8.5: Balanced Presentation Verification

Test 8.6: Context Package Integrity

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 13 (Transparency and Provision of Information) | Direct requirement
EU AI Act | Article 14 (Human Oversight) | Direct requirement
SOX | Section 404 (Internal Controls Over Financial Reporting) | Supports compliance
FCA | SYSC 6.1.1R (Systems and Controls) | Supports compliance
MiFID II | Article 25 (Assessment of Suitability and Appropriateness) | Supports compliance
NIST AI RMF | MAP 3.5 (Benefits and Costs), MANAGE 4.1 (Post-Deployment Monitoring) | Supports compliance
ISO 42001 | Clause 9.1 (Monitoring, Measurement, Analysis) | Supports compliance
UK GDPR | Article 5(1)(a) (Lawfulness, Fairness, Transparency) | Supports compliance

EU AI Act — Article 13 (Transparency and Provision of Information)

Article 13 requires that high-risk AI systems be designed with sufficient transparency to enable users to interpret output and use it appropriately. Approval context completeness directly implements this transparency requirement by ensuring that the decision-maker (approver) has the information needed to interpret the AI agent's recommendation and make an informed decision.

EU AI Act — Article 14 (Human Oversight)

Human oversight is only meaningful if the human has adequate information. An approver who receives a one-line recommendation without supporting context, risk analysis, or downside information is exercising nominal oversight, not effective oversight. Context completeness ensures that oversight is substantive.

MiFID II — Article 25 (Assessment of Suitability and Appropriateness)

For AI agents recommending or executing investment decisions, the approval context must include the suitability assessment information that MiFID II requires. The approver must see not just the recommendation but the evidence that the recommendation is suitable for the client and the portfolio.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Department-to-organisation-wide — depends on the impact of actions approved without adequate context

Consequence chain: Without context completeness, approvers make decisions based on incomplete, selective, or stale information. The resulting approvals may be procedurally valid but substantively uninformed. When an approved action produces an adverse outcome, the organisation discovers that the approval was based on information that did not reflect the actual risk. The regulatory consequence is severe: demonstrating that an approval was made without adequate information undermines the entire governance narrative. Regulators do not accept "we approved it but we didn't know what we were approving" as a defence. The financial consequence scales with the action's impact — from modest losses for minor decisions to catastrophic exposure for major commitments made with insufficient information. The reputational consequence includes loss of stakeholder confidence in the organisation's governance capability.

Cross-references: AG-290 (Tiered Approval Threshold Governance) determines the approval tier which drives context requirements. AG-291 (Approval Quorum Diversity Governance) ensures that diverse approvers bring different perspectives to the context. AG-170 (Approval Quality and Substantive Review) addresses the quality of the decision made with the context, not just the completeness of the context itself. AG-297 (Approval Chain Visibility Governance) makes the approval decision and its context visible for audit. AG-293 (Approval Expiry and Renewal Governance) addresses the temporal validity of the approval. AG-017 (Multi-Party Authorisation) provides the framework for multi-party approval where context completeness is particularly critical. Siblings in this landscape: AG-289 through AG-298.

Cite this protocol
AgentGoverning. (2026). AG-292: Approval Context Completeness Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-292