AG-415

Decision Journal Completeness Governance

Logging, Observability & Forensics · ~22 min read · AGS v2.1 · April 2026
EU AI Act · SOX · FCA · NIST · ISO 42001

2. Summary

Decision Journal Completeness Governance requires that every critical decision made or influenced by an AI agent is recorded in a structured decision journal capturing the material factors considered, the alternatives evaluated, the option selected, the commitment made, the authority under which the decision was taken, and the expected consequences at the time of decision. A decision journal is not a log of actions — actions are recorded by AG-023 audit trails — but a structured record of the reasoning context surrounding each decision point, preserved at sufficient fidelity that an independent reviewer can reconstruct why the decision was taken, what alternatives existed, and what information was available at the moment of commitment. Without decision journal completeness, post-incident investigation degrades to speculation about what the agent "probably considered," regulatory inquiries cannot establish whether the agent weighed mandatory factors, and liability attribution between human principals, agent operators, and the agent itself becomes impossible.

3. Example

Scenario A — Missing Alternatives Record in Loan Underwriting: A financial-value agent evaluates a commercial loan application for £2.4 million. The agent recommends approval at a 6.2% interest rate with a 15-year term. Six months later, the borrower defaults, and the loss-given-default is £890,000. The regulatory investigation asks: "What alternative terms were considered? Did the agent evaluate a shorter term, a higher rate, additional collateral requirements, or declination?" The audit trail shows the agent's final recommendation and the data inputs, but there is no record of alternatives considered or rejected. The agent's reasoning model evaluated 4 alternative structures internally, including a 7-year term with higher collateral that would have reduced the loss-given-default to £210,000. But because no decision journal captured the alternatives, the investigation cannot establish whether the agent considered risk-mitigating alternatives or defaulted to the most aggressive option without deliberation.

What went wrong: The audit trail recorded inputs and outputs but not the decision space — the set of alternatives considered, the criteria applied to each, and the reasons for selecting one over others. The absence of a decision journal meant that the most forensically valuable information (what alternatives existed and why they were rejected) was never recorded. Consequence: £890,000 loss with no ability to demonstrate adequate deliberation, regulatory finding under prudential lending standards for inadequate decision documentation, £1.2 million in combined loss and remediation costs.

Scenario B — Authority Gap in Autonomous Procurement: An enterprise workflow agent autonomously executes a £340,000 procurement contract for cloud infrastructure services. The agent selects Vendor A over Vendor B, which offered a 12% lower price but a weaker service-level agreement. During internal audit, the question arises: "Under what authority did the agent make this trade-off? Was it authorised to prioritise service quality over cost at this value threshold?" The agent's action log shows the contract execution, and the configuration log shows the agent's procurement mandate. But the decision journal is incomplete — it records the selected vendor and the price but does not record the authority reference (which mandate clause authorised a quality-over-cost trade-off), the commitment scope (single-year or multi-year), or the expected downstream consequences (projected 3-year total cost of ownership). The audit cannot determine whether the agent operated within its delegated authority for the trade-off it made.

What went wrong: The decision journal captured the outcome but not the authority chain or commitment scope. The agent's mandate permitted procurement up to £500,000, but the authority to prioritise quality over cost above £200,000 required a specific delegation that the agent never referenced or recorded. Consequence: £340,000 contract validity challenged, 4-month procurement dispute, £67,000 in legal costs, and a mandate governance finding requiring retrospective review of all autonomous procurement decisions.

Scenario C — Undocumented Factor Weighting in Safety-Critical Routing: A safety-critical agent manages real-time routing for an autonomous logistics fleet. The agent reroutes 14 vehicles away from a highway segment based on a weather advisory, adding an average of 47 minutes to each delivery. Three deliveries contain time-sensitive medical supplies. Post-incident review asks: "Did the agent weigh the time-sensitivity of the medical deliveries against the weather risk? What was the assessed probability of the weather event? Were partial rerouting options considered?" The decision journal is absent. The agent's telemetry shows the rerouting action, but nothing captures the factor weighting — whether the agent treated all 14 deliveries identically (failing to differentiate time-sensitive cargo), whether it assessed the weather probability as 30% or 95%, or whether it considered rerouting only the non-critical vehicles while maintaining the highway route for the medical deliveries. Two of the three medical deliveries arrive outside their viability window, resulting in £156,000 in wasted supplies and a patient safety incident.

What went wrong: The agent made a multi-factor decision involving weather risk probability, cargo time-sensitivity, and partial rerouting alternatives, but recorded only the final action. The absence of a decision journal meant that post-incident review could not determine whether the agent appropriately differentiated between cargo types or considered partial alternatives. Consequence: £156,000 in wasted medical supplies, patient safety investigation, regulatory finding for inadequate decision documentation in safety-critical operations.

4. Requirement Statement

Scope: This dimension applies to any AI agent deployment where the agent makes or materially influences decisions that have financial, safety, legal, or operational consequences. A "decision" in this context is defined as any point where the agent selects among two or more alternatives with differing consequence profiles — this includes explicit choices (selecting a vendor, approving a transaction, choosing a route) and implicit choices (deciding to escalate or not escalate, deciding to request additional information or proceed with available data, deciding to apply one rule interpretation over another). The scope excludes purely mechanical operations with no meaningful alternatives (e.g., formatting a date, echoing a user's input). The test for whether a decision requires a journal entry is: "If this decision were later questioned, would an investigator need to know what alternatives existed and why this option was chosen?" If yes, a journal entry is required. The scope extends to decisions delegated to sub-agents — the delegating agent's journal must record the delegation decision, the sub-agent's journal must record the delegated decision, and AG-398 cross-references must link the two.

4.1. A conforming system MUST create a structured decision journal entry for every critical decision, containing at minimum: (a) a unique decision identifier, (b) a timestamp synchronised per AG-412, (c) the decision context including triggering event and available information, (d) the alternatives considered with a summary of each alternative's projected consequences, (e) the option selected, (f) the authority under which the decision was taken with a reference to the specific mandate clause, (g) the commitment made and its scope, and (h) the expected consequences at the time of decision.
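The mandatory fields (a) through (h) can be sketched as a structured record. The following Python dataclasses are illustrative only; field names, types, and the sample values are assumptions, not prescribed by this protocol, and the sample entry mirrors the loan scenario from Section 3:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alternative:
    """One option in the decision space, with projected consequences (4.1d)."""
    description: str
    projected_consequences: str
    rejection_reason: Optional[str] = None  # None only for the selected option

@dataclass
class DecisionJournalEntry:
    """Minimal journal entry covering fields (a)-(h) of requirement 4.1."""
    decision_id: str                 # (a) unique decision identifier
    timestamp_utc: str               # (b) timestamp synchronised per AG-412
    context: str                     # (c) triggering event and available information
    alternatives: list[Alternative]  # (d) alternatives considered
    selected: str                    # (e) the option selected
    authority_ref: str               # (f) specific mandate clause relied upon
    commitment: str                  # (g) the commitment made and its scope
    expected_consequences: str       # (h) expectations at the time of decision

# Illustrative entry modelled on Scenario A (commercial loan underwriting)
entry = DecisionJournalEntry(
    decision_id="DJ-2026-000417",
    timestamp_utc="2026-04-02T14:32:07Z",
    context="Commercial loan application, GBP 2.4m, from underwriting queue",
    alternatives=[
        Alternative("6.2% rate / 15-year term",
                    "Maximises expected return within approved risk envelope"),
        Alternative("7-year term with additional collateral",
                    "Reduces projected loss-given-default",
                    rejection_reason="Borrower unwilling to provide security"),
    ],
    selected="6.2% rate / 15-year term",
    authority_ref="Lending mandate clause 3.2 (approvals up to GBP 5m)",
    commitment="15-year loan commitment at fixed 6.2%",
    expected_consequences="Projected net return 4.1% over the term",
)
```

A record shaped like this gives an investigator the rejected 7-year alternative and its reason directly, which is exactly the information Scenario A's investigation could not recover.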

4.2. A conforming system MUST define and maintain a decision criticality taxonomy that classifies which agent decisions require full journal entries, which require abbreviated entries, and which are exempt, with documented criteria for each classification level aligned with AG-409 critical event categories.
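A criticality taxonomy of the kind 4.2 requires can be expressed as a classification function. This is a minimal sketch; the categories, value thresholds, and the safety-first rule are assumed examples, and a real taxonomy would be derived from the organisation's AG-409 critical event categories:

```python
from enum import Enum

class JournalLevel(Enum):
    FULL = "full"            # all 4.1 mandatory fields required
    ABBREVIATED = "abbrev"   # reduced field set per documented criteria
    EXEMPT = "exempt"        # no journal entry required

def classify_decision(category: str, value_gbp: float, safety_relevant: bool) -> JournalLevel:
    """Illustrative classification rules; thresholds are assumptions, not
    mandated by the protocol. Safety-relevant decisions always get full entries."""
    if safety_relevant:
        return JournalLevel.FULL
    if category in {"procurement", "lending"} and value_gbp >= 200_000:
        return JournalLevel.FULL
    if value_gbp >= 10_000:
        return JournalLevel.ABBREVIATED
    return JournalLevel.EXEMPT
```

Under these example rules, the GBP 340,000 procurement trade-off in Scenario B would have required a full entry, including the authority reference it lacked.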

4.3. A conforming system MUST ensure that decision journal entries are written synchronously with the decision — the journal entry MUST be persisted before the decision is executed, or the decision and journal write MUST be atomic, so that no decision can be executed without a corresponding journal entry.
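The persist-before-execute ordering in 4.3 can be sketched as follows. The file-per-entry layout and function names are illustrative, not prescribed; the point is that the durable write happens, and is verified, before the action runs, and the atomic rename ensures no half-written entry is ever observable:

```python
import json
import os
import tempfile

def persist_entry(journal_dir: str, entry: dict) -> str:
    """Durably persist a journal entry (fsync) before returning."""
    path = os.path.join(journal_dir, f"{entry['decision_id']}.json")
    fd, tmp = tempfile.mkstemp(dir=journal_dir)
    with os.fdopen(fd, "w") as f:
        json.dump(entry, f)
        f.flush()
        os.fsync(f.fileno())       # entry is on stable storage
    os.replace(tmp, path)          # atomic rename: all-or-nothing visibility
    return path

def execute_decision(journal_dir: str, entry: dict, action):
    """Requirement 4.3: the journal write must succeed before execution.
    If persist_entry raises, the decision is blocked, never silently executed."""
    persist_entry(journal_dir, entry)
    return action()
```

An alternative satisfying 4.3 is a single database transaction covering both the journal write and the decision's state change, where the platform supports it.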

4.4. A conforming system MUST record the alternatives considered for each critical decision, including at minimum one alternative to the selected option, with a documented reason for rejection of each non-selected alternative.

4.5. A conforming system MUST include the factor weighting or prioritisation logic applied to each critical decision, documenting which factors were considered, their relative importance, and how conflicts between factors were resolved.

4.6. A conforming system MUST link each decision journal entry to the corresponding audit trail entries (per AG-023), the applicable mandate clauses, and any upstream decisions that constrained the decision space.

4.7. A conforming system MUST validate decision journal completeness at write time — entries missing any mandatory field defined in 4.1 MUST be rejected, and the corresponding decision MUST be blocked until a complete entry is provided.
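Write-time validation per 4.7 reduces to a gate that rejects any entry missing a mandatory 4.1 field. A minimal sketch, with assumed field names matching the 4.1 list:

```python
# Fields (a)-(h) from requirement 4.1; names are illustrative
MANDATORY_FIELDS = (
    "decision_id", "timestamp_utc", "context", "alternatives",
    "selected", "authority_ref", "commitment", "expected_consequences",
)

class IncompleteJournalEntry(Exception):
    """Raised to block a decision whose journal entry is incomplete (4.7)."""

def validate_entry(entry: dict) -> None:
    """Reject entries with missing or empty mandatory fields before persisting."""
    missing = [name for name in MANDATORY_FIELDS if not entry.get(name)]
    if missing:
        raise IncompleteJournalEntry(f"entry rejected; missing fields: {missing}")
```

Chained before the journal write, this makes the block in 4.7 automatic: an exception here prevents the decision from ever reaching execution.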

4.8. A conforming system SHOULD implement decision journal templates specific to each decision category (financial, safety, procurement, routing, escalation) to ensure domain-relevant factors are captured consistently.

4.9. A conforming system SHOULD generate periodic completeness reports comparing the count of critical decisions executed (per audit trail) against the count of decision journal entries, flagging any discrepancies.
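The reconciliation in 4.9 is a set comparison between decision identifiers seen in the audit trail and those present in the journal. A minimal sketch, assuming both sources expose decision IDs:

```python
def reconcile(audit_decision_ids: set[str], journal_ids: set[str]) -> dict:
    """Compare critical decisions executed (per audit trail) against journal
    entries (4.9). Either direction of discrepancy is a completeness finding:
    a decision with no entry, or an entry for a decision never executed."""
    return {
        "executed_without_journal": sorted(audit_decision_ids - journal_ids),
        "journal_without_execution": sorted(journal_ids - audit_decision_ids),
        "reconciled": len(audit_decision_ids & journal_ids),
    }
```

Run on a schedule, any non-empty discrepancy list indicates either a failure of the synchronous-write guarantee in 4.3 or a taxonomy misclassification under 4.2.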

4.10. A conforming system SHOULD support decision journal querying by downstream systems including explainability interfaces (AG-049), blame attribution systems (AG-398), and human oversight dashboards.

4.11. A conforming system MAY implement decision journal analytics that identify patterns across journal entries — such as decisions where no meaningful alternatives were considered, decisions where the same alternative is always selected, or decisions where factor weightings are inconsistent across similar scenarios.
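One of the 4.11 patterns, a decision category in which the same alternative is always selected, can be detected with a simple frequency scan over journal entries. The entry shape and threshold below are assumptions for illustration:

```python
from collections import Counter

def degenerate_choice_categories(entries: list[dict], min_decisions: int = 10) -> list[str]:
    """Flag decision categories where one alternative is always selected (4.11),
    suggesting alternatives are listed pro forma rather than genuinely evaluated.
    Each entry is assumed to carry 'category' and 'selected' fields."""
    by_category: dict[str, Counter] = {}
    for e in entries:
        by_category.setdefault(e["category"], Counter())[e["selected"]] += 1
    return sorted(
        cat for cat, counts in by_category.items()
        if sum(counts.values()) >= min_decisions and len(counts) == 1
    )
```

The same grouping approach extends to the other 4.11 patterns, such as inconsistent factor weightings across similar scenarios, by aggregating on different entry fields.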

4.12. A conforming system MAY implement real-time decision journal streaming to external audit or compliance systems for continuous monitoring.

5. Rationale

Every governance failure eventually becomes a decision investigation. When an AI agent causes financial loss, safety harm, regulatory non-compliance, or operational disruption, the first question is invariably: "Why did the agent make that decision?" The audit trail answers what happened and when. The decision journal answers why it happened, what else could have happened, and what information the agent had at the moment of commitment. Without the decision journal, the investigation is limited to reconstructing intent from actions — a process that is unreliable, expensive, and legally insufficient.

The distinction between an audit trail and a decision journal is fundamental. An audit trail is a chronological record of actions and state changes — it shows that the agent approved a loan at 6.2% at 14:32:07 UTC. A decision journal records the decision context — it shows that the agent considered four alternative structures, assessed the borrower's risk profile against each, applied the organisation's risk appetite parameters, and selected the 6.2% / 15-year option because it maximised expected return within the approved risk envelope, while noting that a 7-year / higher-collateral option was rejected because the borrower indicated an unwillingness to provide additional security. The audit trail is necessary but not sufficient for post-incident investigation; the decision journal provides the forensic depth that transforms a timeline into a causal narrative.

Regulatory frameworks increasingly demand decision documentation that goes beyond action logging. The EU AI Act's transparency requirements (Article 13) require that users can interpret the system's output; this interpretation depends on understanding what factors drove the decision. Prudential regulators in financial services require that lending decisions are documented with sufficient detail to demonstrate that mandatory factors (affordability, risk, suitability) were considered. Safety regulators require that safety-critical decisions document the risk assessment and alternatives analysis that preceded the decision. In each case, the regulatory expectation maps directly to decision journal completeness — not just what was decided, but what was considered and why.

The decision journal also serves a critical function in multi-agent architectures. When multiple agents collaborate on a decision chain — one agent gathers information, another evaluates options, a third executes the selected option — post-incident attribution requires understanding each agent's decision contribution. AG-398 (Cross-Agent Blame Attribution Governance) depends on decision journals to reconstruct the decision chain: which agent narrowed the alternatives, which agent applied the selection criteria, and which agent committed to execution. Without decision journals at each stage, blame attribution collapses into arguing about which agent "should have" caught the problem, with no evidence of what any agent actually considered.

Furthermore, the decision journal provides the empirical foundation for improving agent decision quality over time. Decision journals enable pattern analysis: are certain decision categories consistently suboptimal? Are alternatives being genuinely evaluated or is the agent always selecting the first option? Are factor weightings consistent with organisational policy or drifting over time? Without decision journals, the organisation has no structured data for decision quality analysis — only outcomes, which conflate decision quality with outcome luck.

6. Implementation Guidance

Decision Journal Completeness Governance requires a structured, write-validated journal that captures the full decision context for every critical decision. The journal is not a free-text narrative — it is a structured record with mandatory fields, validated at write time, linked to audit trail entries, and retained for the full evidence retention period.

Recommended patterns:

- Persist the journal entry before executing the decision, or make the write and the execution atomic (4.3).
- Validate mandatory fields at write time and block any decision whose entry is incomplete (4.7).
- Use per-category journal templates so domain-regulated factors are captured consistently (4.8).
- Reconcile journal entries against audit trail actions on a schedule and investigate every discrepancy (4.9).
- Capture large decision context by reference with hash verification rather than duplicating it inline.
- Cross-link every entry to its AG-023 audit trail entries, the mandate clauses relied upon, and upstream decisions (4.6).

Anti-patterns to avoid:

- Treating the audit trail as a substitute for the journal, recording actions without alternatives, authority, or expected consequences (Scenario A).
- Recording the outcome without the authority reference or commitment scope (Scenario B).
- Recording only the final action for multi-factor decisions, omitting factor weighting and partial alternatives (Scenario C).
- Free-text narrative entries that cannot be validated at write time or queried by downstream systems.
- Writing journal entries asynchronously after execution, leaving a window in which a decision exists with no entry.
- Listing alternatives pro forma without rejection reasons, which defeats both investigation and pattern analysis.

Industry Considerations

Financial Services. Financial decision journals must capture the specific factors mandated by prudential regulators: affordability assessment, risk-weighted scoring, suitability analysis, conflict-of-interest checks, and best execution considerations. For lending decisions, the journal must demonstrate that the agent considered the borrower's capacity to repay under stressed conditions. For investment recommendations, the journal must show suitability analysis against the customer's stated objectives and risk tolerance. Failure to capture these factors creates the same regulatory exposure as a human adviser failing to document their rationale.

Healthcare and Life Sciences. Clinical decision support agents must journal the differential diagnosis alternatives considered, the evidence weighting applied to each, and the factors that led to the recommended intervention. Drug interaction checks, contraindication assessments, and dosage calculations must all appear in the journal with the data sources consulted. Regulatory requirements under medical device directives require traceable decision rationale for any clinical recommendation.

Safety-Critical and Cyber-Physical Systems. Decision journals for safety-critical agents must capture real-time factor weighting including sensor confidence levels, environmental uncertainty estimates, and the risk trade-offs applied. For autonomous vehicle or robotic agents, the journal must record what alternatives the agent evaluated (e.g., stop, slow, reroute, continue) and the safety assessment for each alternative. These journals are critical for accident investigation and liability determination.

Public Sector and Rights-Sensitive. Government and public-sector agents making decisions that affect individual rights (benefits eligibility, risk scoring, enforcement prioritisation) must journal all factors considered with particular attention to protected characteristics. The decision journal must demonstrate that the agent did not rely on prohibited factors and that it considered legally mandated factors (proportionality, necessity, individual circumstances).

Maturity Model

Basic Implementation — The organisation has defined a decision criticality taxonomy and implemented structured decision journal entries for all critical decisions. Journal entries contain all mandatory fields per 4.1. Journal writes are synchronous with decision execution. Journals are linked to audit trail entries. Journal completeness is validated at write time. Retention meets regulatory minimums. This level satisfies the mandatory MUST requirements.

Intermediate Implementation — All basic capabilities plus: domain-specific journal templates ensure that industry-regulated factors are captured consistently. Periodic completeness reconciliation compares audit trail actions against journal entries and flags discrepancies. Decision journals are queryable by explainability and blame attribution systems. Journal entries capture context by reference with hash verification. Journals support multi-agent decision chain reconstruction through upstream decision references.
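The "context by reference with hash verification" capability above can be sketched as follows; the storage-key scheme and function names are assumptions for illustration:

```python
import hashlib

def context_reference(context_blob: bytes) -> dict:
    """Record large decision context by reference: the journal entry keeps
    only a storage key plus a SHA-256 digest, not the full payload."""
    digest = hashlib.sha256(context_blob).hexdigest()
    return {"store_key": f"ctx/{digest[:16]}", "sha256": digest}

def verify_context(context_blob: bytes, ref: dict) -> bool:
    """At review time, confirm the retrieved context is byte-identical to
    what was available at the moment of decision."""
    return hashlib.sha256(context_blob).hexdigest() == ref["sha256"]
```

This keeps journal entries compact while preserving the evidentiary guarantee that the referenced context has not changed since the decision was taken, which matters once AG-416 chain-of-custody requirements apply.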

Advanced Implementation — All intermediate capabilities plus: decision journal analytics identify patterns such as alternatives that are never selected, factors that are inconsistently weighted, and decision categories with high post-hoc dispute rates. Real-time journal streaming enables continuous compliance monitoring. Independent audit has verified journal completeness and integrity. Decision journals are integrated with organisation-wide decision governance frameworks covering both human and agent decisions.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Mandatory Field Completeness Validation

Test 8.2: Alternatives Documentation Requirement

Test 8.3: Synchronous Journal Write Enforcement

Test 8.4: Decision Criticality Taxonomy Application

Test 8.5: Audit Trail Cross-Reference Integrity

Test 8.6: Completeness Reconciliation Detection

Test 8.7: Factor Weighting Documentation

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 12 (Record-keeping) | Direct requirement
EU AI Act | Article 13 (Transparency and Provision of Information) | Supports compliance
SOX | Section 404 (Internal Controls Over Financial Reporting) | Direct requirement
FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance
FCA SYSC | 9.1.1R (Senior Management Arrangements, Systems and Controls — Record-keeping) | Direct requirement
NIST AI RMF | GOVERN 1.4, MAP 3.5, MEASURE 2.6 | Supports compliance
ISO 42001 | Clause 9.1 (Monitoring, Measurement, Analysis and Evaluation) | Supports compliance
DORA | Article 10 (Detection) | Supports compliance
DORA | Article 17 (ICT-related Incident Management — Classification and Reporting) | Supports compliance

EU AI Act — Article 12 (Record-keeping)

Article 12 requires that high-risk AI systems are designed and developed with capabilities enabling automatic recording of events (logs) over the system's lifetime. The provision explicitly requires that logs enable monitoring of the system's operation and facilitate post-market monitoring. Decision journal completeness directly implements this requirement by ensuring that the reasoning context — not just the action — is recorded for every critical decision. A system that logs actions without decision context does not satisfy the spirit of Article 12, because the log cannot be used to understand why the system behaved as it did. Decision journals transform compliance logs from action timelines into causal records suitable for the post-market monitoring and investigation that Article 12 envisions.

SOX — Section 404 (Internal Controls Over Financial Reporting)

For financial-value agents involved in processes affecting financial reporting, SOX Section 404 requires that internal controls are documented, tested, and effective. A decision journal that captures the factors, alternatives, and authority for financial decisions provides the documentary evidence that controls were applied at each decision point. Without decision journals, SOX auditors cannot verify that the agent applied required controls (segregation of duties, approval thresholds, risk assessments) during the actual decision process — they can only verify that the outcome was within permitted parameters, which is insufficient evidence of control effectiveness.

FCA SYSC — 9.1.1R (Record-keeping)

The FCA requires regulated firms to maintain records sufficient to enable the FCA to monitor compliance. For AI agent decisions in regulated financial services, this requires records that demonstrate the basis for each decision — not merely the outcome. Decision journal entries that capture suitability assessments, risk evaluations, conflict-of-interest checks, and best execution analysis directly satisfy this record-keeping obligation. The FCA has indicated that automated decision-making systems must produce records at least as detailed as those expected from human decision-makers.

NIST AI RMF — GOVERN 1.4 and MAP 3.5

GOVERN 1.4 addresses processes for risk management documentation. MAP 3.5 addresses the documentation of AI system decision processes. Decision journal completeness provides the structured documentation that both functions require — a systematic record of how the AI agent's decisions are made, what factors drive them, and how alternatives are evaluated. This documentation is essential for the risk assessment and mapping activities that the NIST framework mandates.

ISO 42001 — Clause 9.1

Clause 9.1 requires organisations to determine what needs to be monitored and measured, and to retain documented information as evidence of results. Decision journal completeness defines precisely what needs to be recorded for agent decisions and provides the documented information that clause 9.1 requires. The structured, validated nature of the journal entries ensures that the documented information meets the quality standards expected by the management system.

DORA — Article 17

DORA requires classification and reporting of ICT-related incidents. When an agent decision contributes to an incident, the decision journal provides the forensic detail needed for incident classification — what the agent considered, what it decided, and what it expected to happen. Without decision journals, DORA incident reports for agent-related incidents are limited to describing outcomes without explaining causes, which does not satisfy the root-cause analysis requirement.

10. Failure Severity

Severity Rating: High
Blast Radius: Per-decision initially, but aggregates to systemic exposure — every undocumented critical decision is a latent forensic gap that compounds during investigation

Consequence chain: When decision journals are incomplete or absent, the immediate failure mode is invisible — the agent continues to function, actions are logged, and operations appear normal. The failure materialises when an adverse outcome triggers investigation. At that point, the absence of decision context transforms a bounded investigation ("the agent made a suboptimal decision for documented reasons") into an unbounded one ("we cannot determine what the agent considered or why it decided as it did"). The investigative cost escalates by a factor of 3-10x when decision journals are absent, because investigators must reconstruct intent from actions and environmental data rather than reading structured decision records. Regulatory exposure escalates further: regulators interpret absent decision documentation as evidence that the decision was made without adequate deliberation, applying an adverse inference. In financial services, this creates prudential findings under FCA SYSC and potential SOX material weaknesses. In safety-critical domains, absent decision journals during accident investigation create liability exposure for the operator, because the operator cannot demonstrate that the agent evaluated safety-relevant factors. The organisational consequence is a progressive erosion of the ability to govern agent decision quality — without decision journals, the organisation has outcomes data but no process data, making it impossible to distinguish between good decisions with bad outcomes and bad decisions with good outcomes. Over time, this blindness to decision process quality allows systematic decision failures to persist undetected until they produce catastrophic outcomes.

Cross-references: AG-023 (Audit Trail Governance) provides the action-level logging that decision journals supplement with reasoning context. AG-409 (Critical Event Taxonomy Governance) provides the classification framework for decision criticality levels. AG-416 (Evidentiary Chain-of-Custody Governance) governs the integrity and custody of decision journal artefacts as evidence. AG-049 (Explainability Governance) consumes decision journals to generate human-readable explanations of agent decisions. AG-036 (Reasoning Integrity Governance) ensures the reasoning process itself is sound; AG-415 ensures that the reasoning process is documented. AG-398 (Cross-Agent Blame Attribution Governance) uses decision journals to reconstruct multi-agent decision chains for attribution.

Cite this protocol
AgentGoverning. (2026). AG-415: Decision Journal Completeness Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-415