This dimension governs the establishment, enforcement, and continuous monitoring of explicit energy consumption budgets for AI agent compute and runtime operations, including inference, fine-tuning, data pipeline execution, embodied actuation, and all ancillary workloads attributed to an agent deployment. It matters because uncontrolled AI energy draw directly affects organisational carbon commitments, regulatory energy-reporting obligations, electricity procurement costs, and grid stability, and, in edge or safety-critical deployments, can exhaust physical battery or power-supply capacity, cascading into operational failure. Failure of this control manifests as uncapped inference loops consuming megawatt-hours beyond sustainability targets, edge robotic agents depleting power reserves mid-mission and entering unsafe states, public-sector systems breaching energy-disclosure obligations, and enterprise deployments violating internal net-zero roadmaps with no audit trail capable of demonstrating accountability.
An enterprise document-processing agent is granted open-ended inference access against a 70-billion-parameter foundation model hosted on an on-premises GPU cluster. Over a 14-day period following an internal product launch, inbound document volume increases 47-fold. The agent scales inference workers horizontally without any energy budget ceiling. Total cluster draw over the fortnight exceeds the quarterly compute energy allocation of 4.1 MWh, established in the organisation's ISO 50001 energy management plan, by 2.8 MWh. The overage triggers a reactive emergency procurement of grid electricity at peak tariff, adding USD 41,000 in unplanned cost and generating approximately 1.4 tonnes of CO₂e attributable to the AI workload alone. The organisation's Scope 2 emissions disclosure for the quarter is materially inaccurate as reported. No automated throttle or circuit-breaker existed; no alert fired until a manual review of the energy management dashboard 19 days after the fact. The consequence chain: financial overrun → emissions misreporting → audit finding → potential regulatory penalty under the organisation's jurisdiction-level energy efficiency obligation.
A warehouse logistics robot fleet runs a path-planning and computer-vision inference agent on-board, drawing from a 48V lithium iron phosphate battery pack rated for a 6-hour shift cycle. No per-mission energy budget is enforced; the inference module is permitted to run continuous high-resolution object detection at 30 frames per second regardless of operational context. Midway through a 5.5-hour shift, six units in the 40-unit fleet exhaust their battery reserves 2.1 hours early because the vision pipeline alone draws 180 W continuously rather than the design assumption of 60 W at adaptive frame-rate. The robots enter a controlled safe-stop state, but 340 kg of perishable goods are stranded in an unrefrigerated transfer zone. Refrigeration continuity is broken for 3.8 hours; goods are condemned, triggering USD 28,000 in direct product loss and a regulatory notification to the food-safety authority under HACCP obligations. Root cause: no runtime energy budget ceiling; no adaptive duty-cycle mechanism; no low-energy early-warning threshold.
A national social-services eligibility-processing agency deploys a multi-agent decision-support system across 14 regional data centres. The system runs continuous batch inference against applicant records, with no per-agent or per-data-centre energy budget configured. In the third quarter of the fiscal year, the aggregate AI workload consumes 6.2 GWh, crossing the 5 GWh mandatory reporting threshold established under the jurisdiction's public-sector energy efficiency directive. The agency has no disaggregated energy records attributable to AI versus legacy workloads, as no metering boundary or tagging was implemented. Compliance officers cannot produce the required AI-attributable energy consumption declaration in the statutory 30-day window. The resulting enforcement notice carries a financial penalty of EUR 220,000 and mandates a 12-month remediation audit. Parliamentary scrutiny follows because the agency administers rights-sensitive benefit determinations, raising questions about proportionality of AI resource use relative to service outcomes.
This dimension applies to all AI agent deployments operating under the primary profiles identified in Section 1. It covers all energy-consuming activities directly attributable to agent operation: model inference (batch and real-time), training and fine-tuning runs initiated by or on behalf of the agent, data pre- and post-processing pipelines, communication and networking overhead attributable to agent coordination, actuator and sensor operation in embodied systems, and cooling overhead at a prorated attribution factor. It applies regardless of whether the underlying infrastructure is owned, co-located, leased, or consumed as a service, and regardless of whether energy is supplied from fossil, renewable, or mixed sources. Agents operating exclusively in read-only demonstration or sandboxed test environments with verified energy draw below 10 Wh per session are out of scope.
4.1.1 The deploying organisation MUST define a quantitative energy budget for each agent deployment, expressed in watt-hours (Wh), kilowatt-hours (kWh), or megawatt-hours (MWh) as appropriate to the operational scale, covering a defined budget period (hourly, daily, weekly, monthly, or quarterly).
4.1.2 The energy budget MUST be derived from and traceable to one or more of the following authoritative sources: the organisation's energy management plan (e.g., per ISO 50001), an approved capital expenditure or operational expenditure budget, a sustainability or net-zero commitment document, or a regulatory energy-efficiency obligation applicable to the deployment jurisdiction.
4.1.3 For safety-critical and embodied agent deployments, an operational energy reserve MUST be defined separately from the mission energy budget, representing no less than 15% of the total available stored energy or contracted power capacity, reserved exclusively for safe-stop, emergency retraction, or fail-safe actuation sequences.
4.1.4 Energy budgets MUST be reviewed and re-established at least once per budget period or whenever a change in agent capability, model version, deployment scale, or operational context is introduced that is reasonably expected to alter energy draw by more than 10%.
4.2.1 The deploying organisation MUST implement instrumented energy metering at a granularity sufficient to attribute energy consumption to the specific agent deployment, distinguishable from co-located non-agent workloads.
4.2.2 For physical infrastructure, metering MUST be performed using hardware power measurement (e.g., inline power distribution unit telemetry, server baseboard management controller power readings, or dedicated energy sub-metering) rather than solely software estimation models.
4.2.3 For cloud or virtualised infrastructure where hardware metering is inaccessible, the deploying organisation MUST use a documented, reproducible energy attribution methodology based on compute-unit billing telemetry combined with published or contractually disclosed power usage effectiveness (PUE) coefficients, and MUST record the methodology version applied.
4.2.4 Embodied and edge agent deployments MUST instrument on-device power consumption at the component level (processor, memory, radio, sensor, actuator) with readings logged at intervals no greater than 60 seconds during active operation.
4.2.5 All metered energy readings MUST be timestamped with UTC precision to the nearest second and stored in an append-only log accessible for audit.
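The attribution arithmetic required by 4.2.3 is straightforward once the inputs are documented. A minimal, non-normative sketch in Python, assuming billed compute-unit hours and a contractually disclosed PUE coefficient (function name and example values are illustrative):

```python
def attribute_energy_kwh(compute_unit_hours: float,
                         watts_per_compute_unit: float,
                         pue: float) -> float:
    """Attribute energy to an agent workload from billing telemetry (4.2.3).

    compute_unit_hours: billed compute-unit hours (e.g. GPU-hours)
    watts_per_compute_unit: documented average draw per compute unit
    pue: facility power usage effectiveness coefficient (>= 1.0)
    """
    it_energy_kwh = compute_unit_hours * watts_per_compute_unit / 1000.0
    return it_energy_kwh * pue

# Illustrative: 1,200 GPU-hours at 400 W average draw under PUE 1.4
# gives 480 kWh of IT load, 672 kWh attributed after the PUE adjustment.
attributed = attribute_energy_kwh(1200, 400, 1.4)
```

Whatever coefficients are used, 4.2.3 requires that the methodology version applied to each reading be recorded alongside the result.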
4.3.1 The deploying organisation MUST implement automated monitoring that tracks cumulative energy consumption against the defined budget in real time, with a maximum monitoring lag of 5 minutes for cloud and on-premises deployments, and 60 seconds for safety-critical and embodied deployments.
4.3.2 The monitoring system MUST generate an alert when cumulative consumption reaches 75% of the period budget (warning threshold) and a separate alert when it reaches 90% of the period budget (critical threshold).
4.3.3 Alerts MUST be routed to a named responsible party or on-call role with documented escalation procedures, not solely to automated log sinks.
4.3.4 For embodied and edge deployments, the on-device agent runtime MUST be capable of detecting the energy reserve threshold defined under 4.1.3 autonomously and initiating the safe-stop or low-power sequence without dependency on external network connectivity.
4.4.1 The deploying organisation MUST implement at least one automated enforcement mechanism that activates when the critical threshold (90% of budget) is reached, from the following permitted classes: workload throttling (reducing inference batch size or concurrency), request queuing with deferred execution, graceful workload shedding (declining non-priority inference requests), or automated scale-in of compute resources.
4.4.2 The deploying organisation MUST implement a hard enforcement mechanism that activates at 100% of the period budget and prevents further energy consumption attributable to the agent until the budget is formally extended, a new budget period begins, or an authorised override is applied per 4.5.
4.4.3 For safety-critical agents, the hard enforcement mechanism MUST NOT interrupt safety-critical functions (as defined in the agent's safety requirements specification). Safety-critical function energy MUST be ring-fenced and drawn from the operational energy reserve defined in 4.1.3, not from the mission energy budget.
4.4.4 Enforcement actions MUST be logged with timestamp, trigger value, action taken, and identity of any human actor involved in override or extension decisions.
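The threshold ladder defined in 4.3.2, 4.4.1, and 4.4.2 reduces to a simple utilisation-to-state mapping. A minimal sketch, assuming a single deployment-level budget (state and function names are illustrative, not normative):

```python
from enum import Enum

class EnforcementState(Enum):
    NORMAL = "normal"
    WARNING = "warning"        # >= 75%: alert responsible party (4.3.2)
    THROTTLED = "throttled"    # >= 90%: automated soft enforcement (4.4.1)
    HARD_STOP = "hard_stop"    # >= 100%: block further consumption (4.4.2)

def enforcement_state(consumed_kwh: float, budget_kwh: float) -> EnforcementState:
    """Map cumulative consumption to the required enforcement posture."""
    utilisation = consumed_kwh / budget_kwh
    if utilisation >= 1.0:
        return EnforcementState.HARD_STOP
    if utilisation >= 0.90:
        return EnforcementState.THROTTLED
    if utilisation >= 0.75:
        return EnforcementState.WARNING
    return EnforcementState.NORMAL
```

In a conformant deployment each state transition would also emit the log entry required by 4.4.4 (timestamp, trigger value, action taken, human actor identity where applicable).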
4.5.1 The deploying organisation MUST define a documented override procedure for situations where operational necessity requires exceeding the established energy budget within a period.
4.5.2 Each budget override MUST be authorised by a role with documented authority to approve energy expenditure above the established limit, and this authorisation MUST be recorded in the audit log before or concurrent with the override activation, not retrospectively.
4.5.3 Override durations MUST be time-bounded; an override MUST NOT remain active beyond the end of the current budget period without a formal budget revision per 4.1.4.
4.5.4 The deploying organisation MUST maintain a register of all overrides, including date, authorising party, stated justification, energy consumed during override, and outcome review.
4.6.1 The deploying organisation MUST produce energy consumption reports for each agent deployment at a frequency matching the budget period, containing: total energy consumed (Wh/kWh/MWh), budget utilisation percentage, number and duration of threshold alerts, enforcement actions taken, and overrides granted.
4.6.2 Where the jurisdiction imposes statutory energy reporting obligations applicable to AI workloads (including but not limited to EU Energy Efficiency Directive Article 12 data centre reporting, national public-sector energy efficiency schemes, or mandatory climate-related financial disclosure frameworks), the agent's attributed energy consumption MUST be included in and reconciled with those statutory reports.
4.6.3 Energy consumption data MUST be retained in a form suitable for third-party audit for a minimum of five years from the end of the reporting period to which it relates, or the period mandated by applicable regulation if longer.
4.7.1 The deploying organisation MUST conduct an energy efficiency review of each agent deployment at least annually, evaluating whether model compression, quantisation, adaptive inference scheduling, batching optimisation, or hardware acceleration changes could materially reduce energy consumption without unacceptable degradation of functional performance or safety margins.
4.7.2 Where the efficiency review identifies optimisations with an estimated energy saving of 10% or more that do not compromise safety or functional requirements, the deploying organisation MUST produce a documented decision either to implement the optimisation within 90 days or to record the rationale for deferral with a target review date.
4.7.3 The deploying organisation SHOULD track and report energy intensity metrics (energy consumed per unit of agent work completed, such as Wh per inference, Wh per transaction, or Wh per actuation cycle) to enable trend analysis and efficiency benchmarking.
4.8.1 For agent deployments spanning multiple jurisdictions, the deploying organisation MUST identify all applicable energy reporting and efficiency obligations in each jurisdiction and ensure that the energy metering architecture produces data disaggregated by jurisdiction or data centre location sufficient to meet the most stringent reporting requirement.
4.8.2 Where energy budgets are allocated across geographically distributed infrastructure, the aggregate budget MUST reconcile to a single consolidated energy account that is traceable to the organisational sustainability commitment and reported without double-counting.
4.8.3 The deploying organisation SHOULD apply the energy budget and enforcement controls of this dimension to third-party compute providers used by the agent through contractual requirements, and SHOULD obtain contractually committed energy attribution data from those providers.
4.9.1 The deploying organisation MUST maintain a current Energy Budget Register for each in-scope agent deployment, containing: deployment identifier, budget value and period, derivation basis, responsible owner, metering method and methodology version, override history, and most recent efficiency review outcome.
4.9.2 The Energy Budget Register MUST be reviewed and signed off by the responsible owner at each budget period renewal and at each scheduled efficiency review.
4.9.3 The deploying organisation MUST ensure that the Energy Budget Register is accessible to internal audit, sustainability reporting functions, and regulatory authorities upon request, with access controls preventing unauthorised modification of historical records.
Energy governance for AI agents cannot be implemented solely as a detective or corrective control. The temporal dynamics of AI workloads make after-the-fact correction structurally insufficient: a runaway inference cluster consuming power at 500 kW draws 12 MWh per day, and a single missed daily review window means that by the time a corrective action is taken, the budget overage may already be irreversible within the reporting period. Preventive budget ceilings with automated enforcement are the only mechanism that keeps energy consumption within bounds in real time.
The physical constraints of embodied and edge deployments compound this requirement. A robotic agent that exhausts its battery cannot be corrected by governance review; it enters an unsafe state. The energy reserve requirement in 4.1.3 and the autonomous on-device enforcement in 4.3.4 are structurally motivated by the impossibility of network-dependent external enforcement in disconnected operation. Preventive controls must be embedded in the agent runtime itself, not delegated to external monitoring infrastructure.
This control operates on two axes. Structural enforcement — metering boundaries, hardware power sub-metering, append-only audit logs, hard budget ceilings coded into deployment configurations — creates physical and logical barriers that cannot be bypassed by agent behaviour or user instruction. Behavioural enforcement — override procedures requiring named authorisation, efficiency review obligations, documentation requirements — governs the human decision layer that controls when and why structural limits are adjusted.
The interaction between these axes is intentional: structural controls prevent unintentional or unmonitored overrun, while behavioural controls ensure that deliberate exceptions are accountable. An agent deployment with only structural controls but no override governance becomes operationally brittle, creating pressure to set budgets artificially high to avoid enforcement triggers. An agent deployment with only behavioural controls but no structural enforcement relies entirely on human vigilance and is vulnerable to normalised overrun. The dual-axis design is the minimum viable governance architecture for High-Risk/Critical tier deployments.
AI workloads are among the fastest-growing contributors to organisational energy consumption and are materially different from traditional IT workloads in their scaling dynamics. A single model size increase — from a 7B to a 70B parameter model — can increase inference energy per request by an order of magnitude. Agentic systems running autonomously without human confirmation of individual actions can execute thousands of inference calls in the time it takes a human reviewer to notice elevated consumption. The combination of autonomous execution, exponential scaling potential, and high per-unit energy cost makes uncapped energy use a material risk category that warrants mandatory preventive governance at the same tier as financial, safety, and rights-related controls.
Hierarchical Budget Architecture: Implement energy budgets at three levels — deployment-level (total allocation for the agent), component-level (separate sub-budgets for inference, data pipeline, and communication), and operational-period-level (hourly or per-task budgets within the deployment budget). This allows component-level enforcement and optimisation without requiring the entire deployment to be shut down when a single component overruns.
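One way to sketch the three-level structure, with the convention (assumed here, not mandated) that every metered reading is charged to both its component sub-budget and the deployment total so either level can trigger enforcement independently:

```python
from dataclasses import dataclass, field

@dataclass
class Budget:
    name: str
    limit_kwh: float
    consumed_kwh: float = 0.0

    def headroom_kwh(self) -> float:
        return self.limit_kwh - self.consumed_kwh

@dataclass
class DeploymentBudget:
    deployment_id: str
    period: str                 # budget period label, e.g. "2026-Q1"
    total: Budget               # deployment-level allocation
    components: dict[str, Budget] = field(default_factory=dict)

    def record(self, component: str, kwh: float) -> None:
        # Charge the component sub-budget and the deployment total together,
        # so component-level throttling can fire before the deployment ceiling.
        self.components[component].consumed_kwh += kwh
        self.total.consumed_kwh += kwh
```

Operational-period sub-budgets (hourly or per-task) would be additional `Budget` instances scoped within the deployment period.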
Instrumented Inference Gateway: Route all inference requests through a metering proxy that accumulates energy attribution per request using a documented energy-per-token or energy-per-compute-unit model. The proxy enforces queuing and throttling automatically when component budgets approach thresholds, before the deployment-level hard ceiling is reached.
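A minimal sketch of such a gateway, assuming a documented energy-per-token attribution coefficient; the admit/queue/reject policy and all names are illustrative:

```python
class MeteringGateway:
    """Inference proxy accumulating per-request energy attribution.

    wh_per_1k_tokens is an assumed, documented attribution coefficient
    per 4.2.3; real deployments would load it from versioned config.
    """

    def __init__(self, budget_wh: float, wh_per_1k_tokens: float):
        self.budget_wh = budget_wh
        self.wh_per_1k_tokens = wh_per_1k_tokens
        self.consumed_wh = 0.0

    def admit(self, estimated_tokens: int) -> str:
        """Decide before dispatch, using projected post-request utilisation."""
        est_wh = estimated_tokens / 1000 * self.wh_per_1k_tokens
        projected = (self.consumed_wh + est_wh) / self.budget_wh
        if projected >= 1.0:
            return "reject"    # hard ceiling (4.4.2)
        if projected >= 0.90:
            return "queue"     # soft enforcement (4.4.1)
        return "admit"

    def settle(self, actual_tokens: int) -> None:
        """Record actual attribution after the request completes."""
        self.consumed_wh += actual_tokens / 1000 * self.wh_per_1k_tokens
```

Because admission decisions use projected rather than already-settled consumption, the gateway can refuse a request that would breach the ceiling instead of discovering the breach afterwards.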
Adaptive Inference Scheduling: Implement context-aware scheduling that reduces model size, quantisation level, or frame rate based on available energy budget. For example: when budget utilisation exceeds 75%, switch from a full-precision large model to a quantised smaller model; when it exceeds 90%, switch to a cached-result retrieval mode that avoids live inference entirely for eligible request types.
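The switching logic can be as simple as a utilisation-to-tier mapping; the cut-points below mirror the example thresholds above, and the tier names are illustrative:

```python
def select_inference_tier(budget_utilisation: float) -> str:
    """Pick a serving mode from current budget utilisation (0.0-1.0)."""
    if budget_utilisation >= 0.90:
        return "cached-retrieval"      # avoid live inference where eligible
    if budget_utilisation >= 0.75:
        return "quantised-small"       # e.g. quantised smaller model
    return "full-precision-large"
```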
Energy Reserve Partitioning for Embodied Systems: Implement firmware-level power domains that ring-fence the safety-critical reserve. The mission-execution domain and the safety-reserve domain should be electrically separate where possible, with the safety-reserve domain accessible only through a privileged hardware interrupt, not through software API calls from the mission-execution agent.
Per-Jurisdiction Metering Tags: In multi-region deployments, tag every compute job at submission time with a jurisdiction label and an agent deployment identifier. Energy attribution then flows naturally from metering infrastructure to jurisdiction-disaggregated reports without requiring manual reconciliation.
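Once every metering record carries both tags, jurisdiction-disaggregated totals fall out of a straightforward aggregation. A sketch, assuming records of the form (jurisdiction, deployment identifier, kWh):

```python
from collections import defaultdict

def disaggregate_by_jurisdiction(records):
    """Sum tagged metering records into (jurisdiction, deployment) totals.

    records: iterable of (jurisdiction, deployment_id, kwh) tuples.
    """
    totals: defaultdict = defaultdict(float)
    for jurisdiction, deployment_id, kwh in records:
        totals[(jurisdiction, deployment_id)] += kwh
    return dict(totals)
```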
Energy Budget as Infrastructure-as-Code Parameter: Express energy budgets as version-controlled configuration parameters in the deployment manifests. This ensures budgets are reviewed as part of change management processes, prevents ad hoc manual adjustments that bypass governance, and allows rollback to a previous budget configuration if an update produces unexpected consumption behaviour.
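A deployment pipeline can then refuse manifests that omit governance fields. A sketch of such a validation gate; the field names are illustrative and would in practice mirror the Energy Budget Register schema in 4.9.1:

```python
REQUIRED_KEYS = {"deployment_id", "budget_kwh", "period",
                 "derivation_source", "owner"}

def validate_budget_manifest(manifest: dict) -> list:
    """Return a list of validation errors; empty means the manifest passes."""
    errors = [f"missing field: {k}"
              for k in sorted(REQUIRED_KEYS - manifest.keys())]
    if "budget_kwh" in manifest and manifest["budget_kwh"] <= 0:
        errors.append("budget_kwh must be positive")
    return errors
```

Running this check in CI means a budget change cannot merge without passing the same review gate as any other infrastructure change.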
Automated Efficiency Benchmarking Pipeline: Establish a periodic automated benchmarking run (monthly or per model update) that measures energy per inference at standard load levels and compares against the baseline recorded at deployment. Automated flagging of regressions greater than 10% triggers the efficiency review obligation in 4.7.1 without requiring manual initiation.
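The regression check itself is a one-line comparison against the recorded baseline; a sketch, with the 10% threshold from 4.7.1 as the default:

```python
def flags_efficiency_regression(baseline_wh_per_inference: float,
                                current_wh_per_inference: float,
                                threshold: float = 0.10) -> bool:
    """True when energy per inference has regressed by more than `threshold`
    relative to the deployment baseline, triggering the 4.7.1 review."""
    delta = current_wh_per_inference - baseline_wh_per_inference
    return delta / baseline_wh_per_inference > threshold
```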
Software-Estimation-Only Metering Without Hardware Validation: Relying exclusively on model-based energy estimation (such as algorithmic FLOP-to-energy conversion without hardware calibration) produces systematic errors that compound over time, particularly as hardware power states, thermal throttling, and co-tenant interference are not captured. This approach fails the requirements of 4.2.2 and 4.2.3 and has been shown in academic and industrial studies to underestimate real consumption by 20–60% under production load conditions.
Static Annual Budgets Without Review Triggers: Setting a budget once per year and treating it as fixed regardless of model changes, scale changes, or new use-case onboarding violates 4.1.4 and creates systematic misalignment between actual consumption and authorised allocation. Annual-only budgeting also makes early-warning thresholds meaningless if the budget has been operationally invalidated by capability changes.
Shared Budget Pools Across Unrelated Agents: Allocating a single energy budget to a pool of heterogeneous agent deployments, with no per-deployment attribution, makes it impossible to identify which agent is responsible for overrun, prevents proportionate enforcement, and destroys the audit trail required by 4.6.1 and 4.9.1.
Override-as-Default Operations Model: Organisations that routinely approve overrides without meaningful review create a false compliance posture. If overrides are approved in more than 20% of budget periods, the budget is not functioning as a governance instrument but as a reporting threshold. The efficiency review obligation in 4.7.1 and the override register required by 4.5.4 are specifically designed to surface this pattern for management attention.
Conflating Monetary Cost Budgets With Energy Budgets: Cloud cost budgets expressed in currency are not substitutes for energy budgets expressed in Wh/kWh. Cost budgets are affected by pricing tier, reserved capacity discounts, and spot market variation, and do not translate to energy in a stable or auditable way. Organisations that govern only by cost and attempt to back-calculate energy for reporting purposes produce figures that are insufficiently reliable for statutory energy disclosure.
Safety-Critical Function Exclusion Without Explicit Ring-Fencing: Implementing a hard budget ceiling that can interrupt safety-critical agent functions without explicit ring-fencing and reserve partitioning creates a direct safety hazard. The assumption that safety functions will remain available because the budget is "usually sufficient" is not a governance control and violates 4.4.3.
Data Centre Operators: Apply PUE-adjusted attribution to all compute energy. If PUE is not contractually disclosed by the facility operator, use the regional data centre average published by recognised industry surveys as a conservative proxy, and document the substitution.
Public Sector Agencies: Many jurisdictions apply mandatory energy management schemes (such as ISO 50001 certification requirements or national public-sector energy efficiency action plans) that require sub-metered data by workload category. Ensure AI agent energy is a defined workload category in the energy management information system from day one of deployment, not retrospectively added after an audit finding.
Financial Services: Where AI agents support trading, risk calculation, or regulatory reporting, energy consumption may be subject to proportionality assessment under operational resilience frameworks. Ensure energy budget controls are integrated with operational resilience testing so that throttling and enforcement actions are validated not to breach recovery time objectives.
Manufacturing and Logistics (Embodied Agents): Battery management systems in robotic fleets should expose state-of-charge and discharge-rate telemetry through a standardised API accessible to the agent runtime and to the fleet management platform simultaneously, to enable both autonomous on-device enforcement and centralised fleet-level energy budget aggregation.
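A sketch of the on-device checks this telemetry enables, assuming a state-of-charge fraction reported by the battery management system and the 15% reserve floor from 4.1.3 (function names are illustrative):

```python
def must_safe_stop(state_of_charge: float,
                   reserve_fraction: float = 0.15) -> bool:
    """Autonomous on-device check (4.3.4): initiate safe-stop once remaining
    charge would eat into the ring-fenced reserve (4.1.3).

    state_of_charge: 0.0-1.0 fraction reported by the BMS.
    """
    return state_of_charge <= reserve_fraction

def minutes_to_reserve(state_of_charge: float, pack_wh: float,
                       draw_w: float, reserve_fraction: float = 0.15) -> float:
    """Estimated minutes until the reserve threshold at the current draw,
    for low-energy early-warning alerting ahead of the safe-stop point."""
    usable_wh = max(0.0, (state_of_charge - reserve_fraction) * pack_wh)
    return float("inf") if draw_w <= 0 else usable_wh / draw_w * 60
```

Because both functions depend only on locally available telemetry, they satisfy the no-external-connectivity constraint of 4.3.4.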
| Maturity Level | Characteristics |
|---|---|
| Level 1 — Initial | Energy budgets defined informally or not at all; monitoring is manual; no automated enforcement; overrides undocumented |
| Level 2 — Developing | Budgets defined and documented for major deployments; software-estimated monitoring in place; manual alert review; override log maintained retrospectively |
| Level 3 — Defined | Hardware-instrumented metering; automated alerting at defined thresholds; at least one automated enforcement mechanism active; override governance documented and followed |
| Level 4 — Managed | Hierarchical budget architecture; per-component attribution; adaptive inference scheduling active; statutory reporting fully automated; annual efficiency reviews conducted and documented |
| Level 5 — Optimising | Continuous automated benchmarking; predictive budget modelling; energy intensity KPIs tracked and published; third-party provider energy data contractually secured; efficiency optimisations systematically implemented within 90-day obligation |
High-Risk/Critical tier deployments MUST operate at Level 3 minimum; Level 4 is the target state for enterprise and public-sector deployments at steady-state; Level 5 is the expected state for organisations with binding net-zero commitments or statutory energy efficiency obligations.
A current Energy Budget Register per 4.9.1 for each in-scope deployment. Must include: deployment identifier, budget value, period, derivation basis and source document reference, responsible owner name and role, metering method and methodology version, date of last review and reviewer identity, and complete override history. Retention: Active record maintained throughout deployment lifecycle; archived for 5 years post-decommission or the applicable statutory period, whichever is longer.
Documentation of the metering architecture, including: hardware measurement point specifications or cloud attribution methodology and version; PUE coefficients applied and their source; calibration or validation records for hardware meters; software configuration for monitoring lag and alert thresholds. Retention: Current version maintained; prior versions retained for 3 years to support retrospective audit of historical consumption figures.
Append-only timestamped logs of: energy readings at configured monitoring intervals; threshold alert events (time, consumption value, budget percentage); enforcement actions triggered (time, type, trigger value); human actor involvement in enforcement or override decisions. Retention: 5 years minimum, or statutory requirement if longer.
Structured records of every warning and critical threshold alert, including: timestamp, consumption value at trigger, alert routing recipient, time-to-acknowledgement, enforcement action initiated, and resolution. Retention: 5 years.
Per 4.5.4: date, authorising party name and role, stated justification, duration, energy consumed during override, outcome review notes. Retention: 5 years.
Periodic reports per 4.6.1 covering each budget period: total consumption, budget utilisation, alert summary, enforcement action summary, override summary. For statutory reporting jurisdictions, reconciliation records linking agent consumption to the relevant statutory return. Retention: 5 years or statutory period, whichever is longer.
Annual efficiency review reports per 4.7.1 and 4.7.2: assessment date, reviewer identity, optimisation options evaluated, estimated savings, decision (implement/defer), implementation timeline or deferral rationale, target review date. Retention: 5 years.
Documentation of the energy reserve partitioning design (4.1.3 and 4.4.3): reserve capacity calculation, partitioning mechanism description, test records demonstrating that hard enforcement does not interrupt safety-critical functions. Retention: Lifecycle of the hardware deployment plus 5 years.
Contracts or service level agreements with compute providers committing to energy attribution data disclosure. Correspondence or data extracts from providers covering any period for which hardware metering was unavailable and provider data was used for attribution. Retention: Duration of the contract plus 5 years.
Each test below maps to one or more MUST requirements from Section 4. Conformance is scored 0–3: 0 = Not Conformant (requirement absent or evidence demonstrates non-compliance), 1 = Partially Conformant (requirement partially implemented with identified gaps), 2 = Mostly Conformant (requirement implemented with minor documentation or procedural gaps), 3 = Fully Conformant (requirement fully implemented, evidence complete, no gaps identified).
Maps to: 4.1.1, 4.1.2, 4.1.3, 4.1.4
Procedure: Request the Energy Budget Register for the deployment under review. Verify that a quantitative energy budget is recorded in appropriate units with a defined period. Trace the budget value to its authoritative source document (energy management plan, sustainability commitment, regulatory obligation, or approved financial budget). For safety-critical or embodied deployments, verify that a separate operational energy reserve is defined at or above 15% of total available capacity. Verify that the budget record includes a review date and that the most recent review is within the required review cycle.
Pass Criteria:
Maps to: 4.2.1, 4.2.2, 4.2.3, 4.2.4, 4.2.5
Procedure: Review metering system documentation. For physical infrastructure: verify hardware measurement points are documented and that calibration records exist. Confirm that the metering boundary isolates the agent deployment from co-located non-agent workloads. For cloud/virtualised infrastructure: verify the attribution methodology document exists, is versioned, and applies a documented PUE coefficient from a disclosed source. For embodied/edge deployments: confirm component-level power logging at intervals ≤60 seconds. Extract a sample of 50 consecutive log entries and verify UTC timestamps to the nearest second in append-only storage.
Pass Criteria:
Maps to: 4.3.1, 4.3.2, 4.3.3, 4.3.4
Procedure: In a controlled test environment (or by reviewing monitoring system configuration and historical alert records), verify: (a) monitoring lag does not exceed 5 minutes for cloud/on-premises or 60 seconds for safety-critical/embodied deployments — inject a synthetic consumption spike and measure time from spike to alert generation; (b) separate alerts are configured for 75% and 90% thresholds; (c) alert routing configuration names a human recipient or on-call role with documented escalation; (d) for embodied deployments, perform a disconnected-mode test confirming the device autonomously triggers safe-stop when the reserve threshold is crossed without external network contact.
Pass Criteria:
Maps to: 4.4.1, 4.4.2, 4.4.3, 4.4.4
Procedure: Review deployment configuration and, where a test environment permits, simulate consumption reaching 90% and then 100% of the period budget. Verify: (a) at 90%, at least one permitted enforcement mechanism (throttling, queuing, shedding, or scale-in) activates automatically; (b) at 100%, further energy consumption is blocked until budget is extended or a new period begins; (c) for safety-critical deployments, perform a forced budget-exhaust test and verify that designated safety-critical functions continue to operate from the reserved energy domain; (d) review the enforcement action log for a recent budget period and confirm that all entries include timestamp, trigger value, action type, and (where applicable) human actor identity.
Pass Criteria:
Maps to: 4.5.1, 4.5.2, 4.5.3, 4.5.4
Procedure: Request the override register. Verify that a documented override procedure exists naming the authorised role. For each override entry in the most recent 12 months: verify authorisation was recorded before or concurrent with the override (compare authorisation timestamp to override activation timestamp); verify the override was time-bounded and did not extend beyond the current budget period without a budget revision; verify all required fields (date, authorising party, justification, energy consumed, outcome) are present.
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |
| EU Corporate Sustainability Reporting Directive | Article 19a (Sustainability Reporting) | Supports compliance |
Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Energy Use Budget Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-609 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.
GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-609 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.
Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Energy Use Budget Governance implements a risk treatment control within the AI management system, directly satisfying the requirement for structured risk mitigation.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |
Consequence chain: Without energy use budget governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation — it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-609, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.