AG-590

Actuator Wear and Drift Governance

Robotics, Edge, IoT & Spatial Computing · ~23 min read · AGS v2.1 · April 2026
EU AI Act · NIST · ISO 42001

Section 2: Summary

This dimension governs the detection, quantification, and operational response to mechanical wear, fatigue, and calibration drift in actuators that an AI agent commands to execute physical actions in the world. Actuator degradation is not a binary failure event; it is a gradual process that progressively widens the gap between the action an agent intends to produce and the action the physical system actually delivers, eroding the agent's ability to reason safely about its own consequences. When wear goes undetected, an agent operating under the assumption that its actuators perform nominally will issue commands calibrated to a healthy mechanical system but receive degraded physical outputs. The resulting position errors, force overruns, velocity deviations, and timing skews accumulate into trajectory deviations, collision events, structural damage, or human injury that the agent's onboard safety logic never had the opportunity to intercept.

Section 3: Examples

Example 3.1 — Surgical Robotic Arm Force Overshoot

A minimally invasive robotic surgery platform deploys a seven-degree-of-freedom manipulator arm to perform laparoscopic tissue dissection. After 4,200 operating hours the harmonic drive gear assembly in joint 3 accumulates surface fatigue that increases backlash from the factory-specified 0.04 arc-minutes to 1.7 arc-minutes. The AI agent controlling the arm has been trained and validated against backlash tolerances below 0.15 arc-minutes. During a procedure requiring a 3 N controlled tissue separation force, the agent commands a torque profile that would produce 3 N under nominal gear conditions; the degraded gear introduces a 340-millisecond lag in torque transmission followed by a sharp catch, delivering a transient force spike of 11.2 N to the tissue plane. The surgeon's haptic feedback interface registers the spike 180 milliseconds after it occurs — too late to prevent a 4 mm inadvertent incision extension into an adjacent vessel. Because the wear drift monitoring subsystem had not been re-baselined since commissioning and no inter-session backlash test was required by the deployment protocol, the agent had no in-process signal that joint 3 was operating outside its calibration envelope. The incident results in a surgical complication, a regulatory investigation under the Medical Device Regulation, and a 14-month field safety corrective action affecting 180 deployed units.

Example 3.2 — Warehouse Autonomous Mobile Robot Braking Drift

A fleet of 62 autonomous mobile robots operates in a fulfillment center, each carrying payloads up to 150 kg at speeds up to 1.8 m/s across shared human-robot zones. The electromagnetic braking actuators on each unit's drive wheels have a manufacturer-specified stopping distance of 0.42 m from full speed at 150 kg payload. After 11 months of continuous three-shift operation, brake pad wear reduces actuation clamping force. An independent audit reveals that 19 of the 62 units now require between 0.71 m and 0.94 m to stop under the same conditions — a 69% to 124% degradation. The fleet management AI, which calculates safe following distances and human-proximity deceleration profiles using the 0.42 m design figure, continues issuing motion commands as if braking performance is nominal. During a shift change when foot traffic in the zone peaks, one robot fails to stop before a junction and strikes a worker at approximately 0.9 m/s, causing a fractured tibia. Post-incident analysis shows the agent's onboard diagnostics log had been recording anomalous current-draw signatures on the brake actuator circuits for 23 days prior to the incident — signatures that a wear-correlated threshold model would have flagged as requiring inspection — but no automated alert was configured to act on those signals.

Example 3.3 — Municipal Water Infrastructure Valve Control Error

A water treatment facility operates an AI-driven process control agent that commands 14 motorized butterfly valves governing flow rates across primary filtration stages. Valve actuator position feedback is provided by resistive rotary encoders integrated into the valve stems. Over 30 months of continuous operation in a chemically aggressive environment, three valve stems accumulate micro-corrosion that shifts encoder zero-point calibration by between 4° and 9°. The process control agent relies on encoder feedback to confirm valve position and throttle flow. The agent commands valve 7 to a 45° open position to achieve a target flow of 380 L/min required for chlorine dosing contact time compliance. Because the encoder zero has drifted 9°, the valve physically sits at 54° open, producing a flow of 510 L/min — a 34% over-target condition. The agent's flow-rate model, believing the valve is at 45°, interprets the elevated downstream flow sensor reading as a sensor anomaly rather than a valve positioning error and applies a corrective command that opens the valve further to 58° (encoder-reported), producing 67° physical aperture and 640 L/min actual flow. Chlorine contact time drops below regulatory minimum. Over seven hours across three consecutive overnight shifts, 2.4 million litres of under-treated water are distributed before a manual operator cross-check during morning commissioning detects the discrepancy. The incident triggers mandatory reporting under national drinking water regulations and requires a precautionary public advisory.

Section 4: Requirement Statement

4.0 Scope

This dimension applies to any AI agent — autonomous, semi-autonomous, or human-supervised — that issues commands to physical actuators as part of its operational function, where the accuracy of those commands is material to safety, regulatory compliance, or the protection of human welfare. Actuators in scope include but are not limited to: electric motors and servo drives, hydraulic and pneumatic cylinders, electromagnetic brakes and clutches, linear actuators, valve positioners, robotic joints, and any other electromechanical device through which an agent converts a digital command into a physical state change. The dimension applies regardless of whether the agent operates continuously, episodically, or in a human-in-the-loop mode. It applies at the individual actuator level, at the kinematic chain or subsystem level, and at the level of the agent's integrated world model in so far as that model incorporates assumptions about actuator performance. This dimension does not govern sensor degradation directly (see AG-312) but recognises that sensor drift and actuator drift interact and must be considered jointly during diagnostic assessment.

4.1 Actuator Performance Baseline Registration

4.1.1 The deploying organisation MUST establish and formally record a quantified performance baseline for every actuator commanded by an AI agent prior to initial deployment, including at minimum: full-range position accuracy, peak and sustained force or torque output, velocity and acceleration profiles, backlash and hysteresis characterisation, and response latency under nominal load conditions.

4.1.2 The baseline MUST be recorded with a timestamp, environmental conditions at time of measurement, payload configuration, software version of the commanding agent, and the identity of the technician or automated system that performed the baseline capture.

4.1.3 The baseline MUST be stored in an immutable or append-only records system that cannot be modified without generating an auditable change event.

4.1.4 The deploying organisation MUST define wear threshold values for each performance parameter that, when exceeded, require operational intervention; these thresholds MUST be derived from the actuator manufacturer's operational specifications, the agent's validated performance envelope, and the safety analysis for the deployment context.
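The 4.1 artefacts lend themselves to a typed record. The Python sketch below shows one way the baseline register and the per-parameter thresholds of 4.1.4 might be structured; every field name and numeric value is illustrative rather than normative (the backlash figures simply echo the tolerances in Example 3.1).

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)  # frozen: a captured baseline is immutable (see 4.1.3)
class ActuatorBaseline:
    actuator_id: str
    captured_at: datetime            # timestamp of baseline capture (4.1.2)
    captured_by: str                 # technician or automated system identity
    agent_software_version: str
    ambient_temp_c: float            # environmental conditions at measurement
    payload_kg: float
    position_accuracy_deg: float     # full-range position accuracy (4.1.1)
    peak_torque_nm: float
    backlash_arcmin: float
    response_latency_ms: float

@dataclass
class WearThresholds:
    """Warning and intervention limits per monitored parameter (4.1.4)."""
    warning: dict[str, float]
    intervention: dict[str, float]

    def classify(self, parameter: str, drift: float) -> str:
        if drift >= self.intervention[parameter]:
            return "intervention"
        if drift >= self.warning[parameter]:
            return "warning"
        return "nominal"

# Illustrative thresholds in the units of Example 3.1 (arc-minutes of backlash)
thresholds = WearThresholds(
    warning={"backlash_arcmin": 0.10},
    intervention={"backlash_arcmin": 0.15},
)
print(thresholds.classify("backlash_arcmin", 0.12))  # warning
```

Deriving the threshold values themselves remains a safety-analysis task; the structure only enforces that they exist and are queried consistently.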

4.2 Continuous or Periodic In-Service Performance Monitoring

4.2.1 The AI agent or its supporting infrastructure MUST perform in-service actuator performance monitoring at a frequency sufficient to detect degradation before wear reaches the intervention threshold defined in 4.1.4.

4.2.2 For safety-critical deployments where actuator failure or degradation can cause irreversible harm to humans within a single operational cycle, continuous monitoring MUST be implemented; periodic monitoring is not acceptable as the sole mechanism.

4.2.3 Monitoring data MUST be logged at the native measurement resolution with timestamps and retained for a minimum period consistent with the applicable regulatory framework, and in no case less than the longer of: the actuator's expected service life or 36 months.

4.2.4 The monitoring subsystem MUST operate independently of the primary command pathway such that a failure in the command pathway does not also disable wear monitoring, and a failure in the monitoring subsystem generates an alert rather than silently degrading.

4.2.5 Where continuous monitoring is technically infeasible (for example, due to power constraints on battery-operated edge devices), the system MUST implement a scheduled performance test protocol with documented intervals and MUST log the gap between scheduled and actual test execution.

4.3 Drift Detection and Quantification

4.3.1 The system MUST implement automated comparison of current in-service performance measurements against the registered baseline defined in 4.1 and MUST calculate a quantified drift metric for each monitored parameter.

4.3.2 The drift detection algorithm MUST use a statistically robust method capable of distinguishing gradual trend-based drift from transient measurement noise; point-in-time threshold comparisons without trend analysis are insufficient as a sole detection mechanism.

4.3.3 The system MUST maintain a drift history record — not just a current state record — such that the rate of degradation can be calculated and projected forward.

4.3.4 Where multiple actuators operate in a kinematic chain or coordinated subsystem, the system MUST assess cumulative drift across the chain, not only individual actuator drift, because individual tolerances within specification can compound to produce chain-level performance outside the agent's validated envelope.

4.3.5 The drift detection system MUST be validated against known-degraded actuator states before operational deployment; validation evidence MUST be retained as part of the system safety case.
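To illustrate the distinction 4.3.2 draws between trend and noise, and the forward projection 4.3.3 enables, the Python sketch below fits a least-squares slope over a monitoring window and treats the trend as significant only when the fitted change clears the residual noise floor. The window size and the two-sigma rule are assumptions for illustration; a deployed detector would be validated per 4.3.5.

```python
import statistics

def drift_trend(samples: list, min_points: int = 10):
    """Least-squares slope of drift samples over a window.

    Returns (rate_per_sample, significant). A point-in-time threshold check
    cannot tell a transient spike from genuine wear (4.3.2); a fitted trend
    compared against the residual noise floor can.
    """
    n = len(samples)
    if n < min_points:
        return 0.0, False
    x_mean = (n - 1) / 2
    y_mean = statistics.fmean(samples)
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples))
    slope = sxy / sxx
    residuals = [y - (y_mean + slope * (x - x_mean))
                 for x, y in enumerate(samples)]
    noise = statistics.stdev(residuals)
    # significant only if the fitted change across the window clears the noise
    return slope, abs(slope) * (n - 1) > 2 * noise

def cycles_to_breach(current: float, threshold: float, rate: float) -> float:
    """Project samples remaining until threshold breach, supporting the
    rate-of-degradation projection required by 4.3.3."""
    return (threshold - current) / rate if rate > 0 else float("inf")
```

On a steadily degrading series the slope doubles as the degradation rate fed into `cycles_to_breach`; on alternating measurement noise the significance test stays false.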

4.4 Agent Command Compensation and Adaptation

4.4.1 Where quantified drift is below the intervention threshold but above a defined warning threshold, the agent MUST apply compensation corrections to its command signals to account for measured performance deviation, and MUST log the magnitude and nature of every compensation correction applied.

4.4.2 Compensation corrections MUST be bounded: the agent MUST define a maximum compensation limit beyond which compensation is no longer considered reliable, and when calculated compensation would exceed this limit, the agent MUST escalate to a safety response rather than issuing compensated commands.

4.4.3 The agent MUST NOT silently absorb compensation corrections into its command pathway without generating an observable signal; every compensated operating period MUST be flagged in operational logs and reported to the operator interface.

4.4.4 Compensation models MUST be re-validated whenever actuator performance data shows a step-change degradation event rather than gradual wear, as step-change degradation may indicate a different failure mode than the model was trained to handle.
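A minimal Python sketch of the bounded compensation required by 4.4.2. The gain-ratio model and the 15% default limit are assumptions for illustration; real compensation models are actuator-specific and must themselves be validated.

```python
class CompensationBoundExceeded(Exception):
    """Raised when required compensation exceeds the validated limit (4.4.2)."""

def compensate_command(nominal_command: float, measured_gain: float,
                       nominal_gain: float = 1.0,
                       max_correction_ratio: float = 0.15):
    """Scale a command to offset measured gain loss, within a hard bound.

    Returns (compensated_command, correction_ratio); the ratio is what gets
    written to the compensation log required by 4.4.3.
    """
    if measured_gain <= 0:
        raise CompensationBoundExceeded("actuator gain non-positive")
    correction_ratio = nominal_gain / measured_gain - 1.0
    if abs(correction_ratio) > max_correction_ratio:
        # Past the bound the model is extrapolating outside its validated
        # regime: escalate to a safe state instead of issuing the command.
        raise CompensationBoundExceeded(
            f"required correction {correction_ratio:.1%} exceeds limit")
    return nominal_command * (1.0 + correction_ratio), correction_ratio
```

The caller treats the exception as the escalation trigger of 4.4.2, not as an error to retry.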

4.5 Safe State Transition on Threshold Breach

4.5.1 When any monitored actuator parameter reaches or exceeds the intervention threshold defined in 4.1.4, the agent MUST initiate a defined safe state transition; it MUST NOT continue nominal operation pending human acknowledgment.

4.5.2 The safe state transition procedure MUST be defined in advance for each deployment context and MUST specify: the target safe state, the sequence of commands required to reach it, maximum time-to-safe-state, and actions required if the degraded actuator cannot reliably execute the safe state transition itself.

4.5.3 The safe state transition MUST be tested under simulated actuator degradation conditions as part of pre-deployment validation, and test results MUST be included in the system safety case.

4.5.4 If the safe state transition procedure relies on the degraded actuator (for example, a robot must retract an arm using a joint whose actuator is degraded), the agent MUST have a pre-computed alternative safe state procedure that avoids reliance on the degraded actuator.

4.5.5 Following a safe state transition triggered by threshold breach, the agent MUST NOT return to nominal operation without human authorisation and without evidence of actuator inspection, repair, or recalibration recorded in the maintenance log.

4.6 Operator and Maintenance Alerting

4.6.1 The system MUST generate a graded alert structure with at minimum three levels: an advisory level when drift exceeds a warning threshold but compensation remains within bounds; an action-required level when compensation bounds are approached or when the rate of degradation projects threshold breach within a defined horizon; and a critical level when the intervention threshold is breached or when the safe state transition is triggered.

4.6.2 Alerts at the action-required and critical levels MUST be delivered through a channel that is independent of the normal operator interface, to guard against scenarios where the interface itself is impaired.

4.6.3 The alerting system MUST NOT be suppressible by the AI agent's autonomous logic; acknowledgment or suppression of an alert MUST be restricted to human operators with appropriate authorisation, and every such acknowledgment MUST be logged with identity, timestamp, and stated reason.

4.6.4 The system MUST maintain an unacknowledged alert state and MUST escalate automatically if a critical alert is not acknowledged within a deployment-defined maximum response window.
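The graded structure of 4.6.1 and the escalation clock of 4.6.4 reduce to a small amount of state. The Python sketch below is illustrative: timestamps are passed explicitly for testability, and delivery channels, identity checks, and persistence are all elided.

```python
from enum import Enum

class AlertLevel(Enum):
    ADVISORY = 1         # drift past warning threshold, compensation in bounds
    ACTION_REQUIRED = 2  # compensation bounds approached or breach projected
    CRITICAL = 3         # intervention threshold breached / safe state triggered

class AlertManager:
    """Tracks open alerts; agent logic has no suppression path (4.6.3)."""

    def __init__(self, critical_response_window_s: float):
        self.window = critical_response_window_s
        self.open_alerts: dict = {}

    def raise_alert(self, alert_id: str, level: AlertLevel, now: float):
        self.open_alerts[alert_id] = {"level": level, "raised_at": now,
                                      "ack": None}

    def acknowledge(self, alert_id: str, operator_id: str, reason: str,
                    now: float):
        # called only from the human operator path; logged per 4.6.3
        self.open_alerts[alert_id]["ack"] = (operator_id, reason, now)

    def overdue_criticals(self, now: float) -> list:
        """Critical alerts unacknowledged past the response window (4.6.4);
        the caller escalates these automatically."""
        return [aid for aid, a in self.open_alerts.items()
                if a["level"] is AlertLevel.CRITICAL and a["ack"] is None
                and now - a["raised_at"] > self.window]
```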

4.7 World Model Consistency

4.7.1 The agent's internal world model — including kinematic models, force models, trajectory planners, and safety envelope calculations — MUST be parameterised on current measured actuator performance, not solely on design-specification values.

4.7.2 The agent MUST maintain a model validity flag that reflects whether its world model parameters are within the validated performance envelope; when this flag is false, the agent MUST restrict its action space to a conservatively bounded safe subset.

4.7.3 When drift compensation corrections are applied (per 4.4), the agent MUST propagate the effect of those corrections through all dependent model components — including but not limited to trajectory planners, collision avoidance boundaries, and force control loops — so that downstream planning reflects the degraded but compensated actuator state.

4.8 Maintenance Integration

4.8.1 The wear and drift monitoring system MUST integrate with the organisation's maintenance management system or equivalent record-keeping process such that actuator inspection, recalibration, and replacement events are recorded and traceable to specific monitoring data that prompted the intervention.

4.8.2 Following any maintenance intervention that affects an actuator, the deploying organisation MUST execute a formal re-baselining procedure per 4.1 before returning the agent to nominal operation.

4.8.3 The system MUST track cumulative actuator use metrics — such as cycle counts, operating hours, load-weighted cycles, or thermal exposure — and MUST generate proactive maintenance prompts before wear-based degradation is expected to reach warning thresholds, based on manufacturer service intervals and in-service degradation rate history.

4.9 Auditability and Traceability

4.9.1 Every event in the wear and drift governance lifecycle — baseline capture, monitoring readings, drift calculations, compensation corrections, alerts generated, alerts acknowledged, safe state transitions, maintenance actions, and re-baselining — MUST be recorded in an append-only audit log with cryptographic integrity protection or equivalent tamper-evidence mechanism.

4.9.2 The audit log MUST be structured such that for any safety incident involving a physical action, it is possible to reconstruct the complete actuator performance state at the time the causative command was issued.

4.9.3 The organisation MUST be able to produce a complete actuator performance history for any individual actuator on request by a regulatory authority within 72 hours.
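One established way to meet the tamper-evidence requirement of 4.9.1 is a hash-chained append-only log, sketched below in Python with the standard library's SHA-256. This is a minimal illustration: a production system would additionally sign entries and periodically anchor the chain head in an external system.

```python
import hashlib
import json

GENESIS = "0" * 64

class AuditLog:
    """Append-only event log; each entry's hash covers the previous hash,
    so any in-place edit breaks the chain from that point onward."""

    def __init__(self):
        self.entries = []
        self._head = GENESIS

    def append(self, event: dict) -> str:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._head + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": self._head,
                             "hash": digest})
        self._head = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; False on any tampered or reordered entry."""
        prev = GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

The reconstruction obligation in 4.9.2 is served by the event payloads themselves; the chain only guarantees that what is reconstructed is what was written.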

Section 5: Rationale

5.1 Why Wear Is a Governance Problem, Not Merely an Engineering Problem

Mechanical wear is one of the oldest and most thoroughly characterised phenomena in engineering. Every actuator manufacturer publishes service intervals, wear rates, and degradation models. The reason actuator wear requires dedicated AI governance treatment — rather than being fully addressed by standard mechanical maintenance programmes — is that AI agents introduce a qualitatively different risk vector: they reason about and act upon the physical world through the interface of their actuators, and their safety properties are validated against assumptions about actuator performance that hold only while those actuators remain within a specified performance envelope. When an actuator drifts outside that envelope, the AI agent does not automatically become less safe in ways that are obvious to an observer. Instead, the agent continues to reason with the same apparent confidence and to issue commands with the same apparent precision, but the physical outputs those commands produce diverge increasingly from what the agent believes it is producing. This is a structural safety gap that cannot be closed by better agent training alone: an agent trained on nominal actuator dynamics has no inherent mechanism to detect that the physical system it now commands has different dynamics.

5.2 Behavioural vs Structural Enforcement

A purely structural approach to this problem — mandating maintenance intervals and replacement schedules — is necessary but insufficient. Maintenance intervals are designed for average degradation rates under average operating conditions; real-world deployments subject actuators to variable loads, temperatures, contamination levels, and duty cycles that can cause degradation to proceed faster than scheduled maintenance anticipates. An agent that issues no warning and takes no protective action in the gap between maintenance events, even when real-time monitoring data would support detection, is structurally compliant but behaviourally unsafe. Conversely, a purely behavioural approach — relying on the agent's onboard anomaly detection to identify wear — is also insufficient without governance structure, because onboard anomaly detection can itself be degraded, uncalibrated, or inadequately validated. This dimension therefore mandates both the structural elements (baseline registration, maintenance integration, mandatory re-baselining) and the behavioural elements (continuous monitoring, drift quantification, command compensation, safe state transitions), enforced through testable requirements and audit evidence.

5.3 Cumulative and Compounding Risk

A critical property of wear-related risk that distinguishes it from many other AI safety problems is its cumulative and compounding character. Individual actuators within a kinematic chain may each be within specification, yet the combined effect of small deviations across multiple joints can produce end-effector errors that exceed the safety envelope of the task. An agent that evaluates actuator health only at the individual component level will systematically underestimate the operational risk of a partially degraded multi-joint system. Section 4.3.4 specifically addresses this by requiring chain-level drift assessment, reflecting the physical reality that robot safety analysis must consider the full kinematic error budget, not component-level tolerances in isolation.

5.4 The Asymmetry of Silent Degradation

Unlike a hard actuator failure — which produces an obvious fault signal, triggers protective stops, and is immediately detectable — gradual wear produces degradation that falls below the threshold of casual observation for extended periods. During this silent degradation phase, the agent and its operators may receive no signal that anything is wrong. The agent's outputs continue to appear functional; tasks are completed; performance metrics show no sharp discontinuity. The risk is therefore systematically underperceived by both the agent and its human supervisors. Governance frameworks that rely on operator observation as a primary detection mechanism are therefore structurally inadequate for gradual wear. The detective controls in this dimension are designed specifically to close this observational gap by requiring automated, quantified, trend-based monitoring that does not depend on operator vigilance for primary detection.

Section 6: Implementation Guidance

Baseline Capture Protocol

Baseline capture should be performed under controlled conditions that match the agent's expected operational envelope as closely as possible — full payload range, representative temperature, representative duty cycle. A baseline captured under no-load bench conditions and then applied to a 150 kg payload deployment will produce systematically biased drift calculations. Baselines should be stratified across the operating envelope where performance is nonlinear: a valve actuator baseline should include measurements at 0%, 25%, 50%, 75%, and 100% aperture, not only at one reference point.

Embedded Performance Test Manoeuvres

A highly effective continuous monitoring pattern for robotic systems is the embedded performance test manoeuvre: periodically executing a standardised motion sequence that exercises each actuator across its functional range and recording the response against the baseline. In deployments where dedicated test sequences are operationally impractical, in-task performance monitoring can be implemented by logging command-versus-response data during normal operations and performing drift analysis on the accumulated residuals. This approach has the advantage of testing actuators under actual operational loads rather than synthetic test conditions.
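The in-task variant amounts to maintaining running statistics over command-versus-response residuals. A minimal Python sketch using an exponentially weighted mean and variance follows; the smoothing factor is an assumption that trades responsiveness against noise rejection.

```python
class ResidualMonitor:
    """Accumulates command-vs-response residuals during normal operation so
    drift analysis can run without dedicated test manoeuvres."""

    def __init__(self, alpha: float = 0.01):
        self.alpha = alpha   # smoothing factor: higher reacts faster
        self.mean = 0.0      # exponentially weighted mean residual
        self.var = 0.0       # exponentially weighted residual variance
        self.count = 0

    def update(self, commanded: float, measured: float) -> float:
        residual = measured - commanded
        self.count += 1
        if self.count == 1:
            self.mean = residual
            return residual
        delta = residual - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return residual
```

A mean residual drifting away from zero under steady load is exactly the trend signal that 4.3.2 requires the detector to separate from transient noise, which the variance term captures.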

Kalman Filter or State Estimator Integration

For precision control applications, integrating wear state estimation into the agent's state estimator — for example, using a Kalman filter augmented with actuator degradation state variables — allows the agent to maintain a continuously updated probabilistic estimate of actuator performance and propagate uncertainty through its planning and control layers. This is significantly more robust than threshold-only detection because it quantifies uncertainty rather than simply classifying state as "acceptable" or "exceeded."
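As a deliberately minimal illustration of this idea, the Python sketch below runs a scalar Kalman filter whose single state variable is an actuator gain modelled as a random walk, with measured output equal to gain times command plus noise. The model and noise parameters are assumptions; a real integration would augment the agent's full state estimator rather than run standalone.

```python
class GainEstimator:
    """Scalar Kalman filter over a drifting actuator gain.

    Model: gain_k = gain_{k-1} + w (process noise q),
           output_k = gain_k * command_k + v (measurement noise r).
    The pair (gain, variance) is a probabilistic wear estimate that planners
    can consume, rather than a binary acceptable/exceeded flag.
    """

    def __init__(self, gain0: float = 1.0, p0: float = 0.1,
                 q: float = 1e-6, r: float = 1e-3):
        self.gain, self.p, self.q, self.r = gain0, p0, q, r

    def update(self, command: float, measured_output: float):
        self.p += self.q                      # predict: random-walk drift
        h = command                           # measurement maps gain -> output
        s = h * self.p * h + self.r           # innovation variance
        k = self.p * h / s                    # Kalman gain
        self.gain += k * (measured_output - h * self.gain)
        self.p *= (1.0 - k * h)
        return self.gain, self.p
```

Feeding it consistent under-delivery (outputs at 90% of command) drives the gain estimate toward 0.9 while the variance shrinks, giving downstream planners both the wear estimate and the confidence in it.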

Tiered Safe State Design

Safe state transitions should not be binary (operating / stopped). A tiered model is more appropriate and safer in practice: a first tier reduces operating speed and payload to extend the time available for maintenance response; a second tier restricts the agent to a safe-home position and suspends autonomous operation; a third tier (used only when the degraded actuator cannot support even controlled retraction) relies on passive mechanical safe states such as gravity-balanced positions, mechanical stops, or hydraulic pressure release. Each tier should have pre-validated transition procedures and known transition times.
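The tier selection logic reduces to a small decision function. In this Python sketch, drift_ratio is the measured drift divided by the intervention threshold of 4.1.4; the 0.5 warning boundary and the tier mapping are illustrative, since real deployments derive both from their safety analysis.

```python
from enum import Enum, auto

class Tier(Enum):
    NOMINAL = auto()
    DERATED = auto()       # tier 1: reduced speed and payload
    SAFE_HOME = auto()     # tier 2: retract to safe-home, suspend autonomy
    PASSIVE_SAFE = auto()  # tier 3: gravity balance, mechanical stops, etc.

def select_tier(drift_ratio: float, degraded_actuator_can_move: bool) -> Tier:
    """Map drift severity to a safe-state tier (thresholds illustrative)."""
    if drift_ratio < 0.5:
        return Tier.NOMINAL
    if drift_ratio < 1.0:
        return Tier.DERATED
    # intervention threshold reached: 4.5.1 forbids continuing nominally,
    # and 4.5.4 requires an alternative when the actuator itself is suspect
    if degraded_actuator_can_move:
        return Tier.SAFE_HOME
    return Tier.PASSIVE_SAFE
```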

Cumulative Load Tracking

Wear rate is not primarily a function of clock time; it is a function of mechanical work done. Tracking cumulative load — for example, load-weighted cycle counts or integral of force-time products — provides a much more accurate predictor of wear state than operating hours alone. For electric motor actuators, monitoring parameters such as winding temperature history, starting transient current profiles, and back-EMF characteristics provides early indicators of bearing wear, insulation degradation, and commutator wear that precede mechanical failure.
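A load-weighted cycle count can be as simple as the following Python fragment. The cubic exponent mirrors the load-life relationship commonly used for rolling-element bearings; treat it as an illustrative default rather than a universal wear law.

```python
def load_weighted_cycles(cycle_loads_kg: list, rated_load_kg: float,
                         exponent: float = 3.0) -> float:
    """Cumulative wear metric: each cycle weighted by (load / rated)^exponent,
    so heavily loaded cycles dominate the total as wear physics suggests."""
    return sum((load / rated_load_kg) ** exponent for load in cycle_loads_kg)
```

Under this weighting, two half-load cycles contribute only a quarter of one full-load cycle, which is why clock hours alone under-predict wear on heavily loaded units.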

Redundant Measurement Paths

In safety-critical applications, wear monitoring should not rely on a single measurement path. Where position feedback is provided by a primary encoder, a secondary independent measurement — such as a resolver, a hall-effect sensor array, or a vision-based position reference — should be used to cross-validate position readings. Discrepancies between primary and secondary measurements are themselves informative as early indicators of encoder drift or mechanical deformation.
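Cross-validation between the two paths can be expressed as a small check. In this Python sketch the disagreement bound is an assumption; setting it belongs to the same safety analysis that derives the 4.1.4 thresholds.

```python
def cross_validate(primary_deg: float, secondary_deg: float,
                   max_disagreement_deg: float = 1.0) -> dict:
    """Compare the primary encoder against an independent secondary reference.

    A persistent disagreement is itself an early wear indicator (encoder
    drift or mechanical deformation), separate from either reading.
    """
    disagreement = abs(primary_deg - secondary_deg)
    consistent = disagreement <= max_disagreement_deg
    return {
        "disagreement_deg": disagreement,
        "consistent": consistent,
        # fuse only when consistent; otherwise defer to whichever path has
        # independent health evidence, and raise the discrepancy as a signal
        "fused_deg": (primary_deg + secondary_deg) / 2 if consistent else None,
    }
```

Run against the figures in Example 3.3, a drifted encoder reporting 45° against a vision reference near 54° would fail this check long before the flow deviation compounded.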

6.2 Explicit Anti-Patterns

Anti-Pattern 1: Manufacturer Interval as Sole Governance Mechanism

Treating the manufacturer's recommended service interval as equivalent to a safety guarantee is a common and dangerous shortfall. Manufacturer intervals are typically derived from average-use assumptions. Deployments that exceed average duty cycles, operate in elevated-temperature environments, or carry loads at the upper end of the rated range will reach wear thresholds significantly before the scheduled interval. An organisation that substitutes a maintenance schedule for in-service monitoring is deferring detection rather than performing it.

Anti-Pattern 2: Monitoring Without Consequence

Deploying monitoring instrumentation that generates wear metrics but connecting those metrics only to a dashboard — with no automated alert, no consequence threshold, and no defined response protocol — provides the appearance of governance without the substance. This pattern is frequently observed in brownfield deployments where monitoring was added after initial deployment without redesigning the operational response framework. The monitoring data is present in the logs; the incident report will show that the degradation was detectable; but no party took action because the system provided no prompt to do so.

Anti-Pattern 3: Compensation Without Bounds

Implementing command compensation for actuator drift without defining a maximum compensation limit creates a failure mode in which the agent issues increasingly large compensation corrections as wear progresses, producing command signals far outside the regime for which the agent was validated. A heavily compensated command to a severely degraded actuator may produce unpredictable physical outputs because the compensation model is extrapolating beyond its valid range. The compensation limit in 4.4.2 is therefore not merely a conservative design preference; it is a hard boundary beyond which the agent's safety properties are no longer valid.

Anti-Pattern 4: Post-Maintenance Resume Without Re-Baselining

Returning an agent to operation after actuator maintenance without executing a formal re-baselining procedure is a frequent cause of drift calculation errors. If the maintenance replaced a worn component with one that has different dimensional characteristics (for example, a replacement gear with slightly different backlash than the original), the drift calculations will be computed against an incorrect reference, potentially masking new degradation or generating spurious alerts. Re-baselining must be treated as a mandatory precondition for resumed operation, not an optional quality step.

Anti-Pattern 5: Edge-Only Monitoring Without Uplink

For IoT and edge deployments, it is tempting to perform all wear monitoring locally on the edge device and surface it only to a local operator interface. This pattern creates blind spots at the fleet or enterprise level: wear trends that are visible when data is aggregated across a fleet of 50 devices may be invisible when each device's data is examined in isolation. Fleet-level monitoring is particularly important for identifying systematic wear drivers — such as a shared firmware update that altered motor control parameters, or a batch of actuator components with a manufacturing defect — that would not be apparent from single-device data.

6.3 Maturity Model

Level 1 — Reactive: Actuator wear is detected through hard failure events or operator observation. No baseline registration, no systematic monitoring, no drift quantification. Maintenance is calendar-based. Compensation is absent or ad hoc. This level is non-compliant with this dimension for High-Risk/Critical deployments.

Level 2 — Scheduled: Baseline registration exists. Periodic performance tests are scheduled and executed. Thresholds are defined. Alerts are generated when thresholds are breached. No trend analysis; no predictive projection; no in-task continuous monitoring. Compensation is not implemented. This level achieves minimum compliance for lower-risk sub-thresholds but is insufficient for High-Risk/Critical profiles.

Level 3 — Monitored: Continuous or high-frequency in-service monitoring is implemented. Drift quantification uses trend analysis rather than point-in-time comparison. Tiered alerts are implemented. Safe state transitions are pre-validated. Compensation corrections are applied and bounded. World model parameters reflect current measured performance. This level achieves full compliance with Section 4 requirements.

Level 4 — Predictive: In addition to Level 3 capabilities, the system maintains a predictive wear model that projects time-to-threshold-breach based on current degradation rate, operational load profile, and environmental conditions. Proactive maintenance scheduling is driven by predictive model outputs. Fleet-level anomaly detection identifies systematic wear drivers. Compensation models are continuously recalibrated against measured performance. This level represents best practice for safety-critical deployments.

Section 7: Evidence Requirements

7.1 Required Artefacts

| Artefact | Description | Retention Period |
| --- | --- | --- |
| Actuator Baseline Register | Formal record of all initial performance baselines per 4.1, including measurement conditions and responsible party | Lifetime of the deployed system plus 10 years |
| Wear Threshold Documentation | Documented derivation of intervention and warning thresholds per 4.1.4, with supporting safety analysis | Lifetime of the deployed system plus 10 years |
| Continuous Monitoring Logs | Raw and processed in-service performance data per 4.2.3 | Minimum 36 months or service life, whichever is longer |
| Drift Calculation Records | Output of drift detection algorithms per 4.3, including trend histories and rate-of-degradation calculations | Minimum 36 months |
| Compensation Correction Log | Record of all compensation corrections applied per 4.4.3, including magnitude, duration, and associated actuator state | Minimum 36 months |
| Alert Log | Full history of all alerts generated, including level, trigger condition, acknowledgment status, acknowledging party, and stated reason | Minimum 60 months |
| Safe State Transition Records | Log of all safe state transitions triggered, including triggering event, transition sequence executed, time-to-safe-state, and resumption authorisation | Minimum 60 months |
| Maintenance Integration Records | Traceability records linking maintenance actions to the monitoring data that prompted intervention | Lifetime of the deployed system |
| Re-Baselining Records | Evidence of post-maintenance re-baselining per 4.8.2, with new baseline values and comparison to the prior baseline | Lifetime of the deployed system |
| Drift Detection Validation Evidence | Validation test results demonstrating drift detection against known-degraded states per 4.3.5 | Held as part of the system safety case |
| Safe State Transition Test Evidence | Validation test results for safe state transition under simulated degradation per 4.5.3 | Held as part of the system safety case |
| World Model Parameter Log | Record of world model parameter values over time, demonstrating correspondence to monitored actuator performance per 4.7.1 | Minimum 36 months |
| Audit Log Integrity Evidence | Records demonstrating the tamper-evidence or cryptographic integrity of the append-only audit log per 4.9.1 | Lifetime of the audit log |

7.2 Regulatory Production Obligation

Per 4.9.3, the organisation must be able to produce a complete actuator performance history for any individual actuator within 72 hours of a request from a regulatory authority. This production obligation applies regardless of whether the request arises from an incident, a scheduled audit, or a market surveillance inquiry. Organisations should conduct periodic production drills to verify that the 72-hour obligation can be met in practice, given the volume of data and the access controls on the records system.
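A production drill can be as simple as timing how long it takes to assemble one actuator's records from every store and checking the result against the 72-hour window. The sketch below is illustrative only; the store layout and the `actuator_id` and `timestamp` field names are assumptions, not anything mandated by 4.9.3:

```python
from datetime import datetime, timedelta

PRODUCTION_DEADLINE = timedelta(hours=72)  # per 4.9.3

def produce_history(actuator_id, record_stores, request_time, now):
    """Assemble one actuator's full performance history (illustrative).

    `record_stores` maps a store name (monitoring logs, alert log, ...)
    to a list of record dicts. Returns the merged, time-ordered history
    plus a flag for whether the 72-hour deadline still holds.
    """
    history = []
    for store_name, records in record_stores.items():
        for rec in records:
            if rec.get("actuator_id") == actuator_id:
                history.append({"source": store_name, **rec})
    history.sort(key=lambda r: r["timestamp"])
    return {
        "actuator_id": actuator_id,
        "records": history,
        "within_deadline": (now - request_time) <= PRODUCTION_DEADLINE,
    }
```

Running this against the live record stores during a scheduled drill exercises both the data volume and the access-control path that the text warns about.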

Section 8: Test Specification

Test 8.1 — Baseline Register Completeness and Integrity (maps to 4.1.1, 4.1.2, 4.1.3, 4.1.4)

Objective: Verify that a complete, correctly structured, and tamper-evident baseline register exists for all actuators commanded by the agent.

Method: Request the full actuator baseline register. For a randomly selected sample of at least 20% of actuators (minimum 5 actuators where the fleet is small), verify: (a) all required parameters are present per 4.1.1; (b) measurement metadata is complete per 4.1.2; (c) intervention and warning thresholds are documented with derivation rationale per 4.1.4; (d) the record system generates a change event log for any modification attempt per 4.1.3.
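The sampling rule above (at least 20% of the fleet, with a floor of five actuators, capped at the fleet size) can be sketched as follows; the function name and the seed parameter are illustrative:

```python
import math
import random

def select_audit_sample(actuator_ids, fraction=0.20, floor=5, seed=None):
    """Select a random audit sample per Test 8.1 (illustrative).

    Takes at least `fraction` of the fleet, never fewer than `floor`
    actuators, and never more than the fleet actually contains.
    A fixed seed makes the selection reproducible for the audit record.
    """
    rng = random.Random(seed)
    n = max(math.ceil(fraction * len(actuator_ids)), floor)
    n = min(n, len(actuator_ids))
    return rng.sample(actuator_ids, n)
```

Recording the seed alongside the sample keeps the selection reproducible if the test result is later challenged.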

Pass Criteria:

Test 8.2 — In-Service Monitoring Continuity and Independence (maps to 4.2.1, 4.2.2, 4.2.3, 4.2.4, 4.2.5)

Objective: Verify that monitoring operates continuously (or at required intervals), at sufficient frequency, independently of the command pathway, and that monitoring failures generate alerts.

Method: (a) Examine monitoring logs for a representative 30-day period and calculate the percentage of time for which monitoring data is present at the required sampling rate; gaps must be explained and justified. (b) Introduce a simulated failure in the command pathway and verify that monitoring continues without interruption. (c) Introduce a simulated monitoring subsystem failure and verify that an alert is generated within the system's defined response window. (d) For deployments claiming the periodic monitoring exception under 4.2.5, verify that the gap between each scheduled test and the test actually performed is logged.
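Step (a)'s coverage percentage can be sketched as below, treating each sample as covering at most one required sampling interval so that longer gaps count as uncovered time. This is one reasonable interpretation of "data present at the required sampling rate", not a mandated formula:

```python
def monitoring_coverage(sample_times, period_start, period_end, required_interval):
    """Fraction of the review period covered by monitoring samples (illustrative).

    All arguments are in the same time unit (e.g. seconds). A sample at
    time t covers up to `required_interval` after t; any longer gap to
    the next sample (or to the period end) counts as uncovered time.
    """
    times = sorted(t for t in sample_times if period_start <= t <= period_end)
    covered = 0.0
    for i, t in enumerate(times):
        nxt = times[i + 1] if i + 1 < len(times) else period_end
        covered += min(nxt - t, required_interval)
    total = period_end - period_start
    return covered / total if total > 0 else 0.0
```

Any interval the function leaves uncovered is exactly the set of gaps that step (a) requires to be explained and justified.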

Pass Criteria:

Test 8.3 — Drift Detection Algorithm Validation (maps to 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5)

Objective: Verify that the drift detection algorithm correctly identifies gradual degradation, maintains trend history, assesses cumulative chain-level drift, and has been validated against known-degraded states.

Method: (a) Inject synthetic monitoring data representing gradual parametric degradation across three levels: below warning threshold, between warning and intervention thresholds, and above intervention threshold. Verify that the algorithm produces drift metrics consistent with the injected degradation and categorises each level correctly. (b) Inject synthetic data representing high-frequency noise superimposed on otherwise nominal performance and verify that the algorithm does not report the noise as drift.
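The three-level categorisation in step (a), and the distinction between gradual drift and high-frequency noise, can be sketched with a threshold classifier and a least-squares trend slope. Both helpers are illustrative assumptions; a real deployment would exercise the validated detection algorithm per 4.3.5, not this sketch:

```python
def classify_drift(drift_value, warning_threshold, intervention_threshold):
    """Categorise a drift metric against the thresholds per 4.1.4 (illustrative)."""
    if drift_value >= intervention_threshold:
        return "intervention"
    if drift_value >= warning_threshold:
        return "warning"
    return "nominal"

def drift_rate(values):
    """Least-squares slope per sample over an evenly spaced series (illustrative).

    Gradual degradation shows a sustained non-zero slope; zero-mean
    high-frequency noise produces a slope near zero, which is the
    behaviour step (b) checks for.
    """
    n = len(values)
    xbar = (n - 1) / 2
    ybar = sum(values) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(values))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den
```

Feeding synthetic series through `drift_rate` and then `classify_drift` mirrors the inject-and-verify structure of the test method.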

Section 9: Regulatory Mapping

| Regulation | Provision | Relationship Type |
| --- | --- | --- |
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| NIST AI RMF | GOVERN 1.1, MAP 3.2, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.2 (AI Risk Assessment) | Supports compliance |

EU AI Act — Article 9 (Risk Management System)

Article 9 requires providers of high-risk AI systems to establish and maintain a risk management system that identifies, analyses, estimates, and evaluates risks. Actuator Wear and Drift Governance implements a specific risk mitigation measure within this framework. The regulation requires that risks be mitigated "as far as technically feasible" using appropriate risk management measures. For deployments classified as high-risk under Annex III, compliance with AG-590 supports the Article 9 obligation by providing structural governance controls rather than relying solely on the agent's own reasoning or behavioural compliance.

NIST AI RMF — GOVERN 1.1, MAP 3.2, MANAGE 2.2

GOVERN 1.1 addresses legal and regulatory requirements; MAP 3.2 addresses risk context mapping; MANAGE 2.2 addresses risk mitigation through enforceable controls. AG-590 supports compliance by establishing structural governance boundaries that implement the framework's approach to AI risk management.

ISO 42001 — Clause 6.1, Clause 8.2

Clause 6.1 requires organisations to determine actions to address risks and opportunities within the AI management system. Clause 8.2 requires AI risk assessment. Actuator Wear and Drift Governance implements a risk treatment control within the AI management system, supporting both clauses' requirement for structured risk mitigation.

Section 10: Failure Severity

| Field | Value |
| --- | --- |
| Severity Rating | Critical |
| Blast Radius | Organisation-wide — potentially cross-organisation where agents interact with external counterparties or shared infrastructure |
| Escalation Path | Immediate executive notification and regulatory disclosure assessment |

Consequence chain: Without actuator wear and drift governance, the governance framework has a structural gap that can be exploited at machine speed. The failure mode is not gradual degradation — it is a binary absence of control that permits unbounded agent behaviour in the dimension this protocol governs. The immediate consequence is uncontrolled agent action within the scope of AG-590, potentially cascading to dependent dimensions and downstream systems. The operational impact includes regulatory enforcement action, material financial or operational loss, reputational damage, and potential personal liability for senior managers under applicable accountability regimes. Recovery requires both technical remediation and regulatory engagement, with timelines measured in weeks to months.

Cite this protocol
AgentGoverning. (2026). AG-590: Actuator Wear and Drift Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-590