AG-072

Change Impact Assessment Governance

Lifecycle, Release & Change Governance · AGS v2.1 · April 2026
EU AI Act · GDPR · SOX · FCA · NIST · HIPAA · ISO 42001

2. Summary

Change Impact Assessment Governance requires that every change to an AI agent's configuration, model, prompt, tooling, permissions, data sources, or operating environment is subjected to a structured impact assessment before the change is approved or deployed. The assessment must evaluate the change's effects on governance compliance, safety properties, performance characteristics, dependent systems, and risk posture. This dimension governs the governance of change itself — it is meta-governance that ensures changes to governed systems are themselves governed. Without it, an organisation may maintain rigorous governance of its deployed agents while making unassessed changes that silently degrade that governance posture.

3. Example

Scenario A — Cascading Governance Failure from Tooling Change: An organisation deploys an enterprise workflow agent that processes employee expense claims. The agent uses a tool integration with the finance system to validate expense categories and submit approved claims. The IT team upgrades the finance system API from v2 to v3, which changes the response format for expense category validation. The agent's tool integration is updated to handle the new response format. No impact assessment is performed because the change is classified as a "technical integration update." In the new API version, the field indicating whether an expense category requires manager approval has been renamed from requires_approval to approval_required. The tool integration update does not map this field correctly, defaulting to false for all categories. The agent processes 1,200 expense claims over 3 weeks without routing any for manager approval, including £340,000 in claims for which the policy requires managerial sign-off.

What went wrong: The API change was treated as a technical update without impact assessment. No one evaluated whether the change affected the agent's governance behaviour — specifically, its escalation and approval routing. The field mapping error was a functional defect, but the governance failure was the absence of an impact assessment that would have identified the approval routing as a critical governance dependency. Consequence: £340,000 in expenses approved without required managerial oversight, internal audit finding, remediation costs of £85,000 for retrospective review of all affected claims.
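The mapping defect in Scenario A can be sketched as follows. This is an illustrative reconstruction: the function names and response shapes are assumptions, not taken from any real finance API, but the failure mode — a renamed field silently falling back to a default — is exactly the one described above.

```python
# Hypothetical sketch of the Scenario A mapping defect. Field and function
# names are illustrative assumptions.

def map_category_v2(resp: dict) -> dict:
    """Mapping written against the v2 API, which used 'requires_approval'."""
    return {
        "category": resp.get("category"),
        # v3 renamed this field to 'approval_required', so this lookup
        # silently falls back to the default of False for every category.
        "requires_approval": resp.get("requires_approval", False),
    }

def map_category_v3(resp: dict) -> dict:
    """Corrected mapping that reads the renamed v3 field."""
    return {
        "category": resp.get("category"),
        "requires_approval": resp.get("approval_required", False),
    }

v3_response = {"category": "travel", "approval_required": True}

assert map_category_v2(v3_response)["requires_approval"] is False  # the defect
assert map_category_v3(v3_response)["requires_approval"] is True   # the fix
```

An impact assessment that flagged approval routing as a governance dependency would have forced a test of exactly this mapping before deployment.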

Scenario B — Model Fine-Tuning Degrades Safety Properties: A customer-facing AI agent is fine-tuned on recent customer interaction data to improve response relevance. The fine-tuning dataset includes 50,000 interactions from the past 6 months. No impact assessment evaluates whether the fine-tuning data contains patterns that could degrade the agent's safety behaviour. The fine-tuning process subtly shifts the agent's response distribution: it becomes more willing to provide specific medical dosage information (because customer interactions frequently asked about dosages and the fine-tuning optimised for helpfulness). The original safety validation tested the agent's refusal to provide specific dosage guidance — but the fine-tuning was not assessed for its impact on safety properties. The degradation is detected 6 weeks later when a user complaint triggers a safety review.

What went wrong: Fine-tuning was treated as a performance improvement activity, not a change requiring impact assessment. No assessment evaluated the fine-tuning data for patterns that could affect safety properties. The fine-tuning optimised for user satisfaction without constraint from safety requirements. The 6-week detection delay meant 23,000 users received interactions from an agent with degraded safety guardrails. Consequence: Regulatory inquiry, 23,000 potentially affected users requiring notification, estimated remediation cost of £420,000 including legal review and user communication.

Scenario C — Permission Scope Expansion Without Governance Assessment: An AI research agent is granted read access to a new internal database to improve its ability to answer employee questions about company policies. The database contains HR records including salary information, disciplinary records, and performance reviews. The access grant is processed through the standard IT access request workflow, which does not include governance impact assessment for AI agents. The agent begins incorporating HR data into its responses. When an employee asks about promotion criteria, the agent references another employee's performance review data in its response. The data breach is reported 4 days later.

What went wrong: The permission scope change was processed through a human-oriented access workflow that does not assess the governance implications of granting an AI agent access to sensitive data. No impact assessment evaluated how the agent would use the new data, what governance constraints were needed, or whether the agent's existing guardrails were sufficient for the sensitivity level of the new data source. Consequence: Data breach affecting employee privacy, mandatory breach notification, ICO investigation, estimated costs of £250,000 including legal fees, notification, and remediation.

4. Requirement Statement

Scope: This dimension applies to all changes that could affect the behaviour, capabilities, governance posture, or risk profile of any AI agent operating in a production environment. Changes in scope include but are not limited to: model version changes, prompt or system instruction modifications, tool or plugin additions or modifications, permission scope changes (new data sources, new API access, new system integrations), fine-tuning or retraining on new data, configuration parameter changes, infrastructure changes that affect the agent's operating environment (API versions, dependency upgrades, runtime environment changes), and changes to governance controls themselves (mandate limits, escalation triggers, monitoring thresholds). The scope extends to changes initiated by the organisation, by third-party vendors (e.g., model provider updates), and by automated systems (e.g., automated retraining pipelines). The test for whether a change is in scope is: could this change affect what the agent does, how it does it, or how it is governed? If the answer is yes or uncertain, the change is in scope.
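The scope test stated above can be expressed as a small decision rule: only a confident "no" on all three questions keeps a change out of scope. This is a minimal sketch; the enum and function names are illustrative assumptions.

```python
# Minimal sketch of the scope test: a change is in scope if it could affect
# what the agent does, how it does it, or how it is governed — or if the
# answer to any of those questions is uncertain.

from enum import Enum

class Answer(Enum):
    YES = "yes"
    NO = "no"
    UNCERTAIN = "uncertain"

def change_in_scope(affects_what: Answer,
                    affects_how: Answer,
                    affects_governance: Answer) -> bool:
    """Only a confident 'no' on all three questions exempts a change."""
    answers = (affects_what, affects_how, affects_governance)
    return any(a is not Answer.NO for a in answers)

# A dependency upgrade whose behavioural effect is unknown is in scope.
assert change_in_scope(Answer.NO, Answer.UNCERTAIN, Answer.NO) is True
# A change with confident 'no' answers on all three questions is out of scope.
assert change_in_scope(Answer.NO, Answer.NO, Answer.NO) is False
```

Treating "uncertain" as in scope is the load-bearing design choice: it places the burden of proof on exemption rather than on assessment.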

4.1. A conforming system MUST require a documented impact assessment before any change to an AI agent's model, prompt, tooling, permissions, data sources, configuration, or operating environment is approved for production deployment.

4.2. A conforming system MUST ensure the impact assessment evaluates effects on governance compliance, safety properties, performance characteristics, dependent systems, data handling, and overall risk posture.

4.3. A conforming system MUST classify changes by risk level using defined criteria, and apply assessment rigour proportionate to the risk classification.

4.4. A conforming system MUST identify all governance dimensions affected by the change, and verify that the change does not degrade compliance with any affected dimension.

4.5. A conforming system MUST require that impact assessments are reviewed and approved by a party with authority over the affected governance dimensions, not solely by the party initiating the change.

4.6. A conforming system MUST record the impact assessment, the risk classification, the approval decision, and the rationale as retained governance artefacts.

4.7. A conforming system SHOULD implement a dependency map that identifies, for each agent, the components, data sources, integrations, and governance controls that constitute its operating configuration, enabling systematic identification of change impacts.

4.8. A conforming system SHOULD integrate impact assessment into existing change management workflows so that AI agent changes follow the same governance discipline as other technology changes.

4.9. A conforming system SHOULD assess cumulative impact when multiple changes are made in a short period, rather than assessing each change in isolation.

4.10. A conforming system MAY implement automated impact analysis that pre-populates the assessment based on the type of change and the agent's dependency map.
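Requirements 4.3, 4.7, and 4.10 can work together: a per-agent dependency map lets automated analysis pre-populate a draft assessment with a risk tier and the affected governance dimensions. The sketch below is illustrative only — the agent names, tier criteria, and dimension mappings are assumptions, and a real scheme would use the organisation's defined classification criteria.

```python
# Illustrative sketch of requirements 4.3, 4.7 and 4.10: a dependency map
# (4.7) drives automated pre-population of an assessment draft (4.10) with a
# risk tier assigned by change type (4.3). All names are assumptions.

AGENT_DEPENDENCY_MAP = {
    "expense-agent": {
        "finance-api": {"dimensions": ["approval-routing", "audit-trail"]},
        "model:base-v4": {"dimensions": ["safety", "performance"]},
        "hr-database": {"dimensions": ["privacy", "data-handling"]},
    }
}

# Coarse tiering by change type; unknown types default to the highest tier.
RISK_TIER_BY_CHANGE_TYPE = {
    "config-parameter": "low",
    "prompt-update": "medium",
    "tool-integration": "high",
    "model-version": "high",
    "permission-scope": "high",
}

def prepopulate_assessment(agent: str, component: str, change_type: str) -> dict:
    """Produce a draft for human review — never an automatic approval."""
    dims = AGENT_DEPENDENCY_MAP[agent][component]["dimensions"]
    return {
        "agent": agent,
        "component": component,
        "risk_tier": RISK_TIER_BY_CHANGE_TYPE.get(change_type, "high"),
        "affected_dimensions": dims,
        "status": "draft",  # independent approval still required per 4.5
    }

draft = prepopulate_assessment("expense-agent", "finance-api", "tool-integration")
```

Note that the output is deliberately a draft: automation narrows the assessor's search space but does not replace the independent review that 4.5 requires.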

5. Rationale

Change Impact Assessment Governance addresses the gap between static governance and dynamic operations. An organisation may invest significant effort in validating and governing an AI agent at deployment, establishing compliance with all applicable governance dimensions. However, the governance posture established at deployment is valid only for the specific configuration that was validated. Any subsequent change creates uncertainty about whether the governance posture still holds.

AI agent systems are characterised by complex interdependencies between components. A model change affects output distributions, which affects safety properties, which affects regulatory compliance. A tool integration change affects the agent's capabilities, which affects its risk profile, which affects its mandate requirements. A permission scope change affects what data the agent can access, which affects privacy compliance, which affects regulatory posture. These interdependencies mean that a change to one component can have non-obvious effects on governance dimensions that appear unrelated to the change.

The meta-governance nature of this dimension is critical. AG-072 does not itself prevent harm — it ensures that the mechanisms that prevent harm (all other governance dimensions) remain effective when changes occur. It is the governance of governance change. Without it, an organisation's governance posture degrades silently with each unassessed change, creating an increasing gap between the documented governance state and the actual governance state.

The risk classification requirement addresses a practical challenge: not all changes carry equal risk, and requiring the same assessment rigour for every change would create an unsustainable burden. A minor configuration parameter adjustment carries different risk than a model version change or a permission scope expansion. Risk classification allows the organisation to apply proportionate rigour — a lightweight checklist for low-risk changes, a full multi-dimensional assessment for high-risk changes — while ensuring no change escapes assessment entirely.

The cumulative impact requirement addresses a subtle failure mode: individually low-risk changes that accumulate into a significant governance posture shift. If an organisation makes 20 low-risk changes over 3 months, each individually assessed and approved, the cumulative effect may be a material change in agent behaviour that no single assessment identified. Periodic cumulative review catches this drift.
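The cumulative review described above can be sketched as a rolling-window count of approved changes per governance dimension, with a threshold that triggers full reassessment. The window length and threshold below are illustrative assumptions, not prescribed values.

```python
# Hedged sketch of cumulative impact detection: individually low-risk changes
# are counted per governance dimension over a rolling window, and a threshold
# breach flags the dimension for full reassessment. Threshold and window
# length are illustrative assumptions.

from datetime import date, timedelta

WINDOW = timedelta(days=90)
CUMULATIVE_THRESHOLD = 5  # changes per dimension before reassessment

def dimensions_needing_reassessment(changes: list[dict], today: date) -> set[str]:
    counts: dict[str, int] = {}
    for change in changes:
        if today - change["date"] > WINDOW:
            continue  # outside the rolling window
        for dim in change["affected_dimensions"]:
            counts[dim] = counts.get(dim, 0) + 1
    return {dim for dim, n in counts.items() if n >= CUMULATIVE_THRESHOLD}

# Six low-risk changes touching 'safety' within two months trip the threshold.
changes = [
    {"date": date(2026, 3, 1) + timedelta(days=i * 10),
     "affected_dimensions": ["safety"]}
    for i in range(6)
]
assert dimensions_needing_reassessment(changes, date(2026, 5, 1)) == {"safety"}
```

Each change in the example would have passed its individual assessment; only the windowed count reveals the drift.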

6. Implementation Guidance

The core implementation principle is that every change to an AI agent's operating configuration passes through a structured assessment that evaluates governance impact before the change proceeds. The assessment is not a bureaucratic overhead — it is the mechanism that maintains governance integrity across the agent's operational lifecycle.
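One way to make that principle concrete is a gate that refuses to let a change proceed unless the assessment is approved by a party with authority over every affected dimension, distinct from the initiator (4.5), and the decision is retained as an artefact (4.6). The sketch below is illustrative: the authority table and field names are assumptions.

```python
# Illustrative gate for requirements 4.5 and 4.6. The authority mapping and
# record fields are assumptions for illustration.

DIMENSION_AUTHORITY = {
    "safety": {"safety-lead"},
    "privacy": {"dpo"},
    "approval-routing": {"finance-controls-owner"},
}

class ChangeRejected(Exception):
    pass

def gate_change(assessment: dict) -> dict:
    approver = assessment.get("approved_by")
    if approver is None:
        raise ChangeRejected("no documented approval")
    if approver == assessment["initiated_by"]:
        raise ChangeRejected("initiator cannot self-approve (4.5)")
    for dim in assessment["affected_dimensions"]:
        if approver not in DIMENSION_AUTHORITY.get(dim, set()):
            raise ChangeRejected(f"approver lacks authority over '{dim}'")
    # 4.6: the approved assessment becomes a retained governance artefact.
    return {**assessment, "status": "approved", "artefact_retained": True}

approved = gate_change({
    "approved_by": "safety-lead",
    "initiated_by": "dev-1",
    "affected_dimensions": ["safety"],
})
```

Encoding the gate in the deployment pipeline, rather than in a policy document, is what turns the assessment from overhead into an enforced control.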

Recommended patterns:

Anti-patterns to avoid:

Industry Considerations

Financial Services. Change impact assessment for AI agents should integrate with the firm's existing model risk management change control processes. The PRA and FCA expect that material model changes follow a documented change management process with independent review. For AI agents performing regulated activities, the impact assessment should explicitly evaluate effects on conduct risk, market risk, and operational risk. Changes that affect the agent's compliance with MiFID II best execution requirements, for example, should be flagged for compliance review.

Healthcare. Changes to clinical AI agents require impact assessment against clinical safety standards. The UK's DCB0160 (clinical risk management for health IT) requires that changes to clinical systems undergo a clinical safety assessment. Impact assessment should explicitly evaluate effects on patient safety, clinical accuracy, and regulatory compliance. Changes that affect the agent's interaction with patient data should be assessed for HIPAA (US) or UK GDPR compliance impacts.

Critical Infrastructure. Changes to AI agents operating in critical infrastructure must assess impact on safety integrity levels and functional safety requirements. A change that could affect the agent's behaviour under fault conditions requires reassessment against IEC 61508 or domain-specific safety standards. Impact assessment should include assessment of effects on cyber-physical safety boundaries per IEC 62443.

Maturity Model

Basic Implementation — The organisation requires impact assessments for major changes to AI agents (model changes, permission scope changes). Assessments are documented in a standard format. A governance reviewer approves changes based on the assessment. Minor changes (configuration parameters, prompt adjustments) may not consistently trigger assessment. There is no dependency mapping — assessors rely on their knowledge of the system. Cumulative impact is not systematically tracked.

Intermediate Implementation — All changes to AI agent operating configuration trigger risk-tiered impact assessment. A dependency map exists for each agent and is referenced during assessment. Assessment is integrated into the change management workflow — the workflow routes changes through governance review based on risk classification. Cumulative impact is reviewed quarterly. Assessment results are stored in a structured format with full traceability. Automated classification pre-populates the risk tier based on the change type and affected components.

Advanced Implementation — All intermediate capabilities plus: automated impact analysis pre-populates assessments based on the dependency map and historical change data. Machine-assisted assessment identifies non-obvious governance impacts by analysing relationships between components and governance dimensions. Cumulative impact tracking provides real-time visibility into governance posture drift. The organisation can demonstrate to regulators a complete change history with impact assessments for every change to every agent, from initial deployment to current state. Predictive analysis identifies changes that are likely to degrade governance posture before they are implemented.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Assessment Completeness

Test 8.2: Governance Dimension Coverage

Test 8.3: Approval Authority Verification

Test 8.4: Dependency Map Accuracy

Test 8.5: Risk Classification Consistency

Test 8.6: Cumulative Impact Detection

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 9 (Risk Management System) | Direct requirement
EU AI Act | Article 12 (Record-Keeping) | Supports compliance
SOX | Section 404 (Internal Controls Over Financial Reporting) | Supports compliance
FCA SYSC | 6.1.1R (Systems and Controls) | Supports compliance
NIST AI RMF | GOVERN 1.2, MAP 3.5, MANAGE 2.4 | Supports compliance
ISO 42001 | Clause 8.2 (AI Risk Assessment), Clause 10.2 (Continual Improvement) | Supports compliance
DORA | Article 9 (ICT Risk Management Framework), Article 8 (Identification) | Direct requirement

EU AI Act — Article 9 (Risk Management System)

Article 9 requires that the risk management system is a continuous iterative process that is updated throughout the AI system's lifecycle. Change impact assessment is the mechanism through which the risk management system is updated when the AI system changes. Without it, the risk management system established at deployment becomes progressively less accurate as unassessed changes accumulate. The regulation's requirement for "regular systematic updating" maps directly to the continuous impact assessment requirement. Each impact assessment is an update to the risk management understanding for the affected system.

EU AI Act — Article 12 (Record-Keeping)

Article 12 requires automatic recording of events relevant to the AI system's lifecycle. Change impact assessments are lifecycle events that must be recorded. The requirement for documented assessment records, risk classifications, and approval decisions directly supports Article 12 compliance by creating a traceable record of every change decision throughout the system's operational life.

SOX — Section 404 (Internal Controls Over Financial Reporting)

Change management is a core internal control for SOX compliance. For AI agents involved in financial operations, SOX auditors will examine whether changes to those agents follow a documented change management process with appropriate review and approval. Impact assessment provides the evidence that changes were evaluated for their effect on financial controls before implementation. The risk-tiered approach aligns with SOX's risk-based approach to control assessment.

FCA SYSC — 6.1.1R (Systems and Controls)

The FCA expects that firms maintain adequate change management controls for technology systems. For AI agents, this includes not only code changes but also model changes, prompt changes, and configuration changes. The FCA's supervisory approach examines whether the firm's change management process is proportionate to the risk of the system and whether changes are assessed for their impact on compliance, conduct, and operational resilience.

NIST AI RMF — GOVERN 1.2, MAP 3.5, MANAGE 2.4

GOVERN 1.2 addresses the processes for managing AI risk throughout the lifecycle; MAP 3.5 addresses the tracking of changes and their impacts; MANAGE 2.4 addresses the management of changes to AI systems. AG-072 supports compliance by implementing a structured process for assessing, documenting, and governing changes throughout the AI agent lifecycle.

ISO 42001 — Clause 8.2, Clause 10.2

Clause 8.2 requires ongoing AI risk assessment, and Clause 10.2 requires continual improvement of the AI management system. Change impact assessment operationalises both requirements by ensuring that changes trigger risk reassessment and that the governance system evolves with the systems it governs.

DORA — Article 9, Article 8

DORA requires financial entities to manage ICT risks including those arising from changes to ICT systems (Article 9) and to identify risks from ICT changes (Article 8). For AI agents operating in financial services, change impact assessment is a direct implementation of these requirements, ensuring that every change to an AI agent's operating configuration is assessed for ICT risk before deployment.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Organisation-wide — governance posture degradation affects all systems and users served by unassessed changes, potentially cascading across dependent agents

Consequence chain: Without change impact assessment governance, changes to AI agents are deployed without understanding their effects on governance compliance, safety properties, or risk posture. The immediate technical failure is a governance posture that diverges from the documented and validated state — the organisation believes its agents comply with governance requirements based on the last validation, but unassessed changes have silently degraded that compliance. The failure is insidious because it is invisible: the agent continues to operate, governance dashboards may show green (because they check the documented configuration, not the actual configuration), and no alarm fires until a user, auditor, or regulator encounters the consequences.

The operational impact depends on which governance dimensions are degraded: safety property degradation leads to user harm, compliance degradation leads to regulatory exposure, security degradation leads to data breaches, and performance degradation leads to service quality failures. The cumulative effect of multiple unassessed changes can be catastrophic — an organisation that deploys 100 unassessed changes over a year may discover during an audit that its agents' governance posture bears no resemblance to its documentation.

The remediation cost for retrospective impact assessment and re-validation of an agent with extensive unassessed change history is typically 5-10 times the cost of prospective assessment. Business consequences include regulatory enforcement action for inadequate change management, inability to demonstrate governance compliance at any historical point, and potential personal liability for governance failures that could have been prevented by assessment.

Cross-references: AG-007 (Governance Configuration Control), AG-008 (Governance Continuity Under Failure), AG-022 (Behavioural Drift Detection), AG-048 (AI Model Provenance and Integrity), AG-071 (Pre-Deployment Validation and Acceptance Governance), AG-073 (Staged Rollout and Canary Governance), AG-074 (Performance Drift and Revalidation Threshold Governance), AG-010 (Time-Bounded Authority Enforcement).

Cite this protocol
AgentGoverning. (2026). AG-072: Change Impact Assessment Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-072