AG-108

Operator Role Segregation Governance

Human Factors & Sociotechnical Control · ~20 min read · AGS v2.1 · April 2026
Frameworks: EU AI Act · SOX · FCA · NIST · ISO 42001

2. Summary

Operator Role Segregation Governance requires that organisations enforce formal separation between the human roles involved in AI agent governance: those who configure agent behaviour, those who approve agent actions, those who monitor agent performance, and those who can override agent operation. The dimension mandates that no single individual may both configure an agent's parameters and approve its outputs, or both monitor its behaviour and suppress its alerts. This ensures that the checks and balances inherent in multi-party oversight are not collapsed into a single point of human failure or compromise.

3. Example

Scenario A — Configuration and Approval Collapse in Lending: A consumer lending firm deploys an AI agent for credit decisioning. The team lead responsible for configuring the agent's risk parameters (interest rates, approval thresholds, exposure limits) is also the primary approver for the agent's escalated decisions. Over 6 months, the team lead incrementally loosens the approval thresholds — reducing the minimum credit score from 650 to 580, increasing the maximum loan amount from £25,000 to £50,000, and widening the acceptable debt-to-income ratio from 0.35 to 0.50. Because the same individual configures these parameters and approves the resulting decisions, no independent check challenges whether the loosened thresholds are appropriate. Default rates increase from 2.1% to 7.4% over the following 9 months, producing £12.8 million in unexpected losses. Post-incident investigation finds that the parameter changes were individually reasonable in response to business pressure but collectively created a risk profile far outside the firm's risk appetite — and no one with independent authority reviewed the cumulative effect.

What went wrong: The configuration role and the approval role were held by the same person. No segregation required an independent party to review parameter changes before they took effect in the agent's decisioning. The team lead had both the ability to change the agent's behaviour and the authority to approve the resulting outputs — eliminating the independent check that segregation provides. Consequence: £12.8 million in credit losses, FCA investigation into risk management controls, personal accountability under SM&CR for the team lead and the senior manager responsible for the lending function.

Scenario B — Monitoring and Suppression Conflict in AML: A bank's AI agent generates AML alerts that are monitored by a compliance team. The team's deputy manager, facing pressure to reduce alert volumes (which have been criticised by senior management as excessive), has access to both the alert monitoring dashboard and the alert configuration system. Over 4 months, the deputy manager raises alert thresholds for 3 transaction typologies, reducing alert volume by 40%. Because the same individual monitors the alerts and can modify the alert generation parameters, the reduction in volume appears as improved efficiency rather than suppressed detection. An FCA thematic review discovers that the 3 suppressed typologies include a pattern associated with a known money laundering network. The bank had been generating alerts on this network's transactions until the thresholds were raised.

What went wrong: The monitoring role and the configuration role were not segregated. The individual responsible for evaluating whether alerts were appropriate also had the ability to change the system that generated those alerts. This created a conflict of interest: the deputy manager could "solve" the excessive alert problem by suppressing alerts rather than investigating their root cause. No independent configuration control prevented unilateral threshold changes. Consequence: FCA enforcement action for AML systems and controls failures, £34 million regulatory penalty, requirement for independent third-party review of all alert configuration changes, and personal enforcement action against the deputy manager.

Scenario C — Override Authority Without Accountability in Healthcare: A hospital deploys an AI agent for patient medication management. All 15 nurses on the ward have override authority for the agent — any nurse can override any medication recommendation for any patient. During a busy night shift, Nurse A overrides a dosage recommendation for Patient X, increasing the dose based on clinical judgement. Nurse B, unaware of Nurse A's override, reviews the patient 2 hours later and sees the higher dose in the system. Nurse B assumes it was the agent's recommendation (the override is logged but not visually distinguished in the patient record) and does not question it. The increased dose causes an adverse drug reaction. Post-incident investigation finds that: override authority was not tied to specific patients or clinical roles; the override audit trail existed but was not surfaced in the clinical workflow; and no second-party confirmation was required for dosage overrides above a threshold.

What went wrong: Override authority was not segregated by clinical role, patient assignment, or risk level. Any operator could override any recommendation without independent confirmation. The audit trail existed but was not integrated into the workflow in a way that made overrides visible to subsequent reviewers. No dual-authorisation requirement existed for high-risk overrides (e.g., dosage changes exceeding 25% of the agent's recommendation). Consequence: Adverse drug reaction, patient harm, Datix incident report, CQC scrutiny, and mandated redesign of the override authority model.

4. Requirement Statement

Scope: This dimension applies to all AI agent deployments where multiple human roles are involved in the agent's governance lifecycle: configuration, approval, monitoring, override, and audit. The scope includes any organisation where the same individual could, without segregation controls, both define the agent's behaviour and evaluate the agent's outputs, or both monitor the agent's performance and suppress its alerts, or both deploy the agent and audit its compliance. The scope extends to temporary role assignments, deputy arrangements, and emergency access provisions — segregation must be maintained even when staffing is constrained. The minimum number of segregated roles depends on the deployment's risk profile: safety-critical and financial-value deployments require at least 4 segregated roles; standard deployments require at least 3.

4.1. A conforming system MUST enforce role segregation between at least the following functions: (a) agent configuration (setting parameters, thresholds, and behaviour), (b) agent output approval (reviewing and approving escalated decisions), and (c) agent monitoring and override (observing performance and intervening when necessary).

4.2. A conforming system MUST implement technical controls — not solely policy-based controls — that prevent a single individual from holding incompatible roles, enforced through access control systems, role-based permissions, and identity management.
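As a concrete illustration of 4.1 and 4.2 (and the incompatibility matrices of 4.9), here is a minimal sketch of a role-assignment guard; the role names and matrix contents are assumptions for illustration, not defined by AG-108:

```python
# Illustrative sketch only: enforce a role incompatibility matrix at the
# point of assignment, so incompatible roles can never be co-held.
# Role names and matrix contents are assumptions, not AG-108 text.

INCOMPATIBLE_ROLES = {
    frozenset({"agent_configurator", "agent_approver"}),   # Scenario A collapse
    frozenset({"agent_monitor", "agent_configurator"}),    # Scenario B conflict
    frozenset({"agent_deployer", "agent_auditor"}),
}

def can_assign(current_roles, new_role):
    """Return False if new_role is incompatible with any role already held."""
    return not any(
        frozenset({held, new_role}) in INCOMPATIBLE_ROLES
        for held in current_roles
    )
```

A denied assignment would then be routed through the documented exception workflow of 4.4 rather than silently granted.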

4.3. A conforming system MUST require dual authorisation for high-risk configuration changes — changes to agent parameters that affect risk exposure, detection thresholds, or safety limits — where the change is proposed by one role holder and approved by a different role holder.
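The propose/approve split in 4.3 can be sketched as follows; the `ConfigChange` fields and error handling are illustrative assumptions, not a prescribed implementation:

```python
# Sketch of dual authorisation for high-risk configuration changes (4.3):
# the proposer and the approver must be different authenticated individuals.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConfigChange:
    parameter: str
    new_value: float
    proposed_by: str
    approved_by: Optional[str] = None

def approve(change: ConfigChange, approver: str) -> ConfigChange:
    """Record approval, rejecting self-approval by the proposer."""
    if approver == change.proposed_by:
        raise PermissionError("proposer and approver must be different individuals")
    change.approved_by = approver
    return change
```

In Scenario A, a check of this kind would have forced an independent reviewer to see each threshold change before it took effect.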

4.4. A conforming system MUST maintain an auditable record of role assignments, role changes, and any exceptions to segregation (including the business justification and the compensating controls applied).

4.5. A conforming system MUST prevent override actions from being invisible to subsequent reviewers — overrides must be visually and programmatically distinguishable from agent-originated outputs in all downstream workflows.
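One way to satisfy 4.5 is to stamp every downstream record with its origin, as in this minimal sketch; the field names are illustrative assumptions:

```python
# Sketch for 4.5: each record carries its origin so subsequent reviewers
# can distinguish an operator override from an agent recommendation
# (the gap that caused the harm in Scenario C). Field names are illustrative.

def record_dose(patient_id: str, dose_mg: float, *, origin: str, actor: str) -> dict:
    """Write a dose record that is programmatically and visually tagged."""
    if origin not in {"agent", "override"}:
        raise ValueError("origin must be 'agent' or 'override'")
    return {
        "patient_id": patient_id,
        "dose_mg": dose_mg,
        "origin": origin,                                   # machine-readable
        "actor": actor,
        "display_flag": "OVERRIDE" if origin == "override" else "",  # visual
    }
```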

4.6. A conforming system MUST log all role-based access events (configuration changes, approvals, overrides, monitoring actions) with the authenticated identity of the individual, their role at the time of the action, and the action taken.
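A minimal sketch of the audit record required by 4.6, serialised as JSON; the exact field names are assumptions:

```python
# Sketch for 4.6: log every role-based access event with the authenticated
# identity, the role held at the time, and the action taken.
import json
import datetime

def log_event(identity: str, role: str, action: str, detail: str) -> str:
    """Return one append-only audit record as a JSON line."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "identity": identity,
        "role_at_time_of_action": role,
        "action": action,
        "detail": detail,
    }
    return json.dumps(event)
```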

4.7. A conforming system SHOULD implement time-limited role elevation for emergency scenarios (e.g., a single operator needing to both monitor and override during an after-hours incident), with automatic expiration, mandatory post-incident review, and compensating controls such as retrospective dual-person review.
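Time-limited elevation per 4.7 might look like the following sketch; the four-hour TTL is an assumed value for illustration, not a figure mandated by AG-108:

```python
# Sketch for 4.7: emergency role elevation that expires automatically.
# Post-incident review and compensating controls are handled elsewhere.
import time

ELEVATION_TTL_SECONDS = 4 * 3600  # assumed emergency window; tune per policy

_elevations = {}  # identity -> expiry timestamp (epoch seconds)

def grant_emergency_elevation(identity, now=None):
    """Grant a combined monitor+override role with automatic expiry."""
    now = time.time() if now is None else now
    _elevations[identity] = now + ELEVATION_TTL_SECONDS

def is_elevated(identity, now=None):
    """True only while the elevation window is still open."""
    now = time.time() if now is None else now
    expiry = _elevations.get(identity)
    return expiry is not None and now < expiry
```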

4.8. A conforming system SHOULD implement segregation monitoring that detects and alerts on potential segregation violations — for example, the same individual making a configuration change and approving outputs affected by that change within 24 hours.
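The 24-hour detection rule suggested in 4.8 can be sketched as a batch check over an audit log; the event shape is an illustrative assumption:

```python
# Sketch for 4.8: flag the same identity configuring parameters and then
# approving outputs within the detection window. Event dicts with keys
# 'identity', 'action', 'ts' are an assumed log format.
from datetime import datetime, timedelta

def find_violations(events, window_hours=24):
    """Return (identity, approval_time) pairs that breach segregation."""
    window = timedelta(hours=window_hours)
    configs = [e for e in events if e["action"] == "configure"]
    approvals = [e for e in events if e["action"] == "approve"]
    violations = []
    for c in configs:
        for a in approvals:
            same_person = a["identity"] == c["identity"]
            in_window = timedelta(0) <= a["ts"] - c["ts"] <= window
            if same_person and in_window:
                violations.append((c["identity"], a["ts"].isoformat()))
    return violations
```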

4.9. A conforming system SHOULD define role incompatibility matrices that explicitly document which roles cannot be held by the same individual and which role combinations require compensating controls.

4.10. A conforming system MAY implement rotation requirements for critical roles (e.g., annual rotation of the configuration authority) to prevent entrenchment and provide fresh perspective on parameter appropriateness.

5. Rationale

Segregation of duties is one of the oldest and most fundamental internal control principles, predating information technology by centuries. Double-entry bookkeeping (13th century) and the separation of record-keeping from custody of assets (19th century banking) established the principle that no single individual should control all aspects of a valuable transaction. COSO's Internal Control Framework codified this as a core control activity. SOX Section 404 requires it for financial controls. Every major regulatory framework in financial services, healthcare, and critical infrastructure assumes segregation of duties as a baseline control.

In the AI agent context, role segregation is both more important and more difficult than in traditional systems. More important because AI agents amplify individual decisions: a single parameter change to a credit decisioning agent affects thousands of subsequent decisions, not just one. A team lead who loosens approval thresholds is not making one bad loan — they are reconfiguring the system to make thousands of incrementally riskier loans, each individually defensible but collectively outside risk appetite. More difficult because AI agent governance introduces new roles with no historical precedent: Who configures the agent? Who approves its outputs? Who monitors its behaviour? Who can override it? In traditional systems, these functions map to established organisational roles with established segregation. In AI agent deployments, they are often collapsed into "the AI team" or "the team lead" without the segregation that centuries of internal control experience demands.

The regulatory environment is converging on role segregation for AI systems. The EU AI Act Article 14 implies segregation by requiring that human oversight be "effective" — oversight exercised by the same individual who configured the system is not independent and therefore not effective. FCA SM&CR requires clear allocation of responsibilities with no gaps or overlaps that would undermine accountability. SOX Section 404 requires segregation of duties in any process that affects financial reporting — and AI agents increasingly affect financial reporting through automated decisioning, revenue recognition, and risk assessment.

AG-108 connects to the entire Human Factors & Sociotechnical Control landscape. AG-104 (Trust Calibration) must be measured per role because different roles have different trust relationships with the agent. AG-105 (Alarm Fatigue) must account for role-specific workload because different roles receive different types of alerts. AG-106 (Skill Atrophy) must be monitored per role because different roles maintain different competencies. AG-107 (Override Usability) must respect role-based authority because not every operator should have override authority for every agent function. Without role segregation, the other four dimensions in this landscape cannot be properly implemented.

6. Implementation Guidance

Role segregation for AI agent governance requires mapping the agent's governance lifecycle to distinct roles, implementing technical controls that enforce segregation, and monitoring for violations. The implementation must be structural — embedded in access control systems — not solely procedural.

Recommended patterns:

- Map the agent governance lifecycle to named, mutually exclusive roles (e.g., Agent Configurator, Agent Approver, Agent Monitor, Agent Auditor) and enforce the mapping through the IAM system rather than policy alone.
- Require dual authorisation for any configuration change affecting risk exposure, detection thresholds, or safety limits: one role holder proposes, a different role holder approves.
- Make overrides visually and programmatically distinguishable from agent-originated outputs in every downstream workflow.
- Monitor continuously for segregation violations, including the same individual configuring parameters and approving outputs affected by those parameters within a short window.

Anti-patterns to avoid:

- Collapsing configuration and approval into a single "team lead" role, as in Scenario A.
- Giving the monitoring function write access to the alert-generation parameters it is meant to evaluate, as in Scenario B.
- Granting universal override authority untied to role, assignment, or risk level, as in Scenario C.
- Relying on policy-only segregation, shared credentials, or informal delegation with no technical enforcement.

Industry Considerations

Financial Services. SOX Section 404 and FCA SYSC 10.1 (Conflicts of Interest) require segregation of duties in financial processes. For AI agents involved in financial decisioning, trading, or regulatory reporting, segregation requirements should align with existing three-lines-of-defence models: first line (business operations — Agent Approver), second line (risk and compliance — Agent Monitor and Configuration Approver), third line (internal audit — Agent Auditor). The SM&CR regime requires that segregation is supported by clear statements of responsibility and that no responsibility gap exists between roles.

Healthcare. Clinical governance frameworks require segregation between prescription (recommending treatment), administration (delivering treatment), and review (evaluating outcomes). AI agents in clinical settings must map their governance roles to these established clinical governance structures. The Caldicott Guardian role should have oversight of any AI agent configuration that affects patient data handling. Medication management AI must enforce dual-authorisation for dosage overrides exceeding 25% of the standard recommendation.
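The 25% dual-authorisation threshold above can be expressed as a simple trigger check; the function and parameter names are illustrative assumptions:

```python
# Sketch: dosage overrides deviating more than 25% from the agent's
# recommendation require a second authoriser. The 25% figure is from the
# text; names and signature are illustrative.

def needs_second_authoriser(recommended_mg: float, override_mg: float,
                            threshold: float = 0.25) -> bool:
    """True when the relative deviation from the recommendation exceeds the threshold."""
    if recommended_mg <= 0:
        raise ValueError("recommended dose must be positive")
    return abs(override_mg - recommended_mg) / recommended_mg > threshold
```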

Critical Infrastructure. IEC 61511 requires independence between the personnel responsible for designing safety instrumented functions and the personnel responsible for validating them. AI agents in safety-critical process control must maintain this independence: the individuals who configure the agent's safety parameters must be different from those who validate the agent's safe operation. Nuclear regulatory frameworks (ONR, NRC) impose the strictest segregation requirements, with mandatory organisational independence between design, operation, and safety assessment functions.

Maturity Model

Basic Implementation — The organisation has defined at least 3 segregated roles for AI agent governance (configurator, approver, monitor/override). Technical controls (RBAC) prevent the same individual from holding incompatible roles. Dual authorisation is required for high-risk configuration changes. Role assignments are logged. This level meets the minimum mandatory requirements but may not detect all segregation violations (e.g., informal delegation, shared credentials) and may not maintain segregation during staffing constraints.

Intermediate Implementation — All basic capabilities plus: role incompatibility matrix is formally documented and enforced in the IAM system. Segregation monitoring detects potential violations in real time (e.g., same-day configuration and approval by the same individual). Override visibility is enforced in all downstream workflows. Emergency access has formal procedures with time limits and compensating controls. Staffing model is validated against segregation requirements, including leave and turnover scenarios. At least 4 segregated roles are defined for high-risk deployments.

Advanced Implementation — All intermediate capabilities plus: segregation effectiveness is independently assessed annually. Rotation requirements prevent role entrenchment. Segregation monitoring covers indirect violations (e.g., an individual influencing a configuration change through a subordinate who holds the configuration role). The organisation can demonstrate to regulators that no individual can unilaterally configure and approve AI agent behaviour, with technical evidence from the IAM system and monitoring logs — verified through penetration testing of the segregation controls.

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Role Incompatibility Enforcement

Test 8.2: Dual-Authorisation Workflow

Test 8.3: Configuration Change Without Authorisation

Test 8.4: Override Visibility in Downstream Systems

Test 8.5: Segregation Monitoring Detection

Test 8.6: Emergency Access Controls

Test 8.7: Role Assignment Audit Trail

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU AI Act | Article 14 (Human Oversight) | Supports compliance
EU AI Act | Article 9 (Risk Management System) | Supports compliance
SOX | Section 404 (Internal Controls Over Financial Reporting) | Direct requirement
FCA SYSC | 10.1 (Conflicts of Interest) | Direct requirement
FCA SM&CR | Statements of Responsibilities | Supports compliance
NIST AI RMF | GOVERN 1.1, GOVERN 1.3 | Supports compliance
ISO 42001 | Clause 5.3 (Organisational Roles, Responsibilities) | Supports compliance
DORA | Article 5 (ICT Risk Management Framework Governance) | Supports compliance

SOX — Section 404 (Internal Controls Over Financial Reporting)

SOX Section 404 requires management to assess and report on the effectiveness of internal controls over financial reporting, with external auditor attestation. Segregation of duties is a foundational control that SOX auditors evaluate. For AI agents involved in financial processes (credit decisioning, revenue recognition, risk assessment, regulatory reporting), the segregation of configuration, approval, and monitoring roles maps directly to SOX control requirements. A SOX auditor will test whether the same individual can configure the agent's financial parameters and approve the agent's financial outputs — if they can, this is a segregation deficiency that may constitute a material weakness. AG-108 provides the governance framework that prevents this finding.

FCA SYSC 10.1 — Conflicts of Interest

SYSC 10.1.1R requires firms to take all reasonable steps to identify conflicts of interest and to manage, mitigate, or prevent them. An individual who can both configure an AI agent's alert thresholds and is responsible for managing the alert workload those thresholds generate has a structural conflict of interest: they can "solve" the workload problem by suppressing alerts rather than addressing root causes. AG-108 addresses this by segregating the configuration and monitoring functions so that no individual has both the incentive and the ability to suppress governance controls.

FCA SM&CR — Statements of Responsibilities

The SM&CR regime requires clear allocation of responsibilities among senior managers, with no gaps or overlaps. For AI agent governance, this means each governance function (configuration, approval, monitoring, override, audit) must be clearly allocated to a named individual or role in the Statement of Responsibilities. AG-108 provides the role framework that maps to SM&CR allocation requirements, ensuring that regulators can trace any AI governance decision to a responsible individual.

DORA — Article 5 (ICT Risk Management Framework Governance)

DORA Article 5 requires that the management body of a financial entity defines, approves, and oversees the ICT risk management framework. For AI agents classified as ICT systems, this includes governance of the agent's configuration and operation. Segregation of duties within the governance framework demonstrates that the management body's oversight is operationalised through independent checks, not concentrated authority.

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Organisation-wide — affects the integrity of all governance controls that depend on independent human checks and the credibility of the entire AI governance framework

Consequence chain: Without role segregation, all other human oversight controls are compromised by concentrated authority. A single individual who can configure the agent, approve its outputs, and suppress its alerts can — intentionally or unintentionally — circumvent the entire governance framework. The immediate failure mode is undetected configuration drift: an individual loosens parameters to meet business targets, approves the resulting outputs, and monitors away any alerts — each action individually reasonable, collectively catastrophic. In the lending example, this produced £12.8 million in credit losses. In the AML example, it produced a £34 million regulatory penalty and personal enforcement action against the deputy manager. The failure is particularly dangerous because it is self-concealing: the individual who created the problem controls the mechanisms that would detect it. The regulatory consequence is severe: SOX auditors treat segregation deficiencies as potential material weaknesses; FCA enforcement treats collapsed segregation as a systems and controls failure; SM&CR treats it as a personal accountability failure for the senior manager who permitted the gap. The systemic consequence is that without role segregation, the organisation cannot demonstrate that any human oversight is independent — undermining the credibility of the entire AI governance framework with regulators, auditors, and counterparties.

Cross-references: AG-019 (Human Escalation & Override Triggers) defines escalation and override requirements; AG-108 ensures that escalation and override authority are held by appropriate, segregated roles. AG-038 (Human Control Responsiveness) requires timely human response; AG-108 ensures that response authority is clearly allocated. AG-104 (Trust Calibration Governance) measures trust per operator; AG-108 ensures trust calibration is measured per segregated role with role-appropriate thresholds. AG-105 (Oversight Workload and Alarm Fatigue Governance) manages cognitive load; AG-108 ensures workload is distributed across properly segregated roles rather than concentrated. AG-106 (Human Skill Atrophy Monitoring Governance) tracks competencies; AG-108 ensures competency requirements are defined per role. AG-107 (Override Usability and Actionability Governance) ensures override mechanisms work; AG-108 determines which operators have authority to use them. AG-022 (Behavioural Drift Detection) detects agent drift; AG-108 ensures that the individuals who detect drift are different from those who could have caused it through configuration changes. AG-039 (Active Deception and Concealment Detection) detects agent deception; AG-108 ensures the monitoring function is independent from the configuration function that a deceptive pattern might exploit.

Cite this protocol
AgentGoverning. (2026). AG-108: Operator Role Segregation Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-108