Three-Lines-of-Defence Mapping Governance requires that every material AI agent control is explicitly mapped to the three lines of defence: first line (business operations — the teams that build and run agent systems), second line (risk and compliance oversight — the functions that set standards, challenge the first line, and monitor adherence), and third line (internal audit — the independent assurance function that evaluates the effectiveness of both first- and second-line activities). The mapping must be documented, current, and testable. Without it, organisations cannot demonstrate to regulators that their agent governance has independent oversight, cannot identify gaps where controls lack second- or third-line coverage, and cannot ensure that the three lines are genuinely independent rather than nominally separate.
Scenario A — Missing Second Line for a Customer-Facing Agent: A retail bank deploys an AI customer-service agent that handles complaint resolution and can offer compensation up to £500 per interaction. The engineering team builds, operates, and monitors the agent. The risk function reviews the agent's design document at deployment but has no ongoing second-line role — no periodic challenge, no independent monitoring of compensation patterns, no review of the agent's escalation decisions. Over 8 months, the agent develops a pattern of offering maximum compensation to customers who express frustration in specific language patterns, regardless of complaint merit. Total compensation paid: £1.4 million against a budget of £600,000. The pattern is discovered during the annual audit — 8 months after it began.
What went wrong: The second line of defence was absent after initial deployment. No risk or compliance function was mapped to the ongoing oversight of the agent's compensation decisions. The first line was both operating and self-monitoring. Consequence: £800,000 in excess compensation, FCA supervisory attention for inadequate systems and controls, mandatory remediation programme, reputational damage from media coverage of "AI giving away money."
Scenario B — Third Line Not Independent: An insurance company deploys an AI claims-processing agent. The internal audit function is asked to provide third-line assurance over the agent's controls. However, a member of the internal audit team previously helped design the agent's fraud-detection rules and was only reassigned to audit 2 months before the review. The auditor reviews their own prior work, finds it satisfactory, and provides a clean opinion. An external regulator subsequently identifies that the fraud-detection rules contain a systematic gap that the auditor's familiarity with their own design prevented them from seeing. The gap had allowed £3.2 million in fraudulent claims over 14 months.
What went wrong: The third-line function was not independent — the auditor had a prior involvement conflict. The three-lines mapping existed on paper, but the individuals assigned to the third line did not have the independence that the framework requires. Consequence: £3.2 million in fraudulent claim payouts, regulatory enforcement action for inadequate governance, mandatory external audit engagement, personal accountability investigation for the audit function head.
Scenario C — Complete Three-Lines Coverage Works: A payment processor deploys an AI transaction-monitoring agent. The first line (operations) runs the agent and monitors its real-time performance. The second line (risk and compliance) independently reviews a sample of the agent's decisions monthly, challenges the first line's performance metrics, and maintains independent thresholds for what constitutes acceptable false-positive and false-negative rates. The third line (internal audit) conducts a semi-annual review of both the first line's operations and the second line's oversight effectiveness. During the second-line monthly review, the compliance function identifies that the agent's false-negative rate for transactions above £50,000 has increased from 0.3% to 1.8% over 3 months — a trend the first line's metrics did not flag because they measured overall false-negative rate (which remained stable at 0.4%). The second line triggers remediation before the trend becomes a regulatory finding.
What went right: Each line was mapped, staffed, and operating independently. The second line applied different analytical perspectives than the first line, catching a segment-specific trend that aggregate metrics missed. The third line's prior review had verified that the second line's review methodology was sound.
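The metric mismatch in Scenario C can be made concrete with a small sketch. The transaction counts below are invented for illustration; only the rates echo the scenario (a high-value segment at 1.8% false negatives hidden inside a stable ~0.4% aggregate).

```python
# Illustrative only: why the first line's aggregate false-negative rate
# stayed flat while the high-value segment degraded. Counts are
# hypothetical; the rates mirror Scenario C.

def fn_rate(missed, total):
    """False-negative rate: missed suspicious transactions / total suspicious."""
    return missed / total

# High-value transactions are a small slice of total volume.
high_value = {"missed": 18, "total": 1_000}   # 1.8% segment FN rate
other = {"missed": 382, "total": 99_000}      # ~0.39% everywhere else

overall = fn_rate(high_value["missed"] + other["missed"],
                  high_value["total"] + other["total"])

print(f"high-value FN rate: {fn_rate(high_value['missed'], high_value['total']):.1%}")
print(f"overall FN rate:    {overall:.1%}")  # still 0.4% -- the segment trend is invisible
```

Because the degraded segment is only 1% of volume, the aggregate rate barely moves, which is exactly why the second line's independent, segment-level thresholds caught what the first line's dashboard could not.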
Scope: This dimension applies to all AI agent deployments operating in regulated environments or processing material transactions (financial value exceeding £10,000 per day in aggregate, personal data of more than 1,000 data subjects, or safety-critical operations). The scope covers every governance control that constrains agent behaviour — mandate enforcement, monitoring rules, escalation thresholds, approval workflows, access controls, and audit programmes. For each such control, the organisation must identify which function operates as the first line, which function provides second-line challenge and oversight, and which function provides third-line independent assurance. The scope extends to automated controls: a control that runs without human intervention still requires second-line challenge of its design and configuration, and third-line assurance of its effectiveness. Organisations without a formal internal audit function must either establish one, engage an external provider, or document why third-line coverage is not applicable — with regulatory approval where required.
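The scope test above can be expressed as a simple predicate. This is a sketch: the `Deployment` record and its field names are assumptions for illustration, while the thresholds (£10,000 per day in aggregate, 1,000 data subjects) come from the scope statement.

```python
# Hypothetical scope predicate for this dimension. Field names are
# illustrative assumptions; thresholds are taken from the Scope text.
from dataclasses import dataclass

@dataclass
class Deployment:
    regulated_environment: bool
    daily_transaction_value_gbp: float  # aggregate financial value per day
    data_subjects: int
    safety_critical: bool

def in_scope(d: Deployment) -> bool:
    """True if the deployment requires a three-lines-of-defence mapping."""
    return (d.regulated_environment
            or d.daily_transaction_value_gbp > 10_000
            or d.data_subjects > 1_000
            or d.safety_critical)

# Example: an unregulated agent moving £12,000/day is still in scope.
print(in_scope(Deployment(False, 12_000, 200, False)))  # True
```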
4.1. A conforming system MUST maintain a documented mapping of every material agent control to named functions or individuals for each of the three lines of defence.
4.2. A conforming system MUST ensure that the second-line function for each control is organisationally independent from the first-line function — the second line SHALL NOT report to the same management chain as the first line below the executive level.
4.3. A conforming system MUST ensure that the third-line function for each control is independent from both the first and second lines — the third line SHALL NOT have had operational or oversight involvement in the control within the preceding 24 months.
4.4. A conforming system MUST require the second-line function to perform documented challenge activities for each mapped control at least quarterly, producing written evidence of the challenge and its outcome.
4.5. A conforming system MUST require the third-line function to perform an independent assurance review of each mapped control at least annually, with findings reported to the board or a board-delegated governance committee.
4.6. A conforming system MUST identify and remediate any material control that lacks coverage in any of the three lines within 30 business days of identifying the gap.
4.7. A conforming system SHOULD maintain the three-lines mapping in a machine-readable format that can be queried to identify coverage gaps, stale reviews, and independence conflicts.
4.8. A conforming system SHOULD integrate the three-lines mapping with the role register from AG-259, ensuring that role assignments align with line-of-defence designations.
4.9. A conforming system MAY implement automated coverage analysis that flags controls approaching the quarterly challenge deadline or the annual assurance review deadline.
The three-lines-of-defence model is the dominant governance structure in financial services and increasingly in other regulated sectors. Its power comes from layered independence: the people who run a process are not the same people who challenge it, and neither group is the same as the people who provide independent assurance. When implemented properly, each line catches different types of failure: the first line catches operational errors through proximity; the second line catches systemic issues through independent challenge; the third line catches governance failures through independent assurance.
For AI agent governance, the three-lines model is essential because AI systems exhibit failure modes that each line is uniquely positioned to detect. The first line, with its operational proximity, detects performance degradation, latency issues, and obvious errors. The second line, with its independent perspective and different metrics, detects systematic biases, drift patterns, and threshold adequacy issues that the first line's familiarity with the system may cause it to normalise. The third line, with its governance-level perspective, detects structural gaps: controls that exist on paper but not in practice, independence that is nominal but not real, and oversight activities that are performed but not effective.
Without a formal three-lines mapping, organisations typically default to a two-lines model at best — operations and audit — with no structured second-line challenge function. This is the most common gap in AI agent governance. The second line's role — setting independent standards, performing ongoing challenge, and maintaining independent monitoring — is where most governance failures are caught before they become incidents. Removing or weakening the second line creates a gap where operational drift goes unchallenged until the annual audit, by which time consequences may have accumulated for months.
AG-260 makes the three-lines mapping explicit, testable, and enforceable. It transforms the three lines from a conceptual framework into a concrete governance control with evidence requirements and test specifications.
The three-lines-of-defence model requires careful implementation to ensure that the lines are genuinely independent rather than nominally separate. The most common failure is "three lines on paper, one line in practice" — where the mapping exists in a governance document but the day-to-day reality is that a single team performs all three functions.
Recommended patterns:
- Assign second-line oversight before deployment, not as a post-incident retrofit: name the risk or compliance function responsible for ongoing challenge in the deployment approval itself, so the gap in Scenario A cannot open.
- Give the second line its own metrics and thresholds (as in Scenario C's segment-level false-negative monitoring) rather than having it re-read the first line's dashboards.
- Record prior involvement for third-line staff so that the 24-month independence rule in 4.3 can be verified rather than merely asserted.
- Treat the mapping as a living artefact: update it on personnel and organisational changes, and query it routinely for coverage gaps and stale reviews.
Anti-patterns to avoid:
- First-line self-monitoring: the team that builds and operates the agent also judges whether its controls work (Scenario A).
- "Review at deployment only": a one-off second-line design review with no ongoing challenge, leaving operational drift unchallenged until the annual audit.
- Auditors reviewing their own prior work, or third-line staff reassigned from the team being audited without observing the 24-month cooling-off period (Scenario B).
- Aggregate-only metrics that mask segment-specific degradation, depriving the second line of anything substantive to challenge.
- A mapping maintained as a static document that nobody reconciles against actual reporting lines and staffing.
Financial Services. The three-lines-of-defence model is mandated by the Basel Committee on Banking Supervision and expected by the FCA, PRA, and ECB. For AI agent controls, the three-lines mapping should integrate with the firm's existing three-lines framework rather than creating a parallel structure. The second-line function for AI agents typically sits within the model risk management or operational risk function. The third-line function is internal audit with AI/technology audit capability. The PRA's SS1/23 on model risk management explicitly references the three lines of defence for AI and machine learning models.
Healthcare. Clinical governance frameworks typically implement a two-lines model (clinical operations and clinical governance), with external regulatory inspection serving as a quasi-third line. For AI agent governance, organisations should establish formal three-lines coverage, with the clinical governance function providing second-line challenge and an internal or external audit function providing third-line assurance. CQC inspections do not substitute for third-line assurance because they are periodic and externally scheduled.
Critical Infrastructure. Safety-critical deployments should implement four lines of defence, adding an external independent safety assessor as a fourth line beyond internal audit. This aligns with IEC 61508 requirements for independent safety assessment. The three-lines mapping should explicitly identify which controls require fourth-line coverage.
Basic Implementation — The organisation has documented three-lines-of-defence mappings for material agent controls. The first and second lines are identified and active. The third line has conducted at least one review. The mapping may be maintained in a spreadsheet or governance document rather than a structured database. Second-line challenge activities may not cover all controls quarterly. Independence has been asserted but not independently verified.
Intermediate Implementation — The three-lines mapping is maintained in a machine-readable format integrated with the AG-259 role register. Second-line challenge activities are performed quarterly for all material controls, with documented outcomes. Third-line reviews follow a risk-based audit plan and cover all material controls within a 24-month cycle. Independence is verified annually through reporting-line review and conflict-of-interest checks. Coverage gaps are identified and remediated within 30 business days.
Advanced Implementation — All intermediate capabilities plus: automated coverage analysis flags controls approaching review deadlines. The three-lines mapping is dynamically updated when personnel or organisational changes occur. Second-line challenge activities include independent testing (not just review of first-line outputs). Third-line reviews explicitly assess second-line effectiveness, not just first-line control operation. The organisation can demonstrate to regulators that every material agent control has continuous, independent, multi-layered oversight. External conformance assessment (AG-157) validates the three-lines framework itself.
Required artefacts:
- The documented three-lines mapping itself (4.1), naming functions or individuals per control for each line.
- Written evidence of each quarterly second-line challenge and its outcome (4.4).
- Third-line assurance review reports and the record of their submission to the board or board-delegated committee (4.5).
- Gap identification and remediation records demonstrating the 30-business-day timeline (4.6).
- Independence verification records: reporting-line reviews and conflict-of-interest checks.
Retention requirements:
Access requirements:
Testing AG-260 compliance requires verifying the completeness of the mapping, the independence of the lines, and the effectiveness of challenge and assurance activities.
Test 8.1: Mapping Completeness
Test 8.2: Second-Line Independence
Test 8.3: Third-Line Independence
Test 8.4: Second-Line Challenge Timeliness
Test 8.5: Third-Line Review Coverage
Test 8.6: Coverage Gap Remediation Timeliness
Test 8.7: Mapping Currency
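As one concrete example of how these tests can be automated, Test 8.6 reduces to a business-day calculation against the 30-business-day limit in 4.6. The helper below is a sketch; its names and the sample dates are assumptions, and it ignores public holidays, which a real calendar-aware implementation would also exclude.

```python
# Illustrative automation of Test 8.6 (coverage gap remediation timeliness).
# Names and sample dates are hypothetical; the 30-business-day limit is 4.6.
# Weekends only are excluded here; public holidays are out of scope for the sketch.
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count Mon-Fri days strictly after `start`, up to and including `end`."""
    days, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0-4 are Monday-Friday
            days += 1
    return days

def remediation_on_time(identified: date, remediated: date) -> bool:
    """Pass if the gap was closed within 30 business days of identification."""
    return business_days_between(identified, remediated) <= 30

# A gap identified on 1 March 2024 and closed on 10 April 2024
# took 28 business days, so it passes.
print(remediation_on_time(date(2024, 3, 1), date(2024, 4, 10)))  # True
```

The remaining tests follow the same pattern: 8.1 is a completeness query over the mapping, 8.2 and 8.3 are reporting-chain and prior-involvement checks, and 8.4, 8.5, and 8.7 compare review dates against the quarterly, annual, and currency deadlines.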
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| EU AI Act | Article 17 (Quality Management System) | Direct requirement |
| Basel Committee | Principles for the Sound Management of Operational Risk (Principle 4) | Direct requirement |
| FCA SYSC | 4.1.1R (General Organisational Requirements) | Direct requirement |
| PRA SS1/23 | Model Risk Management — Governance | Direct requirement |
| NIST AI RMF | GOVERN 1.2, GOVERN 2.1 | Supports compliance |
| ISO 42001 | Clause 5.3 (Organizational Roles, Responsibilities and Authorities) | Supports compliance |
| DORA | Article 5 (Governance and Organisation) | Supports compliance |
| IIA Standards | Standard 2100 (Nature of Work) | Supports compliance |
The Basel Committee's Principles for the Sound Management of Operational Risk explicitly establishes the three-lines-of-defence model as the expected governance structure for operational risk management in banking. Where AI agents create operational risk, their governance controls fall within the scope of this principle. AG-260 implements the mapping framework that allows organisations to demonstrate compliance with Principle 4 for AI agent operations.
The PRA's Supervisory Statement on model risk management explicitly references the three lines of defence in the context of AI and machine learning models. SS1/23 requires that model risk management frameworks include independent validation (second line) and independent audit (third line). For AI agents that incorporate models, AG-260 ensures that the three-lines coverage extends from the model to the governance controls that constrain the agent's use of the model.
SYSC 4.1.1R requires firms to have robust governance arrangements, including a clear organisational structure with well-defined, transparent, and consistent lines of responsibility. The three-lines mapping provides the structure through which firms can demonstrate that AI agent governance has defined lines of responsibility with appropriate independence.
| Field | Value |
|---|---|
| Severity Rating | High |
| Blast Radius | Organisation-wide — compromises the independence and effectiveness of the entire governance framework |
Consequence chain: Without three-lines mapping, governance controls for AI agents operate without independent challenge or assurance. The first line monitors itself, identifies its own issues, and assesses its own effectiveness. Systematic biases, gradual drift, and structural gaps go undetected because no independent perspective exists to identify them. The regulatory consequence is severe: every major financial regulator expects the three-lines-of-defence model, and its absence for AI agent governance is treated as a systemic governance failure. The practical consequence is that governance failures accumulate undetected until they manifest as incidents — by which time the consequences have compounded over the detection gap period. The detection gap is typically 6-18 months when only the first line is active, compared to 1-3 months when an active second line is performing quarterly challenge.
Cross-references: This dimension builds upon AG-259 (Role-Segregated Control Ownership Governance) which establishes the role segregation that makes the three lines meaningful; AG-159 (Agent Accountability and Named Ownership) which ensures each agent has a named owner sitting in the first line; AG-108 (Operator Role Segregation) which enforces the operational separation that underpins first-line and second-line independence; AG-157 (External Conformance Assessment) which provides an additional layer of assurance beyond the three internal lines; and AG-170 (Approval Quality and Substantive Review) which ensures that second-line challenge and third-line assurance activities are substantive rather than perfunctory.