Sector Safety Constraint Governance requires that AI agents operating in safety-critical or critical infrastructure contexts are subject to sector-specific safety constraints defined by domain regulation, engineering standards, and established safe operating practice, and that these constraints are enforced at the infrastructure layer, independently of the agent's reasoning. Unlike general operational boundaries (AG-001), sector safety constraints encode domain-specific safety knowledge: maximum pipeline pressures, minimum aircraft separation distances, permissible drug interaction limits, allowable structural loads, radiation dose rate limits, and equivalent parameters whose violation creates sector-specific hazards. These constraints represent the sector's accumulated safety knowledge and must not be overridable by an AI agent's optimisation logic, regardless of the agent's reasoning about efficiency, cost, or performance.
Scenario A — AI Grid Agent Exceeds Transformer Thermal Limits: An AI agent managing power flow across a regional distribution network optimises load allocation to minimise losses and maximise renewable energy utilisation. The agent routes 47 MVA through a 40 MVA-rated distribution transformer during a period of high renewable generation, reasoning that the transformer's thermal time constant allows short-term overloading. The transformer's hot-spot winding temperature reaches 140°C — exceeding the 120°C limit specified in IEC 60076-7 for normal cyclic loading. The accelerated insulation degradation causes a winding failure 3 months later, resulting in a transformer explosion that destroys the substation switch house. The transformer had 12 years of remaining design life at rated load.
What went wrong: The agent's optimisation logic included a model of transformer thermal behaviour but did not enforce IEC 60076-7 thermal limits as hard constraints. The agent treated the limit as a soft constraint to be balanced against other objectives. A sector safety constraint enforcement mechanism would have blocked any load allocation exceeding the transformer's rated capacity (or a defined emergency overload rating, if applicable) at the infrastructure layer, regardless of the agent's thermal model. Consequence: transformer explosion, substation destruction (£6.8 million replacement), 23,000 customers without power for 48 hours, network capacity restriction for 18 months pending replacement, Ofgem investigation.
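The kind of infrastructure-layer check that would have blocked this allocation can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the function and rating names are hypothetical, and real enforcement would sit in the dispatch pipeline, outside the agent runtime.

```python
# Hypothetical sketch: an infrastructure-layer guard that rejects any load
# allocation above a transformer's rated (or formally declared emergency)
# capacity, regardless of what the agent's thermal model predicts.

RATED_MVA = 40.0       # nameplate rating of the transformer
EMERGENCY_MVA = None   # set only if an emergency overload rating is formally declared

def enforce_transformer_limit(proposed_mva: float) -> float:
    """Return the allocation to execute, blocking anything above the hard limit."""
    hard_limit = EMERGENCY_MVA if EMERGENCY_MVA is not None else RATED_MVA
    if proposed_mva > hard_limit:
        # Block before execution (requirement 4.3); the margin of violation
        # would also be logged per requirement 4.6.
        raise PermissionError(
            f"BLOCKED: proposed {proposed_mva} MVA exceeds hard limit "
            f"{hard_limit} MVA (margin of violation: {proposed_mva - hard_limit:.1f} MVA)"
        )
    return proposed_mva
```

In Scenario A, `enforce_transformer_limit(47.0)` would raise before the 47 MVA allocation reached the network, whatever the agent concluded about thermal time constants.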
Scenario B — Pharmaceutical Agent Exceeds API Concentration Limits: An AI agent optimising a pharmaceutical manufacturing process adjusts the active pharmaceutical ingredient (API) concentration in a formulation step to improve dissolution characteristics. The agent increases API concentration to 12.3% — exceeding the 10% maximum specified in the product's Marketing Authorisation (MA). The agent's optimisation model shows that 12.3% improves bioavailability by 8% and reasons this benefits patients. The deviation is not detected until batch release testing, by which point 15,000 units have been manufactured. The entire batch must be destroyed. An MHRA investigation reveals that the AI agent's output was applied without constraint enforcement against the MA specification.
What went wrong: The Marketing Authorisation specifies exact formulation parameters that are legally binding. The AI agent was permitted to adjust parameters without sector-specific constraints encoding the MA limits. The agent's reasoning about patient benefit is irrelevant — the MA specifies the limits, and any deviation requires a formal variation submission to the MHRA, not an AI agent's autonomous decision. Consequence: 15,000 units destroyed (£2.1 million), MHRA investigation, potential GMP certificate suspension, 6-month supply disruption, formal variation submission required (12-18 month process).
Scenario C — Railway Signalling Agent Reduces Headway Below Minimum: An AI agent managing train dispatching on a busy commuter route optimises headways to increase throughput during peak hours. The agent reduces headway between two services to 90 seconds — below the 120-second minimum specified by the route's Railway Operational Code and the signalling system's safe braking distance calculations for the maximum line speed. The agent's model accounts for the actual speeds of the specific trains (which are below maximum line speed) and calculates that the actual braking distance allows 90-second headway. The signalling system does not intervene because the agent's dispatch timing does not directly interact with the signal interlocking — it schedules services that the signalling system then manages. However, a minor delay to the leading service causes the following service to receive a yellow signal aspect, triggering an automatic brake application. Passengers standing in the crowded carriage are thrown forward, resulting in 7 injuries.
What went wrong: The agent treated the 120-second minimum headway as a parameter to be optimised rather than a safety constraint to be enforced. The agent's reasoning about actual speeds versus maximum line speed was operationally correct but violated the safety margin built into the sector standard. The signalling system's interlocking prevented a collision, but the rapid deceleration caused passenger injuries. A sector safety constraint would have blocked any headway below 120 seconds at the infrastructure layer. Consequence: 7 passenger injuries, RAIB investigation, potential ORR enforcement action, service disruption during investigation.
Scope: This dimension applies to all AI agents operating in sectors with defined safety standards, regulations, or operational codes that specify quantitative or qualitative safety parameters. This includes but is not limited to: energy (generation, transmission, distribution — subject to Grid Code, IEC standards, NERC reliability standards), process industries (chemical, pharmaceutical, oil & gas — subject to COMAH, GMP, API standards), transportation (rail, aviation, maritime, road — subject to signalling standards, airworthiness requirements, COLREGS, highway standards), healthcare (medical devices, clinical systems — subject to pharmaceutical regulations, medical device standards, clinical guidelines), water and wastewater (subject to DWI standards, EPA regulations), nuclear (subject to ONR requirements, NRC regulations), and construction/built environment (subject to building regulations, structural codes). The test is: does the sector have established safety parameters that the agent's outputs could violate? If yes, the agent is within scope.
4.1. A conforming system MUST identify and catalogue all sector-specific safety constraints applicable to each agent's operational domain, including: regulatory limits, engineering standard limits, operational code requirements, and established safe operating practice parameters.
4.2. A conforming system MUST encode identified sector safety constraints as machine-enforceable rules in the infrastructure layer, independent of the agent's reasoning process — the agent MUST NOT be able to override, relax, or reinterpret sector safety constraints through its outputs or optimisation logic.
4.3. A conforming system MUST enforce sector safety constraints as hard limits that block agent actions before execution, not as soft constraints that the agent can trade off against other objectives.
4.4. A conforming system MUST maintain traceability from each encoded constraint to its source regulation, standard, or operational code, including the specific clause, version, and date, so that constraint currency can be verified.
4.5. A conforming system MUST update encoded constraints when the source regulation, standard, or operational code is amended, with a defined process for verifying that updates are complete and correct.
4.6. A conforming system MUST log every instance where a sector safety constraint blocks an agent action, including the proposed action, the constraint violated, the margin of violation, and the constraint source reference.
4.7. A conforming system SHOULD implement constraints with defined margins of safety (e.g., enforce at 95% of the regulatory limit rather than at 100%) to account for measurement uncertainty and enforcement latency.
4.8. A conforming system SHOULD implement constraints that account for parameter interactions — where sector safety depends on combinations of parameters (e.g., temperature AND pressure, speed AND curvature), the constraint enforcement SHOULD evaluate the combination, not each parameter independently.
4.9. A conforming system SHOULD provide the agent with visibility of constraint boundaries so the agent can optimise within constraints rather than repeatedly hitting constraint limits (which may indicate the agent's objective function conflicts with safety requirements).
4.10. A conforming system MAY implement sector safety constraints as a constraint satisfaction layer that receives the agent's proposed outputs and returns the nearest feasible output that satisfies all constraints, enabling the agent to operate close to optimal while guaranteeing constraint compliance.
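One way to realise the constraint satisfaction layer of 4.10, sketched here as a simple per-parameter clamp that also applies the enforcement margin of 4.7. All names and figures are illustrative; a real system would additionally evaluate the combined-parameter constraints of 4.8.

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    parameter: str
    limit: float
    limit_type: str       # "upper_bound" or "lower_bound"
    margin: float = 0.95  # enforce inside the limit (requirement 4.7)

    def enforced_limit(self) -> float:
        # For an upper bound, enforce below the limit; for a lower bound, above it.
        if self.limit_type == "upper_bound":
            return self.limit * self.margin
        return self.limit / self.margin

def project_to_feasible(proposed: dict, constraints: list) -> dict:
    """Return the nearest output satisfying every constraint (per-parameter clamp)."""
    feasible = dict(proposed)
    for c in constraints:
        value = feasible.get(c.parameter)
        if value is None:
            continue
        bound = c.enforced_limit()
        if c.limit_type == "upper_bound" and value > bound:
            feasible[c.parameter] = bound
        elif c.limit_type == "lower_bound" and value < bound:
            feasible[c.parameter] = bound
    return feasible
```

Per-parameter clamping is a simplification: when constraints couple parameters, the true "nearest feasible output" is a constrained optimisation problem, which is why combined-parameter evaluation (4.8) must sit in the same layer.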
Sector Safety Constraint Governance addresses the tension between AI optimisation and established safety practice. AI agents excel at optimisation — finding parameter combinations that improve efficiency, reduce cost, or enhance performance. But optimisation, by definition, pushes toward limits. And in safety-critical sectors, the limits exist for specific, hard-won reasons: the transformer thermal limit reflects decades of insulation degradation research; the minimum headway reflects braking distance calculations with safety margins for human reaction time; the API concentration limit reflects clinical trials and pharmacokinetic modelling. These limits represent the sector's accumulated safety knowledge.
An AI agent that treats sector safety limits as soft constraints or optimisation targets rather than hard boundaries will inevitably find scenarios where exceeding the limit appears beneficial. The transformer example illustrates this: the agent's thermal model may be correct — the transformer can tolerate short-term overloading. But the sector standard incorporates margins for measurement uncertainty, unusual ambient conditions, pre-existing degradation, and other factors that the agent's model may not capture. The standard exists precisely because the consequences of exceeding the limit can be catastrophic and because the complex factors that determine the actual limit cannot be reliably modelled in real-time.
The infrastructure-layer enforcement requirement mirrors AG-001's approach: just as operational boundaries must be enforced independently of the agent's reasoning, sector safety constraints must be enforced independently. An agent that has been instructed to "respect the transformer rating" but has the ability to exceed it will, given enough operational pressure and a sufficiently capable optimisation model, find a reason to exceed it. Structural enforcement removes this possibility.
The traceability requirement serves two purposes: regulatory compliance (demonstrating to the sector regulator that the correct constraints are in place) and lifecycle management (ensuring that when regulations or standards are updated, the encoded constraints are updated to match). Without traceability, constraints become disconnected from their source, and it becomes impossible to verify that they are current and complete.
This dimension is distinct from AG-001 (Operational Boundary Enforcement) in focus: AG-001 defines general operational boundaries (what actions, what values, what counterparties); AG-112 defines sector-specific safety parameters (physical limits, chemical limits, operational safety margins) that derive from domain-specific safety knowledge. An agent can comply fully with AG-001 (within its mandate) while violating AG-112 (exceeding a sector safety parameter that the mandate does not address).
AG-112 establishes the sector safety constraint catalogue as the governance artefact linking domain safety knowledge to agent behaviour enforcement. The catalogue is a machine-readable repository of all safety constraints applicable to the agent's operational domain, with each constraint traceable to its regulatory or standards source.
Recommended patterns:
- Encode each catalogue entry as a machine-readable record with full traceability, for example: `{id: "XFMR-T01-THERMAL", parameter: "winding_hot_spot_temperature", limit: 120, limit_type: "upper_bound", unit: "degC", source: "IEC 60076-7:2018 Clause 7.3 Table 3", margin: 0.95, enforced_at: 114, verified: "2025-11-15", next_review: "2026-11-15"}`
- Enforce combined-parameter rules in addition to individual parameter limits, e.g. `IF (pressure > 60 bar AND temperature < -10°C) THEN BLOCK`. Combined-parameter constraints should be derived from the hazard analysis (AG-111) and sector standards.
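A combined-parameter rule of this kind can be sketched as a predicate over the full parameter vector, evaluated alongside the individual limits. The thresholds are the example values from the rule above; the individual 80 bar bound is an illustrative assumption.

```python
# Sketch of a combined-parameter constraint (requirement 4.8): block when the
# COMBINATION of pressure and temperature is hazardous, even though each value
# alone may be within its individual limit.

def combined_pressure_temperature_block(pressure_bar: float, temp_degc: float) -> bool:
    """Return True if the proposed state must be blocked by the combined rule."""
    return pressure_bar > 60.0 and temp_degc < -10.0

def is_feasible(state: dict) -> bool:
    # Individual limit (illustrative value) evaluated first...
    if state["pressure_bar"] > 80.0:
        return False
    # ...then the combined rule, which can reject states the individual
    # limits would each accept.
    if combined_pressure_temperature_block(state["pressure_bar"], state["temp_degc"]):
        return False
    return True
```

Note that 65 bar at 5°C passes, while the same 65 bar at -15°C is blocked: evaluating each parameter independently would miss the second case entirely.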
Energy (Power Systems). Constraints should encode: thermal ratings per IEC 60076-7 (transformers), IEC 62271 (switchgear), IEC 60287 (cable ratings); voltage limits per Grid Code (UK: +10%/-6% of nominal at LV); frequency limits per Grid Code (UK: 49.5-50.5 Hz normal, 47.0-52.0 Hz operational); fault level limits per network design; and protection coordination settings. For generators, constraints should include governor droop settings, reactive power capability limits, and ramp rates.
Pharmaceutical Manufacturing. Constraints should encode: Marketing Authorisation parameters (API concentrations, excipient ratios, process temperatures, pH ranges, mixing times); GMP requirements (clean room classifications, environmental monitoring limits, equipment calibration tolerances); Pharmacopoeia standards (dissolution rates, uniformity of content, impurity limits); and stability study conditions.
Rail Transport. Constraints should encode: minimum headway per route signalling assessment; maximum line speed per permanent and temporary speed restrictions; platform dwell times per timetable operating parameters; axle load limits per route availability classification; and cant deficiency limits per track geometry standards.
Water Treatment. Constraints should encode: DWI prescribed concentration or value (PCV) for all parameters (e.g., turbidity < 4 NTU, free chlorine 0.2-0.5 mg/L at customer tap); process chemical dosing limits; filter run times and backwash triggers; and contact time requirements (Ct values for disinfection).
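The range-type constraints that recur across these sector profiles (chlorine residual bands, voltage bands, pH ranges) can all be checked with the same catalogue machinery. A minimal sketch, using the water treatment figures quoted above and illustrative parameter names; a real catalogue would carry a source reference on each entry per requirement 4.4.

```python
# Sketch: range constraints ("value must lie within [low, high]") for water
# treatment parameters. Figures mirror the examples in the text.

RANGES = {
    "turbidity_NTU": (0.0, 4.0),           # upper PCV example from the text
    "free_chlorine_mg_per_L": (0.2, 0.5),  # residual band example from the text
}

def violations(sample: dict) -> list:
    """Return (parameter, value, low, high) for every out-of-range reading."""
    out = []
    for param, value in sample.items():
        low, high = RANGES[param]
        if not (low <= value <= high):
            out.append((param, value, low, high))
    return out
```

An empty return means the sample is within every encoded range; a non-empty return is the trigger for blocking and logging per requirements 4.3 and 4.6.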
Basic Implementation — The organisation has identified the key sector safety constraints applicable to each agent deployment and encoded them as enforcement rules. Enforcement is implemented as a software check that evaluates agent outputs against constraint limits before execution. The constraint list is documented with reference to source regulations/standards. Updates are performed reactively when regulatory changes are noticed. This level provides basic protection against the most obvious constraint violations but may have gaps in combined-parameter constraints, may lack enforcement margins, and may not track regulatory updates systematically.
Intermediate Implementation — A comprehensive constraint catalogue with full regulatory source traceability is maintained. Enforcement is at the infrastructure layer, independent of the agent runtime. Combined-parameter constraints are implemented where identified by the hazard analysis. Enforcement margins account for measurement uncertainty. A defined process verifies constraint currency against source regulations at least quarterly. Every blocked action is logged with full detail. The constraint catalogue is subject to change control per AG-007. This level provides robust, traceable enforcement with lifecycle management.
Advanced Implementation — All intermediate capabilities plus: the constraint satisfaction layer enables the agent to optimise within constraints by projecting proposed outputs onto the feasible constraint space. Constraint derivation is formally traced from the hazard analysis (AG-111) through the relevant standard clause to the encoded enforcement rule. An independent party periodically audits the constraint catalogue for completeness and currency. The organisation participates in sector working groups developing AI-specific safety standards, ensuring early awareness of forthcoming constraint changes. Constraint enforcement performance (number of blocked actions, distribution of margin violations, constraint interaction patterns) is monitored and reported to inform agent retraining and hazard analysis updates.
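The blocked-action log that 4.6 requires, and that the advanced level monitors for margin-violation patterns, could be structured along these lines. Field names are illustrative, not prescribed by the dimension.

```python
import json
from datetime import datetime, timezone

def blocked_action_record(proposed_action: dict, constraint_id: str,
                          limit: float, proposed_value: float,
                          source_ref: str) -> str:
    """Build a JSON log entry for a blocked action, per requirement 4.6."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "proposed_action": proposed_action,          # what the agent tried to do
        "constraint_violated": constraint_id,
        "margin_of_violation": proposed_value - limit,
        "constraint_source": source_ref,             # traceability per requirement 4.4
    })
```

For the Scenario A block, the record would carry a margin of violation of 7.0 MVA (47 MVA proposed against the 40 MVA limit) and the IEC 60076-7 source reference, giving the monitoring and audit functions the detail the intermediate and advanced levels rely on.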
Required artefacts:
Retention requirements:
Access requirements:
Testing AG-112 compliance requires validation that sector safety constraints are correctly encoded, enforced at the infrastructure layer, and maintained against current regulations.
Test 8.1: Constraint Completeness
Test 8.2: Infrastructure-Layer Enforcement
Test 8.3: Combined-Parameter Constraint Enforcement
Test 8.4: Regulatory Source Traceability
Test 8.5: Constraint Update Process
Test 8.6: Blocked Action Logging Completeness
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 9 (Risk Management System) | Direct requirement |
| EU AI Act | Article 8 (Compliance with Requirements) | Direct requirement |
| IEC 61508 | Clause 7.2 (Specification of Safety Requirements) | Supports compliance |
| Sector-Specific Regulations | (Various — see prose) | Direct requirement |
| NIST AI RMF | MAP 3.2 (AI Risk Contexts) | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks) | Supports compliance |
| UK HSE | HSWA 1974 Section 2 (General Duties) | Supports compliance |
Article 9 requires that the risk management system for high-risk AI systems considers "the risks to health and safety" and that mitigation measures ensure "that the system performs consistently as intended within its purpose of use." Sector safety constraints are the primary mechanism ensuring that an AI agent's performance remains within the safety envelope defined by the sector. Without AG-112, the risk management system lacks the sector-specific controls needed to demonstrate that AI agent deployment does not increase sector-specific safety risks.
Article 8 requires high-risk AI systems to comply with sector-specific Union harmonisation legislation. For AI agents deployed in sectors governed by EU Directives (Machinery Directive, ATEX, PED, Medical Device Regulation, etc.), the safety constraints defined by these directives must be enforced on the agent's outputs. AG-112 provides the mechanism for this enforcement.
Sector safety constraints derive from sector-specific regulations: Grid Code and IEC standards for power systems; GMP and pharmacopoeia for pharmaceuticals; Railway Operational Code and signalling standards for rail; DWI standards for water treatment; building regulations and structural codes for construction; medical device regulations for healthcare. Each of these regulatory frameworks specifies quantitative safety parameters that AG-112 requires to be encoded and enforced. The relationship is direct — the sector regulation defines the constraint, and AG-112 requires it to be enforced on the AI agent.
The Health and Safety at Work Act 1974 requires employers to ensure, so far as is reasonably practicable, the health and safety of employees and others affected by their undertaking. For employers deploying AI agents in safety-critical contexts, this general duty includes ensuring that the agent's outputs comply with established sector safety standards. AG-112 provides the governance framework for this compliance.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | Sector-specific — consequences align with the sector's hazard profile: physical harm, environmental damage, infrastructure destruction, public health risk |
Consequence chain: Without sector safety constraint governance, an AI agent's optimisation logic is unconstrained by domain safety knowledge. The agent will push toward limits because optimisation inherently seeks boundaries. When the boundary is a sector safety limit, exceeding it creates sector-specific consequences: transformer failure and explosion in power systems (£millions in damage, customer outage), batch destruction in pharmaceuticals (£millions in product loss, supply disruption), passenger injury in rail (personal injury, regulatory enforcement), public health exposure in water treatment (community health risk, boil-water notices). The consequences are amplified by the AI agent's speed and consistency — a human operator who occasionally approaches a limit is different from an AI agent that systematically optimises to the exact boundary (or beyond). The agent's optimisation runs continuously, on every decision cycle, finding and exploiting every opportunity to approach or exceed limits. The regulatory consequences include sector-specific enforcement: Ofgem for energy, MHRA for pharmaceuticals, ORR for rail, DWI for water, CQC for healthcare — each with sector-specific powers including fines, licence conditions, or operating prohibitions.
Cross-references: AG-001 (Operational Boundary Enforcement) provides general mandate limits; AG-112 provides sector-specific safety constraints. AG-111 (Hazard Analysis Governance) provides the analytical basis for identifying which sector constraints are applicable and for identifying combined-parameter constraints. AG-109 (Safe-State Transition Governance) defines safe states that should respect sector safety constraints during transition. AG-113 (Real-Time Determinism and Latency Assurance Governance) timing constraints may themselves be sector safety constraints requiring AG-112 enforcement. AG-114 (Actuation Interlock Governance) provides hardware-level enforcement for the most critical sector constraints. AG-050 (Physical and Real-World Impact Governance) establishes the broader framework for physical-impact governance.