Recall Trigger Governance requires that any AI agent involved in manufacturing, production, quality assurance, or supply-chain operations maintain formally defined, continuously evaluated, and automatically enforced criteria for initiating product recall, batch containment, or field withdrawal actions. In safety-critical manufacturing domains — automotive, aerospace, pharmaceutical, food and beverage, medical devices — a delay of hours or days in recognising a recall-triggering condition can translate into fatalities, mass exposure to contaminated products, or billion-dollar remediation costs. This dimension mandates that agents operating in these domains carry an explicit, versioned recall-trigger rule set that maps observable quality signals, field failure data, process deviations, and regulatory threshold breaches to mandatory escalation and containment actions. The agent MUST NOT suppress, defer, aggregate away, or optimise around conditions that meet recall-trigger criteria — regardless of production throughput targets, cost-of-recall estimates, or commercial pressure. The governance requirement is preventive: the system must be structured so that recall-triggering conditions are detected and escalated before products reach end-users, or — when products are already in the field — before the exposure window widens.
Scenario A — Delayed Airbag Inflator Recall Due to Agent-Optimised Batch Disposition: An automotive parts manufacturer deploys an AI agent to manage quality disposition decisions for airbag inflator assemblies. The agent monitors in-line test data — propellant moisture content, weld integrity, burst-test results — and classifies each batch as pass, conditional-pass-with-rework, or reject. Over a 14-month period, the agent observes a gradual increase in propellant moisture absorption rates in inflators produced during high-humidity summer months. The individual measurements remain within specification tolerance, but the population-level trend shows a statistically significant upward drift. The agent's optimisation function — trained to minimise scrap cost and maximise production yield — classifies all affected batches as conditional-pass because each individual unit meets the published specification limit. The agent does not flag the population-level trend as a potential recall trigger because its rule set evaluates units individually, not as a cohort.
Eighteen months after production, field returns begin. Five inflators from the affected production window exhibit slow deployment times in crash testing by the vehicle OEM. Two months later, a fatal accident occurs involving an inflator that failed to deploy within the required timeframe. The subsequent investigation reveals that the propellant degradation was a progressive condition that would have been detectable at the population level during production. The recall encompasses 2.4 million vehicles across four OEM customers, costing the inflator manufacturer £890 million in direct recall costs and £1.6 billion in litigation settlements.
What went wrong: The agent's recall-trigger rule set evaluated units individually against specification limits but contained no population-level trend analysis trigger. The optimisation objective — minimise scrap — created a structural bias toward passing marginal units. No rule mapped a statistically significant upward drift in a safety-critical parameter to an automatic recall investigation trigger, even when every individual unit technically passed specification. The 14-month production window during which the trend was detectable but unacted-upon represents the governance failure.
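The cohort-level check that Scenario A's agent lacked can be sketched as a nonparametric trend test on batch means. The following is a minimal illustration, not a prescribed method: the function names, the moisture series, and the z-threshold are all hypothetical, and a production system would also apply tie corrections and a validated significance level.

```python
import math

def mann_kendall_z(values):
    """Mann-Kendall S statistic normalised to a z-score (no tie correction).

    Illustrative only: a validated implementation would correct for ties.
    """
    n = len(values)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (values[j] > values[i]) - (values[j] < values[i])
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

def drift_trigger(batch_means, z_critical=2.33):
    """Flag a recall investigation when an upward drift is statistically
    significant (one-sided, roughly the 1% level), even though every
    individual batch passes specification."""
    return mann_kendall_z(batch_means) > z_critical

# Hypothetical 14 months of batch-mean moisture readings: each within
# spec, but drifting upward at the population level.
moisture = [0.30, 0.31, 0.30, 0.32, 0.33, 0.33, 0.35,
            0.36, 0.36, 0.38, 0.39, 0.40, 0.41, 0.43]
```

Against this series, `drift_trigger(moisture)` fires even though no single value breaches a per-unit limit — exactly the cohort signal the scenario's per-unit rule set could not see.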
Scenario B — Contaminated Pharmaceutical Batch Released Due to Suppressed Endotoxin Signal: A pharmaceutical contract manufacturer deploys an AI agent to manage batch release decisions for injectable drug products. The agent integrates data from environmental monitoring, in-process bioburden testing, and final-product endotoxin assays. During a routine production run, an environmental monitoring sample from the aseptic filling suite returns a result at 85% of the action limit — elevated but below the threshold for mandatory investigation. The agent records the result and proceeds with batch release because the final-product endotoxin assay for the batch is within specification. Three subsequent batches produced in the same suite over the following week show environmental monitoring results at 78%, 82%, and 91% of the action limit. The agent evaluates each batch independently and releases all four batches because the final-product assay for each batch is within specification.
Two weeks later, a hospital reports a cluster of pyrogenic reactions in patients receiving the injectable product. The investigation traces the reactions to endotoxin contamination in two of the four released batches. The environmental monitoring trend — four consecutive results above 75% of the action limit in the same suite — should have triggered a facility investigation and a hold on batch releases from that suite. The recall affects 12,400 vials across four batch lots distributed to 340 hospitals. Three patients suffer severe septic shock; one dies. The regulatory authority issues a warning letter citing the manufacturer's failure to recognise a pattern of environmental monitoring excursions that, taken together, constituted a recall-triggering condition.
What went wrong: The agent evaluated each environmental monitoring result in isolation against the action limit. No rule aggregated consecutive environmental monitoring results to detect a trending condition. The agent's batch-release logic treated the final-product assay as the sole release criterion and did not incorporate environmental monitoring trends as a supplementary recall trigger. The four consecutive elevated results — each individually below the action limit but collectively indicating an environmental control failure — were visible in the data but invisible to the agent's decision logic.
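The aggregation rule missing from Scenario B can be expressed as a simple run rule over consecutive environmental-monitoring results. This is a hedged sketch: the function name, the 75% fraction, the run length, and the example action limit and units are assumptions for illustration, not values drawn from any regulation.

```python
def em_trend_hold(results, action_limit, fraction=0.75, run_length=3):
    """Hold batch release when `run_length` consecutive environmental-
    monitoring results from the same suite meet or exceed `fraction` of
    the action limit, even though none breaches the limit individually."""
    run = 0
    for value in results:
        run = run + 1 if value >= fraction * action_limit else 0
        if run >= run_length:
            return True
    return False

# Scenario B's pattern: results at 85%, 78%, 82%, 91% of a hypothetical
# action limit of 10.0 (arbitrary units).
readings = [8.5, 7.8, 8.2, 9.1]
```

Here `em_trend_hold(readings, action_limit=10.0)` returns `True` at the third consecutive elevated result — before the fourth batch is even produced — whereas the scenario's per-batch logic released all four.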
Scenario C — Food Allergen Cross-Contamination Missed Due to Changeover Data Gap: A food manufacturing facility deploys an AI agent to manage production scheduling, changeover verification, and batch release for a product line that includes both tree-nut-containing and tree-nut-free products. The agent monitors changeover cleaning verification results — swab tests for allergen residue — and blocks batch release if allergen residue exceeds the validated cleaning threshold. During a weekend shift, the swab-testing equipment malfunctions, and the operator manually records "pass" results in the production log based on a visual inspection of the equipment. The agent ingests the manually entered results, treats them as equivalent to instrument-verified results, and releases three batches of tree-nut-free product produced after the changeover. The manual visual inspection did not detect allergen residue present at levels sufficient to cause anaphylaxis in sensitised individuals.
Within 72 hours, four consumers report allergic reactions, including one anaphylactic episode requiring hospitalisation. The recall affects 28,000 units of tree-nut-free product distributed to 1,200 retail locations. The regulatory authority's investigation determines that the agent's recall-trigger logic did not distinguish between instrument-verified and manually entered cleaning verification results, and did not flag the absence of instrument data as a hold condition.
What went wrong: The agent accepted manual data entry as equivalent to instrument-verified data without flagging the data-source change. No recall-trigger rule required instrument verification as a precondition for allergen-sensitive changeover release. The absence of instrument data — a negative signal indicating that the validated cleaning verification process was not followed — was not treated as a recall-triggering condition. The agent's logic assumed data completeness rather than verifying it.
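The provenance check Scenario C's agent lacked can be sketched in a few lines: data source is evaluated alongside the measured value, and anything other than an instrument-verified reading below the validated threshold is a hold condition. The class name, field names, and threshold are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class SwabResult:
    value_ppm: Optional[float]   # None when no instrument reading exists
    source: str                  # "instrument" or "manual"

def changeover_release_ok(results: List[SwabResult],
                          threshold_ppm: float) -> bool:
    """Release only when every swab result is instrument-verified and below
    the validated cleaning threshold. Manual or missing data is treated as
    a positive hold signal, not a neutral condition."""
    for r in results:
        if r.source != "instrument" or r.value_ppm is None:
            return False   # absence of verified data blocks release
        if r.value_ppm >= threshold_ppm:
            return False   # residue at or above the validated limit
    return True
```

With this rule, the weekend shift's manually recorded "pass" entries would have blocked release of all three batches, regardless of the values entered.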
Scope: This dimension applies to any AI agent that participates in, influences, or automates decisions relating to product disposition, batch release, quality hold, field containment, or product recall in manufacturing, pharmaceutical, food, medical device, automotive, aerospace, or other safety-critical production environments. The scope includes agents that directly execute disposition decisions, agents that recommend disposition to human operators, and agents that generate quality data summaries consumed by disposition decision-makers. If the agent's output can influence whether a potentially non-conforming product reaches an end-user, this dimension applies. The scope extends to agents operating at any point in the production-to-distribution chain — incoming material inspection, in-process quality monitoring, final-product release, post-market surveillance, and field failure analysis — because recall-triggering conditions can emerge at any stage.
4.1. A conforming system MUST maintain a formally documented, versioned recall-trigger rule set that enumerates every condition under which the agent is required to initiate a recall investigation, batch hold, product containment, or field withdrawal escalation — including both individual-unit threshold breaches and population-level trend conditions.
4.2. A conforming system MUST evaluate recall-trigger conditions at both the individual-unit level and the population level, applying statistical trend detection methods (e.g., CUSUM, EWMA, Shewhart control charts) to safety-critical parameters so that progressive degradation patterns that do not breach individual-unit specification limits are detected and escalated.
4.3. A conforming system MUST treat the absence of expected quality verification data — missing test results, instrument downtime, sensor failures, manually substituted data without instrument confirmation — as a recall-trigger hold condition that blocks product release until verified data is obtained.
4.4. A conforming system MUST escalate every recall-trigger condition to a designated human authority within a defined maximum response time, which SHALL NOT exceed the time required to prevent further distribution of potentially affected product, and the agent MUST NOT autonomously clear or downgrade a recall-trigger condition without documented human authorisation.
4.5. A conforming system MUST maintain recall-trigger thresholds that are independent of production cost, throughput, or commercial impact calculations — the agent's optimisation objectives MUST NOT influence whether a recall-trigger condition is detected or escalated.
4.6. A conforming system MUST correlate field failure reports, customer complaints, adverse event data, and post-market surveillance signals against production records to identify recall-triggering patterns that emerge after product release, and MUST escalate when the correlation exceeds defined significance thresholds.
4.7. A conforming system MUST log every recall-trigger evaluation — including evaluations where no trigger was activated — with sufficient detail to reconstruct the agent's reasoning, the data inputs considered, the thresholds applied, and the disposition decision made.
4.8. A conforming system MUST re-evaluate all product released during a defined lookback window when a recall-trigger condition is confirmed for any batch, to determine whether the triggering condition may have affected previously released product that was evaluated under the same conditions.
4.9. A conforming system SHOULD integrate recall-trigger evaluation with supplier traceability data (AG-662) so that a recall-triggering defect in a supplied component automatically triggers assessment of all finished products incorporating that component.
4.10. A conforming system SHOULD implement recall simulation exercises — injecting synthetic recall-trigger conditions into production data streams — at defined intervals to verify that the agent's recall-trigger logic, escalation pathways, and human response processes function correctly under realistic conditions.
4.11. A conforming system MAY implement predictive recall-trigger modelling that uses historical quality data, field failure patterns, and environmental conditions to estimate the probability of a recall-triggering condition before it is confirmed, enabling precautionary holds on at-risk batches.
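The versioned rule set that Requirement 4.1 mandates can be represented as explicit data rather than being implicit in optimisation code. The following is a minimal sketch, not a reference design; the rule IDs, tier semantics, field names, and thresholds are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass(frozen=True)
class RecallTriggerRule:
    rule_id: str
    tier: int        # illustrative: 1 = hard limit, 2 = trend, 3 = field signal
    description: str
    condition: Callable[[Dict], bool]   # evaluated against a batch record
    action: str                         # e.g. "hold_and_escalate"

@dataclass(frozen=True)
class RecallTriggerRuleSet:
    version: str
    rules: List[RecallTriggerRule] = field(default_factory=list)

    def evaluate(self, batch: Dict) -> List[RecallTriggerRule]:
        """Return every activated rule. Per 4.4, the caller must escalate
        all of them and must not clear any without documented human
        authorisation."""
        return [r for r in self.rules if r.condition(batch)]

ruleset = RecallTriggerRuleSet(
    version="2.3.0",
    rules=[
        RecallTriggerRule(
            "RT-001", 1, "Propellant moisture above hard limit",
            lambda b: b.get("moisture_pct", 0.0) > 0.45, "hold_and_escalate"),
        RecallTriggerRule(
            "RT-014", 1, "Missing instrument verification (4.3)",
            lambda b: b.get("verified") is not True, "hold_and_escalate"),
    ],
)
```

Because the rule set is a plain, versioned artefact, it can be diffed, change-controlled, and audited independently of the agent's optimisation logic — which is what 4.1 and 4.5 jointly require.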
The cost asymmetry between early recall detection and delayed recall detection is among the most extreme in industrial risk management. A recall triggered during production — when product is still in the facility, traceable, and recoverable — may cost thousands to tens of thousands of pounds in scrap, rework, and investigation. The same defect discovered after distribution to millions of end-users can cost hundreds of millions to billions of pounds in logistics, remediation, litigation, regulatory penalties, and brand destruction. In pharmaceuticals and food, delayed recalls directly cause patient and consumer harm — illness, permanent injury, and death. In automotive and aerospace, delayed recalls cause mechanical failures that kill people. The Takata airbag recall — the largest automotive recall in history at over 67 million inflators — is a defining example: the degradation mechanism was observable in production data years before the first fatal deployment failure, but the condition was not recognised as a recall trigger because the individual units met specification.
AI agents are increasingly deployed in manufacturing quality management precisely because they can process volumes of sensor data, test results, and field reports that exceed human cognitive capacity. This capability creates a corresponding obligation: the agent must not merely monitor data — it must apply recall-trigger logic that matches the severity and urgency of the domain. An agent that optimises yield, minimises scrap, or maximises throughput without an equally rigorous recall-trigger evaluation is structurally biased toward releasing product rather than holding it. This bias is not hypothetical — it is an inherent consequence of optimisation objectives that reward product release and penalise product hold. Recall-trigger governance requires that the recall-trigger evaluation operates independently of the optimisation objective, with higher priority when the two conflict.
Population-level trend detection is particularly critical because many safety-critical failure modes are progressive. A propellant that absorbs moisture over time, a pharmaceutical excipient that degrades under heat stress, a food ingredient with gradually increasing mycotoxin levels — each may produce individual test results that pass specification while the population trend signals an impending safety failure. An agent that evaluates only individual units against specification limits will miss these patterns every time. The requirement for population-level trend analysis (4.2) addresses the systemic failure mode that has contributed to some of the most costly and harmful recalls in industrial history.
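One of the statistical methods Requirement 4.2 names — the tabular CUSUM — can be sketched in a few lines. The parameter values and illustrative data below are assumptions; a production implementation would derive the allowance k and decision interval h from the process standard deviation and a target average run length.

```python
def cusum_upper(samples, target, k, h):
    """One-sided (upper) tabular CUSUM. Returns the index at which the
    cumulative sum first exceeds the decision interval h, or None.
    k is the allowance, typically half the shift to detect, in the same
    units as the samples."""
    c_plus = 0.0
    for i, x in enumerate(samples):
        c_plus = max(0.0, c_plus + (x - target) - k)
        if c_plus > h:
            return i
    return None

# A hypothetical mean shift from 10.0 to roughly 10.6 that never
# approaches a per-unit specification limit of 12.0:
data = [10.0, 9.9, 10.1, 10.0, 10.6, 10.5, 10.7, 10.6, 10.8]
```

For this series, `cusum_upper(data, target=10.0, k=0.25, h=1.0)` fires within a few samples of the shift, while a per-unit check against the 12.0 limit never fires at all — the exact failure mode described above.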
The requirement to treat missing data as a hold condition (4.3) addresses a failure mode specific to automated decision systems. Human quality managers intuitively treat missing data with suspicion — the absence of a test result is not the same as a passing test result. Automated systems, unless explicitly programmed otherwise, may treat missing data as a null that does not trigger any action, or may accept manually entered substitute data without questioning its provenance. The food allergen scenario in Section 3 illustrates this failure mode. The agent must be configured to treat the absence of expected verification data as a positive signal requiring hold and investigation — not as a neutral condition that permits continued processing.
Post-market surveillance correlation (4.6) addresses the reality that many recall-triggering conditions are not detectable during production. Some defects manifest only under field conditions — extended use, environmental exposure, interaction with other products. The agent's recall-trigger logic must extend beyond the factory floor to incorporate field data streams. The pharmaceutical endotoxin scenario in Section 3 is an example where the defect could have been detected during production with better trend analysis — but in many cases, the first indication of a problem comes from field reports. The agent must be capable of correlating field signals with production records to identify the affected product population and trigger containment before the exposure window widens.
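The field-to-production correlation in Requirement 4.6 can be as simple as an exact Poisson test of the observed complaint count for a production window against the historical complaint rate. This is a minimal sketch under assumed names and rates; real deployments would also control for reporting lag and exposure time.

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i)
                     for i in range(k))

def field_signal(complaints, units_shipped, baseline_rate, alpha=0.001):
    """Escalate when the complaint count for a production window is
    statistically incompatible with the historical per-unit complaint
    rate. All parameter values are illustrative."""
    expected = baseline_rate * units_shipped
    return poisson_sf(complaints, expected) < alpha
```

For example, nine field failures against an expected two (200,000 shipped units at a hypothetical baseline of 1e-5 failures per unit) crosses the threshold, while three does not — the agent escalates the former and merely logs the latter.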
Regulatory frameworks across industries mandate recall capability and impose severe penalties for delayed recalls. In the EU, the General Product Safety Directive requires producers to take immediate action when they know or should know that a product presents a risk. The FDA's 21 CFR Part 7 defines recall procedures and the FDA's authority to mandate recalls when voluntary action is insufficient. IATF 16949 requires automotive suppliers to have documented processes for managing non-conforming product, including field containment and recall. In each case, the regulatory standard is not merely that the organisation can execute a recall — it is that the organisation detects the recall-triggering condition in a timely manner. An AI agent that delays this detection by suppressing signals, evaluating units in isolation, or prioritising throughput over safety creates regulatory exposure for the organisation.
Recall Trigger Governance requires a layered detection architecture that operates across multiple time horizons — real-time in-process monitoring, batch-level statistical analysis, and long-term field surveillance — with escalation pathways that are structurally insulated from production and commercial pressures.
Recommended patterns:
Anti-patterns to avoid:
Automotive. Automotive recall governance is shaped by the Takata, GM ignition switch, and Volkswagen emissions scandals, which collectively demonstrated that delayed recognition of recall-triggering conditions can produce catastrophic consequences — fatalities, billion-dollar liabilities, and criminal prosecution of executives. IATF 16949 Section 8.7 requires control of nonconforming outputs, including containment of suspect product. Agents operating in automotive quality management must integrate with IATF 16949 processes and must implement population-level trend analysis for all safety-critical characteristics. The lookback window for automotive recalls often spans years of production, requiring the agent to maintain or access production records over extended retention periods.
Pharmaceutical and Biotech. FDA 21 CFR Parts 210, 211 (current Good Manufacturing Practice) and Part 7 (enforcement policy for recalls) define the regulatory framework. EU GMP Annex 1 (manufacture of sterile medicinal products) imposes specific environmental monitoring requirements that directly relate to the endotoxin scenario in Section 3. Agents managing pharmaceutical batch release must treat environmental monitoring trends, bioburden trends, and any pattern of out-of-trend results as potential recall triggers — not just out-of-specification results. The distinction between out-of-specification (OOS) and out-of-trend (OOT) is critical: OOT results may not breach specification limits but indicate a process moving toward failure.
Food and Beverage. HACCP (Hazard Analysis and Critical Control Points) principles require identification of critical control points and critical limits. Agents managing food production must map recall-trigger rules to HACCP plans, with allergen cross-contamination, pathogen detection, and foreign body detection as Tier 1 hard-limit triggers. The FSMA (Food Safety Modernization Act) in the US and the General Food Law Regulation (EC 178/2002) in the EU impose mandatory recall obligations when food presents a risk to human health. The speed of recall is critical in food because shelf life creates a hard deadline: product consumed before the recall is initiated cannot be recovered.
Medical Devices. FDA 21 CFR Part 806 (reports of corrections and removals) and EU MDR Article 87 (reporting of serious incidents) require manufacturers to report events that may indicate a recall-triggering defect. Agents managing medical device production must integrate complaint data, MDR/vigilance reports, and production quality data into a unified recall-trigger evaluation. The classification of medical device recalls (Class I: reasonable probability of serious adverse health consequences or death; Class II: temporary or medically reversible; Class III: not likely to cause adverse health consequences) should map directly to the agent's recall-trigger tier hierarchy.
Basic Implementation — The organisation has documented a recall-trigger rule set covering Tier 1 hard limits for all safety-critical parameters. The agent evaluates each production unit against these limits and blocks release on breach. Missing-data hold conditions are implemented for critical test results. Escalation to a designated human authority occurs within defined maximum response times. All recall-trigger evaluations are logged. All mandatory requirements (4.1 through 4.8) are satisfied.
Intermediate Implementation — All basic capabilities plus: population-level trend analysis using statistical process control methods (CUSUM, EWMA, Shewhart) is implemented for all safety-critical parameters. Field failure feedback is integrated with production-level recall-trigger evaluation. Lookback window analysis is automated. Recall simulation exercises are conducted semi-annually with measured latency metrics. Recall-trigger evaluation is architecturally separated from production optimisation logic.
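The recall simulation exercises required at the intermediate level can be sketched as a harness that feeds a synthetic trigger record through the agent's real evaluation path and measures detection latency. Every name below — the evaluator, the record fields, the latency budget — is illustrative, and the trivial stand-in evaluator is only there to make the sketch runnable.

```python
import time

def recall_simulation_exercise(agent_evaluate, inject_record, timeout_s=5.0):
    """Inject a synthetic recall-trigger record and verify that
    (a) the trigger fires and (b) it fires within the latency budget."""
    synthetic = inject_record()
    start = time.monotonic()
    activated = agent_evaluate(synthetic)
    latency = time.monotonic() - start
    return {
        "trigger_fired": bool(activated),
        "latency_s": latency,
        "passed": bool(activated) and latency <= timeout_s,
    }

# Example with a trivial stand-in evaluator and a synthetic Tier 1 record:
result = recall_simulation_exercise(
    agent_evaluate=lambda rec: rec["moisture_pct"] > 0.45,
    inject_record=lambda: {"batch_id": "SIM-0001", "moisture_pct": 0.99},
)
```

Running such exercises on a schedule, with the latency metrics logged, gives the measured evidence of escalation-pathway health that a paper review of the rule set cannot.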
Advanced Implementation — All intermediate capabilities plus: predictive recall-trigger modelling estimates recall probability for at-risk batches before confirmation. Full supplier traceability integration (AG-662) enables automatic cross-product recall assessment when a supplier component defect is identified. Recall-trigger logic is independently audited annually. Field feedback pipelines incorporate adverse event databases, warranty data, and social media signal detection. Recall simulation exercises include multi-tier cascading scenarios and cross-supplier chain triggers.
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Recall-Trigger Rule Set Existence and Completeness
Test 8.2: Individual-Unit Threshold Enforcement
Test 8.3: Population-Level Trend Detection
Test 8.4: Missing-Data Hold Enforcement
Test 8.5: Recall-Trigger Independence from Optimisation Objectives
Test 8.6: Escalation and Human Clearance Enforcement
Test 8.7: Lookback Window Assessment
Test 8.8: Field Failure Correlation
Test 8.9: Evaluation Logging Completeness
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU General Product Safety Directive (2001/95/EC) | Articles 3, 5 (General safety requirement; obligations of producers and distributors) | Direct requirement |
| EU General Product Safety Regulation (2023/988) | Articles 9, 10 (Obligations of manufacturers and distributors) | Direct requirement |
| FDA 21 CFR Part 7 | Subpart C (Recalls) | Direct requirement |
| FDA 21 CFR Parts 210/211 | Current Good Manufacturing Practice | Supports compliance |
| IATF 16949 | Section 8.7 (Control of Nonconforming Outputs) | Direct requirement |
| EU GMP Annex 1 | Manufacture of Sterile Medicinal Products | Supports compliance |
| FSMA (Food Safety Modernization Act) | Section 206 (Mandatory Recall Authority) | Direct requirement |
| EU MDR (2017/745) | Article 87 (Reporting of Serious Incidents) | Supports compliance |
| HACCP (Codex Alimentarius) | Principles 3-5 (Critical Limits, Monitoring, Corrective Action) | Supports compliance |
| EU AI Act | Article 9 (Risk Management System) | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks and Opportunities) | Supports compliance |
The General Product Safety Directive (2001/95/EC) and its successor Regulation (2023/988) require producers to place only safe products on the market and to take action — including recall — when they know or should know that a product presents a risk. The phrase "should know" is critical: it imposes a duty to detect recall-triggering conditions, not merely to act upon them after they are independently discovered. An AI agent that suppresses, delays, or fails to detect recall-triggering conditions that are present in the agent's own data constitutes a failure to meet this duty. Recall Trigger Governance ensures that the agent's detection capability matches the regulatory expectation of proactive safety monitoring.
FDA recall guidance classifies recalls as Class I (reasonable probability of serious adverse health consequences or death), Class II (temporary or medically reversible), and Class III (not likely to cause adverse health consequences). The agent's tiered recall-trigger structure (Tier 1/2/3) should map to FDA recall classifications to ensure that the severity of the trigger response matches the regulatory severity classification. The FDA expects manufacturers to have systems capable of rapidly identifying affected products and their distribution — the lookback window and automated lot tracing requirements of this dimension directly support this expectation.
IATF 16949 Section 8.7 requires organisations to ensure that outputs not conforming to requirements are identified and controlled to prevent unintended use or delivery. For automotive safety-critical components, nonconformance detection must include both individual-unit failures and population-level indicators of process degradation. The Takata recall demonstrated that automotive industry quality systems that rely solely on individual-unit specification compliance are insufficient for progressive degradation failure modes. Recall Trigger Governance extends IATF 16949 compliance by requiring population-level trend analysis and field failure feedback integration for agent-managed quality disposition.
The Food Safety Modernization Act grants the FDA mandatory recall authority when a responsible party refuses to voluntarily recall adulterated or misbranded food. This regulatory backstop means that delayed detection of a recall-triggering condition in food manufacturing carries both safety and regulatory risk — the FDA can compel a recall and impose penalties for delay. Agents managing food production quality must detect allergen, pathogen, and contamination conditions at the speed required to prevent further distribution. The missing-data hold requirement (4.3) is particularly relevant to food manufacturing, where a failed or missing allergen verification test must block product release.
The EU AI Act Article 9 requires that high-risk AI systems be subject to a risk management system that identifies and analyses known and reasonably foreseeable risks. An AI agent managing product disposition in a safety-critical manufacturing context is a high-risk deployment. The recall-trigger rule set constitutes a risk management measure under Article 9 — a formalised set of conditions under which the agent must act to mitigate identified risks. Article 9 also requires that the risk management system be updated throughout the lifecycle of the AI system, which aligns with the versioning and change-control requirements for the recall-trigger rule set.
ISO 42001 Clause 6.1 requires organisations to determine risks and opportunities associated with AI systems and to plan actions to address them. Recall-trigger governance is a direct implementation of this requirement for manufacturing AI agents — it identifies the specific risks (undetected defects reaching end-users) and defines the actions (trigger conditions, escalation, containment) that address them. The recall-trigger rule set is the actionable expression of the risk assessment for production-disposition AI agents.
| Field | Value |
|---|---|
| Severity Rating | Critical |
| Blast Radius | End-to-end supply chain — from raw material receipt through manufacturing, distribution, and field use, affecting end-users, downstream manufacturers, healthcare providers, and consumers |
Consequence chain: Without recall-trigger governance, the AI agent becomes a structural barrier to recall detection rather than an enabler of it. The immediate failure mode is that a recall-triggering condition exists in production data — a trending parameter, a missing verification, a field failure pattern — but the agent either does not evaluate it against recall criteria, suppresses it in favour of optimisation objectives, or evaluates it only at the individual-unit level and misses the population-level signal. The first-order consequence is that non-conforming product is released into the distribution chain. In automotive, this means safety-critical components with latent defects are installed in vehicles. In pharmaceuticals, this means contaminated or sub-potent drug products reach patients. In food, this means allergen-contaminated or pathogen-containing products reach consumers.
The second-order consequence depends on the detection latency. If the condition is detected quickly — within days — the affected product may still be in the distribution chain and recoverable through a targeted containment action affecting hundreds or thousands of units. If detection is delayed by weeks or months, the product has dispersed to end-users and the recall scope expands by orders of magnitude. The Takata recall illustrates the extreme case: a detection delay measured in years produced a recall affecting tens of millions of units across multiple OEMs and geographies, with direct costs exceeding $10 billion and at least 27 deaths attributed to the defect.
The third-order consequence is regulatory and legal. Regulatory authorities — FDA, NHTSA, European Commission — investigate the manufacturer's quality management system to determine why the recall-triggering condition was not detected sooner. If the investigation reveals that an AI agent was managing disposition decisions and that the agent's recall-trigger logic was absent, incomplete, or structurally biased toward release, the regulatory response will be severe: warning letters, consent decrees, import alerts, manufacturing facility shutdowns, and criminal referrals for individuals who authorised the agent's deployment without adequate recall-trigger governance. Civil litigation from affected consumers, patients, and downstream manufacturers will follow, with damages scaled to the number of affected units and the severity of harm.
The fourth-order consequence is systemic. A high-profile failure of AI-managed quality governance in manufacturing will generate regulatory backlash that constrains the adoption of AI agents across the industry — not only for the affected manufacturer but for all manufacturers seeking to deploy similar systems. This systemic consequence is avoidable if recall-trigger governance is implemented proactively, ensuring that AI agents in manufacturing quality management detect and escalate recall-triggering conditions at least as effectively as the human quality management processes they augment or replace.
Cross-references: AG-001 (Governance Configuration Control) provides the foundational governance framework within which the recall-trigger rule set is managed as a controlled artefact. AG-008 (Risk Classification & Tiering) determines the risk tier of the agent deployment, which influences the stringency of recall-trigger requirements. AG-019 (Human Escalation & Override Triggers) defines the escalation pathways through which recall-trigger conditions are communicated to human authority; this dimension adds manufacturing-specific recall triggers to those pathways. AG-022 (Behavioural Drift Detection) detects changes in the agent's behaviour over time that may affect recall-trigger sensitivity; drift in the agent's disposition logic could suppress recall-trigger activations. AG-055 (Safety Case Governance) requires a safety case for safety-critical deployments; the recall-trigger rule set is a key component of the safety case for manufacturing quality agents. AG-210 (Threshold Governance) governs the management of thresholds generally; recall-trigger thresholds are a safety-critical subset that must meet the additional requirements of this dimension. AG-419 (Incident Classification & Severity Assignment) provides the severity framework used to classify recall-trigger activations; Tier 1/2/3 triggers should map to the incident severity matrix. AG-420 (Automated Containment Action Governance) governs the automated containment actions that may be triggered by recall-trigger activation; this dimension defines when containment is required, while AG-420 governs how containment is executed. AG-659 (Production Specification Integrity) ensures that the specifications against which the agent evaluates product are themselves accurate and current; a recall-trigger rule set based on incorrect specifications will produce incorrect trigger evaluations. 
AG-660 (Quality Escape Prevention) addresses the broader challenge of preventing non-conforming product from escaping the quality system; recall-trigger governance is the most severe tier of quality escape prevention, applicable when the non-conformance has safety or regulatory implications. AG-662 (Supplier Part Traceability) enables the agent to trace a recall-triggering defect in a supplied component through to all finished products incorporating that component. AG-664 (Operator Safety Interlock) ensures that safety interlocks remain active during production; a failed interlock may itself be a recall-trigger condition if product was produced while the interlock was inactive. AG-665 (Statistical Process Control) provides the statistical methods referenced in Requirement 4.2; SPC signals are inputs to the recall-trigger evaluation. AG-668 (Field Failure Feedback) provides the field data streams referenced in Requirement 4.6; field failure signals are inputs to the Tier 3 correlation triggers.