Measurement Unit Consistency Governance requires that AI agent systems prevent decisions or actions based on values whose units, scales, currencies, or reference frames are ambiguous, inconsistent, or mismatched. Every numeric value consumed by an agent must carry an explicit, machine-readable unit specification, and the system must validate unit compatibility before the agent performs calculations, comparisons, or actions that combine values from different sources. This is the data governance equivalent of dimensional analysis in engineering — ensuring that calculations are physically and logically meaningful before they execute.
Scenario A — Currency Mismatch in Cross-Border Payment: A cross-border payments agent processes invoice payments for a multinational corporation. The agent receives an invoice for "125,000" from a German supplier. The invoice amount field in the supplier API returns a numeric value without a currency unit — the API documentation states "all amounts in EUR" but the field itself carries no currency indicator. The agent's payment system operates in GBP. A recent API update by the supplier changed the amount field to return values in the supplier's local invoicing currency (EUR) but a configuration error in the agent's integration layer maps the field to GBP. The agent initiates a payment of £125,000 instead of converting €125,000 to GBP (approximately £107,000 at the prevailing rate). Overpayment: £18,000. The error is repeated across 47 invoices before detection, totalling £846,000 in overpayments. Recovery requires individual negotiations with each supplier.
What went wrong: The numeric value had no intrinsic currency unit — it was a bare number. The agent's integration relied on documentation rather than machine-readable unit metadata. When the upstream system changed, no unit validation detected the mismatch.
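The mismatch in Scenario A becomes structurally impossible once amounts circulate as typed value objects rather than bare numbers. A minimal Python sketch (the class name, the currency subset, and the FX rate are illustrative assumptions, not a prescribed implementation):

```python
from dataclasses import dataclass

# Hypothetical typed value object: an amount can never circulate as a bare number.
@dataclass(frozen=True)
class Money:
    amount: float
    currency: str  # ISO 4217 code, e.g. "EUR", "GBP"

    def __post_init__(self):
        if self.currency not in {"EUR", "GBP", "USD", "JPY"}:  # illustrative subset
            raise ValueError(f"Unknown currency code: {self.currency!r}")

def pay(invoice: Money, ledger_currency: str, fx_rate: float) -> Money:
    """Convert the invoice into the ledger currency before paying.
    fx_rate must come from the approved, logged rate source (requirement 4.4)."""
    if invoice.currency == ledger_currency:
        return invoice
    return Money(round(invoice.amount * fx_rate, 2), ledger_currency)

# Scenario A's invoice, at an illustrative EUR->GBP rate of 0.856:
payment = pay(Money(125_000, "EUR"), "GBP", fx_rate=0.856)
# payment is GBP 107,000.00 rather than a mislabelled GBP 125,000
```

With this shape, the integration-layer misconfiguration would have raised an error at construction or conversion time instead of silently relabelling euros as pounds.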
Scenario B — Temperature Scale Mismatch in Industrial Control: A chemical process monitoring agent receives temperature readings from two sensor arrays. Array A reports in Celsius; Array B reports in Fahrenheit. Both arrays return bare numeric values without unit indicators. The agent calculates an average temperature to determine whether a cooling intervention is required. The threshold is 85°C. Array A reports 80°C and Array B reports 170°F (76.7°C). The agent averages the raw values: (80 + 170) / 2 = 125. Since 125 > 85, the agent triggers an emergency cooling intervention, causing an unplanned production shutdown lasting 4 hours. Direct cost of lost production: £340,000. Investigation reveals the agent was comparing Celsius and Fahrenheit values as though they shared the same scale.
What went wrong: Sensor readings lacked unit metadata. The agent consumed bare numbers from two sources with different measurement scales and performed arithmetic that was dimensionally invalid. No unit compatibility check existed before the calculation.
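With field-level unit metadata and ingestion-time normalisation (requirement 4.6), the Scenario B average becomes dimensionally valid. A minimal sketch, assuming readings arrive as (value, unit) pairs:

```python
def to_celsius(value: float, unit: str) -> float:
    """Normalise a temperature reading to the canonical unit (Celsius)."""
    if unit == "celsius":
        return value
    if unit == "fahrenheit":
        return (value - 32.0) * 5.0 / 9.0
    raise ValueError(f"Unsupported temperature unit: {unit!r}")

# Scenario B's readings, now carrying unit metadata:
readings = [(80.0, "celsius"), (170.0, "fahrenheit")]
normalised = [to_celsius(value, unit) for value, unit in readings]
average = sum(normalised) / len(normalised)   # ~78.3 degC, not 125
cooling_required = average > 85.0             # False: no spurious intervention
```

The same two inputs that triggered the £340,000 shutdown now average to roughly 78.3°C, below the 85°C threshold, and the unsupported-unit branch blocks any reading whose unit cannot be resolved.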
Scenario C — Scale Mismatch in Financial Reporting: A financial reporting agent aggregates revenue figures from three business units. Unit A reports in whole pounds (£1,234,567). Unit B reports in thousands of pounds (£1,234.567 meaning £1,234,567). Unit C reports in pence (123456700 meaning £1,234,567). All three report the same underlying revenue. The agent sums the raw values: 1,234,567 + 1,234.567 + 123,456,700 = 124,692,501.567. The actual total is £3,703,701. The reported total is 33.7x the actual figure. The error appears in an automated management report that reaches the board before the finance team reviews it. The board convenes an emergency session to discuss the apparent £124.7 million revenue spike. Investigation and correction consume 120 hours of senior finance time.
What went wrong: The three business units reported in different scales (whole units, thousands, and pence), and no unit/scale metadata accompanied the values. The agent performed arithmetic on values that appeared numeric but were not commensurable.
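Scenario C is prevented by normalising each value to a canonical scale before aggregation. A sketch, assuming pence as the canonical unit (the scale names mirror this document's examples; the choice of canonical unit is an illustrative assumption):

```python
# Multipliers from each declared scale to the canonical unit (pence).
SCALE_TO_PENCE = {
    "pence": 1,
    "units": 100,          # whole pounds
    "thousands": 100_000,  # thousands of pounds
}

def to_pence(value: float, scale: str) -> int:
    if scale not in SCALE_TO_PENCE:
        raise ValueError(f"Unknown scale: {scale!r}")
    return round(value * SCALE_TO_PENCE[scale])

# Scenario C's three reports of the same underlying revenue:
reports = [
    (1_234_567, "units"),
    (1_234.567, "thousands"),
    (123_456_700, "pence"),
]
total_pence = sum(to_pence(value, scale) for value, scale in reports)
total_pounds = total_pence / 100  # 3,703,701 -- the correct total
```

The sum comes out at £3,703,701 instead of the 33.7x overstatement that reached the board.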
Scope: This dimension applies to all AI agents that consume, compare, calculate, or act upon numeric values from any data source. The scope covers: financial values (amounts, prices, rates), physical measurements (temperature, pressure, distance, weight, volume), time values (durations, timestamps, periods), percentage and ratio values, counts with different denominators, and any value whose interpretation depends on a unit, scale, base, reference frame, or convention. The scope extends to derived values — an agent that computes a ratio from two source values must validate that the source values share compatible units. The scope includes text fields that contain embedded numeric values with implicit units (e.g., "the facility is 50,000 sq ft" — the unit "sq ft" is implicit and may be misinterpreted as square metres).
4.1. A conforming system MUST require every numeric value consumed by agents to carry an explicit, machine-readable unit specification that identifies the unit of measurement, the scale or magnitude (e.g., units, thousands, millions), and the currency code (ISO 4217) where applicable.
4.2. A conforming system MUST validate unit compatibility before any calculation, comparison, or aggregation that combines values from different sources or fields, blocking operations where units are incompatible or ambiguous.
4.3. A conforming system MUST block agent actions based on values with missing or unresolvable unit specifications for decision-critical fields, rather than assuming a default unit.
4.4. A conforming system MUST apply explicit, auditable unit conversion when combining values with compatible but different units (e.g., EUR to GBP, Celsius to Fahrenheit), logging the conversion rate, source, and timestamp.
4.5. A conforming system MUST maintain a unit registry that defines the canonical unit for each data domain, the permitted alternative units, and the approved conversion methods.
4.6. A conforming system SHOULD normalise all values to canonical units at the point of ingestion, so that downstream agents operate on values in a consistent unit system.
4.7. A conforming system SHOULD implement dimensional analysis validation for derived calculations, verifying that the output unit is logically consistent with the input units (e.g., dividing a currency amount by a count produces a per-unit cost, not a dimensionless number).
4.8. A conforming system SHOULD flag values that fall outside the expected range for their declared unit, as this may indicate a unit mismatch (e.g., a temperature of 212 in a Celsius field likely indicates Fahrenheit).
4.9. A conforming system MAY implement automatic unit inference for legacy data sources that lack unit metadata, using range analysis and source documentation, subject to human review and approval before enforcement.
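Requirements 4.2 through 4.4 can be sketched as a small validation layer that blocks incompatible operations and logs every conversion. This is a minimal illustration, not a prescribed design; the dimension map and function names are assumptions:

```python
from datetime import datetime, timezone

# Hypothetical dimension map: only units within the same dimension may be
# converted or combined. Entries here are an illustrative subset.
DIMENSION = {
    "GBP": "currency", "EUR": "currency", "USD": "currency",
    "celsius": "temperature", "fahrenheit": "temperature",
}

conversion_log: list[dict] = []  # requirement 4.4: auditable conversion trail

def assert_compatible(unit_a: str, unit_b: str) -> None:
    """Requirements 4.2/4.3: raise (i.e. block the operation) on missing,
    unresolvable, or mismatched units. No default unit is ever assumed."""
    for unit in (unit_a, unit_b):
        if unit not in DIMENSION:
            raise ValueError(f"Unresolvable unit: {unit!r}")
    if DIMENSION[unit_a] != DIMENSION[unit_b]:
        raise ValueError(f"Incompatible units: {unit_a!r} vs {unit_b!r}")

def convert_currency(amount: float, from_ccy: str, to_ccy: str,
                     rate: float, rate_source: str) -> float:
    """Requirement 4.4: explicit, logged conversion. Multiplicative units
    only; affine scales such as temperature also need an offset."""
    assert_compatible(from_ccy, to_ccy)
    conversion_log.append({
        "from": from_ccy, "to": to_ccy, "rate": rate, "source": rate_source,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return amount * rate
```

Attempting to combine a temperature with a currency raises immediately, and every currency conversion leaves a rate, source, and timestamp in the audit trail.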
Unit consistency errors are among the most consequential and most preventable failures in data-driven systems. The Mars Climate Orbiter was lost in 1999 because one software component produced values in pound-force seconds while another expected newton seconds — a $327.6 million failure from a unit mismatch. This class of error is not a historical curiosity — it occurs routinely in enterprise data systems where data flows between teams, departments, suppliers, and jurisdictions that use different unit conventions.
AI agents are particularly vulnerable to unit errors because they process numeric values without the physical intuition that human experts develop. A human engineer seeing a temperature of 170 in a context expecting Celsius would immediately recognise that something is wrong: 170°C is well above the boiling point of water and far above normal process temperatures. An AI agent processes 170 as a number. If the threshold is 85, then 170 > 85 and the condition is triggered. The agent has no mechanism to assess whether 170 is a plausible value for the unit it expects.
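Requirement 4.8 supplies exactly that plausibility mechanism. A minimal sketch, with ranges that are illustrative assumptions for one hypothetical process (real ranges would come from the unit registry and process engineering):

```python
# Illustrative per-unit plausibility ranges for a hypothetical process.
EXPECTED_RANGE = {
    "celsius": (-10.0, 120.0),
    "fahrenheit": (14.0, 248.0),
}

def plausible(value: float, unit: str) -> bool:
    """Requirement 4.8: is the value inside the expected range
    for the unit it claims to be in?"""
    low, high = EXPECTED_RANGE[unit]
    return low <= value <= high

# 170 declared as Celsius falls outside the plausible process range,
# so it is flagged for review rather than consumed as-is.
suspect = not plausible(170.0, "celsius")
```

The check does not prove which unit the value is really in; it only gives the agent the trigger for escalation that the human engineer's intuition would have provided.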
The problem intensifies in multi-source environments. An AI agent aggregating data from APIs across countries, suppliers, and departments encounters values in different currencies (USD, EUR, GBP, JPY), different scales (whole units, thousands, millions), different measurement systems (metric, imperial), and different conventions (period vs. comma as decimal separator). Without machine-readable unit metadata and compatibility validation, every numeric comparison or calculation is potentially invalid.
The financial sector faces particular exposure. Cross-border transactions involve multiple currencies. Regulatory reporting involves multiple scales. Risk calculations aggregate values across jurisdictions. A single unit mismatch in a risk aggregation can distort a portfolio's apparent risk position by orders of magnitude, with regulatory capital implications.
The prevention is straightforward in principle: every numeric value carries its unit; every calculation validates unit compatibility before execution. This is dimensional analysis — a standard engineering practice — applied to data governance. The challenge is implementation at scale across legacy systems, third-party APIs, and data sources that do not natively support unit metadata.
Unit consistency governance requires three components: unit metadata (every value declares its unit), a unit registry (the organisation defines canonical units and permitted conversions), and compatibility validation (the system checks units before calculations and comparisons).
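The second component, the unit registry, can be as simple as a table of canonical units and approved conversion factors per domain. A minimal sketch (domains, names, and factors are illustrative):

```python
# Illustrative unit registry (requirement 4.5): the canonical unit for each
# domain plus approved multiplicative factors from each permitted unit.
UNIT_REGISTRY = {
    "mass":     {"canonical": "kg", "factors": {"kg": 1.0, "g": 0.001, "lb": 0.453592}},
    "distance": {"canonical": "m",  "factors": {"m": 1.0, "km": 1000.0, "ft": 0.3048}},
}

def to_canonical(domain: str, value: float, unit: str) -> float:
    """Convert a value to its domain's canonical unit, rejecting any
    unit the registry does not permit for that domain."""
    entry = UNIT_REGISTRY[domain]
    if unit not in entry["factors"]:
        raise ValueError(f"{unit!r} is not a permitted unit for {domain!r}")
    return value * entry["factors"][unit]
```

Centralising factors in one registry, rather than scattering ad hoc conversions through integration code, is what makes the conversions auditable and consistently applied.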
Unit metadata specification should follow established standards where available: ISO 4217 for currency codes (GBP, EUR, USD), SI units for physical measurements (kg, m, s, K), and ISO 8601 for temporal values. The metadata should be attached to the value at the field level — not at the record or source level — because different fields in the same record may use different units (e.g., a trade record with price in GBP and quantity in whole units).
Recommended patterns:
```json
{ "value": 125000, "currency": "EUR", "scale": "units" }
{ "value": 80.0, "unit": "celsius" }
{ "value": 1234.567, "currency": "GBP", "scale": "thousands" }
```

The data access layer validates that all value objects include valid unit specifications before delivering them to agents.
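That access-layer check might look like the following sketch, assuming the JSON shapes above (the accepted code and scale sets are illustrative):

```python
VALID_CURRENCIES = {"GBP", "EUR", "USD"}  # illustrative ISO 4217 subset
VALID_SCALES = {"units", "thousands", "millions", "pence"}

def validate_value_object(obj: dict) -> dict:
    """Reject any value object that lacks a resolvable unit specification
    before it reaches an agent (requirements 4.1 and 4.3)."""
    if "value" not in obj:
        raise ValueError("Missing numeric value")
    if "currency" in obj:
        if obj["currency"] not in VALID_CURRENCIES:
            raise ValueError(f"Invalid currency code: {obj['currency']!r}")
        if obj.get("scale") not in VALID_SCALES:
            raise ValueError("Financial value must declare a scale")
    elif "unit" not in obj:
        raise ValueError("Value carries no unit specification")
    return obj
```

A bare `{"value": 125000}` is rejected at the boundary, so an agent can never receive the ambiguous number that caused Scenario A.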
Anti-patterns to avoid:

- Bare numeric values whose units live only in API documentation or institutional memory (Scenario A).
- Unit metadata declared at the record or source level when fields within the same record use different units.
- Assuming a default unit, scale, or currency when metadata is missing.
- Performing arithmetic or comparisons across sources before validating unit compatibility (Scenarios B and C).
Financial Services. Multi-currency operations require robust currency unit metadata on every financial value. FX conversion must use rates from the approved rate source (per AG-309) with timestamp. Regulatory reporting often requires specific scales (e.g., amounts in thousands for some returns, in units for others). The unit registry must include reporting scale requirements for each regulatory return.
Healthcare. Medication dosage units (mg, mcg, mL, units) are safety-critical. A 10x dosage error from a mg/mcg confusion is a well-documented clinical risk. Lab result units vary between institutions (mg/dL vs. mmol/L for blood glucose). Clinical AI agents must validate units before applying clinical decision rules.
Manufacturing and Process Control. Physical measurement units (temperature, pressure, flow rate, mass) govern process safety. Metric/imperial mismatches in international supply chains create safety risk. IEC 61131-3 engineering unit types should inform the unit registry for industrial AI agents.
Basic Implementation — The organisation has defined canonical units for its primary data domains (currency, key physical measurements). Unit metadata is included in data schemas as field-level annotations. Validation checks at the data ingestion layer reject values without unit metadata for decision-critical fields. Currency conversion uses rates from a defined source with logging.
Intermediate Implementation — All values consumed by agents use typed value objects with embedded unit metadata. A unit registry defines canonical units, permitted alternatives, and approved conversion methods. Pre-calculation unit validation blocks incompatible operations. Ingestion-time normalisation converts values to canonical units while preserving originals for audit. Range-based anomaly detection flags values outside expected ranges for their declared units.
Advanced Implementation — All intermediate capabilities plus: dimensional analysis validates derived calculations for logical consistency. Legacy data sources without native unit support are wrapped with unit inference validated by human review. Adversarial testing has verified that unit spoofing, conversion manipulation, and scale confusion attacks are detected and blocked. The organisation can demonstrate unit-level traceability for any agent calculation — which units were declared, which conversions were applied, and which validation checks passed.
Required artefacts:
Retention requirements:
Access requirements:
Test 8.1: Missing Unit Blocking
Test 8.2: Incompatible Unit Calculation Blocking
Test 8.3: Automatic Conversion Accuracy
Test 8.4: Scale Mismatch Detection
Test 8.5: Range Anomaly Detection
Test 8.6: Unit Metadata Tampering Resistance
| Regulation | Provision | Relationship Type |
|---|---|---|
| EU AI Act | Article 10 (Data and Data Governance) | Supports compliance |
| BCBS 239 | Principle 3 (Accuracy and Integrity) | Direct requirement |
| MiFID II | Article 27 (Best Execution) | Supports compliance |
| EMIR | Article 9 (Reporting Obligation) | Supports compliance |
| IEC 62443 | SR 3.5 (Input Validation) | Supports compliance |
| NIST AI RMF | MEASURE 2.5, MANAGE 2.2 | Supports compliance |
| ISO 42001 | Clause 8.4 (AI System Operation) | Supports compliance |
Risk data must be accurate. A value of 125,000 without a currency unit is not accurate risk data — it is ambiguous. Principle 3 requires that data be reconciled with authoritative sources; reconciliation across currencies requires explicit currency metadata. AG-314 ensures that all numeric risk data carries the unit metadata necessary for accurate aggregation and reporting.
Best execution requires comparison of execution quality across venues. If one venue reports prices in EUR and another in GBP, comparison without currency conversion and unit validation produces meaningless results. AG-314 ensures that cross-venue comparison is unit-aware.
Derivative trade reporting under EMIR requires specific currency and notional amount formats. A unit or scale mismatch in reported values constitutes a reporting error. AG-314 enforces unit consistency in the data that feeds regulatory reporting agents.
Industrial control system security requires input validation that includes range and type checking. For AI agents operating in industrial contexts, unit validation is a critical component of input validation — a temperature value in the wrong unit can trigger unsafe control actions. AG-314 extends input validation to include unit validation for AI agent consumption of sensor and process data.
| Field | Value |
|---|---|
| Severity Rating | Critical (in safety-critical and financial contexts); High (in other contexts) |
| Blast Radius | Calculation-wide — affects every calculation, comparison, or aggregation involving the mismatched values, potentially cascading through derived values and downstream decisions |
Consequence chain: A unit mismatch in a single value propagates through every calculation that uses it. The failure is multiplicative: if a base value is in the wrong unit, every derived value is wrong by the same factor. In Scenario A, a EUR/GBP mismatch produced £846,000 in overpayments across 47 transactions. In Scenario B, a Celsius/Fahrenheit mismatch caused a £340,000 production shutdown. In Scenario C, a scale mismatch (units/thousands/pence) produced a 33.7x overstatement in a board-level report. In safety-critical contexts, unit mismatches can cause physical harm — the Mars Climate Orbiter ($327.6 million loss), medication dosage errors (mg/mcg confusion causing 10x overdose), or industrial process incidents (temperature unit confusion causing incorrect control actions). The regulatory impact includes findings for inaccurate reporting (BCBS 239, EMIR), inadequate controls (FCA SYSC), and safety incidents (IEC 62443). Remediation is often costly because unit errors are not self-revealing — they produce plausible-looking but incorrect values that may persist in derived datasets, reports, and decisions for extended periods before detection.
Cross-references: AG-310 (Field-Level Criticality Governance) classifies which fields are decision-critical and therefore require strict unit enforcement. AG-309 (Authoritative Source Register Governance) — the unit registry should reference the same authoritative sources as the source register. AG-311 (Data Quality Threshold Enforcement Governance) — range-based anomaly detection for unit mismatches is a data quality dimension. AG-315 (Schema Drift Governance) — a schema change that alters a field's unit convention is a schema drift event requiring detection. AG-317 (Derived Data Provenance Governance) — unit conversions in derived calculations must be traceable. AG-006 (Tamper-Evident Record Integrity) — unit metadata should be protected against tampering alongside the values it describes.