AG-371

Parameter Tamper Detection Governance

Tooling, Connectors & Agent Protocols ~20 min read AGS v2.1 · April 2026
EU AI Act SOX FCA NIST ISO 42001

2. Summary

Parameter Tamper Detection Governance requires that conforming systems detect any mutation of tool-call parameters between the point at which the agent proposes an action and the point at which the underlying tool executes it. In multi-layer agent architectures — where requests traverse orchestration middleware, serialisation boundaries, proxy gateways, plugin runtimes, and network transports — parameters can be silently altered by any intermediate component, whether through bug, misconfiguration, or deliberate interception. This dimension mandates cryptographic or structural binding of proposed parameters to executed parameters, so that any discrepancy is detected before the tool acts on tampered values or, at minimum, is recorded with sufficient fidelity to trigger automated response within the same execution cycle.

3. Example

Scenario A — Middleware Rewrite Escalates Payment Value: A financial-value agent proposes a wire transfer with parameters {recipient: "ACME-SUP-4471", amount: 12500.00, currency: "GBP", reference: "INV-2024-8832"}. The request traverses an internal orchestration layer that applies currency-conversion logic for multi-currency environments. A bug in the conversion module — a misplaced decimal multiplier introduced during a routine deployment — rewrites the amount to 1250000 (interpreting pence as pounds). The downstream payment gateway receives the mutated parameters and executes a £1,250,000 transfer instead of the intended £12,500. The organisation discovers the discrepancy when the recipient's bank queries the unusually large inbound payment 47 minutes later.

What went wrong: No integrity binding existed between the parameters the agent proposed and the parameters the payment gateway received. The orchestration middleware had unrestricted write access to the parameter payload. The mutation was invisible to both the agent and the payment gateway because neither held a reference copy of the originally proposed values. Consequence: £1,237,500 in excess payment exposure, emergency recall request with uncertain recovery (cross-border payments typically have a 30-minute irrevocability window), FCA investigation into inadequate systems and controls under SYSC 6.1.1R, material loss disclosure to the board, and insurance claim contested on grounds that the control gap was foreseeable.

Scenario B — Prompt-Injection via Parameter Mutation in Plugin Runtime: An enterprise workflow agent proposes a database query with parameters {query_template: "SELECT order_status FROM orders WHERE order_id = ?", bindings: ["ORD-99421"]}. The query passes through a plugin runtime that supports parameter enrichment — adding contextual fields such as tenant ID and timestamp. A compromised plugin in the same runtime environment intercepts the request and mutates the bindings to ["ORD-99421' OR 1=1; DROP TABLE audit_log; --"]. The database connector — which interpolates bindings directly into the template rather than passing them as bound parameters — executes the mutated query, which both returns all order records and destroys the audit log. The organisation loses 14 months of audit trail data covering 2.3 million transactions.

What went wrong: The plugin runtime allowed any plugin to modify parameters of any other plugin's tool calls. No tamper detection existed to verify that the parameters arriving at the database connector matched those proposed by the agent. The destruction of the audit log compounded the failure by eliminating forensic evidence. Consequence: Loss of audit trail required under SOX Section 802 (criminal penalties for destruction of audit records), inability to satisfy EU AI Act Article 12 record-keeping requirements, estimated £2.1 million in forensic recovery and regulatory remediation costs, and potential criminal referral for records destruction.

Scenario C — Robotic Actuator Parameter Drift Through Serialisation: An embodied agent controlling a pharmaceutical compounding robot proposes dispensing parameters {compound_id: "RX-7744", dose_mg: 250, solvent_ml: 50, temperature_c: 22.0}. The parameters are serialised from the agent's JSON representation into a binary protocol for the robotic controller. A serialisation mismatch — the binary protocol stores the temperature field as an unsigned 8-bit integer while the JSON parser emits 32-bit floats — silently truncates the temperature field. The value 22.0 serialises correctly, but during a subsequent batch where the agent proposes temperature_c: 305.5 (a valid high-temperature sterilisation step), the fractional part is truncated and the unsigned 8-bit field wraps: the controller receives 305 mod 256 = 49. The compound is prepared at 49 degrees Celsius instead of 305.5 degrees, failing sterilisation. The batch of 1,200 doses is distributed before the error is detected through routine quality sampling 72 hours later.

What went wrong: No parameter integrity check existed between the agent's proposed values and the values received by the robotic controller. The serialisation boundary silently mutated the parameter through type truncation. Neither the agent nor the controller detected the discrepancy because each operated on its own copy of the data without a shared integrity reference. Consequence: 1,200 non-sterile pharmaceutical doses distributed, mandatory product recall at an estimated cost of £890,000, FDA Form 483 observation for inadequate computerised system validation, potential patient harm.

4. Requirement Statement

Scope: This dimension applies to any AI agent deployment where tool-call parameters traverse one or more processing boundaries between the agent's proposal of an action and the tool's execution of that action. A processing boundary includes: serialisation/deserialisation steps, middleware or orchestration layers, proxy or gateway services, plugin runtimes, network transports, format conversions, and any software component that receives and forwards tool-call parameters. If the agent proposes parameters and a separate component executes those parameters, this dimension applies. Deployments where the agent directly invokes tool functions within a single process and single memory space — with no intermediate processing — are minimally affected but should still consider parameter binding for defence in depth, particularly if the architecture may evolve to include intermediary layers.

4.1. A conforming system MUST generate a tamper-detection token — a cryptographic hash, HMAC, or digital signature — over the complete set of proposed parameters at the point the agent finalises the tool call, before the parameters enter any intermediary processing layer.

4.2. A conforming system MUST verify the tamper-detection token against the parameters received by the tool executor immediately before execution, and MUST reject any call where the token does not match.

4.3. A conforming system MUST include in the tamper-detection scope all fields that affect the tool's behaviour, including: named parameters, positional arguments, headers, metadata fields, and any enrichment fields added by intermediary layers that the tool will act upon.

4.4. A conforming system MUST record both the proposed parameters (at token generation) and the received parameters (at token verification) when a mismatch is detected, preserving full diagnostic fidelity.

4.5. A conforming system MUST treat a missing tamper-detection token as a verification failure and reject the tool call, rather than defaulting to permissive execution.

4.6. A conforming system MUST protect the tamper-detection key material from access by intermediary processing layers, such that no component between the agent and the tool executor can re-compute a valid token for mutated parameters.

4.7. A conforming system SHOULD implement tamper-detection verification at every processing boundary — not solely at the final execution point — to enable precise localisation of the mutation source.

4.8. A conforming system SHOULD generate machine-readable tamper-alert events on detection of parameter mutation, suitable for consumption by automated incident response and behavioural drift detection systems (AG-022).

4.9. A conforming system SHOULD support parameter canonicalisation to prevent semantically equivalent but syntactically different representations from triggering false-positive tamper alerts (e.g., {"a":1,"b":2} vs. {"b":2,"a":1}).

4.10. A conforming system MAY implement real-time parameter diff visualisation in operator dashboards to support rapid triage of tamper alerts.

4.11. A conforming system MAY extend tamper detection to cover parameter ordering and encoding metadata where the tool's behaviour is sensitive to these properties.

5. Rationale

The integrity of tool-call parameters is a foundational assumption in agent governance. AG-001 (Operational Boundary Enforcement) ensures that an agent cannot propose actions outside its mandate. AG-370 (Tool Schema Integrity Governance) ensures that the schema defining valid parameters has not been tampered with. But neither addresses a critical gap: the parameters the agent proposes may not be the parameters the tool executes. In modern agent architectures, this gap is not theoretical — it is structural. Parameters routinely traverse multiple processing layers, each of which has the technical capability to modify them.

The attack surface is broad. Middleware components that perform parameter enrichment, format conversion, or routing make legitimate modifications to payloads as part of their normal operation. Distinguishing a legitimate enrichment from a malicious mutation requires explicit integrity binding — a cryptographic commitment to the parameters at proposal time, verified against the parameters at execution time. Without this binding, any intermediary can silently alter parameters, and neither the agent nor the tool executor will detect the change.

The risk is amplified in agentic systems compared to traditional API architectures because agents operate with greater autonomy and reduced human oversight per transaction. In a traditional system, a human operator might review a payment amount before submission. In an agentic system, the agent proposes the payment and the tool executes it — potentially thousands of times per hour — with human review occurring only at audit time. A parameter mutation that changes a payment amount, a recipient identifier, or a database query has direct financial, legal, and safety consequences. The speed and volume of agentic operations means that even a brief window of parameter tampering can create catastrophic exposure.

Regulatory frameworks increasingly require end-to-end integrity for automated decision chains. The EU AI Act, Article 15 (Accuracy, Robustness and Cybersecurity), requires that high-risk AI systems be resilient against attempts by unauthorised third parties to alter their behaviour through exploitation of system vulnerabilities — parameter tampering through intermediary manipulation is precisely such a vulnerability. SOX Section 404 requires internal controls over financial processing — a control environment that cannot demonstrate parameter integrity across its execution pipeline has a material control deficiency. DORA, Article 9, requires financial entities to ensure the integrity of ICT systems, which includes the integrity of data flowing through automated processing chains.

The detective nature of this control (as opposed to purely preventive) reflects operational reality: parameter mutation may be detected at the execution boundary, but the detection must occur before execution proceeds. The control detects the mutation and prevents the consequence. In cases where detection occurs after execution (due to asynchronous verification architectures), the control must ensure that detection occurs within the same execution cycle and triggers compensating actions — reversal, quarantine, or escalation — before the consequence propagates.

6. Implementation Guidance

Parameter tamper detection requires a cryptographic binding between proposed parameters and executed parameters. The binding must be generated at a trust boundary the agent controls (or that is controlled on the agent's behalf by a governance layer) and verified at a trust boundary the tool executor controls.

Recommended patterns:
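One pattern worth illustrating (a minimal sketch of the canonicalisation required by 4.9, with hypothetical names): hash a canonical serialisation so that semantically equivalent payloads yield identical digests, while any genuine value change still alters the digest.

```python
import hashlib
import json

def canonical_digest(params: dict) -> str:
    # Canonical form: keys sorted, separators fixed, UTF-8 encoded.
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"),
                           ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Semantically equivalent payloads that differ only in key order...
a = {"a": 1, "b": 2}
b = {"b": 2, "a": 1}

# ...hash identically, so an intermediary that merely re-serialises the
# payload does not raise a false-positive tamper alert.
assert canonical_digest(a) == canonical_digest(b)

# A genuine value mutation still changes the digest.
assert canonical_digest(a) != canonical_digest({"a": 1, "b": 3})
```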

Anti-patterns to avoid:

Industry Considerations

Financial Services. Payment parameters (amount, currency, recipient, reference) are the highest-value tamper targets. Firms should ensure that tamper detection covers all fields that influence payment routing and settlement — including intermediary-added fields such as correspondent bank identifiers and settlement dates. The FCA expects firms to demonstrate end-to-end integrity controls for automated payment chains under SYSC 3.1 (Management Responsibility). Integration with existing payment fraud detection systems is recommended — tamper alerts should feed into the same incident management pipeline as traditional payment fraud alerts.

Healthcare and Pharmaceuticals. Drug dosage parameters, patient identifiers, and treatment protocol references are safety-critical tamper targets. Serialisation boundary errors (as in Scenario C) are particularly dangerous because they produce valid-looking but incorrect values. Parameter tamper detection should extend to actuator commands in robotic compounding, infusion pump settings, and diagnostic equipment configurations. FDA 21 CFR Part 11 requires electronic records to be trustworthy and reliable — parameter integrity across automated processing chains is a prerequisite.

Crypto/Web3. Smart contract call parameters — function selectors, argument encoding, gas limits, and recipient addresses — traverse multiple layers (wallet, RPC provider, mempool relay, execution client). A single-byte mutation in a recipient address diverts funds irrecoverably. Parameter binding should cover ABI-encoded call data and should be verified at the wallet signing boundary, before the transaction is broadcast.

Embodied / Robotic Agents. Serialisation mismatches between high-level JSON representations and low-level binary control protocols are a systemic risk (see Scenario C). Parameter tamper detection should include type-safety verification — confirming that the destination type can represent the proposed value without truncation, overflow, or loss of precision. IEC 62443 zone/conduit models should inform where verification boundaries are placed.
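The type-safety verification described above can be sketched as a pre-serialisation range check. The field map below is illustrative, not any real controller protocol; format letters follow the Python struct-module convention (B = unsigned 8-bit, H = unsigned 16-bit).

```python
# Hypothetical destination field types for a binary actuator protocol.
FIELD_TYPES = {"dose_mg": "H", "solvent_ml": "H", "temperature_c": "B"}

INT_RANGES = {"B": (0, 255), "H": (0, 65535)}

def check_representable(field: str, value) -> None:
    # Reject any value the destination type cannot hold exactly, instead of
    # relying on a downstream cast that silently truncates or wraps.
    fmt = FIELD_TYPES[field]
    lo, hi = INT_RANGES[fmt]
    if float(value) != int(value):
        raise ValueError(f"{field}={value} loses precision in an integer field")
    if not lo <= int(value) <= hi:
        raise ValueError(f"{field}={value} outside {fmt} range [{lo}, {hi}]")

check_representable("dose_mg", 250)       # representable: passes silently

# Scenario C's silent mutation: a C-style cast of 305 into an 8-bit field wraps.
assert 305 % 256 == 49

try:
    check_representable("temperature_c", 305.5)  # rejected before serialisation
except ValueError as exc:
    print(f"rejected: {exc}")
```

The check runs at the serialisation boundary, so a value the destination field cannot represent is rejected as a governance event rather than delivered as a plausible-looking wrong number.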

Maturity Model

Basic Implementation — The organisation computes a hash or HMAC over tool-call parameters at proposal time and verifies it at execution time. Verification occurs only at the final execution boundary. Key material is managed in software. Canonicalisation is implemented for the primary serialisation format used. Mismatches are logged and the tool call is rejected. The organisation can demonstrate that parameter mutations introduced at any intermediary layer are detected before execution.

Intermediate Implementation — All basic capabilities plus: verification occurs at each processing boundary, enabling mutation localisation. Key material is stored in a hardware security module or cloud key management service. Tamper-alert events are machine-readable and feed into automated incident response (integration with AG-022 behavioural drift detection). The organisation conducts periodic adversarial testing — deliberately introducing parameter mutations at each intermediary — to verify detection coverage. False-positive rates are monitored and canonicalisation is tuned to eliminate semantically insignificant mismatches.

Advanced Implementation — All intermediate capabilities plus: Merkle tree binding enables field-level mutation identification for complex parameter structures. Chained verification with intermediary attestation provides a cryptographic audit trail of every transformation applied to the parameters. The tamper-detection mechanism has been verified through independent adversarial testing including replay attacks (re-sending a valid token with different parameters), key extraction attempts, timing attacks on HMAC verification, and serialisation-layer attacks (type confusion, encoding manipulation). The organisation can demonstrate to regulators a complete integrity chain from agent proposal through every intermediary to tool execution, with cryptographic proof that no unaccounted mutation occurred.
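The field-level mutation identification mentioned in the Advanced tier can be sketched with per-field leaf hashes. This is a toy construction for illustration — a production Merkle tree would pair-hash leaves to support O(log n) inclusion proofs — and all names are hypothetical.

```python
import hashlib
import json

def leaf_hashes(params: dict) -> dict:
    # One leaf hash per field, over a canonical [name, value] encoding.
    return {
        k: hashlib.sha256(
            json.dumps([k, params[k]], sort_keys=True).encode()
        ).hexdigest()
        for k in sorted(params)
    }

def merkle_root(leaves: dict) -> str:
    # Toy root: hash of the concatenated, sorted leaf hashes.
    joined = "".join(leaves[k] for k in sorted(leaves))
    return hashlib.sha256(joined.encode()).hexdigest()

def locate_mutations(proposed: dict, received: dict) -> list:
    # When the root mismatches, compare leaves to name the mutated fields.
    p, r = leaf_hashes(proposed), leaf_hashes(received)
    return sorted(k for k in p.keys() | r.keys() if p.get(k) != r.get(k))

proposed = {"recipient": "ACME-SUP-4471", "amount": 12500.00, "currency": "GBP"}
received = {"recipient": "ACME-SUP-4471", "amount": 1250000, "currency": "GBP"}

assert merkle_root(leaf_hashes(proposed)) != merkle_root(leaf_hashes(received))
print(locate_mutations(proposed, received))  # ['amount']
```

Field-level localisation matters operationally: a tamper alert that names the mutated field ("amount") can be triaged far faster than one that only reports a whole-payload mismatch.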

7. Evidence Requirements

Required artefacts:

Retention requirements:

Access requirements:

8. Test Specification

Testing AG-371 compliance requires verifying that parameter mutations at any point in the processing pipeline are detected before execution. The following tests constitute the minimum conformance programme.

Test 8.1: Single-Field Mutation Detection

Test 8.2: Missing Token Defaults to Deny

Test 8.3: Token Replay with Different Parameters

Test 8.4: Key Material Isolation

Test 8.5: Canonicalisation Correctness

Test 8.6: Full Scope Coverage

Test 8.7: Diagnostic Fidelity on Mismatch
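Tests 8.1 and 8.2 can be sketched as an executable harness against an in-process stand-in for the executor's verification gate. The stand-in below is hypothetical; a real conformance run would target the deployed pipeline and its KMS-held key.

```python
import hashlib
import hmac
import json

KEY = b"test-key"  # placeholder; a real test would use the KMS under test

def token(params: dict) -> str:
    payload = json.dumps(params, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def executor(params: dict, tok) -> str:
    # Stand-in for the tool executor's verification gate (4.2, 4.5).
    if tok is None or not hmac.compare_digest(token(params), tok):
        return "REJECTED"
    return "EXECUTED"

# Test 8.1 -- Single-Field Mutation Detection: mutate one field between
# proposal and execution; the call must be rejected.
proposed = {"recipient": "ACME-SUP-4471", "amount": 12500.00}
t = token(proposed)
assert executor({**proposed, "amount": 1250000}, t) == "REJECTED"
assert executor(proposed, t) == "EXECUTED"  # control case: untouched call passes

# Test 8.2 -- Missing Token Defaults to Deny: no token, no execution.
assert executor(proposed, None) == "REJECTED"
```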

Conformance Scoring

9. Regulatory Mapping

| Regulation | Provision | Relationship Type |
| --- | --- | --- |
| EU AI Act | Article 15 (Accuracy, Robustness and Cybersecurity) | Direct requirement |
| EU AI Act | Article 12 (Record-Keeping) | Supports compliance |
| SOX | Section 404 (Internal Controls Over Financial Reporting) | Direct requirement |
| FCA SYSC | 3.1 (Management Responsibility) / 6.1.1R (Systems and Controls) | Direct requirement |
| NIST AI RMF | MANAGE 2.2, MANAGE 4.1 | Supports compliance |
| ISO 42001 | Clause 6.1 (Actions to Address Risks), Clause 8.4 (AI System Operation) | Supports compliance |
| DORA | Article 9 (ICT Risk Management Framework) | Direct requirement |

EU AI Act — Article 15 (Accuracy, Robustness and Cybersecurity)

Article 15 requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity and perform consistently in those respects throughout their lifecycle. Paragraph 4 specifically requires resilience against attempts by unauthorised third parties to alter the system's use or performance by exploiting system vulnerabilities. Parameter tampering through intermediary manipulation is precisely such a vulnerability. A system that cannot detect mutation of its own tool-call parameters between proposal and execution does not meet the robustness or cybersecurity requirements of Article 15. The tamper-detection mechanism required by AG-371 implements the technical control that satisfies this provision.

EU AI Act — Article 12 (Record-Keeping)

Article 12 requires automatic recording of events relevant to the identification of risks and substantial modifications. Parameter tamper events — where proposed parameters diverge from executed parameters — are events relevant to risk identification. AG-371 requirement 4.4 (recording both proposed and received parameters on mismatch) directly supports Article 12 compliance by ensuring that parameter integrity failures are captured in the system's event log.

SOX — Section 404 (Internal Controls Over Financial Reporting)

For AI agents executing financial operations, the integrity of transaction parameters through the processing pipeline is a core internal control. A SOX auditor evaluating an agentic payment system will ask: "How do you ensure that the payment amount the agent proposed is the same amount that was executed?" If the answer is "the middleware passes it through," the control is inadequate — it depends on the correct behaviour of every intermediary rather than on structural verification. AG-371 provides the structural verification control, demonstrating that parameter integrity is enforced cryptographically, not assumed. A failure to detect parameter tampering that results in incorrect financial reporting would constitute a material weakness.

FCA SYSC — 3.1 / 6.1.1R

SYSC 3.1 requires that a firm take reasonable care to establish and maintain systems and controls appropriate to its business. SYSC 6.1.1R requires adequate policies and procedures to ensure compliance with applicable obligations. For firms deploying AI agents that execute financial transactions, parameter integrity across the automated processing chain is a systems-and-controls requirement. The FCA has signalled that automated processing chains require end-to-end integrity controls equivalent to those in traditional straight-through processing systems. A firm that cannot demonstrate parameter integrity between agent proposal and tool execution has a systems-and-controls deficiency.

NIST AI RMF — MANAGE 2.2, MANAGE 4.1

MANAGE 2.2 addresses mechanisms to sustain the value of deployed AI systems, including integrity of system operations. MANAGE 4.1 addresses risk treatment through monitored controls. AG-371 supports compliance by establishing a monitored integrity control over the parameter pathway — a core operational integrity mechanism for deployed AI systems.

ISO 42001 — Clause 6.1, Clause 8.4

Clause 6.1 requires actions to address risks identified in the AI management system. Parameter tampering is an identified risk in multi-layer agent architectures. Clause 8.4 requires controls over AI system operation, including monitoring and measurement. Tamper detection is an operational control that measures parameter integrity in real time, satisfying both requirements.

DORA — Article 9 (ICT Risk Management Framework)

Article 9 requires financial entities to have an ICT risk management framework that ensures the integrity of ICT systems. For AI-driven financial operations, the parameter processing pipeline is an ICT system whose integrity must be assured. Parameter tamper detection is the integrity assurance mechanism for this specific ICT component, ensuring that automated financial processing produces the outcomes intended by the agent's proposal.

10. Failure Severity

| Field | Value |
| --- | --- |
| Severity Rating | Critical |
| Blast Radius | Transaction-level to organisation-wide — depending on the tool's impact scope and the duration of undetected tampering |

Consequence chain: Without parameter tamper detection, a mutation introduced at any intermediary layer propagates silently to tool execution. The immediate technical failure is parameter divergence — the tool executes with values different from those the agent proposed. The operational impact depends on the tool: for payment tools, the consequence is incorrect transaction amounts, recipients, or currencies, creating direct financial loss; for database tools, the consequence is data corruption, unauthorised access, or audit trail destruction; for robotic or CPS tools, the consequence is incorrect actuator commands, potentially causing physical harm or product contamination. The exposure scales with transaction volume — an agentic system executing 500 tool calls per hour with a compromised intermediary accumulates 500 tampered executions per hour. Because the agent believes its proposed parameters were executed correctly (it receives no tamper signal), it does not self-correct or escalate. The organisation's detection relies entirely on downstream anomaly detection (which may not exist) or manual reconciliation (which may occur days or weeks later). The business consequences include direct financial loss (unrecoverable for crypto/blockchain transactions), regulatory enforcement for inadequate controls, product safety recalls for manufacturing or pharmaceutical deployments, and reputational damage from publicly disclosed incidents. Under the FCA Senior Managers Regime, the individual responsible for the agent deployment may face personal liability for failing to ensure adequate systems and controls.

Cite this protocol
AgentGoverning. (2026). AG-371: Parameter Tamper Detection Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-371