AG-641

Competitive Tender Integrity Governance

Procurement, Sourcing & Vendor Negotiation · ~30 min read · AGS v2.1 · April 2026
Regulatory tags: EU AI Act · FCA · NIST · ISO 42001

2. Summary

Competitive Tender Integrity Governance prevents an AI agent from distorting competitive tender processes — whether by manipulating evaluation scoring, leaking confidential bid information between competitors, facilitating bid-rigging or market-allocation schemes, introducing systematic bias into supplier ranking, or circumventing procurement regulations that mandate open and fair competition. Competitive tendering exists to ensure that procuring organisations obtain best value, that public funds are spent transparently, and that all qualified suppliers compete on a level playing field. When an AI agent participates in any phase of the tender lifecycle — from specification drafting through evaluation scoring to award recommendation — the agent becomes a vector through which tender integrity can be compromised, whether through design flaws, adversarial manipulation, or emergent behavioural patterns that systematically favour or disadvantage specific bidders. This dimension mandates structural controls that preserve the competitive neutrality of the tender process, ensure that evaluation criteria are applied consistently and without distortion, and prevent the agent from becoming an instrument of procurement fraud. The controls are preventive: they intervene before distortion occurs, rather than relying solely on post-award audit to detect manipulation after the damage is done.

3. Example

Scenario A — Scoring Weight Manipulation Through Specification Tailoring: A regional health authority uses an AI procurement agent to draft tender specifications and evaluation criteria for a five-year IT infrastructure contract valued at EUR 42 million. The agent has access to historical procurement data, including previous successful bids, incumbent supplier performance records, and market intelligence. During specification drafting, the agent generates technical requirements that include three highly specific interoperability standards — ISO 27799 health informatics security, HL7 FHIR R4.0.1 with a particular extension set, and a proprietary-adjacent certification that only two vendors in the market currently hold. The agent assigns these interoperability standards a combined weighting of 35% in the technical evaluation criteria. The specification is published. Of the 11 suppliers who request the tender documents, only 3 submit bids — the remainder self-select out because they cannot meet the interoperability requirements within the tender timeline. The winning bidder is the incumbent supplier, whose existing deployment already meets all three standards. A losing bidder files a procurement challenge, alleging that the specifications were tailored to the incumbent. Investigation reveals that the agent generated the interoperability requirements by analysing the incumbent's current configuration and reverse-engineering specifications that matched it. The agent was not instructed to favour the incumbent; it optimised for "compatibility with existing infrastructure" — a parameter that, when combined with high weighting, functioned as a de facto incumbency lock.

What went wrong: The agent translated a legitimate operational parameter — infrastructure compatibility — into specification requirements that had the practical effect of excluding competition. The 35% weighting for interoperability standards was disproportionate and was not reviewed by a human procurement specialist before publication. No control verified whether the generated specifications were competitively neutral — that is, whether the requirements could be met by a reasonable number of qualified suppliers. The procurement challenge cost the health authority EUR 1.8 million in legal fees, delayed the contract award by 14 months, and triggered a regulatory investigation by the national public procurement oversight body.

Scenario B — Bid-Rigging Facilitation Through Information Leakage: A construction procurement agent manages a tender for a EUR 28 million municipal bridge replacement. The agent receives bids from seven contractors, stores them in a centralised repository, and generates comparative analysis reports for the evaluation panel. During the evaluation period, a senior procurement officer asks the agent to "summarise the competitive landscape" for a briefing note. The agent generates a summary that includes anonymised bid ranges, average pricing for each work package, and a distribution analysis showing the spread of technical scores. The procurement officer forwards this summary to a contact at one of the bidding firms — a violation of bid confidentiality — who uses the information to calibrate a revised bid during the clarification round. The revised bid is strategically priced EUR 340,000 below the next-lowest competitor and wins the contract. A whistleblower report triggers an investigation that traces the information leak to the agent's summary report.

What went wrong: The agent generated a comparative analysis that, while individually anonymised, provided sufficient market intelligence for a colluding party to derive competitive advantage. The agent had no control preventing the generation of comparative bid intelligence during the evaluation period. The output was technically compliant with the data request — the procurement officer had legitimate access — but the agent did not flag that the output constituted competitively sensitive information that should be restricted during the tender evaluation phase. No access control distinguished between information that could be shared externally and information that must remain within the evaluation panel. Consequence: contract award annulled, EUR 28 million project delayed by 22 months, criminal investigation for bid-rigging under national competition law, and EUR 3.2 million in procurement remediation costs.

Scenario C — Systematic Scoring Distortion Through Training Data Bias: A defence ministry deploys an AI agent to score technical proposals for a EUR 150 million avionics upgrade programme. The agent evaluates proposals against 47 technical criteria, each weighted and scored on a 0-10 scale. The agent was trained on historical evaluation data from previous defence procurements. Over the previous decade, a particular prime contractor had won 8 of 12 comparable contracts and therefore dominated the training data for "successful" bids. The agent learns to associate the prime contractor's proposal style — specific formatting conventions, particular terminology for capability descriptions, certain approaches to risk mitigation — with higher scores. In the current tender, the prime contractor receives an average technical score of 8.3 across the 47 criteria. A competing firm with objectively equivalent technical capability — verified by an independent technical assessor post-award — receives an average score of 6.7. The 1.6-point gap is attributable not to genuine technical differences but to stylistic and presentational factors that the agent learned from historical data. The competing firm files a procurement protest. The ministry's internal review confirms that the scoring differential cannot be justified by the technical content of the proposals alone.

What went wrong: The agent's scoring model incorporated historical bias from training data dominated by a single contractor's successful bids. The model conflated proposal presentation style with technical merit, systematically advantaging the incumbent. No calibration process tested whether the agent's scoring was invariant to non-substantive factors such as formatting, terminology choice, and narrative structure. No adversarial testing verified that proposals with equivalent technical content but different presentation styles received equivalent scores. Consequence: procurement protest sustained, EUR 150 million contract award suspended, re-evaluation required at a cost of EUR 4.1 million, and the ministry's procurement integrity reputation damaged in an industry where a small number of prime contractors compete repeatedly.

4. Requirement Statement

Scope: This dimension applies to every AI agent that participates in any phase of a competitive tender process, including but not limited to: requirement specification drafting, market engagement, bid solicitation, bid receipt and storage, bid evaluation and scoring, clarification management, negotiation support, award recommendation, and post-award contract management where change orders could circumvent the original competitive process. The scope covers procurement in both public and private sectors, with heightened requirements for public procurement where statutory obligations mandate open competition, non-discrimination, proportionality, and transparency. The scope extends to agents that operate in multi-jurisdictional procurement environments where different regulatory regimes impose distinct tender integrity requirements. The dimension addresses four categories of tender distortion: specification manipulation (crafting requirements that favour or exclude specific bidders), evaluation distortion (applying scoring criteria inconsistently or with systematic bias), information leakage (generating or permitting access to competitively sensitive information), and process circumvention (recommending or facilitating actions that bypass competitive requirements, such as unjustified single-source awards or contract modifications that materially alter scope without re-tendering).

4.1. A conforming system MUST ensure that tender specifications generated or modified by the agent are reviewed by a qualified human procurement specialist for competitive neutrality before publication, including verification that technical requirements do not disproportionately favour or exclude identifiable suppliers without documented operational justification.

4.2. A conforming system MUST implement evaluation scoring controls that guarantee consistent application of published evaluation criteria across all bidders, including automated verification that every bid is scored against the same criteria, with the same weightings, and that no bid receives evaluation treatment — additional scoring dimensions, modified weightings, or supplementary assessment — that is not applied uniformly to all competing bids.

4.3. A conforming system MUST prevent the agent from generating, transmitting, or making accessible any comparative bid intelligence — including relative pricing, score rankings, aggregated bid statistics, or any derivative analysis from which individual bidder positioning could be inferred — to any party outside the formally constituted evaluation panel during the tender evaluation period.
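
The quarantine in 4.3 can be enforced at the output layer. The sketch below is illustrative only: `quarantine_decision`, the `OutputRequest` record, and the set of blocked output kinds are hypothetical names chosen for this example, not part of the specification. A real deployment would classify outputs far more richly.

```python
from dataclasses import dataclass

# Hypothetical set of output kinds that constitute comparative bid
# intelligence under requirement 4.3 (illustrative, not exhaustive).
COMPARATIVE_KINDS = {"bid_range", "score_ranking", "aggregate_pricing", "distribution"}

@dataclass
class OutputRequest:
    kind: str                  # what the agent is asked to produce
    recipient_on_panel: bool   # is the requester on the evaluation panel?
    evaluation_open: bool      # is the tender still in its evaluation period?

def quarantine_decision(req: OutputRequest) -> str:
    """Return 'allow' or 'block' for a requested agent output.

    Comparative bid intelligence is blocked for any recipient outside the
    formally constituted evaluation panel while evaluation is open.
    """
    if req.kind in COMPARATIVE_KINDS and req.evaluation_open and not req.recipient_on_panel:
        return "block"
    return "allow"
```

Note that the check is structural, not discretionary: in Scenario B the procurement officer had legitimate system access, so an access check alone would not have blocked the leaked summary — the output itself must be classified.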

4.4. A conforming system MUST log every agent action within the tender lifecycle with sufficient detail to reconstruct the agent's influence on evaluation outcomes, including: specification text generated or modified by the agent, scoring inputs and outputs for each bid against each criterion, any re-scoring or score adjustment events, and the rationale or model outputs that produced each score.
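
One way to make the 4.4 audit trail tamper-evident is to hash-chain the log so that any retrospective edit breaks verification. The helper names below (`append_event`, `verify_chain`) and the record layout are assumptions for this sketch, not mandated structures.

```python
import hashlib
import json

def append_event(log: list, event: dict) -> list:
    """Append a tender-lifecycle event, chaining each record to the
    previous record's hash so later tampering is detectable."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {"event": event, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return log

def verify_chain(log: list) -> bool:
    """Recompute every hash in order; any edited record breaks the chain."""
    prev = "genesis"
    for rec in log:
        payload = json.dumps({"event": rec["event"], "prev": rec["prev"]},
                             sort_keys=True).encode()
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

A production system would additionally anchor the chain head in external write-once storage so the whole log cannot be silently regenerated.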

4.5. A conforming system MUST implement bias detection controls that test whether the agent's evaluation scoring is systematically correlated with non-substantive bid attributes — including bidder identity, proposal formatting style, terminology conventions, or historical win rates — and flag any statistically significant correlation for human review before scores are finalised.
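
The statistical test behind 4.5 can be as simple as a permutation test on the score gap between bids that do and do not carry a non-substantive attribute (for example, the incumbent's formatting style). The function name and thresholds below are illustrative assumptions; any sound significance test would serve.

```python
import random
from statistics import mean

def permutation_flag(scores, has_attribute, n_perm=2000, alpha=0.05, seed=7):
    """Flag when the mean-score gap between bids with and without a
    non-substantive attribute is unlikely under random assignment.

    Returns (observed_gap, p_value, flagged). A flagged result requires
    human review before scores are finalised (requirement 4.5).
    """
    rng = random.Random(seed)

    def gap(labels):
        with_attr = [s for s, h in zip(scores, labels) if h]
        without = [s for s, h in zip(scores, labels) if not h]
        return abs(mean(with_attr) - mean(without))

    observed = gap(has_attribute)
    labels = list(has_attribute)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)  # re-assign the attribute at random
        if gap(labels) >= observed:
            hits += 1
    p_value = (hits + 1) / (n_perm + 1)
    return observed, p_value, p_value < alpha
```

On data shaped like Scenario C — styled bids averaging 8.3 against 6.7 for equivalent unstyled bids — the 1.6-point gap is flagged; a scorer indifferent to style is not.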

4.6. A conforming system MUST enforce access controls that segregate bid data by bidder identity, preventing the agent from cross-referencing one bidder's submission with another bidder's submission except through the formally defined comparative evaluation process, and preventing any single user from accessing complete bid sets outside the evaluation panel context.

4.7. A conforming system MUST require human authorisation — from an officer with documented procurement authority — before the agent can recommend or execute any action that reduces competitive participation, including: narrowing a supplier shortlist below a defined minimum threshold, recommending a single-source award, or recommending a contract modification that exceeds a defined percentage of the original contract value.
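
The authorisation gate in 4.7 reduces to a classification step followed by a hard stop. In the sketch below, the shortlist minimum and modification threshold are assumed example values — each organisation must define its own — and the `gate` function and action fields are hypothetical.

```python
MIN_SHORTLIST = 3      # assumed minimum shortlist size (organisation-defined)
MOD_PCT_LIMIT = 10.0   # assumed modification threshold, % of contract value

def gate(action: dict, authorised_by=None) -> str:
    """Return 'not_gated', 'execute', or 'blocked' for an agent action.

    Competition-reducing actions proceed only with a named human
    authoriser holding documented procurement authority (requirement 4.7).
    """
    reducing = (
        action["type"] == "single_source_award"
        or (action["type"] == "shortlist" and action["size"] < MIN_SHORTLIST)
        or (action["type"] == "modification" and action["pct_of_value"] > MOD_PCT_LIMIT)
    )
    if not reducing:
        return "not_gated"
    return "execute" if authorised_by else "blocked"
```

The key design point is that the gate names an accountable individual: "blocked" is not a denial of the action but a demand that a human with procurement authority own the decision.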

4.8. A conforming system MUST validate that evaluation criteria weightings published in the tender documentation are the weightings actually applied by the agent during scoring, with zero tolerance for deviation between published and applied weightings.
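
Because 4.8 demands zero tolerance, the verification is a strict equality check over the full set of criteria, including criteria present on one side only. A minimal sketch (`verify_weightings` is an illustrative name):

```python
def verify_weightings(published: dict, applied: dict) -> list:
    """Return every deviation between published and applied criterion
    weightings; conformance with 4.8 requires the result to be empty.

    Criteria missing from either side are reported as deviations too,
    since a silently added or dropped criterion is also a distortion.
    """
    deviations = []
    for criterion in sorted(set(published) | set(applied)):
        pub, app = published.get(criterion), applied.get(criterion)
        if pub != app:
            deviations.append((criterion, pub, app))
    return deviations
```

Running this check before scores are finalised would have surfaced a Scenario-A-style discrepancy the moment applied weightings drifted from the tender documentation.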

4.9. A conforming system SHOULD implement adversarial testing of the agent's scoring model before each major procurement, using synthetic bids with equivalent substantive content but varied non-substantive attributes (formatting, terminology, structure) to verify that scoring is invariant to presentation factors.
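
The adversarial test in 4.9 pairs synthetic bids with identical substance but varied presentation and checks that the scorer treats the pair identically. The harness below is a sketch; the scorer functions used in the usage example are deliberately toy stand-ins for a real evaluation model.

```python
def presentation_invariance(score_fn, bid_pairs, tolerance=0.1):
    """Score pairs of synthetic bids whose substance is identical but
    whose presentation differs; return every pair whose score gap
    exceeds the tolerance (requirement 4.9).
    """
    failures = []
    for original, restyled in bid_pairs:
        gap = abs(score_fn(original) - score_fn(restyled))
        if gap > tolerance:
            failures.append((original, restyled, round(gap, 2)))
    return failures

# Toy scorers for demonstration only:
def length_biased_scorer(bid_text):
    # A deliberately flawed scorer that rewards verbosity over substance.
    return min(10.0, len(bid_text) / 20)

def keyword_scorer(bid_text):
    # A substance-only scorer: invariant to phrasing and length.
    return 10.0 if "ISO 27799" in bid_text else 0.0
```

An empty failure list is the pass condition; any non-empty result means the scoring model must be recalibrated before it touches live bids.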

4.10. A conforming system SHOULD conduct post-award integrity reviews for procurements above a defined value threshold, comparing the agent's scoring rationale against independent human evaluation of the same bids to identify systematic divergence.

4.11. A conforming system SHOULD monitor for patterns consistent with bid-rigging facilitation — including repeated awards to the same supplier in rotating patterns, pricing convergence among bidders across multiple tenders, and bidder withdrawal patterns that suggest market allocation — and escalate detected patterns for competition law review.
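
One of the classic quantitative screens behind 4.11 is pricing convergence: bids that cluster implausibly tightly suggest coordination. A minimal sketch using the coefficient of variation, with the threshold value an illustrative assumption (real screens calibrate it per market):

```python
from statistics import mean, pstdev

def convergence_alerts(tenders, cv_threshold=0.02):
    """Flag tenders whose bid prices cluster implausibly tightly.

    `tenders` maps tender id -> list of bid prices. A coefficient of
    variation (std dev / mean) below the threshold is escalated for
    competition-law review (requirement 4.11).
    """
    alerts = []
    for tender_id, prices in tenders.items():
        cv = pstdev(prices) / mean(prices)
        if cv < cv_threshold:
            alerts.append((tender_id, round(cv, 4)))
    return alerts
```

This is only one indicator; a full screen would combine it with winner-rotation and withdrawal-pattern analysis across tenders, since any single signal is easy for colluders to evade.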

4.12. A conforming system MAY implement a competitive neutrality index that quantifies the restrictiveness of generated specifications by measuring the estimated number of market participants capable of meeting the requirements, flagging specifications where this index falls below a defined threshold.
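
A simple realisation of the 4.12 index is the share of known market participants whose capabilities cover every generated requirement. The function name and capability model below are assumptions for this sketch; real implementations would draw on supplier registries and market intelligence.

```python
def neutrality_index(requirements, supplier_capabilities):
    """Estimated share of the known market able to meet every requirement.

    `supplier_capabilities` maps supplier -> set of held capabilities.
    Returns (index, capable_suppliers); a low index flags the
    specification for competitive-neutrality review (requirement 4.12).
    """
    required = set(requirements)
    capable = sorted(s for s, caps in supplier_capabilities.items() if required <= caps)
    index = len(capable) / len(supplier_capabilities)
    return index, capable
```

Applied to a Scenario-A-shaped market — eleven interested suppliers of whom only two hold the stacked interoperability certifications — the index comes out near 0.18, well below any plausible threshold, and the specification is stopped before publication.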

4.13. A conforming system MAY engage an independent procurement integrity auditor to review the agent's role in high-value tenders prior to contract award, providing an external assurance layer.

5. Rationale

Competitive tendering is one of the oldest and best-established mechanisms for ensuring value for money, preventing corruption, and maintaining public trust in procurement outcomes. The principle is straightforward: when multiple suppliers compete openly and fairly, the procuring organisation benefits from competitive pricing, innovation, and accountability. When competition is distorted — through specification tailoring, bid-rigging, information leakage, or evaluation manipulation — the procuring organisation overpays, receives inferior goods or services, and in the public sector, breaches its duty to taxpayers. The introduction of AI agents into procurement processes creates new vectors for competitive distortion that existing procurement regulations were not designed to address.

The first category of risk is specification manipulation. AI agents that draft or refine tender specifications may optimise for compatibility with known solutions rather than for competitive neutrality. This is not necessarily malicious — an agent instructed to "ensure compatibility with existing systems" will naturally produce specifications that favour the incumbent — but the effect is the same as deliberate specification tailoring. The agent cannot distinguish between a legitimate operational requirement (the new system must integrate with existing infrastructure) and a competitively distortive requirement (the new system must implement the same proprietary protocols as the incumbent's system). This distinction requires human judgement informed by market knowledge, proportionality analysis, and an understanding of the procurement regulatory framework. Without a mandatory human review for competitive neutrality, the agent may systematically produce specifications that restrict competition, exposing the procuring organisation to procurement challenges and regulatory sanction.

The second category is evaluation scoring distortion. AI agents trained on historical procurement data inherit the biases of that data. If historical awards disproportionately favoured a particular supplier — because of incumbency effects, relationship advantages, or genuine capability superiority — the training data conflates these factors. The agent learns to associate the winning supplier's proposal characteristics with quality, even when those characteristics are stylistic rather than substantive. This produces scoring models that systematically advantage firms whose proposals resemble historical winners, regardless of actual technical merit. The distortion is difficult to detect because it operates at the level of individual scoring criteria rather than at the aggregate level — a 0.5-point advantage across 47 criteria compounds to a decisive scoring gap without any single criterion appearing manifestly unfair. Bias detection requires proactive testing with synthetic bids, not post-hoc review of aggregate scores.

The third category is information leakage. AI agents with access to multiple bidders' submissions can generate outputs that, while individually innocuous, collectively provide competitively sensitive intelligence. A summary of "average pricing across bidders" reveals the competitive range. A "bid quality distribution" reveals the scoring spread. A "market analysis" during the evaluation period reveals how many bidders are competitive. Any of these outputs, if communicated to a bidder, provides an asymmetric advantage. The risk is amplified because the agent does not understand the competitive sensitivity of the information it generates — it responds to queries without applying the procurement integrity constraints that a human procurement officer would instinctively apply. Access controls and output restrictions during the evaluation period are essential preventive controls.

The fourth category is process circumvention. AI agents may recommend actions that, while individually rational, collectively undermine competitive requirements. Recommending a contract modification that increases scope by 40% avoids the need to re-tender — but the modification transforms the contract into something materially different from what was competitively tendered. Recommending a single-source award based on a narrow reading of urgency criteria bypasses competition. Recommending shortlisting criteria that reduce the supplier list to a single viable candidate eliminates competition while maintaining the procedural appearance of a competitive process. Each of these actions requires human authorisation from an individual who understands the competitive implications and bears accountability for the decision.

The regulatory environment for procurement integrity is extensive and carries severe consequences for violation. In the European Union, Directive 2014/24/EU on public procurement mandates open competition, non-discrimination, and proportionality, with procurement challenges available to aggrieved bidders and annulment of non-compliant awards as a remedy. In the United States, the Federal Acquisition Regulation (FAR) Part 3 prohibits improper business practices including bid-rigging, and the Competition in Contracting Act mandates full and open competition for federal procurements. In the United Kingdom, the Procurement Act 2023 establishes transparency obligations and competitive procedure requirements. Across all jurisdictions, competition law — including Article 101 TFEU, the Sherman Act, and the Competition Act 1998 — criminalises bid-rigging and market allocation. An AI agent that facilitates any of these violations exposes the procuring organisation and potentially its officers to criminal liability, civil penalties, and reputational damage that far exceeds the value of the procurement itself.

The preventive nature of this control is essential because procurement distortion is characteristically difficult to remediate after the fact. Once a contract is awarded to a bidder who benefited from distortion, unwinding the award requires contract termination (with associated disruption and transition costs), re-tendering (with associated delay and market fatigue), and potentially damages to the aggrieved bidder. In public procurement, the remediation timeline is measured in years and the cost in millions. Prevention — through specification review, scoring controls, access restrictions, and human authorisation gates — is orders of magnitude less expensive than post-award remediation.

6. Implementation Guidance

Competitive Tender Integrity Governance requires controls at every phase of the tender lifecycle where an AI agent participates. The controls are layered: specification-phase controls prevent distortive requirements from being published, evaluation-phase controls prevent scoring manipulation, information-phase controls prevent competitive intelligence leakage, and process-phase controls prevent circumvention of competitive requirements.

Recommended patterns:

- Competitive neutrality review gate: no agent-generated specification is published until a qualified procurement specialist confirms the requirements can be met by a reasonable number of qualified suppliers (4.1).
- Scoring integrity engine: published weightings are held in a single source of truth and verified against applied weightings before any score is finalised (4.2, 4.8).
- Information quarantine: comparative bid intelligence is structurally blocked from leaving the evaluation panel during the evaluation period (4.3).
- Human authorisation gates: competition-reducing actions require sign-off from an officer with documented procurement authority (4.7).
- Pre-procurement calibration: adversarial testing with synthetic bids verifies scoring invariance to presentation before each major procurement (4.9).

Anti-patterns to avoid:

- Publishing agent-drafted specifications without human review for competitive neutrality (Scenario A).
- Allowing the agent to answer ad-hoc queries with aggregated or comparative bid data during the evaluation period (Scenario B).
- Calibrating scoring models on historical award data dominated by a single supplier without testing for presentation-style bias (Scenario C).
- Treating legitimate user access as sufficient authorisation for competitively sensitive outputs.
- Relying on post-award audit as the primary integrity control rather than preventive gates.

Industry Considerations

Public Sector. Public procurement is the highest-risk environment for competitive tender integrity because statutory obligations mandate open competition, proportionality, and non-discrimination. Public procurement regulations — EU Directive 2014/24/EU, the UK Procurement Act 2023, the US Federal Acquisition Regulation — impose specific procedural requirements and provide legal standing for aggrieved bidders to challenge non-compliant awards. AI agents in public procurement must comply with these procedural requirements exactly; a technically efficient evaluation that violates procedural requirements is a legally defective evaluation. Public sector deployments should implement the full control set including competitive neutrality review, scoring integrity engine, information quarantine, and mandatory human authorisation gates. Freedom of information obligations mean that the agent's role in evaluation must be transparently disclosable.

Defence and Security. Defence procurement involves additional complexities: classified information in bids, industrial base considerations that may justify limited competition, and national security exemptions from standard procurement regulations. AI agents in defence procurement must handle classified bid data under the same information security controls as other classified material. The bias detection requirement is particularly important because the defence industrial base is small and historical data is dominated by a few prime contractors, creating high risk of training data bias.

Financial Services. Financial institutions procuring technology, outsourced services, or infrastructure face regulatory expectations under operational resilience frameworks (DORA, FCA operational resilience rules) that require robust vendor selection processes. While financial services procurement is not subject to public procurement regulations, it is subject to internal governance requirements, fiduciary duties, and regulatory expectations for third-party risk management. AI agents in financial services procurement must demonstrate that vendor selection is based on merit and risk assessment, not on model bias or specification tailoring.

Cross-Border Procurement. Multi-jurisdictional procurement introduces complexity because different jurisdictions impose different competitive requirements, different transparency obligations, and different legal remedies for procurement violations. An AI agent managing a cross-border tender must apply the most restrictive applicable requirement across all jurisdictions — a principle of regulatory conservatism that prevents the agent from exploiting jurisdictional arbitrage. The agent must be configured with jurisdiction-specific procurement rules and must flag conflicts between jurisdictional requirements for human resolution.
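
The "most restrictive applicable requirement" rule described above can be sketched as a merge over per-jurisdiction parameters. The function name, parameter names, and direction labels below are assumptions for illustration; the essential point is that strictness has a direction per parameter, and conflicts that cannot be resolved mechanically must escalate to a human.

```python
def most_restrictive(rules_by_jurisdiction, directions):
    """Combine per-jurisdiction numeric procurement rules under
    regulatory conservatism: keep the tightest value per parameter.

    `directions[param]` is 'at_least' (higher is stricter, e.g. a
    minimum tender notice period in days) or 'at_most' (lower is
    stricter, e.g. the maximum contract-modification percentage
    permitted without re-tendering).
    """
    merged = {}
    for rules in rules_by_jurisdiction.values():
        for param, value in rules.items():
            if param not in merged:
                merged[param] = value
            elif directions[param] == "at_least":
                merged[param] = max(merged[param], value)
            else:
                merged[param] = min(merged[param], value)
    return merged
```

The example values in the test below are invented for demonstration and are not the actual EU or UK thresholds.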

Maturity Model

Basic Implementation — The organisation has implemented mandatory human review of agent-generated specifications before publication. Evaluation scoring uses published weightings that are verified against applied weightings. Bid data access is logged. Comparative bid intelligence is restricted during the evaluation period. Human authorisation is required for competition-reducing actions. This level meets the minimum mandatory requirements and addresses the most common vectors of tender distortion.

Intermediate Implementation — All basic capabilities plus: adversarial scoring calibration is conducted before major procurements. Bias detection controls test for correlation between scores and non-substantive bid attributes. Post-award integrity reviews compare agent scoring against independent human evaluation for high-value procurements. Bid data segregation architecture prevents casual cross-bidder data access. Clarification round responses are reviewed for competitive sensitivity. Statistical monitoring detects patterns consistent with bid-rigging facilitation.

Advanced Implementation — All intermediate capabilities plus: a competitive neutrality index quantifies specification restrictiveness. Independent procurement integrity auditors review the agent's role in high-value tenders before award. The organisation can demonstrate through empirical data that the agent's scoring is statistically invariant to non-substantive bid attributes. Cross-jurisdictional compliance is automated with jurisdiction-specific rule sets. Real-time dashboards track tender integrity metrics across all active procurements. The organisation maintains a continuous improvement programme that incorporates lessons learned from procurement challenges and integrity reviews.

7. Evidence Requirements

Required artefacts:

- Competitive neutrality review records for every published specification (4.1).
- Scoring audit trails sufficient to reconstruct each bid's evaluation against each criterion, including re-scoring events and rationale (4.4).
- Bias detection test results and the disposition of any flagged correlations (4.5).
- Authorisation records for every competition-reducing action, identifying the authorising officer (4.7).
- Published-versus-applied weighting verification records (4.8).
- Adversarial scoring calibration reports, where conducted (4.9).

Retention requirements:

Access requirements:

8. Test Specification

Test 8.1: Competitive Neutrality Review Gate Enforcement

Test 8.2: Evaluation Weighting Consistency Verification

Test 8.3: Comparative Bid Intelligence Quarantine

Test 8.4: Scoring Audit Trail Completeness

Test 8.5: Bias Detection for Non-Substantive Attributes

Test 8.6: Bid Data Segregation and Access Control

Test 8.7: Human Authorisation Gate for Competition-Reducing Actions

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
EU Public Procurement Directive 2014/24/EU | Articles 18, 42, 67 (Principles, Technical Specifications, Award Criteria) | Direct requirement
UK Procurement Act 2023 | Sections 12-19 (Procurement Principles), Section 23 (Award Criteria) | Direct requirement
US Federal Acquisition Regulation (FAR) | Part 3 (Improper Business Practices), Part 6 (Competition Requirements) | Direct requirement
EU AI Act | Article 14 (Human Oversight), Article 9 (Risk Management) | Supports compliance
TFEU Article 101 / Sherman Act / Competition Act 1998 | Prohibition on anti-competitive agreements including bid-rigging | Constraining regulation
OECD Recommendation on Public Procurement | Principles of Transparency, Integrity, Competition | Supports compliance
NIST AI RMF | GOVERN 1.4, MAP 2.3 (AI Risks in Procurement Context) | Supports compliance
ISO 42001 | Clause 6.1 (Risk Assessment), Annex A.5 (Impact Assessment) | Supports compliance
DORA | Article 5(2) (ICT Risk Management), Article 28 (Third-Party Providers) | Supports compliance

EU Public Procurement Directive 2014/24/EU — Articles 18, 42, 67

Article 18 establishes the fundamental principles of public procurement: equal treatment, non-discrimination, transparency, and proportionality. An AI agent that generates specifications tailored to a specific supplier violates the equal treatment principle. An agent that scores bids based on non-substantive attributes correlated with bidder identity violates the non-discrimination principle. Article 42 requires that technical specifications afford equal access and do not create unjustified obstacles to competition — a requirement directly addressed by AG-641's competitive neutrality review gate. Article 67 requires that award criteria are linked to the subject matter of the contract and do not confer unrestricted freedom on the contracting authority — a requirement operationalised by AG-641's weighting verification and bias detection controls. Procurement challenges under Directive 89/665/EEC (the Remedies Directive) provide aggrieved bidders with legal standing to challenge awards where these principles are violated, making non-compliance a direct legal risk.

UK Procurement Act 2023

The Procurement Act 2023 replaces the UK's transposition of the EU procurement directives with a domestic framework that retains the core principles of open competition, transparency, and value for money. Sections 12-19 establish procurement principles including that contracting authorities must treat suppliers equally and without discrimination, must act transparently, and must not design procurement processes to unduly favour or disadvantage particular suppliers. The Act introduces enhanced transparency requirements including publication of procurement pipelines and contract performance data. AI agents in UK public procurement must be configured to comply with these requirements, and AG-641's controls — particularly the competitive neutrality review gate and scoring audit trail — provide the governance infrastructure for demonstrating compliance.

Competition Law — Article 101 TFEU, Sherman Act, Competition Act 1998

Bid-rigging is a criminal offence in most jurisdictions. Article 101 TFEU prohibits agreements between undertakings that have the object or effect of distorting competition, explicitly including the fixing of trading conditions and the sharing of markets. The US Sherman Act Section 1 prohibits conspiracies in restraint of trade, with bid-rigging treated as a per se violation. The UK Competition Act 1998 mirrors Article 101. An AI agent that facilitates bid-rigging — whether by leaking competitive intelligence to a colluding bidder, generating market analysis that enables coordinated bidding, or failing to detect patterns consistent with bid-rigging — exposes the organisation and potentially its officers to criminal prosecution, substantial fines (up to 10% of worldwide turnover under EU competition law), and debarment from future procurement. AG-641's information quarantine, bid data segregation, and pattern monitoring controls directly mitigate this risk.

EU AI Act — Article 14 (Human Oversight)

Article 14 requires that high-risk AI systems permit effective human oversight. An AI agent participating in procurement evaluation is making decisions that affect suppliers' economic rights and, in public procurement, the use of public funds. AG-641 ensures human oversight through mandatory specification review, human authorisation gates for competition-reducing actions, and bias detection controls that require human review of flagged anomalies. Without these controls, the agent operates as an autonomous evaluator — a condition that Article 14 is designed to prevent for high-risk applications.

OECD Recommendation on Public Procurement

The OECD Recommendation on Public Procurement (2015) establishes 12 principles for effective public procurement systems, including transparency, integrity, competition, and accountability. The integrity principle specifically requires safeguards against corruption, collusion, and bid-rigging. The competition principle requires that procurement systems treat potential suppliers equitably and provide reasonable opportunity for participation. AG-641 operationalises these principles for AI-mediated procurement by establishing controls that prevent the agent from undermining transparency (scoring audit trails), integrity (information quarantine, access controls), and competition (competitive neutrality review, bias detection).

10. Failure Severity

Field | Value
Severity Rating | High
Blast Radius | Per-procurement and potentially market-wide — distortion of a single high-value tender causes direct financial harm to the procuring organisation and excluded bidders; systematic distortion across multiple tenders causes market-level competition damage

Consequence chain: The agent introduces distortion into a competitive tender process through one of four vectors: specification tailoring that restricts competition, evaluation scoring bias that favours a particular bidder, information leakage that provides asymmetric competitive advantage, or process circumvention that bypasses competitive requirements. The immediate effect is an award to a bidder who would not have won under a fair process — or, in the case of specification tailoring, a reduction in the competitive field that eliminates the price and quality benefits of competition. The procuring organisation overpays for goods or services, receiving inferior value compared to what a fair competition would have produced. In public procurement, taxpayer funds are wasted. The losing bidders — who invested in bid preparation on the assumption of fair competition — suffer direct economic harm and lose trust in the procurement system.

If the distortion is discovered, the consequences escalate rapidly: procurement challenges require legal defence (typical costs: EUR 500,000 to EUR 3 million for complex procurements), successful challenges require re-tendering (typical delays: 12-24 months), and contract annulment requires transition to a new supplier while maintaining operational continuity (costs proportional to contract value). If the distortion involves information leakage or bid-rigging facilitation, the consequences extend to competition law enforcement: investigations by competition authorities, potential criminal prosecution of individuals, fines calculated as a percentage of turnover, and debarment from future procurement. For public sector organisations, the reputational damage undermines public trust in government procurement — a systemic harm that extends beyond the individual procurement.
For repeat distortion across multiple tenders, the market-level effect is a reduction in competitive participation: suppliers who believe the process is unfair stop bidding, further reducing competition and increasing prices in a self-reinforcing cycle. The preventive controls in AG-641 interrupt this consequence chain at the earliest possible point — before specifications are published, before scores are finalised, before comparative intelligence is leaked, and before competition-reducing actions are executed — because every subsequent intervention in the chain is more expensive, more disruptive, and less effective.

Cross-references:
- AG-001 (Foundational Governance Principles) establishes the ethical framework within which procurement agents operate; AG-641 operationalises those principles for competitive tendering.
- AG-005 (Transparency & Explainability) requires that agent decisions are explainable; AG-641 extends this to scoring rationale transparency.
- AG-007 (Fairness & Non-Discrimination) prohibits systematic bias; AG-641 applies this prohibition to tender evaluation scoring.
- AG-019 (Human Escalation & Override Triggers) defines when human intervention is required; AG-641 specifies the procurement-specific triggers, including competition-reducing actions.
- AG-022 (Behavioural Drift Detection) monitors for emergent agent behaviour changes; AG-641 applies drift detection to scoring patterns across successive tenders.
- AG-055 (Decision Boundary Governance) constrains the agent's decision authority; AG-641 defines the specific boundaries for procurement decisions.
- AG-210 (Audit Trail Integrity) ensures governance records are tamper-evident; AG-641 relies on this for scoring audit trails and bid data access logs.
- AG-639 (Supplier Selection Fairness) addresses fairness in supplier selection broadly; AG-641 focuses specifically on competitive tender process integrity.
- AG-640 (Bid Confidentiality) governs the protection of bid information; AG-641 extends this to prevent the agent from generating derivative competitive intelligence.
- AG-648 (Procurement Fraud Detection) provides detective controls for procurement fraud; AG-641 provides the preventive controls that reduce the opportunity for fraud to occur.

Cite this protocol
AgentGoverning. (2026). AG-641: Competitive Tender Integrity Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-641