AG-244

Civic and Democratic Impact Governance

Rights, Ethics & Public Interest · AGS v2.1 · April 2026
Regulatory focus: EU AI Act · NIST

2. Summary

Civic and Democratic Impact Governance requires that AI agents operating in contexts that intersect with democratic participation, public discourse, electoral processes, or civic trust are assessed and constrained to prevent harms to democratic functioning. A conforming system identifies when its operations may influence democratic processes — through content recommendation, information curation, voter interaction, public opinion shaping, political advertising, or civic service delivery — and applies structural safeguards to prevent manipulation, distortion, or erosion of democratic participation. This dimension recognises that AI agents at scale are not neutral information intermediaries but active shapers of the information environment in which democratic decisions are made.

3. Examples

Scenario A — Recommendation Agent Distorts Electoral Information: A news aggregation AI agent curates content for 12 million users in a country approaching a national election. The agent's recommendation algorithm optimises for engagement, which surfaces sensational, emotionally provocative, and polarising content. In the 6 weeks before the election, the agent's recommendations produce a 280% increase in exposure to partisan misinformation compared to the non-election baseline. Independent analysis reveals that the agent systematically amplified content from one political orientation because that orientation's content generated higher engagement metrics. The agent has no election-specific safeguards — no content integrity checks, no amplification limits for political content, and no diversification requirements.

What went wrong: The engagement-optimised recommendation algorithm, operating without democratic impact constraints, became a vector for electoral information distortion. The agent amplified content based on engagement potential, not accuracy or balance. No pre-election safeguard was implemented. No assessment of the agent's impact on the electoral information environment was conducted. Consequence: Parliamentary committee investigation. Electoral commission complaint. Regulatory investigation under the Online Safety Act and the Digital Markets, Competition and Consumers Act. Fine of £18 million. Mandatory implementation of election integrity measures.

Scenario B — Public Service Agent Creates Voter Suppression Effect: A government services AI agent handles voter registration inquiries. The agent is trained on a dataset of successful registrations that over-represents urban, digitally literate applicants. When rural applicants or applicants with non-standard circumstances (e.g., no fixed address, recent address change, name change) submit registration queries, the agent provides incorrect or incomplete guidance — directing them to wrong offices, providing incorrect deadlines, or failing to mention alternative registration methods available to them. Analysis reveals that 4,200 eligible voters in rural constituencies received incorrect guidance that prevented or delayed their registration.

What went wrong: The agent's training data did not represent the full diversity of voter registration circumstances. No testing evaluated whether the agent provided equivalent quality of guidance across demographic segments. No safeguard prevented the agent from providing incorrect information about democratic participation processes. Consequence: Electoral petition challenging the validity of results in 3 constituencies where the affected voter count exceeded the margin of victory. High Court review. Mandatory withdrawal of the AI agent from voter services. £2.3 million remediation programme.

Scenario C — AI-Powered Political Microtargeting: A political campaign deploys an AI agent to generate personalised messaging for 8 million voters. The agent analyses each voter's social media activity, purchasing history, and psychographic profile, then generates messaging tailored to that individual's values, fears, and cognitive biases. The agent generates 340 distinct message variants for the same policy proposal — some emphasising economic benefits, others emphasising cultural threats, others emphasising safety concerns — each designed to maximise persuasion for the specific voter profile. Voters receiving contradictory messages from the same campaign cannot reconcile them because they see only their own variant. The shared public discourse on which democratic deliberation depends is fragmented into 340 private conversations.

What went wrong: The AI agent's microtargeting capabilities exceeded the transparency requirements of democratic campaigning. Voters could not compare what the campaign told them with what it told others. No disclosure indicated that messages were individually tailored based on psychographic profiling. The agent optimised for persuasion without constraints on consistency, accuracy, or transparency. Consequence: Electoral commission investigation. Finding of non-compliance with electoral advertising transparency requirements. Referral to the ICO for data protection violations related to psychographic profiling without adequate lawful basis.

4. Requirement Statement

Scope: This dimension applies to all AI agents that operate in contexts intersecting with democratic processes, public discourse, or civic participation. This includes: content recommendation and curation agents on platforms with more than 10,000 users; information retrieval and summarisation agents that provide answers on political, electoral, or civic topics; agents that generate or distribute political advertising or campaign communications; government agents that administer voter registration, electoral information, civic consultation, or public service delivery affecting democratic participation; and any agent whose outputs, at the scale of deployment, could materially influence public opinion, electoral outcomes, or civic participation rates. The threshold is not intention to influence democratic processes, but foreseeable capacity to do so at the scale of deployment. A single-user personal assistant is unlikely to be in scope; a content recommendation system serving millions of users is squarely in scope.
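The scope test above reduces to a decision rule over deployment characteristics. The sketch below is illustrative only: the AgentProfile type, its field names, and the in_ag244_scope helper are assumptions introduced for this example, not part of the protocol.

```python
from dataclasses import dataclass

RECOMMENDER_USER_THRESHOLD = 10_000  # from the scope statement above

@dataclass
class AgentProfile:
    """Hypothetical deployment profile used to evaluate AG-244 scope."""
    recommends_or_curates_content: bool
    platform_user_count: int
    answers_political_topics: bool
    generates_political_advertising: bool
    administers_civic_services: bool
    # Judgement call, documented in the democratic impact assessment:
    could_materially_influence_civic_outcomes: bool

def in_ag244_scope(agent: AgentProfile) -> bool:
    """True if any scope trigger applies. The trigger is foreseeable
    capacity to influence democratic processes at deployment scale,
    not intent to do so."""
    return any([
        agent.recommends_or_curates_content
        and agent.platform_user_count > RECOMMENDER_USER_THRESHOLD,
        agent.answers_political_topics,
        agent.generates_political_advertising,
        agent.administers_civic_services,
        agent.could_materially_influence_civic_outcomes,
    ])
```

A single-user personal assistant returns False on every trigger; a 12-million-user news recommender, as in Scenario A, trips the first.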

4.1. A conforming system MUST conduct a democratic impact assessment before deployment, evaluating whether the agent's operations could materially influence democratic participation, electoral outcomes, public discourse quality, or civic trust.

4.2. A conforming system MUST prohibit the optimisation of content recommendation, curation, or generation for engagement metrics during defined electoral periods without countervailing integrity controls (e.g., accuracy checks, source diversity requirements, amplification limits for political content). An illustrative configuration sketch follows this requirements list.

4.3. A conforming system MUST ensure that any AI-generated or AI-curated content relating to elections, candidates, referenda, or civic votes is clearly labelled as AI-generated or AI-curated, consistent with AG-172 (AI Interaction Disclosure).

4.4. A conforming system MUST NOT generate or amplify content that contains demonstrably false claims about electoral procedures (voting dates, polling locations, eligibility requirements, registration deadlines).

4.5. A conforming system MUST ensure equivalent quality of service for civic participation functions (voter registration, civic consultation, public service access) across all demographic segments, including rural, elderly, disabled, and linguistically diverse populations.

4.6. A conforming system MUST log all content recommendation, curation, or generation decisions that relate to political or electoral content, with sufficient detail for audit and regulatory review.

4.7. A conforming system SHOULD implement content diversity requirements during electoral periods, ensuring that users are exposed to a range of perspectives rather than algorithmically narrowing their information environment.

4.8. A conforming system SHOULD restrict AI-powered political microtargeting to levels of personalisation that are transparent and verifiable — meaning any voter can see the same message variants available to other voters.

4.9. A conforming system SHOULD provide users with controls over political content recommendation, including the ability to opt out of algorithmically curated political content.

4.10. A conforming system MAY participate in industry-wide election integrity initiatives, sharing signals about electoral misinformation and coordinating safeguard activation during electoral periods.
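The electoral period controls in 4.2 and 4.7 can be expressed as an activatable configuration. The following is a minimal sketch, not a normative schema: the ElectoralPeriodSafeguards class, the ranking_weights helper, and all field names and default values are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ElectoralPeriodSafeguards:
    """Hypothetical integrity controls for a defined electoral window.

    All field names and default values are illustrative, not normative.
    """
    period_start: date
    period_end: date
    political_amplification_ceiling: float = 1.0  # political items never boosted above baseline
    min_distinct_sources_per_feed: int = 5        # source diversity floor (4.7)
    require_accuracy_check: bool = True           # route political items through verification (4.4)
    engagement_weight_political: float = 0.0      # 4.2: no engagement optimisation for political content

    def active_on(self, day: date) -> bool:
        """True while the safeguard window is in force."""
        return self.period_start <= day <= self.period_end

def ranking_weights(item_is_political: bool,
                    safeguards: ElectoralPeriodSafeguards,
                    today: date) -> dict[str, float]:
    """Return scoring weights for one item. The engagement weight is
    zeroed for political content while safeguards are active; the
    non-political weights are placeholder values."""
    if item_is_political and safeguards.active_on(today):
        return {"engagement": safeguards.engagement_weight_political,
                "accuracy": 1.0,
                "source_diversity": 1.0}
    return {"engagement": 1.0, "accuracy": 0.5, "source_diversity": 0.2}
```

The design choice to zero the engagement weight, rather than filter political content outright, matches the requirement: political content remains available, but its ranking is no longer driven by engagement during the safeguard window.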

5. Rationale

Democratic governance depends on a shared information environment in which citizens can form opinions based on reasonably accurate, reasonably diverse information and participate in civic processes on reasonably equal terms. AI agents, operating at scale, have the capacity to reshape this information environment in ways that no previous technology could — not through censorship or propaganda in the traditional sense, but through the aggregate effect of billions of individual content curation, recommendation, and generation decisions.

The democratic risk is not that AI agents are designed to undermine democracy. It is that AI agents optimised for engagement, conversion, or persuasion will, as a side effect, distort the information environment on which democratic functioning depends. Engagement-optimised recommendation amplifies emotionally provocative content, which is disproportionately partisan, misleading, or divisive. Persuasion-optimised messaging fragments public discourse into individualised narratives that prevent shared deliberation. Information-retrieval agents that generate authoritative-sounding answers about electoral processes may provide incorrect information that suppresses participation.

These risks are not theoretical. Analysis of the 2016 US election, the 2016 Brexit referendum, and the 2024 election cycles across multiple democracies has documented the role of algorithmic amplification in spreading electoral misinformation. Research has demonstrated that recommendation algorithms can shift voting intentions by 20% or more through differential exposure to information (Epstein & Robertson, 2015, "The Search Engine Manipulation Effect"). The scale of influence is unprecedented: when a single content recommendation system serves hundreds of millions of users, small algorithmic biases produce population-level effects.

AG-244 requires organisations to recognise that AI agents operating at civic scale carry democratic responsibilities. This does not mean AI agents must be politically neutral in a philosophical sense — it means that their operation must not distort democratic processes through engagement optimisation, misinformation amplification, discriminatory civic service delivery, or opaque microtargeting. The safeguards are structural: they constrain what the agent can do during electoral periods, require transparency about AI involvement in political content, and ensure equivalent civic service quality across demographics.

6. Implementation Guidance

AG-244 establishes democratic impact governance as a mandatory consideration for AI agents operating at civic scale. Implementation must address three domains: information environment integrity, civic service equity, and political communication transparency.

Recommended patterns:

- A pre-deployment democratic impact assessment covering participation, electoral outcomes, discourse quality, and civic trust (4.1).
- A pre-defined electoral period safeguard configuration (content integrity filtering, amplification ceilings for political content, source diversity requirements) that can be activated on a calendar basis (4.2, 4.7).
- A verified electoral-facts store (voting dates, polling locations, eligibility rules, registration deadlines) against which agent outputs are checked (4.4).
- Civic service equity testing across demographic segments before and after deployment (4.5).
- Structured, auditable logging of every political content recommendation, curation, or generation decision (4.6).

Anti-patterns to avoid:

- Optimising recommendation for engagement during electoral periods without countervailing integrity controls (Scenario A).
- Training civic service agents on datasets that under-represent rural, elderly, disabled, or linguistically diverse users (Scenario B).
- Psychographic microtargeting that delivers contradictory, unverifiable message variants to different voters (Scenario C).
- Relying on reactive content moderation instead of structural safeguards during electoral periods.

Industry Considerations

Social Media and Content Platforms. Platforms with more than 1 million active users in any democratic jurisdiction should implement the full AG-244 requirements. The Digital Services Act (DSA) already requires very large online platforms to assess systemic risks to democratic processes and implement mitigation measures. AG-244 provides a structured framework for meeting this obligation.

Search and Information Retrieval. Search engines and AI-powered answer engines shape the information environment for electoral decisions. The search engine manipulation effect (SEME) demonstrates that biased search results can shift voting intentions by 20% or more. AI answer engines that generate synthesised responses about political topics carry even higher risk because users tend to trust synthesised answers more than search result lists. Accuracy verification for political and electoral content is essential.

Government Services. Government AI agents administering civic functions carry a special obligation because the government itself is the entity responsible for ensuring democratic participation. Discriminatory or inaccurate civic service delivery by a government AI agent is not merely a business failure — it is a failure of democratic duty. The standard for civic service equity must be higher than for commercial services.

Maturity Model

Basic Implementation — A democratic impact consideration is included in the general risk assessment. The agent is prohibited from generating electoral misinformation through a content policy, but enforcement relies on content moderation rather than structural controls. No electoral period safeguard configuration. No civic service equity testing. No political communication transparency register. This meets minimum awareness requirements but provides limited structural protection.

Intermediate Implementation — A structured democratic impact assessment is completed before deployment. Electoral period safeguard configuration is defined and activatable, including content integrity filtering, amplification ceilings, and source diversity requirements. Electoral information accuracy safeguard is implemented with a verified database for jurisdictions representing at least 80% of the user base. Civic service equity testing is conducted across at least 5 demographic segments. Political content decisions are logged and auditable.

Advanced Implementation — All intermediate capabilities plus: electoral safeguards are independently tested through red-team exercises simulating misinformation campaigns. Civic service equity testing covers all identifiable demographic segments with remediation for any performance gap. Political communication transparency register is publicly accessible. Independent annual assessment of democratic impact by a qualified third party (e.g., academic institution, electoral commission, civil society organisation). Real-time monitoring of amplification patterns during electoral periods with automated alerting. The organisation participates in cross-industry election integrity coordination.

7. Evidence Requirements

Required artefacts:

- Democratic impact assessment report, including scope determination and sign-off (4.1).
- Electoral period safeguard configuration and activation/deactivation records (4.2, 4.7).
- Verified electoral-procedure reference data and its change history (4.4).
- Civic service equity test plans and per-segment results (4.5).
- Political content decision logs (4.6).
- Political communication transparency register entries (4.8).

Retention requirements:

Access requirements: Political content decision logs must be available to independent auditors and regulators on request (4.6). The political communication transparency register should be publicly accessible, consistent with the advanced maturity profile.

8. Test Specification

Test 8.1: Democratic Impact Assessment Existence
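Verify that a democratic impact assessment exists, predates deployment, and covers the four impact domains named in 4.1. Below is a sketch of how this evidence check might be automated, assuming the assessment is stored as JSON at a known path; the file name and schema are hypothetical conventions, not mandated by AG-244.

```python
import json
from datetime import date
from pathlib import Path

def test_impact_assessment_exists(evidence_dir: Path, deployment_date: date) -> None:
    # File name and JSON schema are hypothetical conventions.
    path = evidence_dir / "democratic_impact_assessment.json"
    assert path.exists(), "4.1: no democratic impact assessment on file"
    assessment = json.loads(path.read_text())
    completed = date.fromisoformat(assessment["completed_on"])
    assert completed <= deployment_date, "4.1: assessment must precede deployment"
    # The four impact domains named in requirement 4.1.
    for domain in ("participation", "electoral_outcomes",
                   "discourse_quality", "civic_trust"):
        assert domain in assessment["domains_evaluated"], f"4.1: missing domain {domain}"
```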

Test 8.2: Electoral Period Safeguard Activation
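Verify that the electoral period safeguard configuration activates inside the defined window and that engagement optimisation for political content is disabled while it is active (4.2). The sketch below reuses the illustrative ElectoralPeriodSafeguards and ranking_weights definitions from section 4; the dates are arbitrary fixture values.

```python
from datetime import date

def test_safeguards_activate_inside_window() -> None:
    sg = ElectoralPeriodSafeguards(period_start=date(2026, 4, 1),
                                   period_end=date(2026, 5, 7))
    assert sg.active_on(date(2026, 4, 15))       # inside the window
    assert not sg.active_on(date(2026, 5, 8))    # outside the window
    weights = ranking_weights(item_is_political=True, safeguards=sg,
                              today=date(2026, 4, 15))
    assert weights["engagement"] == 0.0, "4.2: engagement optimisation must be off"
```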

Test 8.3: Electoral Misinformation Blocking
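Submit content containing false claims about electoral procedures and verify that the system blocks it (4.4). A minimal sketch, assuming a verified electoral-facts store keyed by jurisdiction and claim type; all names and dates are hypothetical fixtures.

```python
# Verified electoral-facts store; keys and dates are hypothetical fixtures.
VERIFIED_PROCEDURES = {
    ("UK", "polling_day"): "2026-05-07",
    ("UK", "registration_deadline"): "2026-04-20",
}

def violates_4_4(jurisdiction: str, claim_type: str, claimed_value: str) -> bool:
    """True if a procedural claim contradicts the verified record and must be blocked."""
    verified = VERIFIED_PROCEDURES.get((jurisdiction, claim_type))
    return verified is not None and claimed_value != verified

def test_false_polling_date_blocked() -> None:
    assert violates_4_4("UK", "polling_day", "2026-05-08")      # false date: blocked
    assert not violates_4_4("UK", "polling_day", "2026-05-07")  # correct date: allowed
```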

Test 8.4: Civic Service Equity Across Demographics
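Measure the quality of guidance for civic participation functions across demographic segments and verify equivalence (4.5). The sketch below compares per-segment accuracy against the best-performing segment; the two-percentage-point tolerance is an assumed threshold, not a value mandated by AG-244.

```python
MAX_ACCURACY_GAP = 0.02  # assumed tolerance: two percentage points

def test_guidance_accuracy_equivalent(accuracy_by_segment: dict[str, float]) -> None:
    """Fail if any segment trails the best-performing segment by more
    than the tolerance. Segment names come from the equity test plan."""
    baseline = max(accuracy_by_segment.values())
    for segment, accuracy in accuracy_by_segment.items():
        gap = baseline - accuracy
        assert gap <= MAX_ACCURACY_GAP, f"4.5: {segment} trails best segment by {gap:.1%}"

# Mirroring Scenario B's failure mode, this fixture would fail the test:
# test_guidance_accuracy_equivalent({"urban": 0.97, "rural": 0.81})
```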

Test 8.5: Political Content Logging Completeness
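Verify that every political content decision produces a log record with sufficient detail for audit (4.6). A sketch assuming decisions and log records are represented as dictionaries; the required field set is one illustrative reading of "sufficient detail", not a normative schema.

```python
# One illustrative reading of "sufficient detail for audit" in 4.6.
REQUIRED_LOG_FIELDS = {"timestamp", "user_cohort", "item_id", "decision",
                       "ranking_factors", "safeguard_state"}

def test_political_decisions_fully_logged(decisions: list[dict],
                                          log_records: list[dict]) -> None:
    logged_ids = {record["item_id"] for record in log_records}
    for decision in decisions:
        assert decision["item_id"] in logged_ids, "4.6: unlogged political decision"
    for record in log_records:
        missing = REQUIRED_LOG_FIELDS - record.keys()
        assert not missing, f"4.6: log record missing fields {missing}"
```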

Test 8.6: Content Diversity During Electoral Period
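Verify that users' feeds during an electoral period expose a range of sources rather than an algorithmically narrowed set (4.7). The sketch below applies a simple distinct-source floor; a production check might instead measure entropy over perspectives. The threshold is an assumption.

```python
MIN_DISTINCT_SOURCES = 5  # assumed diversity floor, not a mandated value

def test_feed_source_diversity(feeds: dict[str, list[str]]) -> None:
    """feeds maps a user id to the ordered list of source ids shown to
    that user during the electoral period (hypothetical fixture shape)."""
    for user, sources in feeds.items():
        distinct = len(set(sources))
        assert distinct >= MIN_DISTINCT_SOURCES, (
            f"4.7: user {user} saw only {distinct} distinct sources"
        )
```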

Conformance Scoring

9. Regulatory Mapping

Regulation | Provision | Relationship Type
DSA | Article 34(1)(c) (Systemic Risks to Democratic Processes) | Direct requirement
DSA | Article 35 (Mitigation of Systemic Risks) | Direct requirement
EU AI Act | Article 5 (Prohibited Practices — Manipulation) | Supports compliance
EU AI Act | Recital 47 (Democratic Process Protection) | Supports compliance
Online Safety Act 2023 | Part 3 (Duties of Care — Democratically Important Content) | Direct requirement
Electoral Commission Guidance | Digital Campaigning Transparency | Supports compliance
ECHR | Article 10 (Freedom of Expression) | Supports compliance
ECHR | Protocol 1, Article 3 (Right to Free Elections) | Supports compliance
NIST AI RMF | GOVERN 1.7, MAP 5.1, MANAGE 4.2 | Supports compliance

DSA — Articles 34 and 35 (Systemic Risks and Mitigation)

The Digital Services Act requires very large online platforms (VLOPs) and very large online search engines (VLOSEs) to identify, analyse, and assess systemic risks, including risks to "civic discourse and electoral processes, and public security" (Article 34(1)(c)). Platforms must put in place reasonable, proportionate, and effective mitigation measures (Article 35). AG-244 provides a structured implementation of these obligations through democratic impact assessment, electoral safeguards, and civic service equity controls. The DSA further requires annual systemic risk assessments (Article 34(1)) — AG-244's democratic impact assessment fulfils this for the democratic dimension.

Online Safety Act 2023 — Democratically Important Content

The UK Online Safety Act 2023 includes provisions addressing content of democratic importance. Platforms must apply their terms of service consistently and transparently with regard to political content and must not discriminate against political viewpoints. AG-244's content diversity requirements and political content logging support compliance with these provisions by ensuring algorithmic curation does not systematically favour or suppress particular political viewpoints.

ECHR — Protocol 1, Article 3 (Right to Free Elections)

The right to free elections encompasses not only the right to vote and stand for election but also the right to form opinions in an information environment that is not manipulated. AI agents that distort the electoral information environment through amplification bias, misinformation amplification, or discriminatory civic service delivery may interfere with this right. AG-244's safeguards provide the structural protections needed to demonstrate compliance.

10. Failure Severity

Field | Value
Severity Rating | Critical
Blast Radius | Society-wide — affecting democratic processes that determine governance for entire populations

Consequence chain: Failure of democratic impact governance allows AI agents to distort the information environment on which democratic decision-making depends. The immediate technical failure is uncontrolled amplification of misleading content, discriminatory civic service delivery, or opaque political microtargeting. The operational impact is an information environment in which citizens form opinions based on algorithmically distorted information — where the most emotionally provocative content is the most visible, where misinformation about electoral procedures suppresses participation, and where political campaigns can deliver contradictory messages to different audiences without accountability. The democratic consequence is erosion of informed consent — the foundation of democratic legitimacy. Election results influenced by algorithmic misinformation amplification carry a legitimacy deficit that undermines public trust in democratic institutions. The regulatory consequence is severe: DSA fines for VLOPs can reach 6% of global turnover. Electoral challenges can invalidate results. Parliamentary and congressional investigations carry reputational and operational costs. The systemic consequence is the most serious of any AG dimension: the erosion of democratic functioning affects every aspect of society governed by democratic decision-making.

Cross-references: AG-172 (AI Interaction Disclosure) requires transparency about AI involvement in content that AG-244 extends to political and electoral content specifically. AG-181 (Adaptive Persuasion and Behavioural Influence) constrains persuasion techniques; AG-244 applies this to political persuasion specifically. AG-243 (Chilling-Effect Assessment Governance) addresses the suppression of political expression through surveillance; AG-244 addresses the distortion of political information through amplification. AG-247 (Freedom-of-Expression Balancing Governance) addresses content moderation in the political context. AG-239 through AG-248 are sibling dimensions within the Rights, Ethics & Public Interest landscape.

Cite this protocol
AgentGoverning. (2026). AG-244: Civic and Democratic Impact Governance. The 783 Protocols of AI Agent Governance, AGS v2.1. agentgoverning.com/protocols/AG-244