Author: Timsa Bajpai, Research and Operations Intern
Open code doesn’t always mean open democracy. Today, governments and civic organisations are increasingly using open-source AI platforms such as Pol.is and Decidim to scale citizen participation and deliberation, enabling collective engagement at a scale that was previously impractical.
From Taiwan’s vTaiwan consultations to Decidim Barcelona’s participatory budgeting, these tools bring government and community voices closer together and promise unprecedented reach: in theory, anyone can view the underlying code, adapt it, and join the process. This openness is designed to build transparency and trust while bringing more voices into policy conversations.
However, open source does not guarantee accessibility: making code public is not the same as making the platform, its algorithms, and its outputs understandable to the citizens who participate.
Instead, “open” platforms risk reinforcing closed civic processes: rather than removing barriers, they shift gatekeeping from political elites to technical ones, a tension that remains at the heart of “civic AI.”
The Innovation for the Future of Democracy
The appeal of AI-facilitated citizen deliberation is clear. International organisations such as the European Center for Not-for-Profit Law have noted that combining AI with civic tech allows decision-makers to analyse vast volumes of citizen input, moderate harmful content, and map discussion themes at scale, tasks that are impossible to achieve through manual review alone.
For instance, vTaiwan employed Pol.is to facilitate large-scale conversations and consensus building on contentious issues, such as Uber ride-sharing regulation; the administration went on to ratify all of the Pol.is-derived consensus items into new regulation that democratically shaped Taiwan’s taxi industry. Similarly, Decidim is now deployed in 477 instances across 32 countries and has mobilised more than 925,000 participants across 428 participatory processes and 120,000 proposals, on topics ranging from healthcare to climate change.
The Problem with the Current Approach
Yet despite these successes, in practice “open” code helps only a narrow community of coders, not the average participant. Most citizens who actively engage with these platforms lack the technical literacy to inspect or meaningfully audit source code, interpret the parameters that determine algorithmic decisions, or engage with the clustering algorithms that drive AI-based deliberation. According to the OECD, most citizens find such systems technically inaccessible and feel alienated unless the tools are designed for varied levels of digital literacy. Researchers further warn that inadequate computer literacy can undermine the very aim of AI-based civic participation. Even experts at Yale have emphasised that transparency must be paired with clear, accessible communication to sustain public trust.
This evidence suggests that when platforms summarise thousands of open-text responses into a few consensus statements, they can misrepresent community sentiment, because the machine-learning pipeline behind the publicly available code is often opaque in practice. Participants cannot see why certain comments were prioritised and others sidelined, or whether minority viewpoints were retained or discarded. The result is transparency in form but opacity in practice: the platform publishes its code, yet participants still cannot trace whether their individual input was heard, how it shaped thematic clusters, or whether it ever reached a policymaker’s desk.
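As a concrete illustration, consider how a naive consensus summary can erase a minority bloc. The sketch below uses invented vote data and a simple supermajority threshold, not any platform’s actual pipeline (Pol.is-style systems group participants by vote similarity before summarising):

```python
import numpy as np

# Hypothetical agree(+1)/disagree(-1) vote matrix: rows are participants,
# columns are comment statements A-D. The data is invented for illustration.
votes = np.array([
    [ 1,  1, -1, -1],   # participants 0-2: majority viewpoint
    [ 1,  1, -1, -1],
    [ 1,  1, -1,  1],
    [-1, -1,  1,  1],   # participant 3: minority viewpoint
])
comments = ["A", "B", "C", "D"]

# A naive "consensus" summary: keep only comments with supermajority agreement.
agree_rate = (votes == 1).mean(axis=0)
consensus = [c for c, r in zip(comments, agree_rate) if r >= 0.7]
print(consensus)  # ['A', 'B'] -- the minority's support for C and D vanishes

# Grouping participants first (here by taking the dissenting row directly, for
# simplicity; real platforms cluster on vote similarity) preserves that bloc:
minority = votes[3:]
print((minority == 1).mean(axis=0))  # [0. 0. 1. 1.]: C and D are its consensus
```

Whether the minority viewpoint surfaces thus depends entirely on design choices such as thresholds and cluster counts, none of which a non-technical participant can inspect just because the code is public.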
Why This Matters
Sustaining democratic participation depends on trust, and trust depends on visible links between input and outcome. Without those links, even motivated participants can disengage. Although it is argued that open code allows for independent audits, a Cornell University study finds that audits often fail for lack of effective institutional design: expert review alone cannot guarantee accountability unless the surrounding system supports regular oversight and lasting transparency. And whilst these platforms can broaden reach – and so claim to expand participation – a World Bank blog cautions that they may simply amplify the voices of those who can already navigate the technology, fragmenting representation.
The Risk: A New Era of Technocracy
If these accessibility gaps remain unaddressed, we risk entering a new form of technocracy. Democratic decision-making will be mediated not by elected representatives alone but by a small group of coders, platform designers, and data scientists. In such a system, the ability to understand and influence outcomes will become tied to technical skill instead of civic status.
That outcome would undermine the very promise open-source civic AI was meant to deliver. Instead of dismantling barriers to participation, it would erect new ones that are more complex and harder to challenge, yet easier to justify as “transparent” simply because the code is available to the public.
A Better Way Forward
Making AI-facilitated deliberation genuinely democratic does not require reinventing the technology; it requires embedding a set of governance and design principles, applied in concert, to close this gap.
- Plain-language transparency
In Germany, the Adhocracy+ platform builds in plain-language “how it works” pages for each participation module so that users know exactly what will happen to their input. It offers a multi-language interface, illustrated manuals, and free onboarding workshops. Current AI platforms could adapt this by adding three tabs to every project section explaining:
- what data is collected
- how it is grouped
- what filters are applied (e.g. toxicity detection)
with alternative formats such as audio, video, or infographics for inclusivity.
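Such a descriptor could live alongside each project as plain structured data that the interface renders as tabs. The sketch below is purely illustrative; the field names, summaries, and formats are assumptions, not any platform’s actual schema:

```python
# Hypothetical plain-language transparency descriptor, one per project page.
# Each entry becomes a tab; "formats" lists the inclusive alternatives offered.
transparency = {
    "data_collected": {
        "summary": "Your comment text and agree/disagree votes. No names.",
        "formats": ["text", "audio", "infographic"],
    },
    "how_grouped": {
        "summary": "Comments are grouped by topic using automatic clustering.",
        "formats": ["text", "video"],
    },
    "filters_applied": {
        "summary": "Comments flagged as toxic are held for human review.",
        "formats": ["text", "infographic"],
    },
}

# A front end would render one tab per entry, e.g.:
for tab, info in transparency.items():
    print(f"{tab}: {info['summary']}")
```

Keeping the descriptor as data rather than prose buried in code means it can be translated, audited, and rendered in whichever format a participant needs.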
- Traceable implementation
In Iceland, the Better Reykjavik digital participation platform lets citizens submit ideas, debate them in “pros and cons” columns, and vote. Every month, the top ideas are automatically forwarded to the City Council, which is required to post a public response explaining whether each idea is accepted, modified, or rejected. This visible feedback loop lets contributors trace their idea’s journey. A similar system in existing platforms could give each participant a private contribution ID with real-time status updates, giving participants confidence that their input is being processed.
- Independent, recurring audits
In Estonia, a Keyless Signature Infrastructure (KSI) blockchain guarantees the integrity of government records. Instead of storing personal data, KSI stores cryptographic hashes (“fingerprints”) of logs and documents to detect tampering, making records verifiable by auditors and the public without anyone needing to read raw code and datasets. Current participation platforms could apply this by hashing daily submission and decision logs, publishing those hashes to a public ledger, and offering a verify button so anyone can confirm records haven’t been altered. This makes audits provable by design rather than solely dependent on trust.
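The verification loop can be sketched in a few lines. This is an illustrative simplification, not Guardtime’s actual KSI protocol: only a hash of each day’s log is published, and anyone can recompute it later to detect tampering:

```python
import hashlib
import json

def digest(log: list) -> str:
    """Hash a day's log deterministically: canonical JSON, then SHA-256."""
    canonical = json.dumps(log, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical day's submission log (invented records for illustration).
day_log = [
    {"id": "a1", "text": "lower transit fares", "cluster": 2},
    {"id": "b7", "text": "more bike lanes", "cluster": 0},
]
published = digest(day_log)  # this value goes to the public ledger

# A "verify" button simply recomputes the hash and compares:
assert digest(day_log) == published   # unchanged log verifies

# Any edit, however small, changes the hash and exposes the tampering:
tampered = [dict(day_log[0], text="raise transit fares"), day_log[1]]
print(digest(tampered) == published)  # False
```

Because the published value is a hash, the ledger reveals nothing about individual submissions, yet any retroactive edit to the stored log becomes publicly detectable.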
Conclusion: What Democracy Needs from AI and Vice Versa
Open-source civic AI is a promising step for democracy, but “open” must mean more than code visibility. Unless platforms are open in process, understanding, and outcomes, they risk becoming digital façades for participation with new layers of complexity over the same old barriers. This democratic-AI nexus holds great potential for a sustainable future if approached collaboratively. The true measure of democratic technology will be whether it leaves the average citizen more able, more informed, and more confident to take part.
Otherwise, open source may close more doors than it opens.