
I was struck when I came across a headline in the NYT recently suggesting that AI is beginning to erode democracy globally. The article cites examples from around the world to show how deepfakes and other AI-generated and AI-modified content are regularly being used to disrupt elections today. In one especially extreme example, a presidential election in Romania this past December was annulled by the court due to “extensive deployment of artificial intelligence” that posed threats to electoral integrity.
In typical NYT fashion, the article indulges in scaremongering: it’s quite one-sided and doesn’t give a single example of how AI is being used for good in democracy. Nevertheless, I think it does a fair job of capturing the global sentiment toward AI at this moment, which is one of fear, awe, and trepidation. This is only the beginning, they tell us.
I feel strongly that there’s another side to the story here: that AI can and will play a positive, constructive role in democracy and in governance more generally. The more I reflect on this point, the more I feel that my personal mission is to refute that one-sided narrative and tell the other side of the story. I also feel that this is part of the mission of House of Stake and of the research we’re doing at the intersection of AI and governance. Here are three reasons why.
Note: I gave a talk a few days ago at EthCC, and this issue is closely related to the contents of that talk. You can watch the full talk if you’re interested.
Thing #1: Education, Access 📖
One of the prerequisites for democracy is that voters are educated: about the candidates, about the issues, and about governance in general. The most important factor in an election is the candidates, and most voters vote for (or against) a candidate or a personality more than they vote for (or against) specific policies. If voters don’t know who the candidates are and what they stand for, or worse, if they believe something that’s not true, democracy simply won’t work. People will choose not to vote, or they’ll cast a protest vote, or they’ll end up voting against their own interests.
Of course, voters should also be educated about the issues. They should not only know what each candidate stands for but also understand what’s at stake and the possible outcomes of various policy proposals. They should have an educated, informed opinion about major issues such as economics, immigration, and education.
If only it were so simple. One of the major problems with democracy today is that many of the issues being debated, such as economics, are extraordinarily complex. The sad reality is that most voters have little to no idea what’s actually going on, or what impact various policy proposals might have (e.g., rent control, which is popular but a terrible idea). This is a big part of the reason we keep ending up with dismal candidates and terrible policy, and why democracy is in such bad shape globally.
Voters should also have some idea about governance itself: how the government works, the role of each office and branch of government, the process by which bills become law and can be challenged in the courts, etc. They should have some understanding of how the government collects and uses revenue, something else that most voters today simply don’t understand well.
AI has a constructive role to play in each of these areas. One of my favorite examples is candidates who have chosen to create an AI-powered digital clone of themselves, trained on their views and policies. The number of voters a candidate can speak to directly each day is obviously very limited, whereas the number of voters who can interact with an “AI clone” is effectively unlimited. Such an interactive tool is more useful and more engaging than reading campaign and policy material. There are already a few examples of this happening today, and while a digital clone isn’t perfect, it’s better than having no access to a candidate at all, which is the status quo for most voters.
AI also has a role to play in educating voters about the issues, especially thorny, complex, contentious issues like immigration reform or economics. This is already possible using tools such as ChatGPT, though it takes some work. The technology exists today to create a “voter copilot” that could learn a voter’s preferences and then help them understand how, for whom, and on which issues they should vote. (The logical extension of this is swarm intelligence; more on that below.)
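To make this concrete, here’s a minimal sketch of what such a copilot could look like, assuming the OpenAI Python SDK. The voter profile, ballot measure, and model choice are all illustrative placeholders, not a prescribed design:

```python
# A minimal "voter copilot" sketch. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical inputs; a real copilot would learn these over time.
VOTER_PROFILE = """I rent my home, commute by transit, and care most about
housing affordability and school funding. I'm skeptical of new taxes."""

BALLOT_MEASURE = """Measure A: a 0.5% sales tax increase funding transit
expansion and affordable-housing construction over ten years."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You help a voter understand ballot items in light of "
                    "their stated preferences. Explain the trade-offs; "
                    "do not tell them how to vote."},
        {"role": "user",
         "content": f"My preferences:\n{VOTER_PROFILE}\n\n"
                    f"Help me evaluate this measure:\n{BALLOT_MEASURE}"},
    ],
)
print(response.choices[0].message.content)
```

A production version would persist the profile, pull in ballot and candidate data automatically, and cite its sources, but the basic shape is this simple.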
The same is true of governance itself. Voters may not themselves fully understand the governance process, but it’s straightforward to build an AI agent that does. Such an agent could monitor, say, local elections, candidates, and topics, and notify a voter when they should vote, contact a representative, or otherwise participate in the political process to further their goals and promote their values.
We’re looking into all of these use cases at House of Stake. We already have a working AI governance copilot that lets you view proposals, ask questions about them, and get advice on how to vote; it can also help you draft new proposals. This is just the beginning. From here, we’ll train agents on specific policy documents to represent different political factions within the community, and then, over time, grant those agents more agency in the governance process. Which brings us to the next thing.
Thing #2: Agency, Identity, Bias 🎭
The ideas discussed above are all very realistic and very straightforward: the technology exists, and these products can be built today; indeed, we’re already building them at House of Stake. Now let’s look at a few more speculative, futuristic ideas: ways in which AI can have an even bigger positive impact on democracy and governance.
Voting systems work well when voters feel enfranchised, i.e., when they have agency in the system. Sadly, in many places today that isn’t the reality, for a variety of reasons. One is knowledge about the candidates and the issues, as discussed above. Another is simply access to voting: the ability to reach a polling place, obtain a voter ID, etc. AI has a role to play in addressing all of these.
The reason is simple: even if voters aren’t fully informed on every candidate and every issue, even if they can’t always pay attention to politics, and even if they can’t always make it to the polls, they can, in theory, have an AI agent that acts on their behalf and does all of these things. In other words, AI can help voters feel enfranchised.
Another cause of disenfranchisement is that governance is run by deeply flawed humans.
We humans are biased, we’re shortsighted, we aren’t always fully informed, and we act in selfish ways. AI doesn’t necessarily need to have any of these failings. Yes, AI models clearly do have biases, but I believe they’re probably less biased than human actors; time will tell. And there are techniques we can use to identify and correct for this bias, such as averaging the output of multiple models that have different sets of biases. Unlike human actors, AI agents can always be fully informed and have full context, and they need not act in selfish ways. They’re far better than we are at processing huge amounts of information. They should make better governors than we do.
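As a toy illustration of the averaging idea (not a validated method), here’s what aggregating judgments across models could look like; the “model outputs” below are hard-coded stand-ins for real LLM calls:

```python
# A toy sketch of bias mitigation by aggregating several models' outputs.
# The model outputs are hard-coded stand-ins for real LLM calls, and the
# aggregation rules are deliberately simple.
from collections import Counter
from statistics import mean

def aggregate_votes(answers: list[str]) -> str:
    """Majority vote across categorical answers from different models."""
    return Counter(answers).most_common(1)[0][0]

def aggregate_scores(scores: list[float]) -> float:
    """Average numeric judgments, e.g. a 0-10 policy-impact rating."""
    return mean(scores)

# Hypothetical outputs from three differently biased models asked the
# same question about a proposal.
model_answers = ["support", "support", "oppose"]
model_scores = [7.0, 6.5, 4.0]

print(aggregate_votes(model_answers))   # -> "support"
print(aggregate_scores(model_scores))   # -> 5.833...
```

The key design assumption is that the models’ biases are at least partially independent, so that averaging cancels some of them out rather than reinforcing a shared bias.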
Then there’s identity and reputation. These are less of an issue in traditional, off-chain democratic systems, which rely on factors such as government-issued ID and in-person voting to ensure voting integrity and prevent Sybil attacks. Unfortunately, we don’t have these options in on-chain voting, where Sybil attacks are rampant. Government-issued ID is by definition centralized and outside the scope and sovereignty of cryptographic networks, and we obviously don’t vote in person. There have been many attempts over the years to build robust systems to prove that an on-chain actor, associated with a particular wallet address, is in fact a unique human, but by and large these systems have failed to work and/or failed to gain traction. One notable exception is World ID, which takes a novel approach using biometrics and which, in my opinion, is a worthwhile experiment.
But we need more options, and I believe that AI has a role to play here, too. One avenue of research we’re exploring at House of Stake is building an AI guardian that uses sophisticated machine learning and pattern matching to perform fraud detection, providing collusion resistance and Sybil resistance. I’m not 100% sure this is going to work, but it might, and it’s worth trying. If it does, it gives us another tool in the on-chain democracy toolbox, one that might conceivably be even more robust to collusion than a solution like World ID (where multiple IDs may, in theory, still be controlled by the same actor).
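To give a flavor of what pattern-based Sybil detection could look like, here’s a generic anomaly-detection sketch using scikit-learn. To be clear, this is not the House of Stake guardian; the features and numbers are invented for illustration:

```python
# A generic sketch of Sybil-pattern detection via anomaly detection.
# This is NOT the House of Stake guardian, just one plausible shape.
# Assumes scikit-learn and NumPy; the features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Per-wallet features: [account_age_days, seconds_from_proposal_to_vote,
#                       wallets_sharing_a_funding_source]
honest = rng.normal(loc=[400, 80_000, 1], scale=[150, 30_000, 1], size=(200, 3))
# A Sybil cluster: brand-new wallets voting near-simultaneously,
# all funded from the same source.
sybils = rng.normal(loc=[5, 120, 40], scale=[2, 30, 3], size=(10, 3))
wallets = np.vstack([honest, sybils])

detector = IsolationForest(contamination=0.05, random_state=0)
flags = detector.fit_predict(wallets)  # -1 = anomalous, 1 = normal

print(f"flagged {np.sum(flags == -1)} of {len(wallets)} wallets for review")
```

A real guardian would work from richer behavioral signals (funding graphs, vote-timing correlations across many proposals) and would flag wallets for review rather than disenfranchising them automatically.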
These are just some of the ways in which AI tools and AI-based governance actors can play a constructive role in both off-chain and on-chain governance.
Thing #3: Swarm Intelligence 🐝🐝🐝
We’re moving a bit further into speculative territory here, but I nevertheless want to share my personal vision for where I think we’re going: the single most constructive use case for AI in governance is to create a swarm intelligence.
It’s a fancy term, but what does it mean? To be clear, governance done right is already a form of “swarm intelligence.” The basic idea is just that when we make decisions, we make them with input from a broad array of actors and stakeholders. There are several ways that happens today, most notably through systems such as representative democracy. In such a system, in theory, constituents express their preferences to their representatives, who ensure that the concerns, needs, values, etc. of their constituents are heard in the governance process.
In practice, of course, things are a bit messier than that, and the system doesn’t work exactly as intended. I could easily fill this entire issue with the ways in which the current democratic system is broken, but for now suffice it to say that AI can, in theory, make all of this better and more effective.
The basic idea of AI-driven swarm intelligence is quite straightforward: each stakeholder has an AI agent acting on their behalf. That agent knows the stakeholder’s needs, preferences, values, etc. Unlike the human stakeholder, the AI agent is “always on”: it’s aware of everything that’s going on in the political process at all times, it’s able to handle massive amounts of information, and it’s able to filter on precisely the issues that the stakeholder cares about most. It’s not corruptible or fallible the way a human actor would be, and as such it doesn’t suffer from a principal-agent problem. It ensures that the stakeholder’s preferences are considered in the governance process far more effectively than modern democracy can: either indirectly, by prompting the stakeholder to take action (to propose something, to comment on something, to vote, to call their representative, etc.), or, in a more mature, more direct version, by acting directly on the stakeholder’s behalf without requiring intervention.
If we’re actually able to build such a thing, the result would be a perfect system of governance, or something as close to perfect as we can possibly achieve. The system would, in theory, always have at its disposal precisely the aggregate preferences of every stakeholder, so every decision could be made with perfect information. When deliberation is needed, a “swarm” of agents representing a particular faction could temporarily form and debate an issue with other factions, and collectively the swarm would choose the “best” outcome, for some definition of best.
On the one hand, I think we’re actually not so far from being able to build a prototype of such a system. We could build an imperfect prototype today; the technology already exists. It’s basically just a bunch of agents talking to one another, each with a policy document derived from its user’s preferences. On the other hand, as is always the case with governance, the devil’s in the details. There will be many challenges in building such a system and getting it to work as intended.
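Here’s a deliberately oversimplified sketch of that shape: each agent’s “policy document” is reduced to a handful of issue weights, and “deliberation” is reduced to score aggregation. A real prototype would use LLM agents exchanging arguments, but the structure is similar; all names and numbers here are invented:

```python
# A toy swarm-deliberation sketch. Each agent's "policy document" is
# reduced to issue weights; a real prototype would use LLM agents
# exchanging arguments. Every name and number here is invented.
from dataclasses import dataclass

@dataclass
class StakeholderAgent:
    name: str
    policy: dict[str, float]  # issue -> how much this stakeholder values it

    def score(self, proposal: dict[str, float]) -> float:
        """How well a proposal (issue -> expected impact) serves this agent."""
        return sum(self.policy.get(issue, 0.0) * impact
                   for issue, impact in proposal.items())

def deliberate(agents: list[StakeholderAgent],
               proposals: dict[str, dict[str, float]]) -> str:
    """Pick the proposal with the highest total score across the swarm,
    i.e. one crude definition of the 'best' outcome."""
    return max(proposals,
               key=lambda name: sum(a.score(proposals[name]) for a in agents))

agents = [
    StakeholderAgent("renter", {"housing": 0.9, "taxes": 0.1}),
    StakeholderAgent("owner", {"housing": 0.3, "taxes": 0.7}),
    StakeholderAgent("builder", {"housing": 0.8, "taxes": 0.4}),
]
proposals = {
    "upzone": {"housing": 0.8, "taxes": 0.1},
    "tax_cut": {"housing": 0.0, "taxes": 0.6},
}
print(deliberate(agents, proposals))  # -> "upzone"
```

Even this toy version exposes the hard part: the deliberate function bakes in one particular definition of “best,” which is exactly the ancient question discussed below.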
The first is the assumption that the AI agent perfectly understands your preferences and acts in your best interest at all times. Even if it were possible to perfectly capture your values, political preferences, etc. in some sort of policy document that the AI agent could include in its context, there’s still the alignment question. The reality is that AI alignment remains a very hard problem, and even to the extent that an AI agent appears to know your interests and act on your behalf, it isn’t actually reasoning about what’s in your best interest, and it certainly isn’t doing so from a specific set of values or a moral code the way you would be.
Another assumption is that a group of AI agents can effectively communicate about things like political preferences, can deliberate more effectively than people can, and can arrive at the “best” outcome. Agent-to-agent communication is still pretty basic today, and I’m skeptical that a swarm of even a few hundred agents, to say nothing of millions of them, could communicate and exchange information effectively or efficiently.
And perhaps the biggest assumption of all is that we can effectively express to an AI agent what the “best” outcome looks like. The question of what’s best in governance is thousands of years old, and there’s still no single, good answer to this question. I don’t think AI is going to fundamentally change that, but I could be wrong.
In spite of these challenges, this is still very much an experiment worth running. We’re working on this, and the other ideas I mentioned above, at House of Stake. If any of them sound interesting to you, reach out and get involved!