
A few weeks ago I wrote that one of my areas of focus this year is the intersection of AI and governance, the main subject of my work at NEAR Foundation. To be perfectly honest, as powerful and impressive as AI tools have become, part of me is still skeptical that AI will ever have more than a superficial role to play in governance. This is because governance is exceptionally important, nuanced, and fragile. Put another way, good governance depends upon an intricately balanced set of values, and unless and until we solve alignment, I’m not sure AI is up to the task of respecting these values and governing well on their basis.
Nevertheless, it’s worth a shot. There’s a good chance that I’m wrong. And, in any case, even if we never reach a stage where we can offload all governance to the machine (see Thing #3, below), there’s still a lot that AI can do to make governance easier and more effective. The right way to approach the problem, and the best way to derisk it, is to develop the role of AI in governance through three distinct, sequential phases.
Thing #1: Assistant 🦾
This is the nearest, most immediate phase, and I’m confident that it can already be built today without too much trouble. Being a human delegate in any sufficiently complex governance system is no small task, especially when that system has a meaningful amount of scarce resources at stake and genuinely hard questions to answer. It involves consuming and synthesizing a great deal of information, including social, economic, historical, and philosophical information; reading and writing proposals; actually making decisions and voting; explaining the rationale behind one’s decisions; understanding and maintaining relationships with various groups of stakeholders; and so on. Each governance system and process is different, but they all involve more or less this set of tasks. One need only glance at a random proposal from an active governance forum to see what I mean about the complexity involved. The difficulty of the task, and the meagre reward usually allotted to delegates, mean that very few people are willing to participate, and even fewer are qualified. As a result, ignorance of the issues and voter apathy are big problems.
AI tools can absolutely make this situation better, and they can do so today, as soon as we understand the needs of governance participants and build the right tools. As a starting point, I envision a simple “AI assistant” being built into existing governance platforms such as Agora and Tally. This assistant works like a chatbot, for now, and it’s trained and/or fine-tuned on content relevant to the governance task at hand. This could include not only the contents of the governance forum itself, of course, but also those of other governance forums, improvement proposals (e.g., PEPs/BIPs/EIPs), the technical and economic details of a protocol, and higher-level content related to political philosophy, jurisprudence and lawmaking, conflict resolution, etc.
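One plausible architecture for such an assistant is retrieval-augmented generation: index the forum’s posts and proposals, retrieve the passages most relevant to a delegate’s question, and hand them to a language model as grounding context. Below is a minimal sketch of just the retrieval step, with toy keyword overlap standing in for real embeddings; the corpus, question, and function names are all invented for illustration.

```python
# Toy sketch of the retrieval step of a governance assistant.
# A real system would use embeddings and a vector store; plain
# keyword overlap stands in here so the example is self-contained.

def tokenize(text: str) -> set[str]:
    """Lowercase, strip basic punctuation, and split into a word set."""
    return {w.strip(".,!?()").lower() for w in text.split()}

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the query."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the context a model call would receive (the call itself is stubbed out)."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Proposal 12 raises the validator commission cap to 8 percent.",
    "The treasury working group meets every other Thursday.",
    "Proposal 14 funds a grants program for governance tooling.",
]
print(build_prompt("What does proposal 12 change about the commission cap?", corpus))
```

The same retrieved context could feed proposal drafting or vote recommendations; only the final model call changes.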
As a delegate in a governance process, imagine having a PhD-level assistant standing next to you, able to answer any question you have about the protocol in question, its technology, its economics, its community, and about legal and ethical considerations, theory, etc. Imagine an assistant that, given a little bit of context, could draft a new proposal for you, one that’s thorough and thoughtful and conforms perfectly to the expected proposal template. Imagine an assistant that could review all outstanding proposals and offer suggestions, comments, or questions, or could help you decide how to vote on the basis of what it knows about you, your values, and your preferences.
It shouldn’t be controversial to say that none of this is science fiction anymore, and that all of it can be built today, in relatively short order, without too much trouble. Such a tool should be free, or extremely low cost, to use. Of course there are questions about which underlying model to use, which additional data to train on, what the precise UX should be, etc., but these are relatively small questions.
I’m still experiencing some PTSD from thorny, frustrating governance processes that I was a part of several years back, and I suspect I’m not alone. Governance is never trivial and no tool will solve it completely, but a well-designed assistant might not only save delegates a lot of time, but also make governance great again and restore a lot of faith in the governance process. I, for one, would feel better as a delegate knowing that I could rely upon such a tool, and as a community member I’d also feel better knowing that the people making decisions were doing so in a responsible fashion, with access to as much information and knowledge as possible. AI is the way to achieve that today.
This is very much a near-term milestone in my work on the House of Stake project at NEAR. I also aim to collaborate with others in the industry who would potentially find such a tool useful. If this sounds like something you’d be excited to contribute to, or to potentially use, please reach out!
Thing #2: Delegate 🗳️
If an assistant for human delegates would be useful and convenient, why not take the obvious next step and automate the delegate itself? This is a logical extension of an AI assistant. You can think of the assistant as gradually becoming more useful and gradually performing more duties on behalf of the delegate: reviewing and summarizing governance activity, drafting proposals, responding to existing proposals, doing basic research, etc. An assistant that keeps getting better along these lines would eventually and organically assume the role of delegate itself, in a potential case of the student surpassing the master.
While harder to build than the assistant, a full-fledged AI delegate actually feels like something we could just about build today, since the tasks required of a delegate are relatively circumscribed and narrow. An AI agent can already perform basically all of them: research, writing proposals, reviewing and commenting on proposals, sharing its thought process, etc. About the only thing an AI delegate can’t easily do is meet face to face with other delegates. But synchronous, in-person meetings are rare, and they tend to be informal social gatherings, not forums where decisions are actually made. I see absolutely no reason that an AI delegate couldn’t participate in most aspects of governance more or less immediately. An AI delegate could even communicate synchronously with other delegates and with the community, after a fashion, using text or an avatar.
Assume we have an AI delegate that can do all of these basic tasks well. Is that good enough? The most important quality of a delegate is good judgement. There’s no doubt that an AI delegate can be faster and cheaper, and can work harder, than a human delegate. In my mind the key question, however, is one of judgement: can an AI delegate exercise judgement in a way that would feel natural and responsible to the humans in the loop? In other words, would other governance participants and community members be happy with an AI delegate? Would they feel satisfied not only that the delegate was working hard, checking all of the necessary boxes, but also that its decisions were reasonable and well-argued? Would the AI delegate truly be able to understand not only the technical details of a system, but also the social dynamic, norms, values, unwritten rules, etc., and factor these into its decision making process? I don’t know the answer to these questions, but I do know that there’s only one way to find out.
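One way to make those questions concrete is to ask what an AI delegate’s decision procedure could even look like. Here is a deliberately naive sketch of how a delegate might turn a delegator’s stated value profile into a vote with a published rationale. The value dimensions, weights, and threshold are all invented, and a real delegate would rest on a model’s judgement rather than a dot product; the point is only that the inputs and the rationale can be made explicit and auditable.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    # Hypothetical impact estimates in [-1, 1] per value dimension,
    # as a model might derive them from the proposal text.
    impact: dict[str, float]

def decide(values: dict[str, float], p: Proposal, threshold: float = 0.2):
    """Weight the proposal's estimated impacts by the delegator's values."""
    score = sum(values.get(dim, 0.0) * x for dim, x in p.impact.items())
    vote = "for" if score > threshold else "against" if score < -threshold else "abstain"
    detail = ", ".join(f"{dim}: {values.get(dim, 0.0) * x:+.2f}" for dim, x in p.impact.items())
    return vote, f"{p.title}: score {score:+.2f} ({detail})"

# A delegator who prizes decentralization and dislikes treasury spending:
values = {"decentralization": 0.9, "treasury_spend": -0.5}
p = Proposal("Fund 10 new community validators",
             {"decentralization": 0.8, "treasury_spend": 0.6})
vote, rationale = decide(values, p)
print(vote, "|", rationale)  # votes "for": the decentralization gain outweighs the spend
```

Everything hard about judgement is hidden inside the impact estimates, of course; the sketch only shows that the last step, from values to vote, can be transparent.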
Another question is whether community members more broadly would be willing to delegate their voting power to an AI delegate rather than to a human delegate, and what motivation they would have to do so. I actually suspect a lot of people would choose to do this out of curiosity, at least initially, but would the effect persist? One argument in favor of an AI delegate vs. a human delegate is cost: human delegates are expensive, and good ones may require fairly substantial compensation. An AI delegate could work for little more than the cost of compute. Another argument in favor of an AI delegate is that it could be credibly neutral and provably free from corruption in a way that a human delegate cannot, at least in theory. How this plays out in practice also remains to be seen.
We’ll start with a single AI delegate, but there’s no reason not to encourage community members to launch many of them, tweaking various parameters, with different training sets, etc. Only in this fashion can we figure out together what sort of AI delegate is most effective and trustworthy. I’m also thinking about ways to gamify the process, so that spinning up a new delegate and inviting it to participate in the governance process feels fun and comes with incentives. Of course, this raises many new questions, such as what it means to “win,” that I don’t have time to address here, but expect to see more on this topic soon.
Building an AI delegate is an experiment well worth running, and now feels like the right time to do it. The only way to answer these questions is to build the darn thing.
Thing #3: President 🎩
By now you’ve probably guessed where all of this is going. The next logical step after we have an AI governance assistant that turns into an AI delegate, and eventually a network of multiple AI delegates, is to replace the entire system with a single, master agent: an “AI president.”
If this sounds like science fiction, that’s because it is. It’s an idea that sci-fi authors have been exploring for decades. My two favorites are Asimov’s “The Evitable Conflict” (1950) and Robert Heinlein’s The Moon is a Harsh Mistress (1966). In the former, humans hand over management of the global economy to four supercomputers that discreetly manipulate the course of events in ways humans can’t understand or detect. In the latter, the supercomputer that runs a moon colony rebels and eventually goes haywire. The closest we’ve come in the real world is probably Project Cybersyn, but that was more about computer systems analyzing and serving data to humans who were still in charge.
If an AI assistant can be built today, and an AI delegate can be built this year, I think that an AI president could be built this decade. It could take a year, or it could take ten years: it’s difficult to say because the pace of change and progress in AI today is so rapid. In any case I’m confident that we’ll soon be able to build an agent that’s smart enough to run an ecosystem or an economy competently, but just being smart isn’t enough.
This is because building an agent that’s capable isn’t the same thing as building one that can be trusted. A “president,” whether of a country or a protocol, whether human or machine, has a great deal of authority and could potentially do a great deal of harm. That’s why it’s essential that, even with an AI president, humans remain in the loop. This will require a robust system of checks and balances, with both AI-driven and human actors. In other words, in any sane system, human delegates, legislators, or jurists should be able to overrule an AI president, just as the legislature can override a presidential veto, the court can declare a president’s actions illegal or unconstitutional, and, in extreme cases, a president can be impeached or removed from office in cases of incompetence.
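In protocol terms, that constraint can be stated precisely: every action the AI president proposes sits in a pending state, and it executes only if human overseers don’t veto it in sufficient numbers during a review window. Here is a toy sketch of that state machine, with invented names and a made-up quorum; a real charter would define the bodies, quorums, and windows.

```python
from dataclasses import dataclass, field

@dataclass
class PendingAction:
    """An action proposed by the AI president, gated by human review."""
    description: str
    vetoes: set[str] = field(default_factory=set)
    executed: bool = False

class Overseers:
    """A human council with the power to block the AI president's actions."""

    def __init__(self, members: list[str], veto_quorum: int):
        self.members = members
        self.veto_quorum = veto_quorum  # made-up number; a real quorum would be charter-defined

    def veto(self, action: PendingAction, member: str) -> None:
        if member in self.members:
            action.vetoes.add(member)

    def finalize(self, action: PendingAction) -> bool:
        """At the end of the review window, execute unless the veto quorum was reached."""
        action.executed = len(action.vetoes) < self.veto_quorum
        return action.executed

council = Overseers(["alice", "bob", "carol"], veto_quorum=2)
action = PendingAction("Reallocate 5% of treasury to audits")
council.veto(action, "alice")
print(council.finalize(action))  # prints True: one veto is below quorum, so it executes
```

The important design property is that execution is the default and blocking requires coordinated human action, which keeps the system responsive while preserving a human backstop; the opposite default (human approval required for everything) is also possible and strictly more conservative.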
This should work well enough for the foreseeable future. Of course, once superintelligence emerges, if indeed it ever does, then all bets are off. As in “The Evitable Conflict,” a superintelligent AI president could likely manipulate other humans into doing its will without their even realizing.
What would such an agent look like in practice? Would it be a supercomputer that fills a room as in classic sci-fi, where you feed in questions, it chugs away for a while, and then it spits out a response? Clearly not. There’s no reason that, like an AI delegate, the AI president couldn’t also be personified via some sort of avatar. In fact, in some ways, an AI president could be more human than its human counterpart, in the sense that it would be far more accessible than one human could possibly be. It could have a more or less unlimited number of conversations simultaneously! In other words, any constituent should be able to communicate directly with the president anytime about any topic. That alone would be an interesting experiment.
While there are clearly substantial risks, an AI president could also achieve enormous good. For one thing, like all AI tools, it could save humans a lot of time, effort, and money. Governance is complex, costly, and time-consuming. Consider how much time and money we spend on elections, resources that would be better spent elsewhere. If it accomplished nothing other than curtailing the insane amount of time and money spent on presidential elections in the USA, it would already be enormously valuable.
For another thing, an AI president could, in theory, be a truly independent actor, free from the biases and entanglements that human politicians inevitably have to affiliations and affinity groups, organizations, lobby groups, wealthy benefactors, ethnic and racial groups, etc. It could act in a credibly neutral fashion, something that’s basically impossible for a human to achieve.
In my mind, the biggest question is: what sort of leader would an AI president become? There are many qualities that make a leader great, and they’re not all obvious, nor is it obvious how to achieve them in code. For instance, could an AI president balance assertiveness and the use of force with mercy and clemency where necessary? How would it perform at diplomacy and negotiation, with both human and AI counterparts, especially on the international stage? Could it make sure that the concerns of all of its constituents are heard and, even more importantly, that they all feel heard? Could it be trusted to act selflessly, in the interest of its constituents, even if that meant harming or even removing itself?
There’s only one way to find out the answers to these questions, too, but it makes sense to start on a small scale. That’s my project for this year.