
Sovereignty is one of my favorite subjects. It’s not something I thought very much about before joining the blockchain industry, but since then I’ve thought deeply about the idea, and it fascinates me. We use the term all the time without always specifying or understanding what it actually means! But it’s absolutely essential and central to everything that we do in this industry.
In fact, the thing that first drew me to the blockchain and cryptocurrency space was the promise of sovereignty: the idea that individuals and communities can regain control of their assets, their data, and by extension, their digital lives. This manifests in all kinds of interesting ways, from families moving their savings out of unstable, hyperinflating local currencies to bitcoin or stablecoins, to applications that are governed by a community of users, where the community profits directly from application revenue, to truly sovereign network states, with their own social norms and legal code. Blockchain and its underlying cypherpunk ethos strongly emphasize the importance of self-sovereignty in particular: the importance of individual responsibility and the power that modern digital technology, such as cryptography, has to give people agency and control over their own lives (and reclaim it from big companies).
Like Bitcoin, I've mostly focused on self-sovereignty up to now, but I realized recently that the idea of sovereignty applies equally beyond the context of the self, at higher levels and to larger groups of people. Groups have similar needs to individuals with respect to sovereignty. This has big implications for how we design, build, operate, and govern AI tools, which of course will be used not only by individuals but by groups of all sizes.
AI presents lots of new challenges and opportunities along these same lines. The opportunities are massive: individuals, families, and communities everywhere can take control of their digital lives in the new AI era. But the challenges are myriad. They resemble the challenges of modern software and the modern web, but with greater risk: giving total control over AI model training and inference to a few big companies is quite frankly terrifying. Not only can and will these companies lie to you, but, even more frighteningly, through AI they have the power to actually change the way you think, reason, and see the world, potentially without your even being aware of it.
There’s a fundamental tradeoff here that we need to consider, discuss, and explore: we’re giving up some degree of sovereignty in exchange for a powerful tool. This tradeoff of course existed before, with previous generations of technology including the modern Internet, but it’s clearer and more present with AI technology than ever before.
Thing #1: Personal Sovereignty 👤
The starting point, and the most important form of sovereignty, is personal sovereignty, which is actually a surprisingly modern idea. It’s closely tied to questions of personal rights and freedoms. The fundamental idea is that you have sovereignty over your own body: you should be able to dress the way you want, express yourself freely, etc. You should be free to pursue whatever occupation you want (subject, of course, to the economic realities of capitalism). You should be free to travel and move, and in general you should have sovereignty over how you choose to live your life.
Personal sovereignty isn’t a novel idea in the offline world, but the notion of digital sovereignty is a bit newer. In the digital realm, we need to extend the notion of personal sovereignty a bit further. First and foremost, you should have sovereignty over your identity. This means that, aside from some narrow applications with direct connections to the offline world, such as citizenship services, banking, and healthcare, you should have the freedom to use any identity online. You should be free to use your real name, or a nickname or pseudonym, or many pseudonyms, or even to be entirely anonymous (4chan style). By extension, you should also have the right to update or delete your online identity at any time. This is an extremely important right, and removing it has the chilling effect of severely curtailing freedom of expression online.
Beyond identity, you should also have sovereignty over the rest of your data. You should be able to custody your own data in the manner that you choose, and companies should not have custody over your data without your permission, or without compensating you for their access to your data and the money they make from it. You should be able to check, update, download, copy, and delete your data quickly and easily regardless of the service it’s associated with. You should also be able to share your data, in part or in whole, with other service providers and applications without limitation, and you should be able to revoke that permission at any time, for any reason or for no reason. If this right isn’t protected, then it’s effectively not your data, it’s theirs—which, sadly, is how it works today.
Privacy is very closely related to identity and data sovereignty. While certain special applications such as the ones mentioned above (citizen services, banking, healthcare) may from time to time require you to reveal some sensitive personal information, that information should be privileged and protected. Again, application service providers should not have the right to custody that information under any circumstances. As a general principle, you shouldn’t be forced to reveal more of your information than you choose. This is especially important in the context of applications like banking and healthcare, but it’s equally relevant everywhere: I also don’t want strangers reading my email, seeing my photos, or even seeing my draft articles or workout data. It’s also important to note that privacy works both ways: you should also have the right and the ability to easily and selectively disclose your data as and how you wish, with the people you want to share it with.
Note that there’s a lot of overlap here with the goals of the broader Web3 movement. A few years ago, I attempted to codify these into a Web3 Bill of Rights, which is based on the principle of individual sovereignty.
Thing #2: Local Sovereignty 📍
Sovereignty is important for individuals but it’s also important for groups. Groups should have most of the same rights as individuals with respect to things like identity, privacy, and data custody, and need some additional rights as well. This applies at many levels: the family, the firm, the city, the community, the nation. Sovereignty is fundamentally about taking control of and responsibility for our lives, and since we’re social creatures, this has to happen at several levels and in several social contexts. No man is an island, and our lives of course include many other individuals.
Beyond the individual, the family is the most important sovereign unit. Of course, individuals need to be able to plan and build their families as they like. In the digital realm, families have all of the same rights as individuals: they should retain custody of their data, have control over who has access to it, not be forced to reveal information about the family, etc. Families should also be able to share access to identity and data within the family: applications should offer family-friendly default settings, and categories of settings, e.g., full access to applications and data for all family members, or full access for adults and only limited access for children.
Then there are firms, which are actually a lot like big families, with officers and board members instead of parents. They need very similar control and sovereignty over their data. Just as it’s appropriate for parents to monitor their minor children’s use of software, including AI, and to access their data, it may be necessary and appropriate for a company’s management, or for departments such as compliance and security, to access some employee data. It’s equally important that these policies be clearly communicated to employees so that they understand their rights, obligations, etc.—in other words, so that they understand the interface between personal and corporate sovereignty. Being part of a company by definition means ceding partial sovereignty to the firm in exchange for employment and in the interest of efficiency.
One big difference between families and firms is that, while families are very unlikely to need or indeed to want to run their own custom AI servers, this is precisely what many firms want! For many kinds of applications, firms need control over most, or perhaps all, of the AI technology stack, including identity and data. This is to prevent vendor lock-in (what happens when AI tech companies start raising their prices?), to ensure data portability and compliance, and to ensure that employees have access to models, parameters, and inference engines that are specifically well-suited to the task at hand, rather than to general purpose use cases. (This is exactly the first AI use case that we’re working towards at House of Stake.)
We’ll tackle nations last, but there are other cases of group sovereignty that I find especially fascinating: cities, and other large communities such as tribes. Their needs fall somewhere between those of firms and those of nation states.
They may not have realized it yet, but cities probably also need their own sovereign AI stacks. There’s enough difference between life and culture in Boston vs. Berlin vs. Buenos Aires that, again, using a general purpose model and general purpose inference tools just won’t cut it. There are linguistic considerations, cultural considerations, and geographic considerations. As one random example, an AI tool that’s really good at navigating Los Angeles probably won’t do very well at navigating New York, and vice versa. Cities also have certain social norms and laws that they’ll want to emphasize or enforce using AI tools, and privacy matters as well: they probably shouldn’t give their citizens’ data to third party companies.
The key point here is that AI tools aren’t merely descriptive, they’re by definition normative as well. In other words, while they’re useful for recording, describing, and exploring the world as it is, objectively, they’re also very powerful tools for shaping our thoughts about the world around us. This normative use case will be a goal of all sovereign AI applications, just as it’s the goal of government communication everywhere (otherwise known as propaganda). This idea might make us uncomfortable, but it’s nevertheless a reality. We should embrace sovereign AI even knowing that authoritarian governments will use it in ways that make us uncomfortable, because there are at least as many socially constructive use cases.
A great example is the indigenous use case. Indigenous groups have the exact same concerns as cities, and perhaps even greater concerns, regarding their members’ privacy and sovereignty over data and identity. One reason is that the sovereignty of indigenous groups is protected by law, and they typically have greater sovereignty than cities. Native American reservations, to use one example, can set their own tax policy and other laws, and have a degree of sovereign legal immunity.
In many jurisdictions, tribes and other indigenous groups also have absolute sovereignty over membership, i.e., identity, so it’s essential that this be reflected in the AI stack as well. Perhaps even more than cities, indigenous groups also have important, traditional cultural norms and practices that need to be respected and reflected in AI models and inference tools. These tools, in turn, can do a lot to help record, respect, and share cultural elements and artifacts, such as language or even the record of people, locations, and physical artifacts. AI will be a powerful tool for protecting, recording, and promoting indigenous culture, and many smart people are already working on this. It’s at least as important for indigenous communities as it is for cities and other communities to be able to assert sovereignty over the underlying AI models and inference tools, to control policy, and to profit and participate in the upside as well.
Which brings us to the nation state, which exercises sovereignty not over a small group but on the grandest scale of all.
Thing #3: National Sovereignty 🚩
When we talk about sovereignty, we’re usually talking about national sovereignty. Sovereignty in the context of the nation state is a really powerful, important idea that undergirds the entire international system: the idea that sovereign states are responsible for their own affairs and have their own legal regimes, and that, barring genocide and other war crimes, countries should respect one another’s sovereignty and not interfere. (Intervention even in cases of war crimes is controversial for good reason.)
However, it’s interesting to note that things didn’t always work this way, and as such we shouldn’t take this form of sovereignty for granted! Nation state sovereignty as we know it came into existence in the 17th century with the Peace of Westphalia (1648), which ended the Thirty Years’ War, a series of massive religious wars that had ravaged Europe for a generation.
In politics, there’s a fundamental conflict between the idea of territorial law and the idea of natural or universal law. Religion is a universal concept which by definition knows no political boundaries. As such it’s imposed on the nation state by a higher order: e.g., by the Pope in Rome, by the Church, or by God himself. Sovereignty, by contrast, is by definition local. The idea of sovereignty effectively means that the nation state is not beholden to any universal law, including the laws of any religion, and is instead free to pursue its own destiny, with its own social and legal system. You can disagree with this principle, but in my opinion we have it to thank for more growth, prosperity, and general human wellbeing over the last few hundred years than almost any other single idea.
Maybe it’s stretching the metaphor, but I think there’s a parallel to AI here. Today, when you use an AI tool from the likes of OpenAI or Anthropic, you’re playing not by your rules, but by their rules, which by and large are universal and imposed from the outside. Yes, these companies are legally required to comply with the regulations of each jurisdiction where they do business, and they may be willing to run some technology components “on premise” for certain customers. But even in these cases, with respect to the actual, underlying AI technology, such as the model and inference engine, the company is very much in control. They may agree not to sell your data or use it for training purposes, but even here, the trust model is “trust me bro” and there’s absolutely no way to verify that what they say is true. Many AI firms have experienced data breaches, and recent overreach by courts is forcing them to indefinitely retain even sensitive user information.

In this respect, your team or organization is like a pre-Westphalia kingdom. OpenAI is like the Roman Catholic Church, and Sam Altman is like the Pope. If you want to do business, if you want to engage in trade, today you have to play by their rules.
That’s not sovereign at all, and that’s not what state actors want. Trust me, I’ve had several of these conversations in different parts of the world. Sovereign actors don’t want to use a model trained by someone else, without being able to verify the input data and know exactly how it was trained (and for good reason: “sleeper agents” hidden in models are proven to work). They don’t want to run inference on someone else’s hardware, with no way to verify that the inference is happening faithfully, or how it’s happening, or with what parameters, or that no data is leaking.
This shouldn’t only be scary to sovereign actors, it should scare you, too. I’m really frightened by the idea of giving an agent access to sensitive data of mine—say, my Google or Telegram account, or my crypto keys—without being able to independently prove or verify any of the things just described. But that’s how literally every AI agent works today. You’re giving them the keys to the kingdom, and there are no guards on duty at the gate. They expect you to trust them, which is the broken, Web2 way of doing things.
There’s a better way. Not all of these technologies are ready for prime time yet, but we’re working on all of them at NEAR. It starts with model training. It’s absolutely critical to know exactly what data was used to train a model. This is doubly true because training isn’t deterministic, i.e., even if you attempt to train another model on the exact same data set, you’ll get a different result. But it’s possible to do verifiable training using technology such as the Trusted Execution Environment (TEE) on the latest generation of GPUs, which can produce a cryptographic proof that training was completed faithfully. Today, only a small number of companies have the hardware and expertise to do model training. You hire them and pay them a lot of money to train a model for you, but when you get the result, you have absolutely no way to be sure that they completed the training faithfully. These proofs are a better way.
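To make the idea concrete, here’s a minimal sketch of what checking such a proof might look like on the customer’s side. Everything here is an illustrative assumption: the attestation format, the field names, and the HMAC used as a stand-in for a real TEE vendor’s hardware-rooted signature scheme are invented for the example, not NEAR’s or any vendor’s actual API.

```python
import hashlib
import hmac
import json

def dataset_hash(records: list[bytes]) -> str:
    """Commit to the exact training dataset by hashing every record."""
    h = hashlib.sha256()
    for record in records:
        h.update(hashlib.sha256(record).digest())
    return h.hexdigest()

def verify_training_attestation(attestation: dict,
                                my_records: list[bytes],
                                tee_key: bytes) -> bool:
    """Accept a trained model only if (1) the report really came from the
    trusted enclave and (2) the enclave trained on *our* exact dataset."""
    body = json.dumps(attestation["report"], sort_keys=True).encode()
    # Stand-in for verifying the TEE's hardware-rooted signature.
    expected_sig = hmac.new(tee_key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, attestation["signature"]):
        return False  # report was not produced by the trusted enclave
    return attestation["report"]["dataset_hash"] == dataset_hash(my_records)
```

The point is that the check is mechanical: if the report doesn’t commit to the hash of your exact dataset, or the signature doesn’t verify against the enclave’s key, you reject the model, with no trust in the training vendor required.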
This goes for inference as well. Rather than giving that agent the keys to the kingdom and access to my sensitive data, I can run the agent inside a secure, verifiable cloud, again using TEE technology, and be absolutely sure which code is running, that no data is leaking, etc. Only in this way will I ever trust an AI agent to access my data and perform actions on my behalf. This technology is already available from NEAR AI.
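The inference side can be sketched the same way: before an agent ever sees your credentials, you check that the enclave’s code measurement matches something you’ve audited. The measurement scheme and function names below are hypothetical simplifications; a real remote-attestation flow verifies a hardware-rooted certificate chain, not a bare hash.

```python
import hashlib
from typing import Optional

def measurement(agent_code: bytes) -> str:
    """Simplified 'measurement': a hash of the code the enclave is running."""
    return hashlib.sha256(agent_code).hexdigest()

def release_secret(enclave_measurement: str,
                   secret: str,
                   trusted_measurements: set[str]) -> Optional[str]:
    """Hand a credential to the agent only if the enclave is running
    code we have audited; otherwise release nothing."""
    if enclave_measurement in trusted_measurements:
        return secret
    return None
```

The design choice this illustrates is that trust flows from verification, not reputation: a new agent version gets access only after its measurement is added to the audited allowlist.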
The NEAR community talks about “User-owned Internet” and, now, “User-owned AI.” This is the idea in a nutshell: AI that works for you, not for a big tech company, and provably, verifiably so. A future without this sort of technology is quite frankly really terrifying, and I see my work as trying to chart a better, less dystopian course for this novel technology which is so clearly in the process of taking over the world. It’s as important for the individual, the family, the city, and the tribe as it is for the nation state and, ultimately, for humanity itself.