
I’m an eternal optimist. I tend to assume that things will turn out okay over the long run, and AI is no exception. But not everyone agrees with me. I’ve had tons of deep conversations with smart people over the past few months about the ways in which AI will change the world, and I’ve noticed that overall people are quite pessimistic about it. Not only in the sense that “AI is going to kill us all” (this issue isn’t about AI safety), but also in a more nuanced way.
People really do think that AI is going to take all of our jobs and leave us without a sense of meaning or purpose, and that it may also impoverish a lot of people along the way. Because of my optimism I never previously took that viewpoint seriously, but it’s a valid concern held by a lot of people. I thought it would be interesting to examine both sides of the coin this week.
Thing #1: Force For Good 🦸
The main reason I’m optimistic about AI is the same reason I’m optimistic about tech in general. Over very long periods of time, over decades and centuries, technology has been an overwhelming force for good in human society. We’re orders of magnitude healthier, happier, and wealthier today than we used to be. We simply have so much more. We know so much more.
A lot of that is down to technology, which has made us more efficient in economic terms and allowed us to grow wealthy. While it’s true that the wealthiest, best-connected, and most tech-savvy among us have benefited disproportionately, it’s also true that technology has made the life of the average person immeasurably better than it used to be. Even the person living the median lifestyle today has things at their disposal that kings and emperors could only dream of in years past: modern medicine (painkillers, antibiotics, anesthesia, freedom from once-deadly diseases), air conditioning, running water, and electricity. Kings, presidents, and billionaires today all use the same iPhone and the same apps that you and I use.
What’s more, with literally every successive wave of technology there have been naysayers and doom-and-gloomers predicting that this time it would be different, and that the latest and greatest tech trend, whatever it might be, would take away jobs and put people out of work. They said it about sewing machines. They said it about automobiles. They said it about elevators. They said it about computers. You name it. And yet, while there may have been some short-term disruption, with the benefit of hindsight and a nuanced understanding of economics it seems painfully obvious today that these technologies, along with many others, created far more jobs and far more wealth than they destroyed.
My favorite example is the ATM, since it’s so simple. When ATMs became widespread in the eighties, people fretted that they’d put bank tellers out of work. That definitely seemed like a plausible outcome at the time: who would choose to go to a teller when they could withdraw cash or do their business faster and more easily with a machine?
And yet, fast forward a generation, and the actual impact of ATMs was to cause the number of tellers to increase, not decrease. Why? Because ATMs made each branch cheaper to run and banking more convenient, more customers signed up, banks became more profitable and grew their business, and they opened many more branches. What’s more, ATMs freed tellers from handing out cash all day long and let them focus on other value-added services like loans and financial planning.
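The arithmetic behind this is worth making explicit. Here’s a back-of-the-envelope sketch in Python, with entirely made-up numbers (they’re illustrative assumptions, not real banking statistics), showing how total head count can rise even as each branch needs fewer tellers:

```python
# Toy arithmetic for the ATM effect. All numbers are illustrative
# assumptions, not actual banking statistics.

branches_before = 1_000
tellers_per_branch_before = 20
total_before = branches_before * tellers_per_branch_before  # 20,000 tellers

# ATMs cut the staff needed to run a branch...
tellers_per_branch_after = 13
# ...which makes branches cheaper to open, so banks open more of them.
branches_after = 1_600

total_after = branches_after * tellers_per_branch_after  # 20,800 tellers

print(f"tellers before ATMs: {total_before:,}")
print(f"tellers after ATMs:  {total_after:,}")  # head count went UP
```

The point of the sketch is that the intuitive prediction (“machines do the task, so the people go away”) only holds if demand stays fixed; when the technology expands the market, total employment can rise.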
I’ve written about this effect before in the context of artists when the camera came out. There was initially fear that the camera would put artists out of work, since at the time most artists were engaged in portraiture. In the event, that isn’t what happened. The camera instead gave artists the freedom to explore subjects far beyond portraits.
I see no reason to believe that it will be any different with AI. Yes, there will be a lot of short-term disruption. That’s already happening. But it seems clear that the overall net effect of AI will be to create an unprecedented amount of wealth, because it’ll so vastly increase productivity.
We’ll be able to work more efficiently and do all sorts of things we couldn’t do before. That in turn will lead to massive growth in business and commerce, which will have the net effect of creating entire new industries with many millions of new jobs, and a great deal more wealth for humanity. We don’t know what those industries or jobs will be right now (it’s simply unknowable at this stage), but we know that they’re coming.
This time is not different.
Thing #2: Not So Good 💩
It feels almost cliché to make the bear case for AI, since so many people are bearish already, but it’s still important to lay out the counter-case. As I said in the intro, I won’t be discussing the possibility of AI-powered robots taking over and destroying all humans or anything like that. I don’t personally see that happening anytime soon, and I’m more interested in what might realistically come to pass in the short to medium term.
The counterargument to the first thing is that, contrary to what I just said, things really are different this time. The telegraph, telephone, fax machine, etc. made it easier to communicate and coordinate over distance. Automobiles made it much easier, safer, faster, and cheaper to move around. Air conditioners and elevators made it possible to live and work in places where it wasn’t possible before. Each of these technologies, and many others, was foundational, vastly improved human life, and made commerce easier, but each was still limited in scope.
The reason is that telegraphs, automobiles, and air conditioners don’t themselves invent new things. Instead, they make it faster, easier, cheaper, and therefore more likely that people will invent new things. In other words, they facilitate innovation, but they can’t innovate themselves: none of them involves the creation of intelligence itself. That’s what sets AI apart: it is itself a form of intelligence, which makes it fundamentally different from every previous generation of technology.
AI is different. AI is itself capable of innovation: of creating new technologies, i.e., new ways of doing things, and of finding new uses for existing technologies. To be clear, AI in its present form isn’t quite capable of innovation on its own, without a human in the loop, but that’s changing. We’re transitioning today from chatbots to agents. The difference between a chatbot and an “AI agent” is that the latter has a lot more autonomy and agency than the former. This is clearly the direction AI technology is developing: towards greater autonomy, greater agency, and a greater ability to do things in the world on its own. It’s not a far leap from this to independent, autonomous innovation.
Even more importantly, AI is capable of improving itself, at least in theory. Once AI becomes capable of doing this sort of R&D on its own, without needing humans, if indeed this does come to pass, it creates a recursive “strange loop”: better AI speeds up AI research, which produces better AI even faster, until eventually we reach a “singularity.” It’s hard to say exactly what this means or what comes after, but you could think of it as AI getting infinitely better, infinitely faster, or becoming smarter than all humans and all existing intelligence combined. None of this was possible with any technology that came before.
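To make the difference concrete, here’s a toy numerical sketch of that loop. Everything in it is a made-up assumption (the constants, the functional forms, the very idea that “capability” is a single number); it models nothing real, but it shows why feedback changes the shape of the curve: ordinary compounding is merely exponential, while growth that feeds back into its own rate runs away in finite time, which is the mathematical sense of “singularity.”

```python
# Toy model: ordinary compounding vs. recursive self-improvement.
# All constants are made-up assumptions; this models no real AI system.

def steps_to_threshold(feedback: bool, threshold: float = 1_000.0,
                       max_steps: int = 10_000) -> int | None:
    c = 1.0    # starting "capability" (arbitrary units)
    k = 0.005  # improvement rate per step (assumed)
    for step in range(1, max_steps + 1):
        if feedback:
            # Recursive case: capability feeds back into the rate of
            # improvement itself, so growth compounds on its own growth.
            c += k * c * c
        else:
            # Ordinary case: steady compounding at a fixed rate.
            c += k * c
        if c >= threshold:
            return step
    return None  # never reached within the horizon

# The feedback case crosses the threshold in a few hundred steps;
# plain compounding takes well over a thousand.
print("without feedback:", steps_to_threshold(False))
print("with feedback:   ", steps_to_threshold(True))
```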
This may sound like science fiction, but it’s a lot more plausible today than it was just a few years ago. And if we want to plan for a future that still has a comfortable place for humans, we need to seriously consider how to respond to artificial superintelligence.
Even in its present form AI has already begun to eliminate thousands of jobs. What will happen as it gets smarter and smarter? There are vast numbers of people in jobs, mostly in knowledge-work fields, that are in the process of being replaced by AI tools of varying degrees of intelligence, sophistication, and autonomy. The roles that come to mind as most vulnerable are customer service agents, drivers, writers, designers, engineers, lawyers, assistants, accountants, translators, editors, marketers, teachers, filmmakers, and the creative sector more generally. (Others have produced much longer, more comprehensive lists and analyses.) The roles at highest risk are entry-level ones, since less experienced, more junior team members are exactly the ones that firms are likely to replace first with AI tools.
Jobs that require a human touch will be fine for a long time: nursing and other caregiving, coaching and training, therapy and counseling, hairdressing, and much of teaching. The same is true of the skilled building trades: electricians, plumbers, construction workers, and so on. Embodied robots are getting better fast, but I don’t realistically see them doing this sort of work for a very long time.
But there are many, many millions of people around the world who fall into the at-risk job categories I described above. It’s unclear what happens to those people. Again, to be clear, this isn’t theoretical; it’s already happening. In general, I’m less worried about the young and tech-savvy: they can see the change coming, and they can choose to train for other fields. The situation will be much harder for people who have spent their whole lives and careers in a profession that AI takes over quickly.
In the medium term, AI will increase efficiency and lead to more commerce, but if AI does reach the point of being able to improve itself recursively, the long term is much less clear. AI might turn out not to be like the ATM after all: banks, ATMs, and even money as we know it might go away in a world where AI becomes superintelligent. How do we respond to short-term displacement, take advantage of medium-term growth opportunities, and simultaneously prepare for the future of super AI? That’s the challenge we’re facing today.
Thing #3: Inevitable ⏳
At the end of the day, how you feel about the AI transition is irrelevant. AI doesn’t care about your feelings. This change is coming whether you like it or not. Rather than lamenting something we can’t control, a better use of our time is having an honest conversation about what it means and planning for how we should respond.
First and foremost, in my opinion we shouldn’t slow down. AI technology exists, it cannot be uninvented, and enough of it (models, research, code, etc.) is open that it’s simply too late to stop or even slow down. If ordinary, well-intentioned people in free countries don’t continue to push AI forward, then nefarious actors in unfree places will take the reins. And the best tool to fight against bad AI is good AI. This technology obviously has the potential to do enormous good as well as enormous harm, but for the sake of the good we must keep moving forward. I’m certain that the good will outweigh the bad.
Second, we need to be much more honest about the coming, inevitable impact on certain careers and certain industries. To pick one at random from the list above: AI is already rapidly transforming the creative industries and there’s no turning back; movies, TV shows, music, and design in general will look very different once AI takes over. A wise policy response would be to reduce funding for university programs, student loans, etc. in fields that are rapidly going to vanish, and to instead offer incentives for programs in fields that will be in greater demand. Fewer animators, more prompt engineers and nurse practitioners.
At the same time, we should encourage people to change careers and industries as much as possible, because of the inevitable short-term disruption. There’s enormous demand in fields such as healthcare and the construction trades, and in general the government should stay out of industry’s way and let the market sort these things out.
But these transitions take time and will be painful over the short to medium term for a lot of people. We should offer some support for the people who will be the hardest-hit: those who are mid-career in vanishing industries. Those who are early in their careers will be better able to go back to school (or use AI tools to study), retrain, and change industries; and those late in their careers will be better positioned to retire. But if you’re a mid-career animator, or paralegal, or accountant, you may need a helping hand. We should provide some degree of public assistance, with a requirement for additional training to change careers wherever possible.
Finally, we should not be shy about using AI to solve the problems that AI creates! This is one of the reasons I’m so optimistic about the medium to long term impact of AI. I have no doubt that it’ll lead to a creative and commercial explosion, as I explained above; we just don’t quite know how yet. What will those advanced AI models have to say when we ask them what to do about all of the people who will become underemployed or unemployed? How would a superintelligence put people to work? We should research this, and we shouldn’t be shy about asking and trying out new things.
To bring things home to our work at House of Stake and NEAR more generally: one of the main areas of focus for House of Stake is researching how to use AI in governance, both how to govern community-owned AI models and systems and, even more importantly, the role that AI agents and tools can play in designing, developing, and operating community-driven governance systems, such as the governance of open source projects and blockchains.
We’re starting with things like how AI can facilitate public goods funding. In fact, the question posed above, of whether and how to provide public support to people negatively affected by the proliferation of AI technology, is itself a question of public goods funding. It’s one example of the many new governance questions that will inevitably arise from this new technology and the changes it brings.
That’s why I believe this work is so important. AI will definitely play a big role in real-world governance as well as in on-chain governance, but it makes sense to start with relatively low-stakes experiments in blockchain communities like NEAR. House of Stake is one of those experiments. People are generally pretty bad at making these sorts of decisions, thanks to our many prejudices and biases. I have faith that AI will do a better job.
If this work sounds interesting and meaningful, reach out to me on X or here on Substack.