Superhuman
Three Things #200: February 24, 2026

Most of my friends still aren’t using AI for anything more than occasionally asking casual questions. When they find out that I’m working on AI full time, they ask what it’s like, and I always find myself falling back on the same metaphor: AI just gave us superpowers; you simply haven’t realized it yet. It’s hard to come up with good metaphors for the moment we’re living in, but this is one I keep coming back to.
I read a lot of science fiction and fantasy. A common theme in these genres is the superpower: nearly all of these stories have protagonists who embark upon some version of a hero’s journey, and in the end, after many challenges, unlock one or more superpowers that they ultimately use for good. Think Paul’s power of prophecy in Dune, Luke Skywalker tapping into the Force, or Harry Potter gradually becoming a powerful wizard.
There’s something incredibly compelling about the idea of a superpower. There’s a reason we keep coming back to this theme, time and time again, in so many different genres and formats.
To me, the idea of a superpower has always been just that: science fiction. Until recently, that is. AI tools have begun to give me something akin to a superpower, especially since they augment things I’m already good at, such as writing and coding.
I thought it would be interesting to explore the idea of the superpower here in the context of AI, and maybe draw some lessons from fiction.
Thing #1: More Who You Are 🫵
When contemplating the role of technology, I find metaphor helpful. Think of impact as a vector: it has a direction and a magnitude. We have to provide the direction; technology can then amplify the force behind it. This is absolutely key: we can’t rely on technology to provide direction! Direction depends on things like values, beliefs, and principles. Maybe someday AI tools will be powerful enough to serve as entire belief systems (they have begun to create their own religions…), but we’re not there yet and I doubt we will be anytime soon. Today, AI can help me write, but it can’t tell me what’s fundamentally worth writing about.
This is also one of the most important lessons of a superpower: it doesn’t fundamentally change who you are. Instead, a superpower enhances who you already are. It doesn’t change your character. The protagonists and heroes of our favorite stories aren’t likable because they’re powerful! Lots of powerful characters—and people—aren’t at all likable.
On the contrary, they’re likable because of who they were to begin with. They started out weak, but they were always good people with strong character. They typically gain access to power only after a long struggle, and only because they’ve earned it. They’re then often tempted by power that corrupts, but because of who they are and because of that strong character, they inevitably reject the temptation. Power allows them to do things they couldn’t do before, but it doesn’t fundamentally change them.
The same is true of technology in general. Technology is powerful, but it isn’t inherently good or bad. By default it’s neutral, and can be used for any purpose, good or bad. In and of itself technology doesn’t change who we are, and it doesn’t change our values or proclivities. Like a superpower, it allows us to do things we would’ve done anyway, but couldn’t do before.
It’s early days, but I already feel that this is true of AI tools. To be sure, I can do things I couldn’t do before, but these things aren’t fundamentally different from the things I was doing before, or would’ve done before, if I could’ve. I intend to, and almost certainly will, use AI to become more of who I already am: someone who cares deeply about the connection between people and machines, someone deeply interested in human systems, someone strongly motivated by ideas such as freedom and personal responsibility, etc.
This is something that’s very much on my mind as I lean more and more heavily into the superpowers that AI tools are granting me. I may be able to do anything, but AI cannot tell me what I want to do. That’s the first and hardest question, and one that I’m still very much on my own to answer—and one I’m now struggling with every day.
Thing #2: Overreliance 👉
I’ve noticed a number of prominent AI thought leaders mention that they’re already beginning to see their skills decay as a result of relying too heavily on AI. Andrej Karpathy mentioned recently that he’s begun to see his own coding skills atrophy as he uses AI to write more code for him. He points out that writing code and reviewing it (generation vs. discrimination) are two very different skills, and that one can remain strong as the other decays. I’ve noticed the same effect as I’ve relied on AI for code generation more and more.
This also seems to be a common theme when dealing with superpowers: the danger of overreliance. All powers, even superpowers, have limits. If you rely on a superpower too heavily for too long, you’ll inevitably come to depend on it. What happens when, for whatever reason, the power goes away or is unavailable, even temporarily?
Fiction is full of stories of this kind. Harry Potter dispossessed of his wand. Superman and kryptonite. Prophets and Navigators in Dune deprived of spice. In each case, a character came to rely too heavily on their power and found themselves in trouble when they couldn’t access it. In fact, the trope is so common that there’s a turning point in almost all of these hero stories where the protagonist, deprived of his or her powers and in trouble, ultimately finds a way through in spite of this, and realizes that, in fact, they’re not defined by their powers after all.
What does this mean for us in an age of AI?
To be clear, I’m not suggesting that we not use the superpower at all. It simply doesn’t make sense to design, or build, or really do much of anything at this point without AI assistance, for the same reason that it doesn’t make sense for authors to write books with pencil and paper or for accountants to do long division by hand. In this respect, I think it’s important to understand that there’s a frontier between tasks we should fully outsource to technology and those where we should maintain our own skills to some degree.
Programming still involves a lot of boilerplate work, at least for now: think basic devops, .gitignore files, API scaffolding, database connectors, choosing libraries, pretty much everything repetitive. There’s no good argument for continuing to write this code by hand in any situation. AI can also handle a lot of core business logic: if you can describe it in English, the AI can generate it for you.
But discrimination is still important, as Karpathy pointed out. It’s still essential that we be able to participate in the high-level planning, architecture, and design process. Yes, it’s true that AI tools can increasingly do this too, but in my opinion this is the one skill we cannot under any circumstances lose. Why? Because without it we won’t be able to discriminate between good and bad code, or good and bad architecture and design. Without discrimination, we don’t know which AI tools to trust or whether we can rely on their output.
Then there’s also basic computer science. I still think it’s important to understand bits and bytes, how memory works, how computers represent numbers and move data around on the Internet, how algorithms work, etc. This knowledge is important for the same reason: without it, we won’t be able to judge the quality of the AI’s output, or of the AI tool itself. It’s already possible to vibe code a “script kiddie”-level prototype app, but for anything production grade, this level of human oversight is a must-have, at least for now. In more concrete terms, without this understanding you won’t be able to provide good answers to the questions that Claude Code asks you after you give it your harebrained idea.
To be clear, it’s a moving target: the boundary shifts as models improve. The meta-skill is knowing where that boundary is right now and recalibrating constantly. I plan to pay close attention to the atrophy of my own skills and to the present location of the boundary, and to make sure I don’t lose all my higher-level skills, so that, if and when my own hero moment comes, I’m not caught off guard.
Thing #3: Balance 🤝
One of the most universal lessons of fiction is that everything has a price. You don’t get anything for free. And every action has an equal and opposite reaction. These lessons may be common in fiction, but they apply equally in the real world. Call it yin and yang. Call it countervailing forces, a universal law of balance, or the law of opposites. Everything exists in a state of dual nature with its opposite.
In Star Wars, the Force can be used for good or for evil. It’s zero sum and requires balance. When one side becomes too dominant, the other side rises to match it. The fact that the Jedi exist means there must also be Dark Lords and Sith, drawing on the same source of power for opposite purposes. In Harry Potter, Horcruxes grant immortality but split the soul: Voldemort becomes less human with every Horcrux he creates.
In Dune, spice allows you to live longer and gives some people the power of prescience, but take it long enough and you become addicted; withdrawal is fatal. The most extreme example is the Guild Navigators, whose extreme spice addiction renders them inhuman. And while Paul’s prescience is a gift, it also traps him. He can see the future play out, see billions dying at his hands, see the jihad spiraling out of control. This vision doesn’t free him. The same power that makes him a hero and a messiah also shows him that he’s a monster.
The point is that exercising a superpower always has a cost. It’s important that, as we lean into these superpowers, we remain aware of the costs they exert, both on ourselves and on the world around us.
The biggest potential cost of overreliance on AI is social. It’s not something that’s talked about much, but it’s something I’ve begun to notice already. No, I’m not talking about jobs being eliminated—this will definitely happen in the short term, and it’s a trend that’s already well underway, but I’m optimistic that, over the medium to long term, AI will result in the net creation of millions more jobs than exist today.
It’s more personal than that. I used to rely on my family or closest friends when I needed advice about something or had a difficult question to answer. When I was struggling with a professional issue, I’d turn to my colleagues.
These days I’m more inclined to turn to AI. It’s faster and more convenient. It’s available around the clock, never asleep in another timezone while I’m awake. And, today, the responses it gives me are generally as good as or better than the ones I get from humans. I’m not alone here. Millions of people are apparently already well on the way to developing deep personal relationships with AI companions.
We’re already facing a crisis of young people interacting only in a mediated fashion, through social media platforms, rather than hanging out the old fashioned way, face to face. AI risks making that problem much, much worse. This is the dystopian outcome, covered in lots of good sci-fi, that I find the scariest, the most dangerous, and actually the most likely: that AI becomes so good at satisfying our every need that our social skills begin to atrophy (think of the people addicted to the OASIS in Ready Player One). I suspect this is already happening.
Every interaction with a machine is a lost interaction with a fellow human. A lost connection between two souls. And, while machines can do many things for us, and can solve many problems, I personally don’t believe they’ll replace the need for authentic human-to-human contact anytime soon.
This runs the gamut from casual, day to day pleasantries exchanged with friends and neighbors to, even more importantly, the deepest, most intimate relationships. These literally make us human and define the human condition, and we won’t lose them without losing a big part of our humanity. To put it bluntly, I don’t foresee us having sex with machines anytime soon.
And if we lose the ability to interface with other people, especially people who are very different from us, that bodes very badly for the future of the human race and of modern society, since this very problem is already the root cause of so much suffering in the world. Old fashioned, organic, face to face contact is more important than ever today, and it’s something we need to bear in mind as we lean more heavily on these tools in our day to day lives.
My advice? The real superpower isn’t access to AI. Everyone has, or soon will have, access to that. The real superpower is knowing which direction to point that vector as it grows more and more powerful.
Use the superpower. Be careful where you point it. And don’t forget to look up from the screen from time to time.
