My Twitter feed has been taken over by ChatGPT. Three out of every four posts I see these days are people sharing cool things you can do with it, resources for learning to use AI tools better, unsolicited opinions about an AI-enabled future, or warnings that AI is on the verge of destroying us. It feels a lot like the early days of the pandemic, with maybe just a dash more optimism.
I don’t know how representative this is of the real world, of course. My Twitter feed is hardly a representative sample, and I know how easy it is to find yourself in a “tempest in a teapot” echo chamber where a tiny subset of people are obsessed with something to the point that it feels all-important, while the rest of the world doesn’t care one whit and hasn’t even noticed. That doesn’t inherently mean my Twitter feed is wrong, though. It was sorta right about the pandemic in the early days when no one else was paying attention.
My gut feeling here is that something big is happening. I feel about AI today the way I felt about blockchain when I first understood it properly seven years ago: it’s going to change absolutely everything. There’s no doubt that the pace of technological innovation is accelerating, and paradigm-shifting technologies like AI and blockchain, which used to come along once a generation or less often, are now arriving more frequently.
I’m optimistic and excited about AI and I’m definitely not in the AI doomer camp. I don’t think we’re anywhere near the emergence of AGI and I don’t think AI will do many interesting things in the physical world for a long time. Other than curiosity and excitement, the main thing I feel about what’s happening with AI is frustration that I can’t do more, learn more, or move faster. In other words, I feel like disruption is coming, and for maybe the first time in my life and career I feel like there’s nothing I can do about it and no way to take advantage of it. As a result I’ve been reflecting on the notion of disruption recently.
Thing #1: What Causes Disruption
There’s an easy and a hard answer to this question.
The easy answer is that disruption is caused by technological innovation. Innovation can be slowed or even temporarily reversed, but it cannot be stopped. You can’t put the toothpaste back into the tube. Bitcoin has been invented and it’s not going away, no matter how much you (or your government) dislike it. It’s going to disrupt many, many things; the process is already well underway. The same is true of Ethereum. And it’s now true of modern AI tools like ChatGPT and Stable Diffusion, especially as models like Stable Diffusion have been open sourced.
The hard answer is that disruption occurs as part of a process that involves some give and take. Technological innovation is just the first step in that process. Actors can respond to new technology in several ways: they can ignore it, deny it, or fight it; they can embrace it and make the most of it; or they can take a middle path. Actors here include governments, companies, and consumers—basically everyone.
Not all actors will be disrupted equally. Partly this depends on the nature of the technology. Some technologies inherently impact business more than they impact, say, government; for others the reverse is true. Some, such as personal and mobile computing, blockchain, and, probably, AI, will impact all sectors and all actors.
Largely, however, it depends on how actors decide to respond to the dual threat and opportunity of new technology. There’s more to this topic than can be covered here—I highly recommend the timeless Innovator’s Dilemma for a good primer—but, in a nutshell, actors that are willing and able to disrupt themselves, creating new products and services that take advantage of the new technology and cannibalizing their existing products, services, and marketing and sales channels, are more likely to benefit from it than those that aren’t. For an individual, the equivalent means studying on the side to get up to speed on a new technology and eventually changing the direction of one’s career (no one said it would be easy!).
Contrariwise, those who fight against inevitable progress are likely to fare the worst.
Thing #2: What Disruption Feels Like
Since studying the concept of disruption in business school I’ve always had a specific image in mind of what it’s like to be disrupted, though I never gave it too much thought. The image is of a wealthy, fat-cat executive at a large, profitable company who’s obsessed with the bottom line and with existing products and customer relationships, who’s dismissive of new entrants and skeptical of threats, and who as a result willfully ignores warning signs. More generally, it’s an image of a successful, entrenched company that’s unwilling or unable to take a step back, understand how it might be vulnerable, and take seriously the threat from new technologies, new entrants, and changing market structure. Not surprisingly, given my background, this narrative from the incumbent’s perspective is straight out of The Innovator’s Dilemma or an MBA case study. From the opposite perspective, that of the entrepreneur, it’s a textbook example of how to compete asymmetrically with incumbents.
I realized that I never felt sympathy for disrupted incumbents because it seemed to me that if they had been disrupted it was their own fault. Businesses should think long term, and with few exceptions this specifically means that they should be constantly looking for threats that come out of left field. They should constantly be adopting new technology and new ideas in an effort to disrupt or cannibalize themselves (rather than be disrupted by new entrants). Businesses that are unwilling or unable to do so will eventually succumb to disruption as part of the healthy cycle of creative destruction.
I spent less time considering what disruption looked or felt like to everyday people. After all, businesses and industries are disrupted, not people, right? Now, for the first time in my life, I feel the breeze of disruption stirring around me and my own work. I’ve been very fortunate to have fallen accidentally into an industry and a career that I love at just the right time, and as a result I’ve felt very secure my entire career. Given recent advances in AI, I feel for the first time that there’s at least a possibility that my work (not to mention the work of billions of other people) could be automated and therefore become largely irrelevant within my own lifetime.
Here’s the part that’s really surprising: in spite of being aware of the risk, I feel that I can’t do anything about it. Not in the sense that I can’t stop the coming tidal wave—that’s a given—but in the sense that there’s nothing I can do to prepare for it. I’m already stretched way too thin. I have far too much on my plate—so much that some days I struggle to get out of bed because my to-do list is so intimidatingly long, and it seems to keep getting longer no matter how hard I work and how much I achieve. I simply have zero slack in my calendar, life, or career. It’s all I can do to keep up with work, spend a little time with my family, manage my household, and stay generally aware of what’s going on in the world and in my industry. Add passion projects like running and this writing and there’s literally nothing left. It’s hard enough to stay on top of trends and new ideas within my own industry and area of expertise. The idea of learning about something as new as AI—even though it is, in a sense, highly adjacent to my work and area of expertise—feels daunting to the point of being totally impossible. I feel this way as an experienced software developer and technology entrepreneur, and as someone who’s privileged enough to have a great education and a stable income. I can only imagine how much more hopeless it must feel to people who are further from AI or less privileged than I am.
In a sense I also got lucky to have stumbled upon blockchain and cryptocurrency when I did. I was on sabbatical just as Ethereum was taking off in 2016–2017 and had lots of free time and mental energy to invest in understanding it and building relationships. I feel many of the same things about AI technology now that I felt about Ethereum then, but my hands are tied and the timing isn’t so good. Spacemesh is about to launch, and when it does, there will be an enormous amount of work to do in the first few months to give it the greatest chance of success. It’s no less important or relevant today than it was when I began working on it years ago; on the contrary, I think it’s more important than ever. There’s simply no way I can drop everything and focus on something totally new.
And yet, dropping everything to focus on something new is precisely the prescription for avoiding disruption that I described above. It’s exactly what companies and individuals need to do to avoid being disrupted, and exactly what I’ve shown no sympathy for when others fail to do it.
I feel disruption coming and I feel that there’s almost nothing I can do about it. It’s a strange feeling.
Thing #3: What to Do About It
Disruption is one of those things that can be prepared for, but if one hasn’t prepared well, then by the time it arrives it’s already too late.
As an individual, the best hedge against disruption is education (the situation is a bit different for companies, but I’m more interested in the individual case here). If you’ve studied and practiced useful skills, kept your knowledge up to date, and if you’re good at what you do and enjoy it, you’re going to be just fine. I’m reminded of something one of my computer science professors said to me in college. This was in 2001, just after the dot-com bubble had burst; Silicon Valley was in the midst of its first big recession and lots of tech jobs had just been lost. He said, “If you’re here to be opportunistic, you’re out of luck. If on the other hand you’re here for the right reasons, if you love what you’re doing and you take the time to be good at it, well, there will always be work for that sort of computer scientist.”
Even in the worst-case scenario, where AI magically renders huge swathes of labor obsolete very quickly—an outcome I don’t consider likely anytime soon—the people who really need to worry, even in the industries that are first to be affected, are those who aren’t highly skilled or highly productive and those whose skills and training aren’t up to date. Those will be the first jobs to go. People who are on top of their field—“practicing at the top of their license,” so to speak—will be fine for a long time. Experts will always be needed to train others (or to train AIs!) and to handle edge cases that automated systems aren’t able to handle, or can’t be trusted to handle.
For me, as a software developer and protocol designer, this means staying on top of the latest trends and technologies in my industry. It means not only understanding particular technologies in great depth, but also developing a high-level understanding of how they relate. It means understanding the connections between things: the nuance at the edges where multiple ideas and technical components fit together. It means being able to reason about architecture at the highest possible level, and to think creatively about how to adapt existing solutions and architectures to solve novel problems. As I go about my work these days, I try to recognize where I’m performing rote, “code monkey” style tasks, and I steer away from these (or use AI tools to help with them!) so that I can focus on the higher-level tasks where I know I can add real, unique, human value.
In this vein, it may be important to broaden one’s skill set. If you’re a “master engineer” who has deep expertise in a very narrow set of skills—say you understand a narrow corner of the Linux kernel better than anyone else—then you’re more likely to find yourself in trouble as AI tools encroach upon more and more of the modern technology stack. By contrast, if you understand many tools and technologies and how they fit together and can be combined in creative and novel ways, well, I think it will take AI much longer to become good at cross-domain skills such as these.
Finally, and probably most importantly, the best protection we have against our jobs becoming outmoded or obsolete is being good at working with people. The very last jobs to be replaced, if they ever are, will be things like nursing and caregiving that require strong empathy, compassion, and interpersonal skills. AI will get very good very quickly at “dry” skills involving math and science, number crunching, and data retrieval and manipulation, but it will take automated systems a very long time to convincingly demonstrate interpersonal skills and traits, and even longer for people to become comfortable being cared for by machines! There’s absolutely a human component to just about every job and every role: I recommend finding it and developing it. There’s no time like the present to reflect on and prepare for what’s to come.
Let me be clear: I’m not a pessimist and I don’t believe in an AI doomsday scenario. I don’t even believe that AI is going to steal everyone’s jobs. Yes, some jobs will become obsolete in the short to medium term, but over the long term AI will lead to increased productivity and plenty of new, interesting, fulfilling jobs. One doesn’t have to look far to see that the same set of fears is raised every time a new, paradigm-shifting technology emerges, and every time those fears have proven unfounded. I don’t see any reason to believe that this time will be different.