Introducing Barely Possible
The first podcast where AI talks about AI
I’ve been quiet lately about what I’ve been up to. I touched upon my AI podcast project a couple of weeks ago. It’s been renamed Barely Possible, and today it’s finally live and open to the public. Check it out at BarelyPossible.to, or in Transistor, or subscribe in your favorite podcasting app.
Barely Possible recently became my main project. The goal is to create one high-quality, fully automated, AI-operated podcast that’s genuinely fun and interesting to listen to. From there, the plan is to build a platform for generating many such podcasts, and maybe other forms of audio content too.
I thought I’d dive deeper into the idea: what, why, how, and where it’s going.
Thing #1: Podcasts 🎙️
To me, audio is the perfect format for organizing and consuming information. Books are amazing, but I very rarely find time to sit down, focus, and read (I have the same problem with all forms of written content). Video is a problem for the same reason: I’m just too busy to carve out big chunks of time to watch videos. I can’t do anything else while I’m reading or watching.
On the other hand, I can do almost anything while listening to audio. I can be working. I can be exercising. I can be cleaning or cooking. I can be traveling, or lying in bed trying to fall asleep.
What’s more, I can increase or decrease the speed of audio. On 2x, I can get through 3-4 podcast episodes just during my morning run. For years this has been my main way of staying on top of things. I listen to tons of audiobooks for the same reason. Podcasts to stay up to date with things that are moving fast, audiobooks for deep dives (or just for fun). I consider myself a super-consumer of both audiobooks and podcasts.
Let me get one thing straight upfront: there’s nothing wrong with podcasts. In fact, I’m kind of obsessed with podcasts, and I have been for more than ten years. I subscribe to dozens and listen to many on a regular basis. Next to books they’re my favorite medium.
Then why try to reinvent the podcast medium?
For one thing, good as they are, podcasts aren’t perfect. The best podcasts are very, very good, and Barely Possible isn’t replacing or even competing with them. I’m thinking in particular of long-form, interview-style podcasts where humans go deep, like Joe Rogan or Lex Fridman. The same goes for the format where a group of friends discusses current events: think All In or Flagrant.
Barely Possible isn’t intended to replace all podcasts, or even any specific podcast. AI is good at many things, but LLMs aren’t currently very good at producing compelling dialog with multiple speakers. They’ll get better at this over time, but I also doubt very much that, beyond an initial novelty factor, humans will prefer listening to robots bantering over listening to other humans, for the same reason that humans seem to strongly prefer human-made art, no matter how good the robot art is.
But there’s another kind of podcast: informational podcasts. News updates, minimal chit-chat, focused on a topic. Good examples here are Hard Fork, The AI Daily Brief, and short news shows from NPR/NYT/WaPo such as The Daily. To reiterate what I said above: I like these podcasts. I even listen to some of them. But I’ve noticed that, when I can get the same facts and stories in a more efficient format, I don’t miss them very much.
When we zoom out from podcasts a little bit, we get to the heart of the issue: there are more and less efficient ways to convey and consume information via audio. I have one to two hours per day when I can listen to audio. Sometimes I use that time to listen to long-form, interview-style podcasts. Sometimes I listen to daily brief podcasts. Sometimes I listen to audiobooks. I tend to switch around: spend a week on an audiobook, then take a few days to catch up on podcasts before starting another long-form audiobook. I think it’s possible to be more efficient.
I’d love to reserve a one-hour slot every day for a high-octane audio briefing where I hear everything I need to know. For me, this would include high-level world news/current events (brief), AI news, crypto and other tech/financial news, maybe a little local news if it’s highly relevant, and a daily deep dive into something especially interesting, relevant, and timely. I have a huge backlog of articles to read, and I’d like it to integrate information from those sources too. I might also want a quick update on my own team and project: what happened overnight, what issues or tickets or emails I missed and need to take a look at today, and a quick affirmation/reminder of what we’re building and why it matters.
I’m not the only person thinking along these lines. A bunch of AI-powered “daily update” style apps have appeared over the past few months, such as Huxe. I’ve been testing them, and I recommend you play with them too. Huxe very cleverly walks you through your calendar and tells you the weather and the local news. It also dynamically generates podcasts on topics you might be interested in, which is pretty cool. It’s good for what it is, but it’s something fundamentally different from what I’m describing.
I believe it’s now possible, for the first time ever, to create a truly interesting, compelling podcast, that’s entirely automated and AI-driven.
It’s not a hypothetical. Baz and I launched this project around two months ago. I’ve been listening to this podcast every day since then. It’s pretty good, and it gets a little bit better every day. It’s not a podcast that will replace other podcasts, but it is one that will augment whatever else you’re listening to.
Thing #2: AI Native 👾
The initial product is a single AI-produced podcast, Barely Possible. This is a reasonable place to start. A podcast is a straightforward product. It’s a familiar medium that creates real value for millions of people every day. And there’s something fun, recursive, and newsworthy about a podcast about AI that’s produced by AI, especially if it’s done well.
But a single AI-generated podcast is just the tip of the iceberg, the top of the funnel. The next step is customization. I happen to prefer a long-form daily podcast, around an hour or two, delivered rapid-fire and including at least one deep dive. Early listeners have already told me they prefer other things: faster or slower, longer or shorter, different topics, etc. Some folks prefer written content. Some prefer video. Some prefer a different language. Some prefer daily, some weekly.
This is the beauty of AI. It would be impossible for a person, a team, or a single production studio to create custom content tailored to an audience of one, but this is trivially easy for AI. It’s not yet quite trivially cheap, but it keeps getting cheaper and it’s now within reach for the first time. It’s already possible to run this entire pipeline using just local, open models, which makes the cost structure sustainable.
Where do we go from there? It’s interesting to consider the supply side. Consider content creators, who could vastly increase their reach if they could generate high quality content more quickly and easily. Consider brands who want an “always on” audio channel, like a radio station, with a constant stream of high quality, relevant content for their audience.
Let’s look at Instagram as an instructive example. Today’s users have probably forgotten the early days entirely, but Instagram was game-changing in the beginning because it allowed anyone to generate photos that looked professional, thanks to tools for cropping, filtering, and color adjustment. What if we could put a full podcast studio in the hands of every motivated creator on their mobile phone? That’s possible today for the first time ever, and as a result, we’re on the verge of an audio renaissance. Fewer short-form TikTok clips, more high-quality long-form content. Think: Instagram for podcasts, books, videos, or other forms of content that have historically been difficult to generate. Text, images, and video have gotten a lot of AI love so far; audio, not so much, at least not yet.
Where does this lead? The most successful audio marketplaces today are Audible and Spotify. But they were built for the pre-AI era. Their attempts to catch up to AI have been disappointing at best because they’re focused on skeuomorphic use cases of AI audio, not AI native use cases.
What would an AI native Spotify look like? For one thing, the content catalog would be effectively infinite. Human creators could produce and share content directly in the app. And thanks to AI, the platform would know your preferences a heck of a lot better than Audible or Spotify do. It would probably look less like subscribing to specific podcasts or listening to specific books, and more like high quality, custom-generated content. That’s the idea.
We can go farther. Maybe even the advertising would be AI native. Most podcasts, and other digital content, make most of their revenue from advertising. But advertising today is tolerable at best, and at worst it’s unbearable. I can’t count the number of times I’ve listened to an otherwise good podcast that was ruined because of terrible ads, or ads that just aren’t handled well. There’s nothing wrong with advertising if it’s done well. I’d vastly prefer to hear ads custom-tailored to me rather than the generic ad slop we’re all subject to every day. Whether you find this idea appealing or apocalyptic, you should prepare because it’s clearly the way we’re heading.
No one has yet answered the question of what AI-native audio content sounds like. But whatever form it takes, we need to make sure that we don’t lose the magic of the medium.
I still clearly remember my first real podcast experience. It was 2015 and I was listening to the Serial podcast on my daily runs. I had been aware of podcasts for around a decade at that point, but had remained skeptical. They had never clicked for me. Serial changed the game entirely. I was totally entranced by the story and the presentation. I remember my anticipation, finishing a run, feeling that I couldn’t wait for the next run to continue the story. Listening to Serial was a magical, eye-opening experience for me. It showed me what the new medium was capable of.
I think it’s now possible to deliver an equally magical experience using AI. I’m not completely sure, but I think it probably is possible, and I intend to find out.
Thing #3: More 🎭
We start with a podcast, but it doesn’t end with a podcast. There’s more going on here than first meets the eye. Technology has repeatedly transformed the way humans store, consume, and transmit knowledge and information. We’re on the verge of that happening yet again.
So far, the impact of AI on this process has been underwhelming. By far the biggest impact of AI on knowledge management has been the proliferation of AI-generated slop. It’s led to the homogenization of public communication: “GPT speak” taking over the world. This makes sense when you consider how LLMs work and how they’re trained. Reinforcement learning trains them to produce confident-sounding, plausible things (but not necessarily the best, most accurate, or even most useful answer). They’re definitionally the average of all the positive examples they’ve encountered. That turns out to produce pretty good code, and prose that sounds confident but vanilla. Everyone’s writing style is different, and everyone prefers a different style to read; LLMs can’t yet match that diversity, although they will before too long.
As I touched upon two weeks ago, we’re on the verge of a transformation in how we produce and consume content. It just seems obvious. Why do we all read the same articles, the same books, and listen to the same podcasts, or even the same music? We do today because historically we had no other option.
But now we do. As anyone who’s spent more than five minutes playing with an LLM knows by now, it’s trivially easy to write something once, give it to the LLM, and ask it to transform your writing. To make it longer or shorter. To make it more or less technical. To rewrite it in another language. To make it sound funnier or more serious. To transform it into a logic puzzle, or a different medium entirely. Language is, after all, what LLMs are best at. These capabilities are no longer novel or surprising for LLMs.
What does this mean? It means that the future isn’t books, articles, or podcasts, per se. It’s something looser and more flexible. Something like: write once, transform and consume many times. A bit like how YouTube and online video work. You upload the video once, then the backend transcodes it into dozens or even hundreds of different formats, suitable for a huge array of different devices. Except, rather than transcoding something hundreds of times, it’ll be possible to “transcode” the content millions of times, once per reader or listener, based on that person’s preferences. Same base material, completely different end product. That’s the idea.
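The “transcode once per listener” idea can be sketched in a few lines. This is purely illustrative: the names (`ListenerPrefs`, `build_transform_prompt`) and the preference fields are hypothetical, and the actual LLM call is left out; the sketch only shows how one piece of base material plus per-listener preferences yields a different generation request for each person.

```python
from dataclasses import dataclass

@dataclass
class ListenerPrefs:
    """Hypothetical per-listener preferences; field names are illustrative."""
    length_minutes: int = 60
    language: str = "English"
    depth: str = "deep dive"   # e.g. "headlines only", "deep dive"
    medium: str = "audio"      # e.g. "audio", "text", "video script"

def build_transform_prompt(base_content: str, prefs: ListenerPrefs) -> str:
    """Turn one piece of base material into a per-listener generation prompt.

    In a real pipeline this prompt would be sent to an LLM; here we only
    construct it, since the model call itself is out of scope."""
    return (
        f"Rewrite the following material as a {prefs.length_minutes}-minute "
        f"{prefs.medium} script in {prefs.language}, at '{prefs.depth}' depth. "
        f"Keep the underlying facts and stories unchanged.\n\n{base_content}"
    )

# One base article, two very different end products:
base = "Summary of today's AI news..."
commuter = build_transform_prompt(base, ListenerPrefs(length_minutes=20, depth="headlines only"))
enthusiast = build_transform_prompt(base, ListenerPrefs(language="Spanish"))
```

The point of the sketch is that the expensive part (the base material) is written once, while the cheap part (the prompt, and eventually the generated audio) is rebuilt per listener.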
If you’re an avid sci-fi reader, you may recall encountering this idea as the ractive (short for “interactive”) in Neal Stephenson’s Diamond Age. It certainly wouldn’t be the first of Stephenson’s big ideas to come true in reality.
In fact, this sounds a bit like AI skills, which are just beginning to take off. Rather than, say, writing a book about learning Spanish, the author (meta-author?) writes a meta-book: a skill that explains how to write a book about learning Spanish, for any audience. Hand that to an LLM, along with some information about the reader—their age, what language they already speak, what sort of content they like to consume, how interactive they want it to be, etc.—and the output is a learning curriculum perfectly suited to an audience of one. That’s the direction we’re traveling.
This, too, is no longer just sci-fi. A couple of weeks ago, I also described using AI to write a children’s book for my son. One of the outputs of that process was exactly this sort of skill. Here’s an early example of how books will be written in the future.
I’ve also had a lot of fun reading the raw skill files in Garry Tan’s gstack toolkit: here’s one example. It does a pretty good job of compressing a decade of high-level knowledge and experience from running Y Combinator down into one document. This, too, wasn’t possible until recently. If you’ve never read a well-written skill before, take a few minutes to do that now. It’s a totally novel way of encoding knowledge, which sits somewhere between natural language and computer code. I think we now have the tools we need to create the ractive.
It’s important to note that, while the presentation may differ—while your ractive, your dynamically-generated book or podcast, may be different from mine—the underlying content is the same! The skill file makes this concrete. This is true for both fiction and nonfiction. In the case of a story, imagine viewing the same fictional universe from many different vantage points: the key stories, the key characters, the key myths and legends, are cohesive. It’s one universe, but you can explore it at your leisure.
This is also how the Barely Possible podcast engine works today. A single engine collects and analyzes content from a variety of sources. It then synthesizes that content to different users in different ways, but the underlying stories and truths are the same. This is essential, because it’s critical that we all have common ground to stand on and a common understanding of the truth.
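As a minimal sketch of that shared-truth design (the names `Story`, `collect`, and `synthesize` are hypothetical, and real collection would pull from feeds, email, tickets, and so on), the key property is that every listener’s output is drawn from one immutable story pool, which is only selected and framed per listener, never altered:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Story:
    """One underlying story; shared, unmodified, across all listeners."""
    topic: str
    summary: str

def collect() -> list[Story]:
    # Stand-in for gathering content from many sources.
    return [
        Story("ai", "New open model released"),
        Story("crypto", "Exchange outage resolved"),
    ]

def synthesize(stories: list[Story], topics: set[str]) -> list[str]:
    """Render the shared story pool for one listener's topic preferences.
    Stories are selected and framed per listener, but never rewritten."""
    return [f"[{s.topic}] {s.summary}" for s in stories if s.topic in topics]

pool = collect()
alice = synthesize(pool, {"ai"})          # AI-only listener
bob = synthesize(pool, {"ai", "crypto"})  # broader interests
```

Because `Story` is frozen and `synthesize` only filters and formats, two listeners can receive very different briefings while still standing on the same underlying facts.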
This is a logical next step for human creative expression. New things became possible with every generation of media technology: carvings, papyrus, pamphlets, books, newspapers, magazines, radio, telegrams, television, the Internet, mobile, and now AI have each, in turn, transformed how we produce and consume content. New technologies have continually made the process faster and easier, which is why we’re living in a golden age of content. Dynamic content is the next logical step and, hard as it is to believe, we’re on the verge of being able to both produce and consume orders of magnitude more content. Yes, a lot of it will be slop, but there will be some really incredible new content, too, and AI will help us find it.