I’ve been chewing on a few questions about AI for a while now, in particular questions about who is empowered by AI tools and what shapes that power takes. None of these have coalesced into very clean or coherent ideas, but they still feel worth sharing. So I’ve bundled several of them up into one honkin’ post. Each of these essays is a bit less polished than I’d normally aim for, and they’re probably not very internally consistent. But that might be a good representation of where I’m at on the topic of AI.
A couple caveats before we get started
First, none of these represent my entire point of view on a subject, and together they don’t cover everything I want to say about AI. They’re closer to conversation starters than they are to full-fledged theses. I could talk myself into disagreeing with a lot of what you’re about to read, but I think there are lots of kernels of truth in there too.
Second, I think I will come across pretty negative on AI throughout these essays. But it’s important to note that I do use commercial LLMs quite a bit in both personal and professional contexts, and I actually enjoy them a lot[^1]. But I’m not going to bother writing about my positive experiences here. You can find glowing perspectives on AI anywhere these days, and I don’t want to spend any of my energy doing free advertising for a handful of massive corporations.
Ideas are cheap. We used to say this to dismiss the Ideas Guy, the one whose only contribution to a project was imagining things. The phrase sets ideation in contrast to execution, the point being that ideas are worth less than the ability to make something happen. But now we live in a bizarro world seemingly built for the ideas guys. A few sentences can make an app. A few words can produce an image. A stray thought can kick off research and write an essay. And so now we need to ask: when results are as cheap as ideas, do ideas get more valuable, or do the things we make get less valuable?
At the same time, it’s worth interrogating whether execution is of much value on its own. It’s a little distasteful to give too much credit to people who do stuff, to patly reward doing something rather than nothing. Folks love to go on and on about the man in the arena, but an output-focused worldview is, sneakily, a cynical one. In supposing that action is necessarily good, that moving forward is better than stopping to ask questions, that done is better than perfect, it creates a sense that life is fundamentally competitive. Action is valuable in and of itself if you believe that it’s better for you to have done something than for someone else to have done it correctly, or better for you to have gotten somewhere first than for all of us to arrive at the right place. To orient your life this way seems deeply unimaginative and sad.
AI tools empower both the ideas people and the output-obsessed. They are value-extraction engines that do not compensate their contributors, they enable velocity at the cost of critical thinking, and they devalue craft and creativity. They will raise our bosses’ expectations for our productivity, and they will lead to an infinite supply of half-baked creations and mediocre amusements. More people will create more things, but those things will be less personal and harder to distinguish from each other.
And, anyway, what am I even supposed to feel when someone shows off something they created with AI? Impressed? More often than not, I feel embarrassed for them. Look how tall I am after I put on stilts! Cool, man. Neat. Good stuff. It is not impressive to have pressed the “generate song” button and received a song. It is not cool to have described an image. It does not strike me as creative to have had the idea for an app. The less effort you put into something, the less it means to me that you did it.
It matters that ideas are cheap because effort and care aren’t. The important distinction was never between thinking of the thing and having the thing. What I care about, what anyone I respect cares about, is what a person is willing to spend their time and attention on. I’d love for all of us to have a little more balance in the amount of value we place on output and input. I’d love to live in a world that cares about what gets made, and who it serves, not just how quickly it accrues MAUs. I want the things that exist to tell me something about the people who made them and the processes they used and the values they espouse, even if that means the results are uglier and slower to make and intended for a smaller audience. I want to live in a world where good ideas matter and where being creative means, well, creating.
It turns out that a massive predictive text model trained on all publicly available code in existence can write some pretty good code. This becomes even more true when the model is invoked by a software harness called an “agent,” which automates the loop of reading, running, testing, and editing code so that an LLM can inspect and react to its own outputs. In my experience, these agents can’t yet operate fully autonomously in professional settings, but there are people, many of whom are certainly smarter than me, who claim they can or soon will. Uh oh, there goes my job!
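The read-run-test-edit loop is simpler than the word “agent” makes it sound. Here’s a minimal sketch of the shape of such a harness, assuming a hypothetical `call_model()` function standing in for a real LLM API; the stub model below deliberately writes buggy code first and fixes it after seeing the traceback, purely to illustrate the feedback loop.

```python
# A minimal sketch of an agent harness: alternate between asking a model
# for code and running that code, feeding the results back in.
import subprocess
import sys

def call_model(transcript):
    """Stub standing in for an LLM API call: reads the transcript so far
    and decides what to do next."""
    if any(turn.strip().endswith("42") for turn in transcript):
        return {"done": True, "code": None}                     # saw correct output
    if any("NameError" in turn for turn in transcript):
        return {"done": False, "code": "x = 6 * 7\nprint(x)"}   # "fixed" attempt
    return {"done": False, "code": "print(x)"}                  # buggy first attempt

def agent_loop(task, max_steps=5):
    """Run the model-propose / harness-execute loop until done."""
    transcript = [task]
    for _ in range(max_steps):
        reply = call_model(transcript)
        if reply["done"]:
            break
        # Execute the proposed code and record whatever comes back,
        # normal output and tracebacks alike, for the next model call.
        proc = subprocess.run(
            [sys.executable, "-c", reply["code"]],
            capture_output=True, text=True,
        )
        transcript.append(proc.stdout + proc.stderr)
    return transcript

history = agent_loop("Print the answer to everything.")
```

A real agent replaces the stub with a provider’s API and gives the model tools beyond “run this snippet” (file edits, shell commands, test runners), but the core loop is this one: the model only looks smart because the harness keeps showing it the consequences of its own output.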
One way of looking at coding agents like Claude Code, Codex, or Cursor is that they free the magic of software engineering from the stranglehold of weird nerds like me. This is both true and valuable. Five years ago, a friend with no programming experience asked me how he might make an app, and I found the question so tough to answer that I wrote 2000 words about it[^2]. Nowadays, I would tell him to download Cursor and get to prompting.
But there’s a catch, and it’s a financial one. Basically all programming languages are free, if not entirely open source. And many fantastic text editors and IDEs are free. This means that if you just want to write and run code, and you already have a computer, you can get really far without spending a dime. But LLMs, the good ones anyways, are not free to use[^3]. At best, you get some limited usage out of them for a flat rate each month; at worst, your usage is metered and billed by volume of input and output text. So, you want to code? Sure thing, buddy, that’ll be $20 per month! Or $40+, if you want to try multiple providers. Or, if you do it enough to get sick of hitting rate limits, it’ll be somewhere in the $100-$400 range. Or, if you want to do it at scale, it might actually be closer to $1000 every single day, according to these freaks.
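The metered billing is just arithmetic: token volume times per-token rates. A quick sketch, using entirely made-up placeholder rates (real prices vary by provider and model), shows why agent usage in particular adds up:

```python
# Metered LLM billing: token counts times per-million-token rates.
# The default rates below are hypothetical placeholders, not any
# provider's actual prices.

def llm_cost(input_tokens, output_tokens,
             usd_per_m_input=3.00, usd_per_m_output=15.00):
    """Return the cost in dollars for one batch of usage."""
    return (input_tokens / 1_000_000) * usd_per_m_input \
         + (output_tokens / 1_000_000) * usd_per_m_output

# A heavy agent session can chew through millions of input tokens,
# because every loop iteration re-sends files, diffs, and test output.
session = llm_cost(input_tokens=4_000_000, output_tokens=500_000)
```

At those placeholder rates a single session like that runs about $19.50, and an agent-heavy workflow means many sessions per day.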
It’s easier than ever to make software, if you have access to the newest commercial tools, which get more expensive the more you use them. How many things do you do just for fun that cost hundreds of dollars every month? Do you want to add another one? I am hugely incentivized to learn how to use these tools—my entire career literally depends on it—and I still haven’t convinced myself to pay for one of these subscriptions out of pocket.
Through an optimistic lens, AI research will keep advancing and the advancements will trickle down. Today’s top-of-the-line models will get cheaper and more efficient every year, computer hardware will improve, and soon enough we’ll all be running top-tier LLMs on our ThinkPads. But what I think is more likely is that corporate expectations of productivity will just keep pace with state-of-the-art model improvements. The standards and practices around software development will adapt to fit each of OpenAI’s and Anthropic’s latest releases, and will confer a massive advantage on whichever companies can afford the most tokens from the newest models. Capital, mediated through compute, will separate the folks who can pay for agents from the folks who can’t, and the output of the former will far outweigh that of the latter simply because they can pay to do more, faster. So what exactly are we democratizing here?
A camera stands for the difference between representation (using symbols to convey the idea of a thing) and recording (directly capturing a consequence of a thing’s existence). Whether or not a photograph can truly be considered an index is a complex subject, especially if we need to account for whatever a pixel actually is. And of course, photography and film are art forms built on the idea of hijacking a photo’s perceived truthiness in order to lie to audiences. But in spite of semiotics, and in spite of Hollywood trickery, a camera is still broadly understood to be a tool for truth.
The ability to capture and convey truth is power, power which can be wielded for or against authority. Photographs allow us to remember things that governments want to erase. The right (or requirement) to use cameras can hold armed police accountable for their actions and to their limits. Automated cameras keep drivers honest at red lights and in school zones, but they also enable mass surveillance of citizens. Cameras can change public opinion on wars and politicians, and can be used to expose wrongdoings or moments of weakness. Photos and videos can turn local revolutions into global ones, can inspire aid or intervention. In recording reality, or in being understood to record reality, photography can cut through rhetoric and democratize access to truth.
I do not know, but I do fear, what will happen when the power of a camera is weakened. When video and photo are trivially generated, when recordings of atrocities or confessions cost any shmoe a few cents to create, what does accountability even mean anymore? What becomes of our understanding of documentaries, or interviews, or live coverage of events? Without a shared sense of trust that a photo implies that a camera was physically held by a human in a specific location at a specific time, what tools do we, and by we I mean the rest of us, the ones without infinite accumulated wealth and access to ungodly amounts of compute and control over broadcast channels and aggregation algorithms, have at our disposal to access and advance the truth?
It’s astonishing to me that there are people making this happen. Actual people, who presumably have friends and families, are spending their time researching and developing technology that can replicate anyone’s speaking voice, that can turn anyone’s likeness into a puppet, that can recreate any analog or digital photography style. These tools will be used to defraud, to mislead, to harass, to abuse, and to terrorize. They will undermine the very idea of truth, and the idea that it can be used to protect against corruption. And yet people are building these things. They’re making the hill steeper and the ground muddier. I don’t know how they sleep at night.
[^1]: A hypocrite! On the internet! Get him!

[^2]: The word count might be a me problem more than a the-topic problem, to be fair.

[^3]: If you want to play with some free ones that can run on your laptop, I recommend installing Ollama. You’ll quickly understand why you pay someone else to run better models for you.