Centaurs, reverse and otherwise / 2026-01-07
If the first wave of LLM criticism is on life-support, what ethical questions do we face now? And how can/should open engage?
On the ethics of LLMs
The first wave of LLM criticism in software is pretty much dead. It is clear that these things can write code, and lots of it. The code is pretty decent quality (especially if the right tools are applied afterwards). That capability is only going to scale. (Things are very different in other domains! We’ll return to open’s role in content later.)
But this doesn’t mean there isn’t a potent realm of criticism to be made—some general and some very specific to open source, the software commons, and human collaboration. It seems fitting to start 2026 by taking stock of these criticisms.
“Reverse centaurs”—labor and power
Cory Doctorow’s recent speech "Pop That Bubble" is leagues better than most AI bubble criticism—long but absolutely worth a read. The talk focuses on labor power and industry intermediaries, and in particular, he drives home that the reason these companies are so highly valued, and why CEOs are so enamored of them, is not just that they want to put a lot of people out of work (though they do want to do that). It is because they want the remaining workers to be, largely, what he calls “reverse centaurs” — where machines do the thinking and humans just provide the mindless, automaton-like bridge from GPU to IRL using their hands, feet, or eyeballs. Think Amazon warehouse worker.
I have been sitting with this, a lot. Because I (and many peers) feel like non-reverse centaurs right now—our brains are thinking a lot, and our new “feet” are carrying us much further and faster than we ever thought possible. As I detailed in November's newsletter, I've been having genuinely productive fun with LLM-assisted coding. I've built things I wouldn't have attempted otherwise. The dopamine hit is real. (And this feeling has only accelerated since November!)
But at scale, Doctorow is right — centaurs are nice, but they’re a side-effect. The only thing that can sustain these valuations is if these models create a giant reserve army of reverse-centaurs, fighting with each other over scraps. That doesn’t mean the excitement felt by centaurs is wrong, but we all need to wrestle with that tension. We must apply our energy not just to our own projects but also to the larger social project of regulating these things and their new gilded masters.
Hegemony—ideas and power
Right around the same time as Cory’s essay, Ethan Zuckerman asked "have we literally instantiated Gramscian hegemony by encoding most knowledge into a single Thing?" Let me start by unpacking that. “Gramscian hegemony” is the notion that society has a set of shared assumptions and values—the intellectual “water fish swim in”. Those values and assumptions are shaped by many things, including the media, elites, and schooling. And those shared, unquestioned, unconscious assumptions are powerful—they shape how we define and debate our rights and our government. Among other implications: if we stop sharing those assumptions, then power becomes fragile and chaotic; and if you’re in control of information sources, you can exert an immense amount of control by framing which ideas are hegemonic.
This plays into the current moment in a few ways:
- Ethan poses the question: “if our shared assumptions are now statistically encoded, and directly controlled by a handful of guys who are all in the same group chats, what implications?” I am not as pessimistic as he is, because hegemony has a crucial component that lives in the heads of people and can’t be directly encoded or controlled by LLM vendors. But if/when LLMs replace an active, diverse news media as many people’s sources of information, we must consider how open models, antitrust enforcement, and corporate freedom of “speech” will intersect to contest the ideas embedded in these models.
- “LLMs are inevitable” is currently hegemonic—it is almost a truism in our media, Davos, DC, and even Brussels. A thoughtful “centaur” needs to think hard about how to talk frankly about these things (they’re powerful! they’re useful!) without reinforcing this.
- “LLMs are unregulable” is hegemonic or near-hegemonic, combining Barlow-ian internet libertarianism and progressive self-defeatism. It’s also incredibly wrong—it isn’t easy to regulate our giga-corps, but it is doable. (Besides the historical efforts of the trust-busters, there’s a reason neither Cruise nor Uber is still doing self-driving cars.) Both possible centaurs and possible reverse-centaurs need to muster the spirit of both Presidents Roosevelt, and reject the idea that regulation is dead.
My favorite comparison here (albeit a scary one) is cars. In the WEIRD countries, automotive dominance was a hegemonic idea—it was literally inconceivable that one would stop building highways through cities, or that one might aim to not spend a huge chunk of one’s life driving. But humans (literally) fought back, and many cities are now reclaiming their spaces from cars—to great success. But that ideological change is still a work in progress. Americans, in particular, tend to treat a car-created death every 15 minutes as a fact of life.
I think it is really important that those of us who like and use LLMs consider how they are impacting our “built” environment. How can we skip to a world where LLMs have a careful, balanced role in our labor and knowledge ecosystems, without the century of social dislocation, pollution, and death we got from cars? I don’t know the answer to this yet myself, but I’m going to try to keep it first and foremost in my mind in 2026.
Code is eating the world²/seeing like an AI-enabled state
Robin Sloan is characteristically whimsical in his "All that is solid melts into code". At core, he’s talking about how:
The fact that language models are great at code means there is suddenly a strong incentive for more things to become code.
And relatedly:
Automation never meets a task in the world and simply does it. There’s always negotiation—the invention of some new zippered relationship. Trains don’t run without long, continuous tracks; cars don’t drive without smooth, hard roads. Not to mention huge parking lots!!
(There’s the car analogy again…)
What he doesn’t mention, but I sense in the air even more in January than in November, is that code is also now insanely cheap. What was a bottleneck simply isn’t now, at least for many use cases. That is going to have impacts both specific (I don’t think I’ll ever sign up for a paid service without an API again, because now I can actually use APIs easily) and broad (governments ran on data that could be translated into punch cards and spreadsheets—but now what?).
So we have both a new automation technology, and a very easy-to-use automation technology. That is going to go like wildfire—in all senses of that very destructive word. If everything “melts into code”—what does this tell us about who and what will become centaurized? Who gets to be the human part of the centaur, and who becomes the horse? Who gets to control ideas?
I would hope that those of us who hold open's collaborative, pro-human values don't sit this discussion out—we are going to need to engage consciously and carefully.
Collaboration, reinvented, again (or not?)
As I mentioned in November and again at the top of this, I have been really enjoying hacking things together. I took CS as a degree because it felt like I could have an idea, and then Just Build It. Turns out that, in practice, that was hard—and especially hard if you were trying to do it by yourself. Doing it in cooperation with others was a superpower, one tricky to get right but nearly unstoppable once you built tools, habits, trust, and shared goals.
But suddenly… maybe collaboration isn’t as important? Or at least differently so? Some observations from the field on that:
- Forking has always been viewed as expensive and to be avoided at all costs. What if… not? (This is not to say forking is a good idea! Just that it is now much easier to fork something and bend it to your will.)
- “Building a team” has long been the ultimate superpower for programmers, open and otherwise. Now people are asking, non-ironically, when we’ll have the first billion-dollar-valuation company of 1-2 people.
- Building software tools just for yourself has always been a niche hobby. Now it is… something marketing people do. And they talk seriously about skipping past scripts straight to autonomous agents.
- Many important “things computers do” are moving from code to plain English. We have very established behaviors around computer languages (starting with CI, linting, TDD, etc.)—but as Wikipedia teaches us, many of these are much harder when we’re collaborating around human languages. How do we collaborate when English is a programming language?
- Whatever… this is.
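On that question of collaborating when English is a programming language: one possible direction is porting our code habits to prompts. As a purely hypothetical sketch (the function, its rules, and its thresholds are all my invention, not anything from an existing tool), here is what a "linter" for English-language prompts might look like—the prompt equivalent of the CI and linting habits mentioned above:

```python
import re

def lint_prompt(text: str) -> list[str]:
    """Flag common problems in an English-language prompt.

    The rules below are illustrative only—a real team would
    negotiate its own conventions, just as it does for code style.
    """
    warnings = []
    # Long prompts are harder to review collaboratively, like long functions.
    if len(text.split()) > 500:
        warnings.append("prompt is very long; consider splitting it")
    # Prompts without a worked example tend to be ambiguous.
    if not re.search(r"(?i)\bexample\b", text):
        warnings.append("no worked example included")
    # Vague phrases are the prompt equivalent of magic numbers.
    for vague in ("etc.", "and so on", "somehow"):
        if vague in text:
            warnings.append(f"vague phrase {vague!r}; spell out the intent")
    return warnings

print(lint_prompt("Summarize the report somehow, etc."))
```

The interesting part isn't the rules themselves—it's that such a check could run in CI against prompt files in a repo, giving collaborators a shared, reviewable standard for English the way linters do for code.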
I am ultimately optimistic that human collaboration, and particularly voluntary, pro-human(e), values-driven collaboration, is going to remain very important to our societies and our self-actualization. But what form that collaboration takes is, right now, anyone’s guess.
Welcome to 2026.
So that's where I am: sitting with hard questions about labor and power and ideas, and how these tools are likely to create both new cages and new forums for empowered humans (and maybe even groups of humans). Come along for the ride, I guess?
Misc. readings and thoughts:
- Good: LLMs can still get a lot more energy efficient. Bad: but it sure would be nice if antitrust law still had teeth.
- Great story from Mitchell Hashimoto about LLM-driven debugging. This does not refute the slop problems (very real! human attention is finite!) but does show the state of the art for positive LLM-assisted open source very well.
- I have been trying to wean myself off self-hosting for a decade or so. But all of a sudden it is seeming very appealing again, because I can customize and do ops very easily. It's not surprising there is a cool explosion of self-hosting services like exe.dev and fly.io.
- I think only 1,000 people are going to read this blog post, and 999 of them are going to write themselves a smart Discord bot. (example, I’ve got one too)
Discussion