Back in the Saddle / 2025-09-24
Vocabulary, ethics, and disclosures, plus some news and links from The Week That Was in AI + Open.

Vocabulary
It’s probably a lost cause, but as I restart the newsletter I’m going to try to stick to consistent vocabulary. A few things have jumped out already since I last tried to communicate rigorously about this:
- I still dislike the general usage of “AI” for everything under the sun. Since LLMs are the core technology impacting coding the most, I will try to stick to that term, though I may slip back to “AI” from time to time when I’m tired or just need some variation.
- I have found the definition of "agent" to be excruciatingly vague, but in this post Simon Willison provides a useful one for AI "agents": “An LLM agent runs tools in a loop to achieve a goal.” I might clarify by saying “to achieve the controller’s goal,” but this is still very helpful for me. (A minimal sketch of that loop follows this list.)
- “jagged frontier” is a new bit of jargon that I dropped into the last newsletter (and indeed into the newsletter’s tagline in some places). In short, it is a helpful reminder that LLMs are advancing very rapidly but very unevenly, so you have to be careful about extrapolating from your own experience.
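To make Willison’s definition concrete, here is a minimal sketch of that loop. The callModel function is a hypothetical stand-in for any LLM API that can either answer or request a tool call; no particular vendor or framework is assumed.

```javascript
// A minimal sketch of “an LLM agent runs tools in a loop to achieve a goal.”
// callModel() is a hypothetical stand-in for a real LLM API; `tools` is a
// plain map from tool names to functions. No specific vendor is assumed.
async function runAgent(goal, tools, maxSteps = 10) {
  const history = [{ role: "user", content: goal }]; // the controller’s goal
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(history, tools); // hypothetical LLM call
    if (reply.type === "final") return reply.content; // goal (we hope) achieved
    // Otherwise the model asked for a tool; run it and feed the result back.
    const result = await tools[reply.toolName](reply.toolArgs);
    history.push(
      { role: "assistant", content: JSON.stringify(reply) },
      { role: "tool", content: JSON.stringify(result) }
    );
  }
  throw new Error("agent hit the step limit without finishing");
}
```

Note that the “controller’s goal” amendment is visible right at the top: whoever writes goal is the controller, and everything the loop does flows from that one string.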
Actual news
Open experience & culture
How LLMs are changing the culture and lived experience of open source communities
- How do we even begin to measure the impacts of LLMs? Gergely Orosz points this morning to a very basic question: “is kicking off agents in parallel actually worth it?” Certainly there’s a suggestion from what companies are saying in public statements that… we’re not seeing much impact. And yet devs are feeling impact. (I really want to get into Dr. Cat Hicks’ work on the psychology of this soon; my coworker Edgar Kussberg is also writing and thinking about it.) I still think this is less blockchain, more web 1.0 / iPhone, but we have to be honest that the data to back that up isn’t there yet.
- Open is a labor movement (whether we want to admit that or not), so it is very interesting to see what other labor groups are saying about LLMs. If you have pointers to open communities that have made similar statements (especially in 2025, rather than earlier), I would be very interested. Please share!
Infrastructure for open development
How LLMs are changing the technical infrastructure and tooling of open development
- CRDT-based revision control: what if version control were redesigned so that deeply collaborative (i.e., many coders at the same time) coding was the priority? And yes, by “coders” here they obviously mean “LLM agents.” But it is very interesting to hear about tooling that acknowledges that code is way more than just code these days. (Not a great sign, though, that this innovation is coming from VC and not from a week-long hackathon.) A toy sketch of the core CRDT trick follows.
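For readers new to the term: a CRDT (conflict-free replicated data type) is a data structure whose replicas can be edited independently and merged in any order with a guaranteed-consistent result. The simplest instance is a grow-only set; this sketch is illustrative only and not based on the tool above.

```javascript
// The simplest CRDT: a grow-only set (G-Set). Replicas merge by set
// union, which is commutative, associative, and idempotent, so any
// number of writers converge without coordination or a central server.
// Illustrative only; not based on any particular CRDT version-control tool.
class GSet {
  constructor(items = []) {
    this.items = new Set(items);
  }
  add(item) {
    this.items.add(item);
  }
  merge(other) {
    for (const item of other.items) this.items.add(item);
  }
}

// Two replicas diverge, then merge; both end at the same state.
const a = new GSet(["commit-1"]);
const b = new GSet(["commit-1"]);
a.add("commit-2a"); // one coder (or agent) works here...
b.add("commit-2b"); // ...while another works concurrently.
a.merge(b);
b.merge(a);
// Both now hold {commit-1, commit-2a, commit-2b}.
```

Real CRDT-based version control needs far richer types than a set, but the merge-without-coordination property is what makes “many coders at the same time” plausible.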
Access and barriers
How AI is changing who can participate in open source and what barriers remain
- Toward AGI: What is Missing?: Mark Riedl suggests that advancing the state of the art in AI is going to require some extremely capital-intensive investments. In particular, getting AI techniques to understand “the world” will require a loooot of investment in simulating that same real world (sometimes called “world models”). This could be a two-edged sword: this sort of infrastructure often becomes open (because it’s a complement to the actual source of revenue), but it also costs a pile of capex (which leads towards corporate-captured open). A toy sketch at the end of this list illustrates the point.
Making simulation environments is equivalent to creating dataset generators.
— Mark Riedl (@markriedl.bsky.social) 2025-09-21T23:32:56.394Z
- Some stories about people writing useful and/or fun tools with a combination of LLMs and existing open infrastructure: my old partner-in-crime Christian Schaller wrote a 3D-printing tool he might not otherwise have tried to build; Danny O'Brien writes up a "day in the life of an llm user", detailing a whole day’s command-line usage of LLMs; and Simon Willison lists the 124 tools he’s built with AI.
- My point in the last newsletter about LLMs making open data more accessible really resonated with John Fleck.
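Back to Riedl’s framing above: “making simulation environments is equivalent to creating dataset generators” is easy to see in code. Any simulator that can step a world forward is, with a trivial wrapper, an unlimited source of training examples. This toy sketch is purely illustrative and assumes nothing about real world-model systems.

```javascript
// Toy illustration of “simulation environments are dataset generators”:
// stepWorld() is a stand-in one-dimensional “world”; wrapping it in a
// generator turns the simulator into an endless stream of examples.
function stepWorld(state, action) {
  // Position drifts by the action, plus a little noise.
  return { position: state.position + action + (Math.random() - 0.5) * 0.1 };
}

function* generateDataset(initialState, policy, steps) {
  let state = initialState;
  for (let i = 0; i < steps; i++) {
    const action = policy(state);
    const nextState = stepWorld(state, action);
    yield { state, action, nextState }; // one training example per step
    state = nextState;
  }
}

// Even a random policy mines the simulator for (state, action, nextState) data.
const examples = [
  ...generateDataset({ position: 0 }, () => (Math.random() < 0.5 ? -1 : 1), 5),
];
console.log(examples);
```

The expensive part, per Riedl, is not this wrapper; it is building simulations faithful enough that the generated data teaches a model something true about the world.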
Power and Centralization
Who controls AI development, and how power dynamics are shifting in the open source ecosystem
In comparison to the almost blissfully "borderless" ethos of traditional open, open LLMs are inextricably bound up in the current moment's nationalism. Two must-reads on that, both from Nathan Lambert of the Allen Institute:
- China's open source AI trajectory: Nathan’s most recent analysis of China concludes that, precisely because so much of the Chinese AI effort is hyper-competitive and perfectly happy to see corporate efforts start and fail, a lot of it will stay open (or at least open weights).
- The American DeepSeek Project: Nathan’s (pre-)response to this is an “American DeepSeek”: an effort to keep key AI knowledge and skills reproducible, in the open, and in the US.
One does have to wonder what Europe could do on this front if it weren’t sending a quarter-trillion a year to US tech companies.
Misc, inside the bubble
- "Not a Robot" game: Make sure to play through at least level four! And then you’ll be hooked.
- AlignmentAlignment: the question of “alignment” is a silly one (as I pointed out in the old newsletter, OpenAI can’t even align its own board, much less its AI), and this new organization takes it exactly as seriously as it should.
Outside the bubble
Each week I'll try to pull something from one or two of my favorite non-AI news sources.
- When Africa’s internet breaks, this ship answers the call: Our digital miracles rest on very real-world stuff, and here Rest of World reports on the (sole!) ship repairing Africa's undersea internet cables. If you read Wired’s Mother Earth Mother Board back in the day, this is a must-read followup.
Closing note I: on my use of, and the overall utility of, LLMs
I plan to write this newsletter essentially unaided, since writing is very much a part of my thinking process, and my goal here is in large part to clarify my own thinking (with educating an audience as a very excellent side effect). (Also, my voice is… weird, and LLMs definitely don’t capture it.)
That said, I am using LLMs to help prepare the newsletter, and I want to document that a bit—both for transparency and because I think it helps make clear how useful modern LLMs are.
- Claude helped me create an iOS Shortcut that lets me quickly share links into a Google Sheet, and almost entirely wrote a Google Script that pulls those links into a Google Doc (a rough sketch of what such a script might look like follows this list). These save me a bunch of copying and pasting, which 💯 increases the odds of this newsletter coming out on a regular basis.
- lex.page, which I'm experimenting with as a writing tool, pulled in the links and wrote a one-sentence summary of each. Those summaries serve as a refresher, since sometimes I don't recall why I saved a link. But then I pretty much rewrite all of that.
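For the curious, the Sheet-to-Doc script is roughly the shape below. This is a hedged reconstruction in Google Apps Script, not the actual code; the IDs, the “Links” sheet name, and the two-column layout (URL, then note) are all assumptions.

```javascript
// Roughly the shape of the Sheet-to-Doc script (Google Apps Script).
// Reconstruction only: the IDs, the "Links" sheet name, and the
// URL-then-note column layout are assumptions, not the real script.
function pullLinksIntoDoc() {
  const sheet = SpreadsheetApp.openById("SPREADSHEET_ID").getSheetByName("Links");
  const rows = sheet.getDataRange().getValues(); // one saved link per row
  const body = DocumentApp.openById("DOC_ID").getBody();
  for (const [url, note] of rows) {
    if (!url) continue; // skip blank rows
    body.appendListItem(`${url}: ${note}`); // one draft bullet per link
  }
}
```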
Tying together my personal experience and the big-picture questions in the newsletter: Claude, at least, seems very eager to suggest custom solutions over anything off-the-shelf or purely open, e.g. encouraging me to build a Google Script instead of using or installing an existing link-archiving tool. I don’t think that’s a bad thing, per se, but I wonder what it indicates for the age-old “buy vs. build” dynamic. Will this meaningfully change the percentage of code in the average stack written by “a person in Nebraska”? And is that good or bad?
Closing note II: on ethics of LLMs
I do not plan for the refreshed newsletter to touch much on the ethics of LLMs in and of themselves. The topic is complex, and fraught, and frankly even starting to type this paragraph fills me with dread at the idea of people yelling at me about it.
This isn't to say I'm not aware, or don't care. Politics, power, and the climate, among other things, are never far from my mind. (Most of my volunteering these days is about climate, with all three of my non-profit board hats having important climate angles.) But they won't be the focus of the newsletter; they'll come up when they naturally fit into the context of what LLMs mean for open.