
Recently I’ve consumed lots of interesting writing on AI. Most of what I’ve read over the last few years has not been worth sharing. Everything listed here is important for specialists, but an asterisk (*) marks the pieces I recommend good citizens read to stay informed on the looming disruptions caused by the automation and centralisation of control.
This list is subjective. The pieces mainly revolve around what I think are crucial points for assessing artificial intelligence (AI), artificial general intelligence (AGI), or artificial super-intelligence (ASI):
what AI can and can’t do in principle
AI through an evolutionary lens
AI through a “realist” lens
intelligence is about competence, not information
value “alignment” is a pipe-dream[1]
focus on the politics of AI rather than the ethics.
First, the doubters:
*“The Limits of Smart” over at the excellent Dynamite Substack. The ability to move molecules around is more important than having lots of data — plus other fundamental reasons to doubt ASI will come from current paradigms.
*“AI Doomerism is Bullshit” at David Pinsof’s high-quality Substack Everything is Bullshit. He questions assumptions in the science and economics of ASI.
“The Selfish Machine” by Maarten Boudry gives the standard neo-Darwinian line on whether and how AI will evolve. TL;DR: we’ll perform artificial selection on AI the way we breed dogs and cows, so it won’t be subject to natural selection and become truly “selfish”.[2]
*“The Five Stages of AI Grief” by Benjamin Bratton at Noema. A classy tour of the intellectual landscape on AI in terms of coping strategies for this huge topic.
“6 AI Realist Predictions for the Future” by David Ferris. In this post he’s responding to a long essay by Leopold Aschenbrenner, called “Situational Awareness”, which crystallised a lot of thinking about runaway AI.
The believers, in rough ascending order of pessimism:
*“Welcome to the Era of Experience” by Silver & Sutton[3], where reinforcement learning again spearheads AI as autonomous agents learn with real-world feedback, unbiased by human data. Lucid writing; wildly sanguine.
Google’s DeepMind put out a huge technical report on safety, saying AGI is coming c.2030. I’ve only skimmed it, but it’s a revealing document. Though the engineers downplay the worst scenarios, certain passages disclose they’ve already lost control of AGI.[4]
Hendrycks, Schmidt, and Wang published “Superintelligence Strategy”, an analysis focused on national security, strategy, and deterrence. It makes a lot of assumptions: that nuclear deterrence works, that deterrence will work for AI too because it’s easy to sabotage any prospective ASI project, and that we’ll endue ASI with human values. It would be great if these assumptions held; I smell wishful thinking. At least this paper drags the debate into a national security context.
*I liked Michael Nielsen’s “ASI Existential Risk: Reconsidering Alignment as a Goal”. He’s sceptical of classic “doomer” scenarios but very concerned about how ASI would concentrate power and democratise existing threats.
“AI-Enabled Coups: How a Small Group Could Use AI to Seize Power” by Davidson et al. from the Forethought Foundation. Cliques of humans try to stage coups, but often fail for lack of good help. Loyal AI to the rescue! Terrifying.
Then there’s “The Case for AGI by 2030” by Benjamin Todd, the sanest and sagest of the EA/longtermists.
“Will AI R&D Automation Cause a Software Intelligence Explosion?” by Eth & Davidson, also from the Forethought Foundation. A well-written piece, but it doubles up on points made in other works listed here. It does make interesting points about how a purely software-driven AI explosion could happen (many, including me, think that hardware limitations will hinder rapid ASI development).
*“AI 2027: What Superintelligence Looks Like” is a story/essay by Daniel Kokotajlo et al. A good read, with forecasts, infographics, neat dashboards, etc. It has gotten a lot of attention in tech circles the last few weeks.
Update on agents
In February I wrote about why assistant- or butler- or genie-style agents will never work. Recent hype about these agents includes:
the American start-up Genspark’s Super Agent
the Chinese offering Manus
OpenAI’s Operator
Amazon’s upgrade to Alexa called Alexa+
Anthropic’s computer use function for Claude, and
Google’s Project Astra, which promises to be the “everything agent”.
In the tech bros’ imaginarium, an agent will soon be curating an all-inclusive holiday package after you offhandedly ask it to book you a vacation. This won’t work because of:
(a) the limitations of natural language (which is too ambiguous and context-dependent to command machines, or humans) and
(b) the unclear success conditions of inherently messy and exception-riddled tasks like bookings, scheduling, and the myriad household tasks for which you have your own little idiosyncrasies.
But there will be ways for AI companies to try to make it work a little better. These will involve the agent bugging you for clarifications and permissions (which is more annoying than doing the booking yourself: booking is pretty easy on today’s Internet), or you giving up all your data so that Alexa+ can guess, from your emails, texts, browser history, and overheard conversations, what you mean by a “nice” restaurant.
The President of Signal and all-round privacy-advocacy legend, Meredith Whittaker, nails it in this interview, where she highlights the privacy nightmare entailed by the prospect of agents.
On the natural language front, contrast this with Demis Hassabis, CEO of DeepMind, saying that vibe coding, or simply programming a computer with natural language, is the obvious next progression (full podcast interview with Reid Hoffman, relevant bit from c.37:00). It ain’t gonna work. But the way they’ll try to make it work will require us to give our credit card details and computer access to a bot that is prompted by natural language.
Incidentally, all persons should watch the full SXSW interview with Signal’s Meredith Whittaker. She covers the basic rationale for Signal, and why the only way to guarantee that data is not leaked is to not collect it. She also analogises deep learning to the tech companies themselves: both are processes that seek to maximise some objective function. In the case of Meta or Google, it’s shareholder profit. There’s an AI-like nature to large corporations,[5] which makes them purely amoral and highly inflexible in their aims, regardless of the generally ethical conduct of the individuals working there. My constant refrain: Big Tech is often ethical but always evil.
Then there are the “great men” at the helm of these tech leviathans…
Don’t listen to tech CEOs
Remember, regardless of how plausible you think this whole AI-doom thing is, the behaviour of those who are running AI companies is astonishing. By their own estimate, ASI is going to happen rather soon and is quite likely to snuff out human civilisation. Believing this, they push ahead and use all their time and ingenuity trying to make that happen. As I’ve said, they strive every day to achieve something they believe will likely murder their own children. Never ever take seriously what these goons say.
True, other industries, like coal and tobacco, don’t even admit the dangers of their product and actively lie about them. That’s also unconscionable. But these AI types are, if nothing else, one hundred percent fascist. Fascism is their revealed preference. They identify as liberals or transhumanists or techno-optimists or whatever. What they actually do is support an all-in bet of other people’s lives, which is apparently worth it for how exciting and heroic the discovery might be, or for how it might help us transcend human limitations, or rejuvenate our decaying society, or usher in a techno-utopia, or accelerate a cleansing apocalypse, or give the “masses” not what they want exactly but what they need. Pure fascism à la Mussolini.[6]
This doesn’t mean they’re wrong about the dangers of AI. It’s even worse if they’re right, in which case they pursue their goals with the same mutilated sense of irony that first appeared on the scene circa 2012.[7] It goes like this: runaway AGI is such a dangerous prospect that we must invent AGI to deal with the problem.[8] Their reasoning is that the arms race is inevitable, so the goodies (them) must beat the baddies (China, other tech companies, a rogue self-improving AI) to the finish line — which is merely the start of a new dispensation, described like any other paradise, which is to say, with details only scant and creepy. Ordinary people do not figure in this prophecy. The decision to speed up the race is in no way democratic. Techno-utopians gloss over the outrages entailed by their egomaniacal and tyrannical ambition: the race is on to turn several markets into a monopoly (a crime), to displace existing systems of governance across national borders (imperialism), and to concentrate power in the hands of a few unelected private citizens (coup d’état).
Again, even if you think they are delusional — in some measure they certainly are — this is their own stated desire. Morally, they are void. Oppose them at every turn.
That’s probably enough for today. Good morning!
Postscript: May 9, 2025
Sam Altman’s lips are moving again. I wish I’d seen this TED interview with him before I posted; it was released a week before my newsletter. I recommend watching it in light of what I say above.
At several points in the interview, Altman betrays that he is either not used to people questioning his motives or feels indignant that anyone would. He gets testy and sarcastic in response to the host Chris Anderson’s questions about IP theft and his level of power.
Midway through, he says that the inevitability of the race to build AI is what informs his vision for OpenAI. Later, asked about the inevitability narrative — which had been questioned earlier in the conference by the consistently excellent Tristan Harris — he dissembles, and assures us that things can slow down, as they do all the time.
Amazingly, Anderson also asks him if he’d push a button that would, with 10% chance, kill Altman’s own son! This was by far Anderson’s most impertinent, crass, and rude question (but a fair one). Altman didn’t so much as flinch.
[1] A fortiori, so is “superalignment”.
[2] This is my wheelhouse. I’ve been pondering AI-meets-evolution non-stop for the last few years. I see artificial selection as a subset of natural selection, which is itself a special case of a more abstract process sometimes called “survival of the stable”. It’s a truism: what’s good at persisting, persists. The universe is an unfolding filter for what persists. The stable replaces the unstable. The dynamic stability of self-replication vies with the static stability of black holes.
[3] Silver is at DeepMind, and Sutton is a big deal: a pioneer in reinforcement learning who wrote a frequently cited blog post on how scale beats cleverness in AI research.
[4] E.g.: “existential risks that permanently destroy humanity are clear examples of severe harm […] Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm” (p.15); “in order to keep pace with advancing AI capabilities, we believe it will be necessary for the vast majority of the cognitive labor relevant to AI safety to be performed by AI” (p.27); “AI systems may take over more and more political and economic responsibilities, threatening to a gradual loss of control for humanity” (p.55); “it might not be possible to train models to be totally robust to jailbreak inputs” (p.61); etc.
[5] Cf. Dan Davies in his great book, The Unaccountability Machine, which I’ve recommended before.
[6] It’s a commonplace to call them theistic or Manichaean: ASI can be ultimately good or ultimately evil, LLMs are the tower of Babel, etc. Fascism is more apt. Note, I don’t mean Nazism or free market fundamentalism; I mean fascism: the conviction that it’s better the world flame out in a personal attempt at glory than putter along in a way that makes the “NPCs” happy. “NPC” is an even more odious term than “the masses”.
[7] Cade Metz’ book reports this in ch.9. I’m reminded of Churchill’s anticipation of MAD: “It may well be that we shall by a process of sublime irony have reached a state in this story where safety will be the sturdy child of terror, and survival the twin brother of annihilation.”
[8] Philosopher Joe Carlsmith has the best version of “AI for AI safety”. He’s right: if we were in a sprint to the “singularity”, then we’d need AI to manage AI. But by then, the age of humanity would be over. We must avoid such a sprint.
Morally they are void indeed.
AI-assisted coups sound a lot like the plot of Deus Ex and its version of the Illuminati. I'm struck by how much the tech CEOs and other boosters seem to think cyberpunk dystopias are desirable. My impression is that even the people at the top of the extreme corporate hierarchies therein are not happy, well, or otherwise living a good life. They're in constant terror of being overtaken by a rival. Perhaps the CEOs have gotten so used to this headspace, after being bullied by their parents from birth, that they don't understand that the world could be a different way.
How come nobody is exploring the solarpunk possibilities of AI? The Horizon games have robotic megafauna rejuvenating the ecosystem and planting multi-layered farms so that we can engage in cultural pursuits. Who is working on that?
Again, lots of these people take the view that even though there is an X% chance of killing everyone, the magical AI future is worth pursuing; roll the dice. See the replies in this Twitter thread for examples.
https://x.com/spencrgreenberg/status/1916299004054278351?s=46