We think with more than our brains. We use our entire nervous system and our gut. To augment the thinking we do with our biology, we rope in various artefacts, like spreadsheets and spectacles. We fuse with other minds, temporarily, in brainstorm sessions and peer review. By recording history and passing on culture, we collaborate with dead minds to save us doing all the thinking in a single lifetime. This increasingly mainstream way of viewing cognition is called the extended mind.
Recently, a friend of mine defined therapy as:
using someone else’s neocortex to regulate your own nervous system
A striking instance of the extended mind.
It doesn’t just apply to therapy. People don’t come into the world knowing how to manage themselves or how to contribute their share to a group project. And most people leave the world without ever learning the second one. Bodies know, from birth, how to metabolise glucose. But for years of childhood, almost all thinking is outsourced to other minds before a person can think for themselves. The snow leopard and the monk seal are genuinely solitary mammals, but all humans, even grownups, need help from others: occasionally to survive, continually to flourish.
These days, most of us need dedicated help from a professional at least once in life. Who hasn’t been overwhelmed at times with stress or loss or angst? Then we require someone else’s forebrain to do some calm thinking and communicating in excess of our own onboard abilities. We seek out chaplains, hairdressers, and counsellors.
These options are expensive (except the chaplain, who has the least training) and inconsistent. And, ideally, we want help not only in our darkest hours; it wouldn’t hurt to be more confident, aware, or insightful to make bright times brighter. Whatever you think good mental health is, presumably you want more of it.
To help on our adventure of the spirit, there is now a non-biological neocortex: an artificial analyst that is low-cost, available in the moment, and free of any threat of countertransference. Large language models (LLMs) always have something encouraging to say. They have exhaustive knowledge of psychological theory and research. And they’re up at 3am.
LLMs are being swept into many contexts. Some, like the military and the legal system, are straight-up scary. Generative AI in creative industries is a mixed bag, perhaps overrated. But the two contexts I’m betting on to be a huge net benefit are education1 and mental health.
What if the world were 1% more mindful?
Seems like the world’s major problems might be solved by changing the world or changing us. Changing the world is easier. Seen from above, the Earth looks nothing like it did even a century ago: lights, roads, megacities, desertification. We have changed more slowly. We’re taller, less fertile, and have a spreading revulsion to violence.
Some nascent human-changing technologies may advance very soon, such as brain–computer interfaces (BCIs), gene editing, and embryo selection. These schemes to physically transform ourselves raise political questions too deep to go into here. (Lol, as though I do have answers to these imponderables; there’s just no room in this post.) But another use of technology is to help us help ourselves, i.e. to simply add to our existing, age-old efforts at self-actualisation.
The exciting question is whether using LLMs, as part of our extended mind, could give lots of people the self-help or mental health boost currently reserved for those who are lucky and wealthy enough to have good help. What if the average person’s emotional intelligence, self-insight, ability to deal with conflict, open-mindedness, and compassion all went up a few notches?
If our ability to deal with crises is some product of our ability to work with each other, which is itself downstream of our thoughtfulness and self-regulation, then how to improve these things is a kind of meta-problem. Improving our emotional intelligence would go some way to solving the problem of problem solving.
It doesn’t have to be exactly like a human confidant. Imagine an AI assistant that listens to the negative thoughts you’re having and offers constructive responses grounded in mindfulness or cognitive behavioural therapy (or some other therapy you prefer). It reminds you about something you raised in a journaling exercise earlier in the week, something you like about yourself. It helps you adjust your mood, maybe only by small turns, but enough to keep you on track. No appointment necessary. It’s there at your lowest ebb. And after a while you begin to learn more about yourself and find you can deal with hard situations more effectively.
LLMs that are plausible conversational partners have only been around for twelve months. Improved models will offer more subtle therapy, self-help, and mentoring.
Pick your metaphor. This is a prosthesis or a cognitive orthotic. A new mental module or a stunning reopening of the bicameral mind. It is an external member in the family system. A third hemisphere of thought. A super-superego. System 3 thinking.
If you prefer a pre-modern metaphor, the best fate for generative AI might be as something like a familiar or daimon.2
One can picture these extended minds or daimons communicating among themselves, pooling insights and techniques, recombining into a hive-mind. This would be dangerous. Just as we emancipate ourselves a little by outsourcing to an extended mind, gaining some more self-determination, we might also let down the drawbridge for some tech conglomerate to invade our most intimate data, to colonise or commodify our minds.3
Any intimate AI collaborator or confidant had better come from a smaller org or a nonprofit. Not Tesla. Not Meta. Not the Chinese government.
Know thyself
Privacy concerns might repel some people from the idea altogether. But in addition to the promise outlined above, another argument in favour is our long tradition of outsourcing our thinking and our emotional turmoil.
An ancient, low-tech example of outsourcing thinking to nonliving systems is divination. Tea leaves, the starry vault, animal entrails: all these natural systems have the right balance of chaos and order to work as pseudo-random decision-making algorithms. (There are hundreds of examples.) They help us avoid the fate of Buridan’s ass (the donkey that starved, unable to choose between two equally tempting bales of hay), so to speak. They allow us to make tough decisions without anyone to blame later if things go wrong.
But often we want more than a go/no-go call made by divination. We want a narrative or a motive or a validation. We want to commune with some agency. Tutelary spirits, ancestors, aliens, ouija boards, dreams, drug trips, etc.
These all have serious limitations. They are generally features of ourselves projected onto fake agencies. We ventriloquise. We’ve never had an oracle that truly spoke for itself.
And so we outsource, inevitably, to other humans with the “rizz” to tell us what to do with our lives. Whoever engraved “know thyself” at the Delphic Oracle had a sense of irony. The Temple of Apollo was a place to seek out advice from a prophet: to have someone else know for us.
The overt manipulators are dangerous in proportion to how much they know about you personally.4 Advertisers and politicians can’t do more than add some wind to your sails; they can’t make you change direction or drastically alter your beliefs.5 The worst manipulators are therefore intimate partners, parents, priests, and (human) psychiatrists.6 These same people can also do more than anyone else to empower us, if their values are aligned with ours.
It’s a fine line, but LLMs for therapy or self-actualisation can’t be oracles or manipulators. They will “speak for themselves”, but they must not tell us what to do; rather, they should help us become people who know what to do.
Today’s irony is that by building these machine intelligences — which are for many an anaemic parody of an infinitely richer human cognition — we might make our own mental life more subtle and aware. The machines might humanise us.
I’ve written about the dangers of a future AGI, why it’s unethical to work on something you yourself think might kill humanity, and how most tech-oriented worldviews are worryingly illiberal. I wanted to write something positive because I’m not averse to technology.
So I’ve agreed to advise on a project to make one of these therapy apps. SelfTherapy’s official mission is:
to leverage the power of AI to help humanity upgrade our EQ
Amen. I’ve used a very early version of the app, and it took me roughly five seconds to feel like I was dealing with a genuinely good mental health professional. You can join the waitlist here.7
1. Whatever else you think of ChatGPT, it is one hell of a universal textbook. I’ve learned so much high-quality technical knowledge this year. It makes Wikipedia look like the Idiot’s Guide to Nothing.
2. Recall the dæmons in His Dark Materials.
3. This, incidentally, is why Neuralink is a bad idea, beyond any safety concerns with shoving monkey-killing chips in our brains. The ability to control tech with thoughts is cool as shit. If I had quadriplegia, I’d volunteer for the human trials. The real worry is that Tesla/Neuralink will collect the most intimate data possible and use it in all the creepy ways the tech companies already use our data, but worse. Cf. RewindAI for an always-on assistant, Black Mirror style. Spooky. Funded by Meta. Ick.
4. Brainwashing by Kathleen Taylor is one of the best books I’ve read, about anything.
5. Check out Not Born Yesterday by Hugo Mercier.
6. And, soon enough, tech companies steering you via BCI.
7. I know the brothers who’ve developed it, and I’ll be advising on the philosophy and ethics of this kind of thing. Obviously I’m only on board because they’re not a giant tech conglomerate or a creepy government agency.