“Consciousness,” said Diane Ackerman, “is the great poem of matter”. And the hard problem of consciousness is about how some matter in our skulls is at once a part of the world, like everything else, and yet also has an experience of the world.
I think the “solution” to the hard problem of consciousness will be a unique kind of solution, because it won’t be accepted by most people. If, tomorrow, some researcher wrote down an elegant explanation of consciousness, drawing on the latest evidence from neuroscience and psychology, which somehow accounted for the emergence of mind from matter, it simply wouldn’t convince people, most of whom see consciousness as inherently inexplicable or irreducible. The solution to the hard problem might already be out there.
More convincing than some theory written up in an academic journal would be a demonstration. If someone could use the explanation to build a conscious robot, they would surely get the sceptics’ attention. This doesn’t seem likely in the short term.
In the meantime, a demonstration could be made with some kind of brain stimulation technique. Imagine someone had a theory of consciousness that said it was what happens when Brain Region A does so-and-so at the same time as Brain Region B does such-and-such. Cartoonishly simple, but there are serious theories that amount to this.
So you visit the lab and a neuroscientist hooks you up to a deep brain stimulation machine. (These exist already, but they're crude.) The neuroscientist can target Brain Regions A and B and can inhibit activity in those regions, effectively switching them on and off. Now they toggle you back and forth between different states:
1. Brain Regions A and B activated. You begin by experiencing full waking consciousness, as normal.
2. A and B inhibited. You temporarily go unconscious, while still being “awake” with eyes open and not slouching — like having a concussion or a petit mal seizure.
3. Just A inhibited. This might (according to this hypothetical theory of consciousness) result in some semi-lucid state without any perception: you’re able to think in words and concepts, but you’re in a nowhere place, a dark mindspace.
4. Just B inhibited. You might experience normal perception, sights and sounds, but lose your use of language or ability to label anything.1
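To make the hypothetical a little more concrete, here is a toy sketch in Python of the theory’s prediction table. The region names and predicted states are purely illustrative — stand-ins for whatever a real theory and a real stimulation rig would specify:

```python
# Toy formalisation of the hypothetical two-region theory sketched above.
# Nothing here corresponds to a real theory or device; it only illustrates how
# a component-wise theory maps stimulation settings onto predicted experiences.

PREDICTIONS = {
    # (region_a_active, region_b_active): predicted state of awareness
    (True, True): "full waking consciousness",
    (False, False): "unconscious, though 'awake' with eyes open",
    (False, True): "thought without perception: a dark mindspace",
    (True, False): "perception without language or labels",
}

def predicted_state(region_a_active: bool, region_b_active: bool) -> str:
    """Return the experience the toy theory predicts for a stimulation setting."""
    return PREDICTIONS[(region_a_active, region_b_active)]

# Example: inhibit only Region A.
print(predicted_state(region_a_active=False, region_b_active=True))
# -> thought without perception: a dark mindspace
```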
If you were subjected to this first-person experience of different grades of consciousness, you would at least consider the theory to be a good explanation of which brain regions do what in their contribution to conscious awareness. And once consciousness was broken down into components, it might seem less intuitively mysterious how a bunch of chemical reactions in the brain makes sentience.
To me, this is the future of consciousness research. Once you can manipulate a phenomenon, I think you understand it in a more important way than merely describing it.
But in the absence of such vivid demonstrations, people’s beliefs about what consciousness is trump all efforts at understanding. See the current debate over whether AI is conscious and how we’d know.
The problem of other minds
It is impossible to prove the absence of a property to someone who believes in it. So consciousness as a property — as some quality inherent to certain matter or processes — will remain an unfalsifiable theory. You can literally never prove that a thermostat, a shrimp, a P-zombie, a chatbot, the philosopher David Chalmers, or a carbon atom doesn’t contain some consciousness. This means panpsychism — the idea that consciousness permeates all things like some fundamental field — can’t be disproved if consciousness is a property.
Consciousness as an experience, meanwhile, can never be proved or disproved. If consciousness can only be witnessed by the conscious subject themselves, then you can never know if I’m conscious and I can never know if you are. We can never know if animals, AIs, or any being other than ourselves is conscious. No other phenomenon in the universe shares this feature; I consider this a weakness of the experiential view of consciousness. It’s encapsulated in the common definition of consciousness as: what it’s like to be something.2 If there's something it is like to be a bat, a bat is conscious; if there’s something it is like to be an algorithm, it’s conscious. This presupposes what it’s meant to explain. The sense of privately experiencing something is precisely the extraordinary thing we’re trying to unpack. And now you’re telling me conscious experience can only be proved by… conscious experience?
If, however, consciousness is an ability, a competency, then we can at least have a positive test. Anything with the ability to do X can be counted as conscious — provided we find some X that is associated only with conscious cognition.
The Turing Test is the most famous example of a consciousness-as-ability test. There are different versions.3 The contemporary Turing Test determines whether an AI can, via a chat window interface, convince human judges that it too is a human.
Unfortunately, Turing chose verbal behaviour as the flagship ability of conscious humans. But verbal behaviour is fakeable. This is because the content of words is completely arbitrary, untethered from what is real. Once a human or a machine has the ability to produce words in some recognisable grammar, it can say anything and doesn’t need to back this up with actions. Faking a statement is, once you can talk at all, incredibly simple. As Hamlet says, “‘Tis as easy as lying”.
Sure enough, the Turing Test has failed the AI test. Large language models can now pass the Turing Test. But instead of declaring them conscious, we’ve generally decided the test is unfit. It seems you can have a language-using system, saying it’s conscious, that is utterly unconscious.
There are other problems with current ability-based tests. There are behaviours we see in nonhuman animals, who cannot speak, that we take as evidence of consciousness: tool use, sign language, planning. Robots can perform these actions. Nobody thinks robots are conscious.
What we want from a consciousness-as-ability test is some behaviour whose outer signs cannot be faked in the absence of the inner phenomenon.
Hamlet, again, encounters this problem. At the beginning of the play he’s in mourning — wearing black, moping about — because his father is dead. His mother, a widow who has very rapidly remarried the dead king’s brother, asks Hamlet why he “seems” so sad. Hamlet sneers that it’s not a matter of seeming. Anyone could fake the mourning clothes, the sullen looks, but he has a sincere, private feeling:
But I have that within which passeth show,
These but the trappings and the suits of woe.
Consciousness is internal, private, genuine. It is not merely the trappings that are on show. But we can only detect or witness the trappings. We have no way to witness what is within other minds.
Shakespeare solved the problem of other minds with a cheat: the dramatic convention of the soliloquy. The audience is given public access to what is private. Otherwise, the only way Hamlet can communicate his inner beliefs to other characters is through actions, which cannot be faked.4
The same goes for characters in plays without soliloquies, in novels written from a third-person limited perspective, in films without voiceover. And in real life — another genre bereft of others’ inner monologues — we have only actions to judge.
Actions (are said to) speak louder than words. In business and politics, someone is more credible if they have “skin in the game”. In evolution, this is called costly signalling — the peacock with a splendid tail must actually be fit if he can squander bodily resources on an unwieldy bit of conspicuous biological consumption. In poker, we’re apt to believe another player holds certain cards if their betting supports it.
But what behaviour could we take as a hard-to-fake signal, as a bet you wouldn’t make unless you were actually conscious?
Play’s the thing
The only test I can think of, the only ability that seems to be pointless unless something is conscious, is this: a conscious creature will, unbidden, do things purely for fun.
Crows slide down slopes. Dolphins ride the bow waves of ships. Chimps bathe in pools and play with their reflections. I bet those animals are, at least in those moments, conscious.
Time and energy spent on delight is evolutionarily improbable unless the creature is actually aware of its own existence. A beast that allocates time and energy to purely fun activities forgoes opportunities to eat or mate. Natural selection is a sieve through which only the most efficient organisms pass. The chillers, the bohemians, the Dudes are not long for this world.
And yet. The moonshot-style rise of Homo sapiens is partly down to our propensity to occasionally just do weird or frivolous things.
Our enjoyment of “aimless” leisure is seen by most onlookers from evolutionary psychology as a byproduct. We have these abilities for improvisational behaviour, for creativity, for adopting new customs, for slow thinking. And in modern times, when we get three meals a day without trying, we have an excess of time and energy, which we discharge on what are, from evolution’s point of view, parlour games: science, art, business, and religion.
Fun is the most unjustifiable expense before the auditor of natural selection. It would never be worth it. Until it is. Then it pays off big time precisely because of how costly it is.5
But why is fun the behavioural correlate of consciousness? Why can’t fun exist in a world of unfeeling robots?
Here’s my speculation. Although I think our sense of subjective time, the flow of experience, is in important ways illusory, it does attune us to a real feature of reality that would otherwise escape our senses. The world really is unfolding, at this gross scale, in one direction: from a low-entropy beginning toward ever higher entropy. This temporal asymmetry is what accounts for everything interesting in the world: cause and effect, stable forms that endure, memory, evolution, the growth of complexity. Our conscious awareness could be spoken of as a sense, not exactly of time (our mental timekeeping isn’t all that accurate), but of the universal flow. We’re clued in to the Situation in General. Only a being who understands, even for misguided reasons,6 that there is something going on — time passing, the present moment happening — only they would take such a moment to revel in their existence, to be here now. Only for these lucid ones, in their confected sense of now, would it be worth spending energy on play.7
Again, the objection is: couldn’t we program a computer to do things just for fun? Couldn’t we code in some caprice or whimsy? I assume so. If you have a non-mystical view of intelligence and cognition, then it’s hard to deny that, in principle, you can program a computer to do X, as long as X doesn’t violate the laws of physics. All the cognition we do is apparently physically possible. Ergo, any of our faculties could be programmed into a machine, one day.
We need to add some selection pressure. The entity being tested needs to have skin in the game. Sadly this means the fun test only works on candidates who made it the hard way. A sufficiently ingenious automaton, built for purpose, could fool any behavioural test. But if an intelligence that has to make it in the world under its own steam takes a hard-won moment in this Darwinian shitfight of a universe to just smell the roses, then I believe it.
It’s true that play isn’t “just for fun”, that it pays off in the short run too: birds and mammals, whose young indulge in play, have filled niches the fish and reptiles couldn’t. But it’s still a deluxe item. And it only works if it isn’t faked. There is therefore nothing more serious than fun, nothing more real than play.
1. These states aren’t hypothetical. The brain stimulation technology is, but one can do slower but equivalent experiments on oneself using psychedelics, meditation, lucid dreaming, etc., as indeed I have. It’s possible to break up the supposedly indivisible features of conscious experience to see which are necessary or sufficient for what we’d normally call being conscious. Even these self-experiments aren’t necessary. Scattered around the literature on traumatic brain injury, stroke, epilepsy, split-brain patients, hallucination, and sleepwalking is all the evidence we need for a scientific theory of consciousness that is philosophically respectable. For example, of the states of awareness described above, 2 is NREM sleepwalking, 3 is lucid sleeping or “sleep thinking” (as opposed to lucid dreaming), and 4 is a state accessible through meditation, acid, ketamine, etc.
2. See Erik Hoel’s great post on definitions of consciousness. I agree with everything he says there except his view that the what it’s like to be definition is a good avenue to understanding.
3. Turing’s original idea was curiously gendered. The test required an AI to convincingly act as a male imitating a female — the “imitation game” — on the surmise that this required a sophisticated understanding of human psychology and language.
4. After years of teaching Hamlet, I’m still impressed by the layers of irony. Hamlet refrains from acting until the end of the play (though in the meantime he “acts” — as in pretends to be — mad). He refrains from killing Claudius while he prays because it will send him to heaven; then Claudius soliloquises that his thoughts didn’t match his prayers (“My words fly up, my thoughts remain below”) — Hamlet is fooled by lying. When he finally does something unequivocal, kills Claudius and Laertes with his sword, it can’t be faked. (Indeed the sword-fighting is the lamest part of any production because you can’t sword fight for real on stage.) Yet the meaning of his actions is misconstrued. When Fortinbras turns up at the end of the play, he sees dead bodies everywhere and concludes Hamlet must have been a hell of a warrior, a martial prince. This after Hamlet spent the whole play decrying war and refraining from battle. In death he will be misunderstood: merely the trappings and the suits of woe.
5. See also Slavoj Žižek, Less Than Nothing, p. 652.
6. It’s akin to our sense of yuckiness or contamination. There isn’t actually an immaterial gross stuff that spreads via touch and makes other things gross to eat. But this sense clues us in to something real that is beyond our senses: microbes and even nano-scale viruses.
7. EDIT [15-09-23]: In my haste, chopping this piece down to size, I removed an important footnote. Nick Humphrey’s book Soul Dust (2011) was highly influential for me. I strongly recommend it. Of the countless books written about consciousness, I think it’s the best. The important point is that birds and mammals use play when they’re young to learn flexible behaviours. But to get them to play you need this new kind of awareness we call consciousness. And this obviously persists beyond childhood, as does play, and offers bonus abilities too.