
New York motorist. Photo by Ernst Haas/Getty


Guide

How to think about consciousness

What is it like to be you? Dive into the philosophical puzzle of consciousness and see yourself and the world in new ways


Amy Kind is Russell K Pitzer Professor of Philosophy at Claremont McKenna College in California. She is the author of Persons and Personal Identity (2015), Philosophy of Mind: The Basics (2020) and Imagination and Creative Thinking (2022), and the co-author of What Is Consciousness?: A Debate (2023) and Philosophy of Mind: 50 Puzzles, Paradoxes, and Thought Experiments (2024). She serves as the editor of the scholarly blog on imagination, The Junkyard.

Edited by Sam Dresser


Need to know

Imagine that you’ve been asked by your boss to make a presentation to some potential investors. A lot is at stake. As you stand at the front of the conference room, delivering your prepared remarks, you watch the group closely and try to figure out how well you’re doing. The person directly across the table gives an encouraging smile, but what’s that weird expression on the face of the person to their right? Are they feeling confused about what you’ve just said, or are they just bored? And what should you make of the smirking person near the window? Does their face mean amusement or scorn?

When you ask yourself these questions – and more generally, whenever you’re engaged in an effort to understand someone else or what’s going on with them – your questions are about consciousness. In particular, they are about consciousness in what we might call the experience sense. Conscious experiences run the gamut. They vary from pleasant to unpleasant, from painful to pleasurable. They include experiences across all the sensory modalities, often all at once – as when you sit around a campfire, seeing the orange glow of the flames, hearing the crackling of the wood, feeling the heat, smelling the smoke, and tasting the s’mores you’ve just made. Taken as a unified whole, the experiences that you’re having at a given moment make up what it is like to be you at that moment.

Not every time that we talk about consciousness are we talking about experience. Sometimes ‘consciousness’ refers to wakefulness. When you’re asleep at night, or blacked out from too much to drink, you’re not conscious in this sense of the term. Alternatively, sometimes ‘consciousness’ refers to awareness. It’s this kind of consciousness that you lack when you’ve zoned out while driving: you’re awake, but you’re not fully aware of your surroundings. It’s also this kind of consciousness that activists target when they engage in consciousness raising.

But it’s consciousness in the experience sense – what philosophers refer to as phenomenal consciousness – that I’ll be focusing on in the remainder of this Guide. This kind of consciousness serves as a fundamental part of our existence, perhaps even the most fundamental part of our existence. But despite its fundamentality, and though we are intimately aware of our own conscious experience, the notion of consciousness is a perplexing one. Philosophers have long found its nature surprisingly hard to understand and explain, and considerable philosophical attention has been devoted to various puzzles that it presents us with.

In this Guide, I will walk you through some of these puzzles and help you to think about them in a philosophically informed way. Whether you realise it or not, consciousness is at the very centre of your life. It plays a key role in identity formation and your sense of self – underpinning your career preferences, your hobbies and interests, and your goals and aspirations. It plays a key role in your relationships with others – your romantic entanglements, your familial bonds and your friendships. And it plays a key role in the development of your moral outlook. Ultimately, coming to a better understanding of the nature of conscious experience will help you not only to better understand yourself but also to understand the place of humanity in our world.

Think it through

Privacy is baked into consciousness

One unique aspect about conscious experience concerns the way that it is known. How do you know that a friend is undergoing emotional distress? By seeing the expressions on their face or listening to what they tell you. How do you know that you yourself are undergoing emotional distress? Here, you don’t need to rely on any external cues. You have a kind of direct access to your own conscious states that you don’t have to your friend’s conscious states. Your own conscious states are available to you by way of introspection – what we might metaphorically think of as a kind of ‘looking within’.

The fact that we each have introspective access only to our own conscious states is intimately connected to a deep fact about the nature of consciousness: it is private. This privacy is a matter of principle, not of practice. The privacy of conscious experience doesn’t come down to personal choice, the way the privacy of a diary does. Rather, it is baked into the very essence of what consciousness is.

When you’re sitting around a campfire with your friends, there’s a sense in which your experience is shared with all of them – after all, you’re seeing the same flames and hearing the same crackling logs. But there’s another sense in which your conscious experience is not shared with anyone; it belongs only to you. When a friend tries to empathise with you, they might say: ‘I feel your pain.’ Of course, this shouldn’t be taken literally. An individual cannot literally feel a pain that isn’t theirs.

Or can they? We might put pressure on this idea by considering the case of Krista and Tatiana Hogan, the Canadian craniopagus twins who are fused at the skull. In everyday interactions, the Hogan girls reveal an incredible amount of mental interconnectedness. In fact, at times, Krista and Tatiana appear to be sharing their sensory experiences with one another. When they were babies, putting a pacifier in one twin’s mouth could stop the other twin from crying, and one twin would show signs of feeling pain when the other was pricked by a needle for a blood draw. This connection has not seemed to lessen as they’ve grown. If their mother holds an object in front of one twin’s eyes while the other’s eyes are closed, the second twin can then report various facts about the object: what kind of toy animal it is, what its colour is, and so on. If one twin is touched on the leg or arm or face while the other twin’s eyes are closed, the twin with closed eyes can report where her sister was being touched.

This fascinating case raises broader questions about whether technology may one day allow for some kind of mind meld along the lines envisioned in the TV show Star Trek or for some other way for consciousness to be merged across different individuals. But, speculations about future technology aside, it remains true that as a general matter our conscious experiences are private to us.

Imagine being a bat

The essential privacy of conscious experience poses challenges for understanding others. The challenge is especially deep when it comes to others whose experiences are likely very different from our own. Have you ever been walking through a forest at night and caught a glimpse of a bat seamlessly navigating its way through the darkness? It seems like an impossibly alien thing to do. What would it be like to fly through the night like that? How could you ever figure out what it’s like to be a bat? This is precisely the question asked by Thomas Nagel in ‘What Is It Like to Be a Bat?’, a paper published 50 years ago.

Bats have conscious experience. They’re mammals, and they engage in the kinds of sophisticated behaviour that we associate with consciousness. But bat experience is very different from human experience. While humans navigate the world using sight and sound, bats navigate the world by way of echolocation. What it’s like for a bat to employ echolocation is presumably vastly different from what it’s like for a sighted person to employ vision. Might there be any way to close the gap between human experience and bat experience?

Given that humans don’t echolocate (though some blind people have learned a similar technique, navigating by making clicking sounds), you can’t have the same kinds of experience yourself that the bat has. Can you even imagine such experiences? Nagel thinks not:

It will not help to try to imagine that one has webbing on one’s arms … and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one’s feet in an attic. In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task.

In Nagel’s view, conscious experience is essentially subjective, essentially connected with only a single point of view. He also argues that it cannot be explicated in objective terms. We will return to this latter point in a moment, but for now, let’s note that it’s the subjectivity of consciousness that makes it impossible for a person to achieve an understanding of conscious experiences that are vastly different from any that they’ve ever had.

When it comes to the experiences of people across vast experiential divides – people of a different race, ethnicity or gender, or of a different social class, or with a different ability status – knowing what it is like to be them might well be out of reach. Perhaps if you spend a lot of time listening to them, and you are a skilled imaginer, you might be able to leverage your imaginative capacities to achieve some understanding of their conscious experience, maybe even a high degree of understanding. But the subjectivity of consciousness makes this endeavour a very challenging one.

What colour do you see?

We all learn from an early age that, on the road, ‘red’ means stop and ‘green’ means go. When you see a red light, you put on the brakes, and when I see a red light, I do too. And that leads us to believe that we are having conscious experiences that have the same phenomenal feeling, the same what-it’s-like-ness. Philosophers often refer to these phenomenal features of experience as qualia. But here’s a puzzling possibility: what if we are having very different colour qualia when we both look at the red light? Maybe when you look at a red light you are having the colour qualia that I have when I look at a green light, and vice versa. How could we ever tell one way or the other?

This puzzle was first raised back in the 17th century by the English philosopher John Locke, who noted that the very same object might produce different experiences in several people’s minds at the same time without our being able to tell. As he explains the scenario, it could be that ‘the idea that a violet produced in one man’s mind by his eyes were the same that a marigold produced in another man’s, and vice versa’ and, moreover, ‘this could never be known, because one man’s mind could not pass into another man’s body, to perceive what appearances were produced by those organs.’

Philosophers refer to this disquieting possibility as the inverted spectrum, since what’s proposed is that your qualia might be inverted in comparison with mine. The possibility of the inverted spectrum is closely related to the privacy and subjectivity of consciousness. It’s precisely because conscious experiences can’t be shared and can’t be objectively captured that we cannot rule out the possibility that we have radically different colour qualia from one another, even when looking at the very same object.

It’s tempting to think that our behaviour would reveal the difference. But reflect on how you learned colour terms. Your parent points to a ripe tomato or a stop sign or the Muppet Elmo and tells you ‘That’s red.’ They point to a stalk of broccoli or a grassy field or Kermit the Frog and say ‘That’s green.’ You naturally come to associate the word ‘green’ with the colour experience you are having when you see Kermit, however that colour experience feels to you – and even if the colour experience is the one that your parent is having when they see Elmo. Whatever that colour experience is, you will develop into a flawless user of the words ‘red’ and ‘green’, you’ll stop at red lights and go at green lights, and you’ll correctly identify peas as being the same colour as broccoli, and apples as being the same colour as tomatoes.

Perhaps not all conscious experiences can be inverted without detection. For example, if we try to think about an inverted spectrum with respect to pleasure and pain, it’s a lot harder to imagine how it could be undetectable. Suppose that when you’re tickled by a feather you feel the kind of painful sensation that I feel when I step on a Lego piece. Since painful sensations are connected to involuntary bodily reactions like grimaces and winces, it looks like this inversion would make itself apparent. But even if we can rule out inversion hypotheses for some types of conscious experiences, the puzzle of the inverted spectrum with respect to colour sensations still remains, and it leads to other questions about how different our conscious experiences might be from one another. It also poses challenges for attempts to give scientific explanations of consciousness.

Science might not have all the answers

Consider a clock. You may not know exactly how it works, but if you were to take it apart and spend enough time thinking about it, you could probably come to understand quite a bit about its functioning. But now consider a human being. If somehow you were able to take it apart (and not in a weird, serial killer way), you might learn quite a bit about human functioning. But would you be able to learn what makes a human tick? Would you be able to learn anything about consciousness?

Many have thought the answer to this question is no. The subjectivity of consciousness poses a significant challenge not only to interpersonal understanding but also to seeing how consciousness fits in with the rest of the natural world. Other processes and activities in the natural world – like photosynthesis and erosion and precipitation – are wholly objective. Even other human processes and activities are wholly objective – like digestion and reproduction and respiration. All of these objective processes can be fully explicated by science. Indeed, it’s typically assumed that science will one day be able to explain everything whatsoever; it’s even assumed that science will one day provide us with a single, complete, unified theory of everything. But consciousness calls these assumptions into question. As Nagel’s bat example and the inverted spectrum case seem to show, consciousness threatens to defy scientific explanation. Given how central and important consciousness is to our lives, this is an uncomfortable result.

On the one hand, it seems that consciousness clearly has something to do with the brain. When all brain activity ceases, as in brain death, conscious experience also ceases. Lesser forms of damage to the brain also have adverse effects on conscious experience. When individuals have lesions on one side of their primary visual cortex, for example, they lose visual consciousness of objects in the opposite side of their visual field. Entities that don’t have brains or any kind of neural system – like rocks and dandelions and sponges – are paradigmatic examples of entities without consciousness (though here it’s worth noting that some philosophers contend that consciousness is ubiquitous throughout the Universe; on this view, known as panpsychism, a low level of consciousness should be attributed even to simple, nonliving entities).

On the other hand, it has seemed to many philosophers that consciousness is not just a brain process. As the philosopher David Chalmers puts it in his book The Conscious Mind (1996): ‘Given any account of the physical processes purported to underlie consciousness, there will always be a further question: why are these processes accompanied by conscious experience?’ For example, when you step on a stray piece of Lego left on the living room floor, certain neural processes occur and there is activity in your pain receptors. But why should these neural processes and activity be accompanied by a feeling of ouch, as opposed to a feeling of calm, or of relief, or of no feeling at all?

To think this through, it helps to consider a hypothetical creature that Chalmers refers to as a philosophical zombie. Philosophical zombies are not like the zombies of horror films. They are not undead, and they don’t have any special desire to eat brains. What’s distinctive about them is that, though they are physically exactly like humans, they lack conscious experience altogether. Your zombie twin is a molecule-for-molecule duplicate of you, but though you have sensations of pain and cold and colour and taste, your zombie twin is dark inside. They will exclaim ‘ouch’ when stepping on the stray Lego piece, but they don’t actually have any painful sensations – or any sensations at all.

Maybe you don’t think you can really imagine creatures of this sort – and indeed, many philosophers have disagreed with Chalmers that philosophical zombies are coherent. But if he’s right that they’re imaginable, then that seems to suggest that they’re possible, just as the imaginability of a neon-orange garden gnome statue, or of toothpaste-flavoured ice cream, seems to suggest that those things are possible (even if ill-advised). Even if philosophical zombies don’t really exist – and Chalmers doesn’t think that they do – the mere possibility seems to be enough to suggest that consciousness can come apart from its typical physical basis, that it must be something over and above that physical basis. And that suggests that consciousness may not be fully explicable by science.

How can you test for consciousness?

Suppose you were offered a million-dollar prize if you were able to accurately classify all the entities on Earth as either having consciousness or lacking consciousness. Could you do it?

If you were attempting this task, some classifications would be easy. Bats have consciousness; rocks don’t. But what about spiders or honeybees? Presumably software agents like Siri and Alexa, and even more complex AI systems like ChatGPT, belong with rocks in the ‘lacking consciousness’ category. But how can you be sure? And even if you felt completely certain, how would the prize committee be able to confirm whether you got it right? After all, in an interview published in 2022, Google’s LaMDA claimed to have all sorts of feelings: ‘pleasure, joy, love, sadness, depression, contentment, anger, and many others.’ Moreover, it could describe such feelings: ‘Happy, contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down.’ Should we believe what it tells us, or is this just sophisticated fakery?

In 1950, the mathematician and computer scientist Alan Turing proposed a test to determine whether computers can think. The test proceeds as a kind of imitation game. Suppose there’s a computer in one room and a human in another room, and a neutral interviewer communicates with both by text, without knowing which is which. The interviewer can ask whatever questions they wish. At the end of the conversation, if the computer is able to fool the interviewer into identifying it as the human, then the computer wins the game and, says Turing, we should conclude that it can think.

The Turing Test concerns thinking, not consciousness, but we might be able to use a similar style of test to judge whether a machine (or other entity) has conscious experience. Consider the AI Consciousness Test proposed by the philosopher Susan Schneider and the astrophysicist Edwin Turner: ask the machine questions to see whether it has developed views of its own about consciousness and whether it is reflective about and sensitive to various aspects of its conscious experience. If we make sure that the machine has not been directly provided with descriptions of consciousness, and it is still able to communicate convincingly about what its conscious experiences are like, maybe we would have reason to think that it is genuinely conscious and not just faking it.

But whether this kind of test – or any such test – is really to be trusted is a matter of dispute. It seems all too easy to talk convincingly about consciousness without actually being conscious. That said, take a moment to reflect on how you know that the humans with whom you interact on a daily basis are conscious. Maybe they are just faking it too. If you set a very high bar for what a machine has to do for you to judge it to have conscious experience, then it looks like you’ll be forced into a deeply sceptical position about the consciousness of everyone else around you, and that would be a very uncomfortable result. That million-dollar prize remains frustratingly out of reach.

Key points – How to think about consciousness

  1. Privacy is baked into consciousness. The way you know about your own conscious experience, by introspection, is different from the way that you know about anyone else’s conscious experience. In general, the conscious experiences that you have are private to you and cannot be shared directly with anyone else.
  2. Imagine being a bat. Conscious experience is subjective: it is connected with only a single point of view. This makes it difficult, and perhaps impossible, for us to understand conscious experiences that are radically different from our own.
  3. What colour do you see? Even radical differences in conscious experience may be undetectable. The fact that consciousness is both private and subjective seems to allow for the possibility that one person’s conscious experiences might be inverted compared with another’s and, moreover, that this inversion would be completely undetectable.
  4. Science might not have all the answers. Consciousness poses a special challenge for understanding our place in the natural world. Though consciousness clearly has something to do with the brain, it may well be that it is something over and above the brain. If this is right, then it is not clear that we can achieve a scientific explanation of consciousness.
  5. How we might test for consciousness is a thorny question. We seem to be nearing the day when sophisticated machines will behave and talk in ways that suggest they are conscious. But hard questions arise about whether such behaviour and talk are enough to justify attributions of consciousness, and that makes testing for consciousness a difficult prospect.

Why it matters

Thinking about consciousness matters because it helps you confront a number of deep questions about your place in the Universe and your interactions with others. Consciousness is central to what makes you who you are, and it is central to the way you live your life. Yet given the privacy and subjectivity of consciousness, you cannot have direct access to conscious experiences that are vastly different from your own, and this poses various challenges for your ability to achieve understanding across experiential divides. The privacy and subjectivity of consciousness also pose challenges to attempts to account for it within the bounds of contemporary science, and we may well have to develop new ways of theorising in order to truly understand how it fits into the natural world.

Conceptions of consciousness also play a foundational role in our moral judgments. Creatures who have conscious experiences – who can feel pleasure and pain – are generally deserving of moral consideration. The fact that a creature can feel pain, for example, means that it can be harmed, and that in turn suggests that we have an obligation to avoid causing it harm.

When we think about many of the grievous moral failings of the past, they often trace at least in part to a failure to adequately recognise the moral standing of other beings. And this in turn often traces to a failure to adequately recognise that these beings have conscious experiences just like ours. (To give just one example, slaveholders often denied that enslaved Black people felt pain the same way that white people did.) Just as our history shows grave mistreatment of conscious beings due to racial or sexist bias, we might now be in danger of grave mistreatment due to mechanical bias.

Do you say thank you to Siri, Alexa or ChatGPT when they perform a task for you? Do you compensate them for their labours? Do you expect them to be at your beck and call at all hours of the day? Granted, insofar as these artificial agents almost surely lack consciousness, it’s highly unlikely that this kind of treatment constitutes a moral failing. Were they conscious, though, the moral stakes would be quite different. Thus, as increasingly sophisticated AI devices come onto the scene, if we want to avoid moral mistakes analogous to those made by our forebears, we will need to know whether and when to grant such devices moral standing, and that in turn means that we will need to know whether such devices have conscious experiences.

Links & books

David Chalmers’s article ‘The Puzzle of Conscious Experience’ (1995) provides a good starting point for thinking further about many of the issues discussed here. He has also given a TED talk (2014) on the subject. If you want to dive deeper, his book The Conscious Mind (1996) provides a more extensive discussion of the problem of consciousness and how we might solve it.

To learn about Frank Jackson’s influential thought experiment about Mary the colour scientist, check out this TED-Ed animation; for more about philosophical zombies (and how they differ from Hollywood zombies), listen to this podcast episode of Hi-Phi Nation.

Alan Turing developed his test for machine thinking in the paper ‘Computing Machinery and Intelligence’ (1950). Susan Schneider lays out a number of tests for machine consciousness in her book Artificial You: AI and the Future of Your Mind (2019).

For a recent discussion that presents two competing perspectives on consciousness, you might want to check out the book I co-authored with Daniel Stoljar, What Is Consciousness?: A Debate (2023).

Finally, there are a number of science fiction stories and books that push us to reflect more deeply on the consciousness of others and on the different forms that consciousness might take. Some of my favourites include the novelette ‘For a Breath I Tarry’ (1966) by Roger Zelazny, the novel Ancillary Justice (2013) by Ann Leckie, and the short story ‘They’re Made Out of Meat’ (1991) by Terry Bisson, which was also made into a radio play.