The Eliminativist Approach to Consciousness

First written: 9 Aug. 2014; last update: 9 Jan. 2018

This essay explains my version of an eliminativist approach to understanding consciousness. It suggests that we stop thinking in terms of "conscious" and "unconscious" and instead look at physical systems for what they are and what they can do. This approach dissolves some biases in our usual perspective and shows us that the world is not composed of conscious minds moving through unconscious matter; rather, the world is a unified whole, with some sub-processes being fancier and more self-reflective than others. I think eliminativism should be combined with more intuitive understandings of consciousness to ensure that its moral applications stay on the right track.

Introduction

"[Qualia] have seemed to be very significant properties to some theorists because they have seemed to provide an insurmountable and unavoidable stumbling block to functionalism, or more broadly, to materialism, or more broadly still, to any purely 'third-person' objective viewpoint or approach to the world (Nagel, 1986). Theorists of the contrary persuasion have patiently and ingeniously knocked down all the arguments, and said most of the right things, but they have made a tactical error, I am claiming, of saying in one way or another: 'We theorists can handle those qualia you talk about just fine; we will show that you are just slightly in error about the nature of qualia.' What they ought to have said is: 'What qualia?'"
--Daniel Dennett, "Quining Qualia"

My views on consciousness are sometimes confusing to readers, so I try to explain them in different ways using different language. I myself also try to imagine the situation from different angles. Three main perspectives that I've advanced are

  • reductionism (mainly with a functionalist flavor): consciousness is certain algorithms that physical processes perform to varying degrees
  • eliminativism: the focus of the current essay
  • panpsychism: consciousness is intrinsic to computation, having different flavors for different computational systems.

Pete Mandik has a nice video explaining the distinction between reductionism and eliminativism.

That said, I think all three of these approaches are substantively the same, and they differ mainly in the words they use and the imagery they evoke. These differences may have practical consequences insofar as our moral intuitions depend on how we think about consciousness, but what the viewpoints actually say about the world is identical in each case. To make this clear, consider the classic analogy of élan vital. We can pursue any of the following options with it:

  • reduce élan vital by saying that it is the properties that define life
  • eliminate élan vital as an imprecise concept
  • adopt a kind of "pan-vitalism" (or hylozoism) theory according to which everything in the universe has traces of life -- after all, even atoms show "behaviors" in response to stimuli, move toward equilibrium, etc.

A similar situation obtains with respect to consciousness.

In "Consciousness and its Place in Nature", David Chalmers recognizes that functionalist-style reductionism and eliminativism are ultimately the same:

Type-A materialism sometimes takes the form of eliminativism, holding that consciousness does not exist, and that there are no phenomenal truths. It sometimes takes the form of analytic functionalism or logical behaviorism, holding that consciousness exists, where the concept of "consciousness" is defined in wholly functional or behavioral terms (e.g., where to be conscious might be to have certain sorts of access to information, and/or certain sorts of dispositions to make verbal reports). For our purposes, the difference between these two views can be seen as terminological.

Chalmers classifies panpsychism as a Type-F monist view, but I think that functionalist panpsychism is a poetic way of expressing a type-A materialism. That which panpsychism says is fundamental to computation is not a concrete thing that could conceivably be absent; rather, it's a way of describing how the rhythms of physics (necessarily) seem to us.

Motivating eliminativism

Daniel Dennett is often charged with denying consciousness. Some critics of his book Consciousness Explained suggest that its title is missing a word, and it should actually be called Consciousness Explained Away. One possible reply to this allegation is that consciousness is being explained, but it's just not what people thought it was. As Dennett says in Fri Tanke (2017) at 32m30s: "I'm not saying that consciousness doesn't exist. I'm just saying it isn't what you think it is." But I suppose another possible response is to say, "Okay, what if I did explain consciousness away? What would follow from that?" I can imagine an Internet meme of Morpheus saying: "What if I told you that we should get rid of the idea of 'consciousness'?"

Maybe "consciousness" is a word with so much metaphysical baggage and philosophical confusion that it would be best to stop using it. Marvin Minsky thinks so and adds:

now that we know that the brain has [...] hundreds of different kinds of machinery linked in various ways that we don't understand, it would be a wonderful coincidence if any of the words of common-sense psychology actually described anything that's clearly separate, [...] like "rational" and "emotional" [as] a typical dumbed-down distinction that people use.

"Consciousness" does actually point to some helpful distinctions even given a reductionist world view -- just as the contrast between rational and emotional thinking does actually have some grounding in psychology. But "consciousness" can point to a lot of distinctions at once depending on what the speaker has in mind. Maybe we should embrace the eliminativist program and replace "consciousness" with more precise alternative words.

To be clear, my version of eliminativism does not say that consciousness doesn't exist. Pace Galen Strawson, it does not "deny the existence of the phenomenon whose existence is more certain than the existence of anything else". Rather, eliminativism says that "consciousness" is not the best concept to use when talking about what minds do. We should replace it with more specific descriptions of how mental operations work and what they accomplish. To again give an analogy with élan vital: It's not that life doesn't have a sort of vitality to it; it does. Rather, there are more useful and specific ways to talk about life's vitality than to invoke the élan vital concept.

Dennett echoes this in "Quining Qualia":

Everything real has properties, and since I don't deny the reality of conscious experience, I grant that conscious experience has properties. I grant moreover that each person's states of consciousness have properties in virtue of which those states have the experiential content that they do. That is to say, whenever someone experiences something as being one way rather than another, this is true in virtue of some property of something happening in them at the time, but these properties are so unlike the properties traditionally imputed to consciousness that it would be grossly misleading to call any of them the long-sought qualia.

Rothman (2017) describes Dennett's view during a debate: "He told Chalmers that there didn’t have to be a hard boundary between third-person explanations and first-person experience—between, as it were, the description of the sugar molecule and the taste of sweetness. Why couldn’t one see oneself as taking two different stances toward a single phenomenon? It was possible, he said, to be 'neutral about the metaphysical status of the data.'"

Keith Frankish:

I don't think that consciousness is an illusion [...]. The question is what's involved in having those experiences and those sensations. And I think it does [...] beg the question to say that it involves having states with qualia.

Rob Bensinger:

I came to believe that ‘consciousness’ is not a theory-neutral term. When we say ‘it’s impossible I’m not conscious’, we often just mean ‘something’s going on [hand-wave at visual field]’ or ‘there’s some sort of process that produces [whatever all this is]’. But when we then do detailed work in philosophy of mind, we use the word ‘consciousness’ in ways that embed details from our theories, our folk intuitions, our thought experiments, etc.

As we add more and more theoretical/semantic content to the term ‘consciousness’, we don’t do enough to downgrade our confidence in assertions about ‘consciousness’ or disentangle the different things we have in mind. We don’t fully recognize that our initial cogito-style confidence applies to the (relatively?) theory-neutral version of the term, and not to particular conceptions of ‘consciousness’ involving semantic constraints like ‘it’s something I have maximally strong epistemic access to’ or ‘it’s something that can be inverted without eliminating any functional content’.

Eliminativists remind us that our intuitions about science are not well refined. It's commonly the case that naive notions of physics, biology, and other sciences need to be replaced by more correct, if less intuitive, understandings. Why should it be different in the case of consciousness? People may feel as though they're experts on subjectivity because they are conscious, but each person is just one of many conscious minds in the universe. I don't see how this position qualifies one as an expert on consciousness any more than knowing your way around your house qualifies you as an expert on physical space in the galaxy. In any case, the parts of our brains that talk don't even have a clear understanding of much of what goes on in our own heads.

In The Scientific Outlook (1931), Bertrand Russell wrote:

Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say.

Eliezer Yudkowsky remembers that his father

said that physics was math and couldn't even be talked about without math. He talked about how everyone he met tried to invent their own theory of physics and how annoying this was.

In a similar way, I claim we can't understand subjectivity without neuroscience (and physics more generally). And while everyone seems to have a pet theory of consciousness (including me plenty of times), these can't substitute for neuroscience.

Our brains are bad at using intuitions about location and properties of an object to describe quantum superpositions. In a similar way, our language of "consciousness" and "qualia" is not suited to precisely describing what happens in the brain.

Thinking physically

The language of "consciousness" and "qualia" corresponds to what Philip Robbins and Anthony I. Jack call the "phenomenal stance". In contrast, the eliminativist position corresponds to what Dennett calls the "physical stance".

In breaking our confusions about consciousness, it's helpful to picture the world purely using the physical stance. Stop thinking about raw feels. Think instead about moving atoms, flowing ions, network connectivity, and information transfer. Imagine the world the way neuroscience describes it -- because, in fact, this is a relatively precise account of the way the world is. If it seems as though everyone should be a zombie, don't worry about that for now.

Compare an insect with a human. Rather than imagining the human as conscious and the insect as not, or even the human as just more conscious than the insect, instead picture the two as you would a professional race car versus a child's toy car: as two machines of different sizes, complexities, and abilities that nonetheless share some common features and functionality.

Compare your brain with another part of your nervous system -- say the peripheral nerves in your hand. Why is your brain considered "conscious" and your hand not? It's because only your brain is capable of generating explicit, high-level, and verbalizable thoughts like remarking on its own consciousness. Your hand is also doing neural operations that resemble neural operations in your brain. It's just that the hand's operations don't get reported via memories and speech unless they "become famous" within your brain so that they can be thought about and verbalized.

The eliminativist approach encourages us to stop thinking about neural operations as "unconscious" or "conscious". Instead, in humans, think about the pathway along which neural information travels in order to reach your high-level thinking, speech, action, and memory centers. If the information fails to get there, we call it "subliminal" or "unconscious". If it does get there, we call it "conscious" because of the more pronounced effects it can have on other parts of the brain and body. Thus, for humans, we could replace the loaded "conscious" word with something like "globally available".
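
To make this relabeling concrete, here's a toy Python sketch of the distinction. Every detail -- the threshold, the subsystem names -- is my own invention for illustration, not a claim about actual neural routing: a signal counts as "globally available" only if it gets broadcast to the subsystems handling speech, memory, and planning.

BROADCAST_THRESHOLD = 0.6  # arbitrary cutoff standing in for "winning the competition for attention"

def process_signal(label, salience):
    # Toy routing: every signal gets local processing, but only sufficiently
    # "famous" signals are broadcast to speech, memory, and planning.
    recipients = ["local sensory processing"]
    if salience >= BROADCAST_THRESHOLD:
        recipients += ["speech", "memory", "planning"]
        return label, "globally available", recipients
    return label, "subliminal", recipients

print(process_signal("bright flash", 0.9))
print(process_signal("faint pressure on the hand", 0.2))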

There are many more details behind what the brain does that we ordinarily think of as consciousness. The best way to get an intuitive sense for them is to learn more neuroscience, perhaps from a popular book. While I became convinced that non-reductive accounts of consciousness could not be right based mainly on philosophical arguments, it was after reading neuroscience that I actually internalized the eliminativist world view. The gestalt shift toward eliminativism requires time and reading to sink in.

Eliminativist sentience valuation

Picturing systems physically gives us fresh eyes when deciding what we value. When we adopt the common-sense phenomenal stance, we see a world in which discrete minds move about in otherwise unconscious matter. When we adopt the physical stance, we see various kinds of matter interacting with one another. Some of those matter types (e.g., animals and computers) are more dynamic and sophisticated than other types, but there's a fundamental continuity to the picture among all parts of the system. And we can see that while the system can be sliced in various ways to aid in description and conceptualization, it is ultimately a unified whole.

Ethics in this world view involves valuing or disvaluing various operations within the symphony of physics to different degrees. Some philosophers assign value based on the beauty, complexity, or interestingness of the physics that they see. Those who value conscious welfare instead aim to attribute degrees of sentience to different parts of physics and then value them based on the apparent degree of happiness or suffering of those sentient minds. Because it's mistaken to see consciousness as a concrete thing, sentience-based valuation, like the other valuation approaches, involves a projection in the mind of the person doing the valuing. But this shouldn't be so troubling, because metaethical anti-realists already knew that ethics as a whole was a projection by the moral agent. The eliminativist position just adds that the thing being (dis)valued, consciousness, is itself something of a fiction of the moral agent's invention.

Actually, calling "consciousness" a fiction is too strong. As noted above, "consciousness" refers to real distinctions -- e.g., in the case of human-like brains, we may consider global access to and ability to report on information as important components of consciousness. I just mean "fiction" in the same sense as nations, genders, or tables are fictions; they're constructions of the human mind that help conceptually organize physical phenomena.

I should note that making sentience evaluations based on knowledge of physical processes doesn't mean making superficial evaluations. A humanoid doll that blinks might look more conscious than a fruit fly, but the 100,000 neurons of the fruit fly encode a vastly more complex and intelligent set of cognitive possibilities than what the doll displays. Judging by objective criteria given sufficient knowledge of the underlying systems is less prone to bias than phenomenal-stance attributions.

Moreover, there's a sense in which nothing ethically important would be left out if we eschewed the idea of "consciousness" and only thought in terms of physical processes. In principle, it would still be straightforward to draw ethical distinctions between so-called "conscious" and so-called "unconscious" human minds, because the brain-activity patterns of the two are clearly distinct. We could still hear what people had to say about the intensity of their emotional feelings and use those reports to make judgments. We could watch their brains and see the neural correlates of those reports. We could develop intuitions for what sorts of physical processes lead to attestations of pleasure and pain, and then we could generalize those kinds of algorithms so as to see them in other places. If we in principle had access to all the operations of a mind, there would be no thought or feeling that would go unnoticed. This approach would actually be more powerful at locating sentience even in ourselves than our subjective feelings are, since the parts of our brains that develop explicit thoughts and decide high-level actions don't have access to most of the neural operations taking place at the lower levels of the brain or other parts of the body, just like they don't have access to the minds of other people or animals. Knowledge of the physical operations taking place in our minds and other minds makes it possible to value processes of which we would have previously been unaware and which may have previously "suffered" in silence.

Comments on Sneddon et al. (2014)

Sneddon et al. (2014) offer a very helpful table (Table 2, p. 204) for assessing pain in different animal taxa.

The authors say (p. 209): "Our summary of the evidence supports the conclusion that many animals can experience pain-like states by fulfilling our definition of pain in animals, although we accept that 100% certainty cannot be established for any animal species." In other words, Sneddon et al. (2014) speak as though "pain experience" is a thing that may or may not be present, and all we can do is make informed guesses about its existence.

To an eliminativist, this is wrongheaded. Rather, "pain experience" is a label we give to the suite of behavioral and functional responses that organisms have to aversive stimuli. There's no binary answer to whether insects or mammals "feel pain". The answer should instead be: Look at Table 2 above to see what abilities a given animal type does and doesn't have, and explore further to find out what sorts of other behaviors and internal cognitive processing occur in these animals. These findings are not mere indicators of pain; they are (parts of) the cluster of things we mean by "pain". Stop answering "yes" or "no" to the question "Do insects feel pain?" and start describing the details of what we know about insect nervous systems and behaviors.

I like this quote from Daniel Dennett, in Rothman (2017): "I think we should just get used to the fact that the human concepts we apply so comfortably in our everyday lives apply only sort of to animals. [...] If you think there’s a fixed meaning of the word ‘consciousness,’ and we’re searching for that, then you’re already making a mistake." And Dennett (1995):

The very idea of there being a dividing line between those features "it is like something to be" and those that are mere "automata" begins to look like an artifact of our traditional presumptions. I have offered (Dennett, 1991) a variety of reasons for concluding that in the case of adult human consciousness there is no principled way of distinguishing when or if the mythic light bulb of consciousness is turned on (and shone on this or that item). Consciousness, I claim, even in the case we understand best -- our own -- is not an all-or-nothing, on-or-off phenomenon. If this is right, then consciousness is not the sort of phenomenon it is assumed to be by most of the participants in the debates over animal consciousness. Wondering whether it is "probable" that all mammals have it thus begins to look like wondering whether or not any birds are wise or reptiles have gumption: a case of overworking a term from folk psychology that has lost its utility along with its hard edges.[...]

The phenomenon of pain is neither homogeneous across species nor simple.

Sneddon et al. (2014), p. 208: "Insects may show behaviour that suggests an affective or motivational component (e.g. it has complex and long-lasting effects), but insects could do this, at least in some cases, by using mechanisms that require only nociception (e.g. long-term nociceptive sensitization) and/or advanced sensory processing (i.e. without any internal mental states)." Again, this quote seems to make a binary distinction between sensory processing vs. internal mental states. But "advanced sensory processing" is one part of internal mental states. Mental states are a collection of representations that a cognitive system combines together, and it appears that insect nervous systems do combine together sensory representations at least in rudimentary ways, even though they don't have the same sophistication of mental states as occur in mammals.

Following is a possible steelman of Sneddon et al. (2014), one that I largely agree with: Much of the moral importance of suffering comes from deep, internal functional processing within an animal brain, and this processing is difficult to inspect directly (although it could be completely understood in principle with sufficient time and computing power). Therefore, we use more externally measurable behaviors like those in Table 2 to guess at the kinds and sophistication of internal processing that goes on. While externally visible behaviors are indeed a part of what pain is, they aren't the most important part of what pain is, which is why it can still make sense to speak in terms of uncertainty about whether animals feel pain, even if we already know lots of behavioral facts about them.

Living in zombieland

Try adopting the physical stance as you go about your day. When you have a particular feeling or become aware of a particular object, think about what kinds of neural operations are occurring in your head as that happens. Contemplate the brain processes that underlie the behaviors of those around you. See yourself as a chunk of physics moving around within a bigger world of physics.

My experience with this exercise is that it soon becomes less weird to adopt a physical stance. It feels more intuitive that, yes, I am an active, intelligent collection of cells whose sharing and processing of signals constitutes the inner life of my mind and allows for a vast repertoire of behaviors. Worries that I should be a zombie vanish, because I can feel what it's like to be physics for what it is. In fact, being physics feels just like it always did when I thought consciousness was somehow special.

In this mood, questions like, "Why do these physical operations feel like something?" appear less forceful, because I'm already "at one" with the universe. Yes, the neurons in my brain are doing particular kinds of processing that other clumps of atoms in the world are not, and this explains why these thoughts show up in my head and not in the floor or a beetle outside. But does it matter that I'm having these particular thoughts? Can't other "thoughts" by other parts of physics matter too for being what they are? Isn't it chauvinist to privilege just cognitive operations that are sufficiently complex and of a particular type? Or is extending sympathy to even simple physics a stance that's based more on theoretical elegance and spirituality, when in fact only sufficiently self-reflective minds have "anyone home" who can meaningfully care about his/her own subjective experiences?

These questions are important to debate, but we see that they take place within the eliminativist realm. We can import some phenomenal-stance intuitions when thinking about what parts of physics we want to regard as "suffering", but we don't trip ourselves up over trying to pigeonhole suffering as being something other than an attribution we make to whirlpools within the ocean of physics.

Why this discussion matters

Eliminativism is not universally shared, particularly among philosophers. (It may be more common among neuroscientists and artificial-intelligence researchers?) Sometimes people encourage me to discuss practical questions of sentience with less dependence on my particular philosophical view of consciousness. For example, even if I thought consciousness were a privileged thing, I could still argue that basic physics has some small chance of being conscious. This would yield similar practical conclusions as the eliminativist view does.

This is a fair point, but I think attacking the core confusion about consciousness itself is quite important, for the same reason that it's important to break down the confusions behind theism even if you can argue for a lot of the same practical conclusions whether or not theism is true. Viewing consciousness as a definite and special part of the universe is a systematic defect in one's world view, and removing it does have practical consequences. Looking at the universe from a more physical stance has helped me see that even alien artificial intelligences are likely to matter morally, that plants and bacteria have some ethical significance, and that even elementary physical operations might have nonzero (dis)value. In general, Copernican revolutions change our ethical intuitions in possibly profound ways.

Does eliminativism eliminate empathy?

A legitimate concern about eliminativism is that it could reduce the intuitive importance or meaningfulness of altruism. If everything is just particles moving in different ways, why should I care? Anthony Jack and colleagues have found evidence that "there is a physiological constraint on our ability to simultaneously engage two distinct cognitive modes", namely, social and physical reasoning.

But if eliminativism does reduce compassion, it may be because the eliminativist position has not been completely understood. If you still think of consciousness as being something special, then eliminativism sounds like a view that the world doesn't contain that special thing, so nothing in the world matters. But what eliminativism really says is that all the specialness you thought was in the world is still there and in fact may be more universal than you realized.

Consider a fish suffocating on the deck of a fishing boat. It flops back and forth, apparently in agony. A conventional approach is to say that if this fish is conscious, then it must be aware of its terrible suffering, which is bad and should be avoided. An eliminativist can note how

  • aversive sensory inputs are being aggressively broadcast throughout the fish's brain
  • the fish's fear system is activated, triggering follow-on changes in its cognition and releasing stress hormones throughout its body
  • the fish thinks about ways it might escape from its predicament
  • powerful memories of a terrible experience are being created through changes in synaptic strengths and connectivity
  • and so on.

When I read this list, I think those brain changes look really bad, and I feel almost as much empathy as when I think about the fish from the common-sense standpoint of having an agonizing subjective experience. If we care about suffering, then we care about what suffering actually is even on closer inspection.

Steven Weinberg said "With or without religion, good people can behave well and bad people can do evil; but for good people to do evil--that takes religion." In a similar way, it's plausible that for good altruists to ignore suffering requires confusion about consciousness. If you hold Descartes's view that animals are non-sentient machines, you can really delude yourself into thinking that the struggling of a dog when vivisected is not conscious and hence doesn't matter. An eliminativist realizes that lots of aversive processing is really going on in the dog's head, and so the vivisection must be at least somewhat bad, depending on the negative weight given to those aversive processes. The eliminativist position is thus more cautious in some sense.

That said, I'm describing here mainly my experience with eliminativism, as someone who was already heavily committed to reducing suffering. It remains an empirical question what kinds of effects eliminativism has on average for various populations and depending on the degree to which it's internalized. In any case, because I think something like eliminativism on consciousness will become more widely accepted in the future as understanding of neuroscience increases, we may want to figure out how to frame eliminativism in a more altruism-friendly way rather than just sweeping it under the rug.

The subjective and objective need each other

The physical stance is more impartial and accurate than the phenomenal stance in accounting for all the mind-like processes that exist in the world. However, the physical stance is also more dispassionate. While the brain of a person being tortured does look physically very distinctive -- with lots of activity and long-lasting neural "scars" being created -- appreciating its true awfulness requires imagining ourselves in its position. Without subjective imagination, a physical-stance approach is liable to give way to aesthetic judgments -- assigning more value to brains that appear more interesting, sophisticated, nuanced, or dynamic. Looking for beauty and novelty is a natural temptation when we view physical objects, but it has little to do with ethics. There's a danger that eliminativism gives too much sway to non-empathic judgment criteria.

I think we should try out the eliminativist view as an exercise, to bend our prior prejudices and intuitions. When we unshackle ourselves from the conventional concept of consciousness, how many other ways might there be to reimagine the world! That said, eliminativism doesn't have to be and arguably should not be the only way we think about consciousness, just as our slow, utilitarian moral system needn't be the only way we think about ethics. Rather, we can blend the insights of eliminativism with those of a more common-sense, phenomenal stance -- with the aim of achieving a reflective equilibrium that incorporates insights from each.

Eliminativism and panpsychism

Eliminativism and panpsychism may seem like polar opposites, but they're actually two sides of the same coin, in a similar way as 0 degrees and 360 degrees on a unit circle point in the same direction. Both maintain that there's nothing distinctive about consciousness that sharply distinguishes it from the rest of the universe. Panpsychism recognizes that all the computations of physics have a fundamental similarity to them, and it considers different computations as different shades of the same basic thing (though the shades may differ quite a bit). Eliminativism rejects talk about "consciousness" in favor of physical descriptions, and once again we can see a fundamental continuity among the diverse flavors of physical processes. Whether it uses the word "consciousness" or not, each perspective points at the same underlying reality.

That said, it's worth noting that even if we recognize all of physics as fundamentally mental in some sense, it remains a matter of choice how much we care about simple physical operations. We might legitimately decide that only really complex systems like those that emerge in animal brains contain moral significance.

Rob Bensinger, an eliminativist, writes:

believing I’m a zombie in practice just means I value something functionally very similar to consciousness, ‘z-consciousness’. [...]

Since (z-)consciousness isn’t a particularly unique kind of information-processing, I expect there to be an enormous number of ‘alien’ analogs of consciousness, things that are comparable to ‘first-person experience’ but don’t technically qualify as ‘conscious’. [...] [Due to the implications of eliminativism,] I’m much more skeptical that (z-)consciousness is a normatively unique kind of information-processing. Since I think a completed neuroscience will overturn our model of mind fairly radically, and since humans have strong intuitions in favor of egalitarianism and symmetry, it wouldn’t surprise me if certain ‘unconscious’ states acquired the same moral status as ‘conscious’ ones.

I don't think Bensinger is endorsing all-out panpsychism here (and indeed, Bensinger disavows panpsychism elsewhere), but the spirit of his comments is similar to mine.

Fuzzy boundaries of "qualia"

Churchland (1996) makes what I interpret as an argument against the ontological fundamentalness of qualia by appealing to their fuzzy boundaries (p. 404):

Although it is easy enough to agree about the presence of qualia in certain prototypical cases, such as the pain felt after a brick has fallen on a bare foot, or the blueness of the sky on a sunny summer afternoon, things are less clear-cut once we move beyond the favoured prototypes. Some of our perceptual capacities are rather subtle, as, for example, positional sense is often claimed to be. Some philosophers, e.g. Elizabeth Anscombe, have actually opined that we can know the position of our limbs without any 'limb-position' qualia. [...]

Vestibular system qualia are yet another non-prototypical case. Is there something 'vestibular-y' it feels like to have my head moving? To know which way is up? Whatever the answer here, at least the answer is not glaringly obvious. [...]

My suspicion with respect to The Hard Problem strategy is that it seems to take the class of conscious experiences to be much better defined than it is. The point is, if you are careful to restrict your focus to the prototypical cases, you can easily be hornswoggled into assuming the class is well-defined.

Perhaps qualiaphiles could reply that qualia are in fact crisp entities, but we just don't always perceive or describe their boundaries correctly. Maybe our language is not up to the task. Moreover, the contents of qualia may differ from person to person.

Denying consciousness altogether

In the above piece, I tried to insist that eliminativism doesn't deny consciousness per se, only the particular conception of consciousness that some philosophers cling to. As of 2015, I'm leaning more toward Minsky's view that it might be most clear to dispense with the "consciousness" word altogether, since it causes so much confusion. This post expresses a similar idea: "There’s something to be said for the bracing elegance of the two-word formulation of scepticism offered by Dennett [...] – ‘What qualia?’"

Through many conversations about consciousness, I've concluded that eliminativism may be the most clear way to explain type-A physicalism, because the allure of dualism (even when dressed up as physicalist monism) is so irresistible to human minds. In other words, eliminativism helps shock us out of our complacency and actually come to terms with what a truly non-dualist view of consciousness requires. While (type-A) reductionism on consciousness is also a reasonable viewpoint in principle, in practice some people have trouble being mere reductionists without falling into the trap of property dualism. In other words, eliminativism is like training wheels: it's useful until you're ready to wield the idea of "type-A reductionism regarding consciousness" correctly, without falling over and hurting yourself.

The mantra of the more radical version of eliminativism is that we're not conscious but only think we are. How is that possible? "I just know I'm conscious!" But any thoughts you have about your being conscious are fallible. I believe there are bugs in the vast network of computation that produces thoughts like "I'm conscious in a way that generates a hard problem of consciousness." No thought you have is guaranteed to be free from bugs, and it seems more plausible -- given the basically useless additional complexity of postulating a metaphysically privileged thing called consciousness -- to suppose that our attribution of metaphysically privileged consciousness to ourselves is a bug in our cognitive architectures. This is a relatively simple way to escape the whole consciousness conundrum. If it feels weird, that's because the bug in your neural wiring is causing you to reject the idea. Your thoughts exist within the system and can't get outside of it.

Your brain is like a cult leader, and you are its follower. If your brain tells you it's conscious, you believe it. If your brain says there's a special "what-it's-like-ness" to experience beyond mechanical processes, you believe it. You take your cult leader's claims at face value because you can't get outside the cult and see things from any other perspective. Any judgments you make are always subject to revision by the cult leader before being broadcast. (Similar analogies help explain the feeling of time's flow, the feeling of free will, etc.)

Carruthers and Schier (2017) summarize Dennett's view as follows: "there is no reason to think that appearing to the subject cannot be understood in functional terms. In brief, appearing seems to involve the subject gaining access to the thing that is appearing and there is no reason that we cannot give a functional analysis of access."

I like how Michael Graziano explains it:

I believe a major change in our perspective on consciousness may be necessary, a shift from a credulous and egocentric viewpoint to a skeptical and slightly disconcerting one: namely, that we don’t actually have inner feelings in the way most of us think we do. [...]

a new perspective on consciousness has emerged in the work of philosophers like Patricia S. Churchland and Daniel C. Dennett. Here’s my way of putting it:

How does the brain go beyond processing information to become subjectively aware of information? The answer is: It doesn’t. The brain has arrived at a conclusion that is not correct. [...]

You might object that this is a paradox. If awareness is an erroneous impression, isn’t it still an impression? And isn’t an impression a form of awareness?

But the argument here is that there is no subjective impression; there is only information in a data-processing device. When we look at a red apple, the brain computes information about color. It also computes information about the self and about a (physically incoherent) property of subjective experience. The brain’s cognitive machinery accesses that interlinked information and derives several conclusions: There is a self, a me; there is a red thing nearby; there is such a thing as subjective experience; and I have an experience of that red thing. Cognition is captive to those internal models. Such a brain would inescapably conclude it has subjective experience.

So there, I said it: Consciousness doesn't exist. Now let's figure out more precisely what we are pointing at when we seek to reduce conscious suffering.

Often I hear claims that "I'm more certain that I'm conscious than I am about anything else." I disagree. Our perception of being conscious, just like our perception of anything else, is a hypothesis that our brain constructs, based on very complicated processing and lower-level thinking, expressed in terms of a simplified ontology that the brain can make sense of. Anything that you know is the result of complex computation by an information-processing device. But then why privilege some types of visceral, intuitive judgments that your brain makes over other judgments your brain makes? All of your knowledge is constructed by the brain's information processing in one way or another. "Knowing that I'm conscious" is not a thought that somehow transcends ordinary brain machinery, nor does it deserve to be made axiomatic in one's ontology.

Does eliminativism explain phenomenology?

Focus your attention on the visual image you see of the world in front of you: a rich jumble of colors, shapes, textures, and patterns. How can those not be the philosopher's qualia?

Naively, it looks like eliminativism can't explain these data. But exactly how we characterize the data makes a difference to our theoretical interpretation. If we declare the visual imagery in front of us as something metaphysically special -- "mental phenomena" -- then eliminativism cannot account for them. But to suppose that the visual scenes we see are phenomena in their own metaphysical category is to beg the question.

An alternate characterization of our visual experiences is that they represent "(data about the external world) + (an explicit or implicit judgment by our brains that we're seeing a rich collection of colors, shapes, etc.)". All we know is the judgments our brains make. If our brains judge that we're seeing rich colors and shapes, then we'll think we are seeing such things. The eliminativist hypothesis, then, just predicts that our brains make these judgments about seeing things-they-call-qualia when they attend to their processing of visual input.

These judgments needn't be verbal but are often just more basic moments of noticing how something seems. For instance, when I'm going about my day, I typically don't even notice the colors and shapes around me, but if I focus my attention on how they look, I undergo a nonverbal process of feeling like "Wow, there's something it looks like to see what's in front of me!" This feeling is an implicit "judgment" that my brain makes, and it's all that's needed to explain the fact that we feel like we have qualia.

That I typically don't notice "qualia" unless I attend to them bolsters this view. Most of the time, my brain is focused on my own internal thoughts and basically ignores the world it sees. In this case, my brain is just processing visual data "unconsciously". Then, when I focus on some visual input in particular, my brain produces an implicit (and sometimes explicit) judgment that "this thing has distinctive color and texture and shape, and it feels like something to see it". In other words, so-called "conscious experiences" are experiences that our brains judge to be conscious.

While we don't yet have a fully developed theory of humor, laughing may be a reaction that we have to particular kinds of unexpected juxtapositions. In a similar way, it's possible that our brains also have a sort of "specialness" classifier, which fires when we process certain inputs or have certain thoughts that don't seem to be fully explainable in physicalist terms. For some people, this classifier may lead to belief in God or spirits or magic. And for almost everyone (myself included) this hypothetical classifier may lead us to believe there's a "feeling of what it's like" to be conscious and that, e.g., the visual data in our brains is somehow "a unified visual scene of textures, colors, and objects" rather than activations of neural patterns corresponding to information available for us to use, combined with our flailing minds trying to make sense of that information with whatever simple metaphors they can.
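
Purely as a hypothetical illustration of such a classifier -- every detail below is invented, and real brains presumably do nothing this tidy -- the idea might be sketched as follows:

def looks_mechanistically_explained(representation):
    # In this cartoon, a representation "seems explained" only if it arrives
    # with an explicit tag saying what mechanism produced it.
    return "mechanism" in representation

def specialness_classifier(representation):
    # Fires on inputs that come with no mechanistic story attached.
    if not looks_mechanistically_explained(representation):
        return "There's something it's like to have this -- it can't just be mechanism!"
    return "Nothing special here."

attended_visual_scene = {"colors": ["red", "green"], "shapes": ["tree", "bench"]}
print(specialness_classifier(attended_visual_scene))  # the classifier fires, producing qualia-talk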

Qualia are user illusions. When you click a folder icon on your computer desktop, there's not actually a little folder there; your computer just tells you that there's a folder there, when in fact the icon stands in for more complex data and processing inside the machine. Likewise, when you feel pain, there's not actually an ontological what-it's-like-ness experience; it's just that your brain tells you it's having a qualitative experience of pain, when in fact what's happening under the hood is complex brain processing. The user illusion of folders on a computer is like our folk-psychological "phenomenal concepts", while the underlying computation and data are like our "physical concepts". The hard problem of consciousness results from the difficulty we have of seeing how the ontology of mental "folder icons" can match up with the ontology of mental "code and data".

That consciousness is a user illusion doesn't mean that only experiences judged to be conscious matter ethically. The judgment process only adds a small level of reflection on what were already complex, substantive computations. The pre-eliminativist view that only "conscious" emotions matter was based on a confused idea that only "conscious" emotions are "real qualia" in a metaphysical sense. Once we cast off this way of thinking, it becomes less plausible that only experiences judged to be conscious have moral weight.

We can describe the same reality at different levels of abstraction, i.e., using different ontological frameworks. As an example of a different ontology, imagine a simple computational agent that moves about in a "grid world" consisting of 16 squares -- 16 possible "states" that it can be in. This simple agent knows nothing of the Earth, particle physics, or even humans. It just "knows" (in some very simplistic way) its own little world of state transitions and whatever dynamics drive its behavior. Likewise, we humans are acquainted with a simplified ontology of our own brains -- that we have things called "conscious experiences" and that we transition through these experiences. If we were cave people, we would know nothing of neurons, the cerebral cortex, or gamma oscillations (except for whatever we found in the skulls of other animals that we killed and ate). We would just have a simplistic, subjective model of our mental lives, which is what people refer to when they talk about knowing that they're conscious. This picture isn't mutually exclusive with a more detailed, physics-based portrait.
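
Here's a minimal Python sketch of such an agent, just to make the point vivid (the details are arbitrary): its entire "world" is 16 numbered states, and it has no concepts for anything beyond them.

import random

class GridAgent:
    # An agent whose entire "ontology" is 16 states (a 4x4 grid).

    def __init__(self):
        self.state = 0  # the agent's whole world: an integer from 0 to 15

    def step(self):
        # Move to a random neighboring cell; no concept of Earth, physics, or humans.
        row, col = divmod(self.state, 4)
        neighbors = [(row + dr, col + dc)
                     for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                     if 0 <= row + dr < 4 and 0 <= col + dc < 4]
        row, col = random.choice(neighbors)
        self.state = 4 * row + col
        return self.state

agent = GridAgent()
print([agent.step() for _ in range(6)])  # the agent's "experienced history" is just this state sequence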

Why do you believe you're conscious?

Perhaps the easiest way to understand eliminativism, and the way I began to see the light in 2009, is as follows. First, let's describe the mainstream, intuitive view of consciousness among most neuroscientists and other scientifically minded people:

The exact contents of the middle box aren't essential; you can plug in whatever neuroscientific theory of consciousness you want there. The main idea is that unconscious neural inputs take place, and then with enough integration/processing of the right type, consciousness is created. And because the people who hold this view are materialists, they insist that consciousness just is brain activity of the right type, even though consciousness is also a definite phenomenal thing. I think the best way to construe these claims as a coherent philosophy is to regard them as property dualism, which is why I say that most neuroscientists are closet property dualists. In any case, the relevant endpoint in this process is the box at the bottom—the belief that you have qualia—because that's what causes you to insist to yourself and to others that qualia exist. In other words, if I ask: "How do you know you have qualia?" your reply can be: "My brain perceives this fact via the cognitive operations described by this figure and then correctly concludes that it has qualia." For example, if you look at a complex visual scene and note all its colors, textures, etc., the neural correlates of this "experience" lead to brain-state changes corresponding to the accurate belief that you are experiencing such qualia.

The eliminativist alternative is simple: Just drop the yellow star and keep everything else the same:

This results in exactly the same beliefs as if you do have qualia (whatever that's supposed to mean), and those beliefs are all we need to explain. The hypothesis that something "special" happens at some particular stage of neural processing or at some particular level of brain complexity is useless (although there is still a non-dualist sense in which very complex brains are qualitatively different from very simple ones, in a similar way as a tree is qualitatively different from a blade of grass). If you still want to declare that certain types of neural processing are consciousness, that's fine, but then you're not making a metaphysical claim, just a definitional one. Perhaps you're just making a generalization about the sorts of brain processes that tend to precede beliefs (of a certain type) that one is conscious, which I have no quarrel with. (Of course, there may be other, simpler kinds of beliefs in one's own consciousness that have other, simpler neural precursors. And it's unclear to what extent consciousness in general, rather than self-consciousness specifically, should be said to require belief in one's consciousness anyway.)
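
Here's a toy Python rendering of this point, with every detail invented for illustration: whether or not we add a separate "phenomenal qualia" step, the downstream belief state -- and hence all the reports and behavior it drives -- comes out the same.

def neural_processing(stimulus):
    # Stand-in for whatever integration/broadcasting your preferred theory posits.
    return {"integrated_representation_of": stimulus}

def form_belief(representation):
    return "I am vividly experiencing " + representation["integrated_representation_of"] + "."

def mainstream_pipeline(stimulus):
    representation = neural_processing(stimulus)
    phenomenal_qualia = "???"  # the "yellow star"; note that it does no further causal work here
    return form_belief(representation)

def eliminativist_pipeline(stimulus):
    representation = neural_processing(stimulus)
    return form_belief(representation)

print(mainstream_pipeline("a red apple"))
print(eliminativist_pipeline("a red apple"))  # same belief, same reports, same behavior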

Chalmers (1996), pp. 177-78, 180-81:

When I comment on some particularly intense purple qualia that I am experiencing, that is a behavioral act. Like all behavioral acts, these are in principle explainable in terms of the internal causal organization of my cognitive system. There is some story about firing patterns in neurons that will explain why these acts occurred; at a higher level, there is probably a story about cognitive representations and their high-level relations that will do the relevant explanatory work. [...]

In giving this explanation of my claims in physical or functional terms, we will never have to invoke the existence of conscious experience itself. [...]

[My zombie twin] remains utterly confident that consciousness exists and cannot be reductively explained. But all this, for him, is a monumental delusion. There is no consciousness in his universe—in his world, the eliminativists have been right all along.

Given that philosophical zombies sincerely believe themselves to be conscious, even though they lack the philosopher's qualia, how do you know that you're not a zombie? The beliefs physically manifested in your neural configuration are exactly the same as the zombie's. If you think zombies are possible, you should worry that you might actually be a zombie, since you have no way of knowing that you're not one. (By "you" in the previous sentence I mean your physical self, which is the action-guiding conception of self, given that non-interacting non-material selves/experiences/phenomenal properties don't take actions that can affect the physical world.) And if you think zombies aren't possible, then you aren't a property dualist.

A property dualist might reply to the previous paragraph as follows: "The conscious properties that supervene on my physical existence consciously know themselves to be conscious. Their consciousness self-verifies the fortuitous belief by my physical brain that I'm conscious." In reply, I would point out that even if true, this doesn't help our physical brains and bodies when making choices, since our physical brains and bodies can never know whether consciousness is supervening on them. It seems they would always have to act as if uncertain about whether phenomenal consciousness actually exists in their world. (That said, if you're a sentiocentrist property dualist who only cares about phenomenal sentience, then you may as well act as if you and others are actually conscious, since your actions only matter if that's the case.)

One might say: "But how can it be a matter of opinion whether I'm conscious? It's obvious to me that I am!" Indeed you do have beliefs that you're conscious, which you express in various ways. Combined with knowledge about what sorts of processing your brain does under the hood, that's a good reason to apply the label of "conscious" to you. But having a particular belief state does not, by itself, imply anything metaphysical. It's obvious to other people that God exists and talks to them through prayer, or that they've been abducted by aliens.

It's often said that consciousness can't be an illusion because having an illusion already implies consciousness. This is not correct. All that an illusion requires is false belief (or a false cognitive representation of some sort, probably not a propositional belief), and most philosophers agree that physical brain states of belief (or representation in general) don't require consciousness. An unconscious system can have the false belief that it's conscious without contradiction. You are such a system (at least if we interpret "consciousness" in the robust Chalmers kind of way).

The same idea of "explain why you believe X" rather than "explain why X exists" works for other puzzles of consciousness, such as why our conscious experiences seem unified. Suppose you develop a theory in which everything that we perceive (every color, shape, smell, etc.) "comes together" at a single point. How does this help explain why we perceive a unified conscious field? Unless there's a homunculus watching a Cartesian theater, then there's no need to actually bring all sense data together at once. All that's necessary is that we believe the data all come together at once. Take whatever special explanation for the unity of consciousness you proposed, and trace the steps of how it leads our neurons into configurations corresponding to belief in the unity of our consciousness. Now strip out whatever grand, sweeping assumptions you used to get there, and instead directly update the neurons via a more mundane pathway into a configuration corresponding to belief that conscious perception is unified. The idea is similar to illusions of other sorts, such as the illusion that the characters in flip books are actually moving. It's not necessary for the characters to actually move; all that's needed is that we believe they do. The same goes for the other tricks that our brains play on us regarding consciousness.

Consciousness is sort of like the book Harold and the Purple Crayon, in which "The protagonist, Harold, is a curious four-year-old boy who, with his purple crayon, has the power to create a world of his own simply by drawing it." If the brain creates a representation that "there's a coherent visual field in front of me with a variety of colored objects", then other brain processes that receive this information (including speech, memory, and motor control) will believe this story that they're told and act as if it's true. (And normally, those representations are true, though sometimes they aren't, such as in cases of optical illusion or dreams.) In other words, merely "drawing" a phenomenal experience in the language of neural representations makes it "come to life" in the sense that the rest of the brain responds appropriately. There doesn't need to actually be a coherent visual field living in some realm of phenomenal experience. All that's needed is for the brain to represent to itself, believe, and act as if it has such phenomenal experience.

Dennett (2016), p. 72:

We illusionists advise would-be consciousness theorists not to be so confident that they couldn’t be caused to have the beliefs they find arising in them by mere neural representations lacking all ‘phenomenal’ properties.

Stylized example of internal representations

Following is a stylized example to illustrate what I mean when talking about the brain representing things to itself. While animal brains represent data in a different way than digital computers, imagine hypothetically that the brain consists of connected subsystems that communicate with each other using JSON-formatted strings of text. Suppose that a Retina subsystem receives raw visual input, which it transmits to the Visual Cortex subsystem as a base64-encoded string, like TWFuIGlzIGRpc3Rpbmd.... The Visual Cortex subsystem processes that string and emits the following JSON string:

{
  "image_summary": "dog in a park",
  "salient_objects_in_image": {
    "left_side": "maple tree",
    "center": "dog",
    "right_side": "park bench"
  },
  "average_brightness": "high",
  "objects_moving": true,
  "image_has_multiple_colors": true,
  "image_appears_unified": true,
  "whole_image_is_visible_at_once": true,
  "my_emotional_response": "happy",
  ...
}

This message is broadcast to Working Memory, Non-verbal Thought, Speech, Action, Emotional Valence, and other subsystems in the brain, which receive the data and act accordingly. For example, when the Non-verbal Thought subsystem thinks about the qualitative nature of the visual field, it notes that "image_appears_unified" is true, so the Non-verbal Thought subsystem concludes that the visual field is indeed unified. (This person might go on to write papers attempting to explain the unity of consciousness.) Similarly, the Speech subsystem reports that the whole image is visible at once because it has been told that "whole_image_is_visible_at_once" is set to true.
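
Continuing the same stylized picture, here's a short Python sketch (the functions and fields are, again, inventions of mine) of what I mean by downstream subsystems "believing what they're told" by the broadcast message:

import json

broadcast_message = json.loads("""
{
  "image_summary": "dog in a park",
  "image_appears_unified": true,
  "whole_image_is_visible_at_once": true
}
""")

def nonverbal_thought(message):
    # This subsystem never sees raw pixels; it only sees the fields it was handed.
    if message["image_appears_unified"]:
        return "conclusion: my visual field is unified"

def speech(message):
    if message["whole_image_is_visible_at_once"]:
        return "I can see the whole scene at once: " + message["image_summary"] + "."

for subsystem in (nonverbal_thought, speech):
    print(subsystem(broadcast_message))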

Now, probably this example is wrongheaded in many ways, and plausibly the brain's neural representations among its subsystems are nothing like what I just described. However, I give this example just to make concrete the kind of thing I have in mind when talking about "representations" that the brain makes to itself. Regardless of the usefulness of my example, the broader point remains that however it is that the brain comes to conclude certain things, such as that its visual field is coherent and unified, we can trace exactly what algorithmic steps led up to that conclusion, without ever needing to discuss "qualia" or "phenomenal consciousness". Whatever processes cause a philosophical zombie to earnestly think to itself things like "I have a visual experience that's not just data representation in an algorithmic system" can explain human consciousness as well, because humans are zombies.