Flavors of Computation Are Flavors of Consciousness

First written: 19 Jul. 2014; last update: 21 Jun. 2018

If we don't understand why we're conscious, how come we're so sure that extremely simple minds are not? I propose to think of consciousness as intrinsic to computation, although different types of computation may have very different types of consciousness – some so alien that we can't imagine them. Since all physical processes are computations, this view amounts to a kind of panpsychism. How we conceptualize consciousness is always a sort of spiritual poetry, but I think this perspective better accounts for why we ourselves are conscious despite not being different in a discontinuous way from the rest of the universe.

Introduction

"don't hold strong opinions about things you don't understand" --Derek Hess

Susan Blackmore believes the way we typically think about consciousness is fundamentally wrong. Many "theories of consciousness" that scientists advance and even the language we use set us up for a binary notion of consciousness as being one discrete thing that's either on or off.

We can tell there's something wrong with our ordinary conceptions when we think about ourselves. Suppose I grabbed a man on the street and described to him every detail of what your brain is doing at a physical level -- including neuronal firings, evoked potentials, brain waves, thalamocortical loops, and all the rest -- but without using suggestive words like "vision" or "awareness" or "feeling". Very likely he would conclude that this machine was not conscious; it would seem to be just an automaton computing behavioral choices "in the dark". If our conceptualization of consciousness can't even predict our own consciousness, it must be misguided in an important way.

Given perfect neuroscience, where is consciousness?

Imagine we have perfect neuroscience knowledge. We understand how every neuron in the brain is hooked up, how it fires, and what electrical and chemical factors modulate it. We understand how brain networks interact to produce complex patterns. We have high-level intuitions for thinking about what the functions of various neural operations are, in a similar way as a programmer understands the "gist" of what a complex algorithm is doing. Given all this knowledge, we could trace every aspect of your consciousness. Every thought and feeling would have a signature in this neural collective. Nothing would be accessible exclusively to your subjective experience; everything would have a physical, observable correlate in the neural data.

We need a conception of consciousness which makes it seem obvious that this collection of observable cognitive operations is conscious. If that's not obvious, and especially if that seems implausible or impossible, then our way of thinking about consciousness is fundamentally flawed, because this neural collective is in fact conscious.

Sometimes I have conversations like this:

Brian: Do you think insects are conscious?

Other person: No, of course not.

Brian: Why do you think they're not?

Other person: Well, it just seems absurd. How could a little thing executing simple response behaviors be conscious? It's just reacting in an automatic, reflexive way. There's no inner experience.

Brian: If you didn't know from your own subjective experience that you were conscious, would you predict that you were conscious, or would you see yourself as executing a bunch of responses "in the dark" as the behaviorists might have seen you?

Other person: Hmm, well, I think I would know I'm conscious because I behave more intelligently than an insect and can describe my inner life.

Brian: Can you explain what about your brain gives rise to consciousness that's not present in an insect?

Other person: Uh....

Brian: If you don't understand why you're conscious, how can you be so sure an insect isn't conscious?

Other person: Hmm....

Seeing consciousness from a third-person perspective

I know that I'm conscious. I also know, from neuroscience combined with Occam's razor, that my consciousness consists only of material operations in my brain -- probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations -- as Eliezer Yudkowsky puts it, "How An Algorithm Feels From Inside". Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.

In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs. Now, some people would object at this point and say that maybe consciousness is only a subset of what brains compute -- that most of brain activity is "unconscious", and thoughts and feelings only become "conscious" when certain special kinds of operations happen. In response, I would point out that there's not a major discontinuity in the underlying computations themselves that warrants a binary distinction like this. Sure, some thoughts are globally broadcast and others aren't, and the globally broadcast thoughts are accessible to a much wider array of brain functions, including memory and speech, which allows us to report on them while not reporting on signals that are only locally broadcast. But the distinction between local and global broadcasting is ultimately fuzzy, as will be any other distinction that's suggested as the cutoff point between unconscious and conscious experience.

If we look at computations from an abstract perspective, holding in abeyance our intuitions that certain kinds of computations can't be conscious, we can see how the universe contains many varieties of computation of all kinds, in a similar way as nature contains an enormous array of life forms. It's not obvious from this distanced, computation-focused perspective that one subset of computations (namely those in brains of complex animals) is privileged, while all other computations are fundamentally different. Rather, we see a universal continuity among the species of computations, with some being more complex and sophisticated than others, in a similar way as some life forms are more complex and sophisticated than others.

From this perspective, it is clear why our neural collective is conscious: It's because (one flavor of) consciousness is the process of doing the computations that our brains do. The reason we're "not conscious" under general anaesthesia is because the kinds of global information distribution that our brains ordinarily do are prevented, so we can't have complex thoughts like "I'm conscious" or store memories that would lead us to think we had been conscious. But there are still some other kinds of computations going on that have their own kinds of "consciousness", even if of a different nature than what our intuitive, analytical, or linguistic brain operations would understand.

I should add a note on terminology: By "computation" I just mean a lawlike transition from input conditions to output conditions, not necessarily something computable by a Turing machine. All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say "computations" in this piece, one could just as well substitute "physical processes" instead.
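As a toy illustration of this definition, here is a Python sketch of my own (the update rules are invented stand-ins, not real neuroscience or physics): a neuron and a falling rock can both be written as lawlike transition functions from input conditions to output conditions.

    # Toy sketch: any lawlike process can be viewed as a transition
    # function from input conditions to output conditions.

    def neuron_step(membrane_potential, input_current):
        # Crude leaky-integrate-and-fire-style update (illustrative only).
        v = 0.9 * membrane_potential + input_current
        spiked = v > 1.0
        return (0.0 if spiked else v), spiked

    def falling_rock_step(height, velocity, dt=0.01):
        # Newtonian free fall: equally a lawlike input->output transition.
        g = 9.8
        return height + velocity * dt, velocity - g * dt

    print(neuron_step(0.5, 0.6))         # -> (0.0, True): the "neuron" spikes
    print(falling_rock_step(10.0, 0.0))  # -> (10.0, -0.098): the rock accelerates

On this way of talking, both functions are computations in the essay's sense; the interesting question is only how different their flavors are.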

Imagining other kinds of consciousness

Talk about consciousness is always somewhat mystical. Consciousness is not a hard, concrete thing in the universe but is more of an idea that we find important and sublime, perhaps similar to the concept of Brahman for Hindus. When we think about consciousness, we're essentially doing a kind of poetry in our minds -- one that we find spiritually meaningful.

When we conceive of consciousness as being various flavors of computations, the question arises: What is it like to be another kind of computation than the one in our heads? I've suggested elsewhere that there's some extent to which we can't in principle answer this question fully, because our brains are our brains, and they can't perfectly simulate another computation without being that computation, in which case they would no longer be our current brains. But we can still get some intuitive flavor of what it might mean for another consciousness to be different from ours.

One way to start is just to notice that our own minds feel different and have many different experiences at different times. Being tired feels different from being alert, which feels different from being scared, which feels different from being content in a warm blanket. Even more trivially, seeing a spoon looks different from seeing a fork, which looks different from seeing a penny. Our brains perform many different computations at different times, and these each have their own textures. More extreme examples include being on the edge of sleep, dreaming, waking up slowly after "going under" for surgery, or meditating.

Pretending to be a worm

What about other animals? Can we imagine what it's like to be a worm? Fundamentally we can't, but here's an exercise that may at least gesture in the right direction. Read the following instructions and then try it:

Instructions: Close your eyes. Stay still. Stop noticing sounds and smells. Turn off the linguistic inner voice that thinks verbal thoughts in your head. In fact, try to stop thinking any thoughts as much as possible. Now, poke your head with your fingers. Scratch it softly with your fingernails. Tap it with your hand. Face your head toward a light and notice how it looks bright even though you can't see anything definite due to your eyes being closed. Turn your head away. Notice air moving gently across your skin.

This exercise helps mimic the way in which worms have no eyes or ears and presumably no complex thoughts, especially not linguistic ones. Yet they do have sensitivity to touch, light, and vibrations.

Now, even this exercise is far from adequate. Human brains have many internal processes and computing patterns that don't apply to worms. Even if we omit senses that worms lack and try to suppress high-level thoughts, this human-like computing scaffolding remains. For instance, maybe our sense of being a self with unified and integrated sensations is mostly absent from worms. Probably many other things are absent too that I don't have the insight to describe. But at least this exercise helps us begin to imagine another form of consciousness. Then we can multiply whatever differences we felt during this exercise many times more when we contemplate how different a worm's experiences actually are.

Flavors of computation and consciousness

In some sense all I've proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what's happening in a brain on a lazy afternoon. How can we capture that difference?

Every subjective experience has corresponding objective, measurable brain operations, so the awful experiences of pain must show up in some visible way. It remains to be seen exactly what agony corresponds to, but presumably it includes operations like these: neural networks classifying a stimulus as bad, aversive reactions to the negative stimulus, negative reinforcement learning, focused attention on the source of pain, setting down aversive memory associations with this experience, and goal-directed behavior to escape the situation, even at cost to other things of value. There may be much more, but these basics are likely to remain part of the equation even after further discoveries. (Note: It may be that we should want neuroscience discoveries to come slower rather than faster.) But if so, it becomes plausible that when we see these kinds of operations in other places, we should disvalue them there as well.

This is why an ethical viewpoint like biocentrism has something going for it. (Actually, I prefer "negative biocentrism", analogous to "negative utilitarianism".) All life can display aversive reactions against damage to some degree, and since these are computations of certain flavors, it makes sense to think about them as being conscious with certain flavors. Of course, the degree of importance we place on them may be very small depending on the organism in question, but I don't see fundamental discontinuities in the underlying physics, so our valuation functions should not be discontinuous either. Still, our valuation functions can be very steep. In particular, I think animals like insects are vastly more complex than plants, fungi, or bacteria, so I care about their flavors of consciousness more.

My perspective is similar to that of Ben Goertzel, who said:

My own view of consciousness is a bit eccentric for the scientific world though rather commonplace among Buddhists (which I'm not): I think consciousness is everywhere, but that it manifests itself differently, and to different degrees, in different entities.

Alun Anderson, who spent 10 years studying insect sensation, believes "that cockroaches are conscious." He elaborates:

I don't mean that they are conscious in even remotely the same way as humans are[...]. Rather the world is full of many overlapping alien consciousnesses.

[...]

To think this way about simple creatures is not to fall into the anthropomorphic fallacy. Bees and spiders live in their own world in which I don't see human-like motives. Rather it is a kind of panpsychism, which I am quite happy to sign up to, at least until we know a lot more about the origin of consciousness. That may take me out of the company of quite a few scientists who would prefer to believe that a bee with a brain of only a million neurones must surely be a collection of instinctive reactions with some simple switching mechanism between them, rather [than having] some central representation of what is going on that might be called consciousness. But it leaves me in the company of poets who wonder at the world of even lowly creatures.

William Seager:

The argument for panpsychism, I guess, is: If strong emergence is ruled out, then you will not be able to get this "jump" from the non-conscious to the conscious, and therefore consciousness must be a fundamental feature in nature.

Allen (2016):

Velmans (2012) distinguishes between ‘discontinuity theories’, which claim that there was a particular point at which consciousness originated, before which there was no consciousness (this applies both to the universe at large, and also to any particular conscious individual), and ‘continuity theories’, which conceptualize the evolution of consciousness in terms of “a gradual transition in consciousness from unrecognizable to recognizable.” He argues that continuity theories are more elegant, as any discontinuity is based on arbitrary criteria, and that discontinuity theories face “the hard problem” in a way that continuity theories don't. Velmans takes these arguments to weigh in favor of adopting, not just a continuity theory, but a form of panpsychism.

Daniel Dennett in Fri Tanke (2017) at 54m50s:

I think that the very idea that consciousness is either there or not is itself a big mistake. Consciousness comes in degrees, and it comes in all sorts of different degrees and varieties. And the idea that there is one property which divides the universe into those things that are conscious and those that aren't is itself a really preposterous mistake.

Robin Hanson:

It seems to me simplest to just presume that none of these [computational, creature-like] systems feel, if I could figure out a way to make sense of that, or that all of them feel, if I can make sense of that. If I feel, a presumption of simplicity leans me toward a pan-feeling position: pretty much everything feels something, but complex flexible self-aware things are aware of their own complex flexible feelings. Other things might not even know they feel, and what they feel might not be very interesting.

For many more quotes of this type, from ancient Greeks to contemporary philosophers of mind, see David Skrbina's encyclopedia entry on panpsychism. I disagree with at least half of the specific views cited there, but some of them are spot-on.

It's unsurprising that a type-A physicalist should attribute nonzero consciousness to all systems. After all, "consciousness" is a concept -- a "cluster in thingspace" -- and all points in thingspace are less than infinitely far away from the centroid of the "consciousness" cluster. By a similar argument, we might say that any system displays nonzero similarity to any concept (except maybe for strictly partitioned concepts that map onto the universe's fundamental ontology, like the difference between matter vs. antimatter). Panpsychism on consciousness is just one particular example of that principle.
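Here is a minimal Python sketch of that argument (the feature dimensions, vectors, and centroid are all made up for illustration): because any point in a feature space sits at a finite distance from the cluster's centroid, any system receives a similarity score above zero.

    import math

    # Hypothetical coordinates in a made-up "thingspace". Only the
    # structure of the argument matters, not these particular numbers.
    systems = {
        "human brain":  [0.9, 0.95, 0.8],
        "insect brain": [0.2, 0.5, 0.3],
        "thermostat":   [0.0, 0.05, 0.01],
    }
    consciousness_centroid = [0.85, 0.9, 0.75]  # stipulated cluster center

    def similarity(x, centroid):
        # Gaussian similarity: strictly positive for any finite distance.
        return math.exp(-math.dist(x, centroid))

    for name, x in systems.items():
        print(name, round(similarity(x, consciousness_centroid), 3))
    # Every system scores above zero; the scores just differ in degree.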

Critics of this view may complain that, like a hypothetical unfriendly artificial intelligence, I'm not applying a sufficiently conservative concept boundary for the concept of consciousness. But one man's wise conservatism is another's short-sighted parochialism. My view could also be characterized as "concept creep" -- a situation in which increasing sensitivity to harm leads to expanding the boundaries of a concept (which in my case is the concept of "consciousness" or "suffering").

Human correlates vs. fundamental principles

Exploration of neural correlates of consciousness helps identify the locations and mechanisms of what we conventionally think of as high-level consciousness in humans and, by extension, perhaps the high-level consciousness of similar animal relatives. Stanislas Dehaene's book Consciousness and the Brain provides a superb overview of the state of neuroscience on how consciousness operates in the brain in terms of global workspace theory.

But describing how consciousness works in human-like minds can't be the end of the story. It leaves unanswered the question of whether consciousness could exist in slightly different mind architectures as long as they're doing the same sorts of operations. We could imagine gradually tweaking a human-type mind architecture on subtle dimensions. At what point would these theories of consciousness say it stops being conscious? What if an agent performed human-like cognitive feats without centralized information broadcasting? Global-workspace and other neural-correlation theories don't really give answers, because they can only interpolate between a set of points, not extrapolate beyond that set of points.

Consciousness cannot be crucially tied up with the specific organization of human minds. Consciousness is just not the kind of thing that could be so arbitrarily determined. Consciousness is what consciousness does: It is the suite of stimulus recognition, internal computation, and action selection that an organism performs when making complex decisions requiring help from many cognitive modules. It can't be something necessarily tied to thalamus-cortex connectivity or cross-brain wave synchronization. Those are too specific to the details of implementation; a particular implementation can't be relevant because it doesn't do anything different from another implementation of the same functionality. Rather, consciousness must be about what the process is actually trying to accomplish: receiving information, manipulating it, combining thoughts in novel ways, and taking actions. In other words, consciousness must be related to computation itself.

But if consciousness is about computation in general, then it would seem to appear all over the place. Some embrace this conclusion as a natural deduction from what consciousness as computation must be. For instance, Giulio Tononi's integrated information theory (IIT) suggests that even a metal ball has a small degree of consciousness. Dehaene, on the other hand, says he's "reticent" to accept IIT because it implies a kind of panpsychism (p. 279, Ch. 5's footnote 35).

I agree that IIT is not necessarily the ultimate theory of consciousness. There may be many more particular nuances we want to apply to our criteria for what consciousness should be. But ultimately I think Tononi is right that consciousness must be something fundamental about the properties of the system, not something specific to the implementation. Consciousness as a general phenomenon is the kind of thing that needs a general theory. It just doesn't make sense that something so basic and so tied up with functional operations would require particular implementations.

Note that the functionalist view I'm defending here is not behaviorism. It's not the case that any mechanism that yields human-like behavior has human-like consciousness, as the example of a giant lookup table shows. A giant lookup table may have its own kind of consciousness (indeed, it should have at least some vague form of consciousness according to the thesis I'm advancing in this essay), but it's a different, shallower kind than that of a human. We could see this if we looked inside the brains of the two systems. Humans, when responding to a question, would show activation in auditory centers, conscious broadcasting networks, and speech centers before producing an answer. The lookup table would do some sort of artificial speech recognition to determine the text form of the question and then would use a hash table or tree search on that string to identify and print out the stored answer. Clearly these two mind operations are distinct. If we broaden the definition of "behavior" to include behavior within the brain by neurons or logic gates, then even by behaviorist criteria these two kinds of consciousness aren't the same.
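For concreteness, here is a minimal Python sketch of the lookup-table side of this contrast (the stored question/answer pairs are invented). The entire "mind" is a single hash-table retrieval, with nothing like the multi-stage processing and broadcasting a brain performs:

    # A giant-lookup-table "mind", sketched with a Python dict
    # (a hash table under the hood). The entries are invented.
    lookup_table = {
        "what is your name?": "I am called GLUT.",
        "are you conscious?": "I produce the same outputs as someone who is.",
        "how do you feel?": "Fine, thank you.",
    }

    def glut_answer(question):
        # Normalizing the input stands in for speech recognition; then
        # the stored answer is fetched in a single hash lookup.
        key = question.strip().lower()
        return lookup_table.get(key, "I have no stored response.")

    print(glut_answer("Are you conscious?"))
    # Human-like at the interface, but internally just one retrieval step --
    # a much shallower flavor of computation than auditory processing,
    # global broadcasting, and speech planning.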

Old-school behaviorism is essentially a relic of times past when researchers were less able to look inside brains. Cognitive algorithms must matter in addition to just inputs and outputs. After all, what look from the outside like intermediate computations of a brain can be seen as inputs and outputs of smaller subsystems within the brain, and conversely, the input-output behavior of an organism could be seen as just an internal computation to a larger system like the population of organisms as a whole.

So the specific flavor of consciousness that a system exhibits can indeed depend on the algorithms of a mind, which depend on its architecture. But consciousness in general just seems like something too fundamental to be architecture-dependent.

In any case, suppose you thought the architecture was fundamental to consciousness, i.e., that consciousness was the static physical pattern of matter arranged in certain ways rather than the dynamic computations that such matter was performing. In this case, we'd still end up with a kind of panpsychism, because patterns with at least a vague resemblance to consciousness would be ubiquitous throughout physics.

Sentience and sapience

If consciousness is the thoughts and computations that an agent performs when acting in the world, there seems to be some relationship between sapience -- the ability to intelligently handle novel situations -- and sentience -- inner "feelings". Of course, it's not a perfect correlation. For instance, Mr. Spock calmly computing an optimal course of action may be more successful than a crying baby demanding its juice bottle. But in general, minds that have more capacity for complex thought, representation, motivational tradeoff among competing options, and so on will also have more rich inner lives that contain more complex sensations. As Daniel Dennett notes in Consciousness Explained (p. 449): "the capacity to suffer is a function of the capacity to have articulated, wide-ranging, highly discriminative desires, expectations, and other sophisticated mental states."

One overly simplistic argument could run as follows:

  1. Intelligence is the ability to "understand" things (where "intelligence" and "understanding" are complex concepts that come in degrees).
  2. Consciousness/sentience is "understanding" of one's emotions, drives, and other mental states.
  3. Therefore, greater intelligence, when directed at one's own thoughts and feelings, implies greater sentience.

Of course, "understanding" is a concept about as complex as "intelligence" or "consciousness", so this argument does no real work; it just casts general ideas in a potentially new light.

In reading Consciousness and the Brain, I realized that many of the abilities characteristic of consciousness are those cognitive functions that are high-level and open-ended, such as holding information in short-term memory for an arbitrary time, being able to pay attention to arbitrary stimuli, and controlling the direction of one's thoughts. The so-called "unconscious" processing tends to involve feedforward neural networks and other fixed algorithms. One forum post proposed that Turing-completeness may be part of what makes human-like minds special. They not only compute fixed functions but could in theory, given sufficient resources, compute any (computable) function. Maybe Turing-completeness could be seen as a non-arbitrary binary cutoff point for consciousness. I'm skeptical that I'd agree with this definition, because it feels too theoretical. Why should subjectivity be so related to a technical computer-science concept? In any case, I'm not quite sure where the Turing-completeness cutoff would begin among animal brains. But it is an interesting proposal. Rather than thinking in binary terms, I would note that human mental abilities, while powerful, could still be improved upon in practice (given that we don't have infinite memory and so on), and presumably, more advanced minds would be considered even more conscious than humans.
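To make the contrast concrete, here is a toy Python sketch of my own (not the forum post's actual proposal): a feedforward unit computes one fixed function forever, whereas a tiny counter-machine interpreter is Turing-complete given unbounded registers, so the same interpreter can run a program for any computable function.

    # One fixed function: a single hard-wired input->output mapping.
    def fixed_feedforward(x):
        return max(0.0, 2.0 * x - 1.0)  # this and nothing else, ever

    # A tiny counter-machine interpreter. With unbounded registers,
    # counter machines are a Turing-complete model of computation.
    def run(program, regs):
        pc = 0
        while 0 <= pc < len(program):
            op, *args = program[pc]
            if op == "inc":
                regs[args[0]] += 1
            elif op == "dec":
                regs[args[0]] = max(0, regs[args[0]] - 1)
            elif op == "jmp":
                pc = args[0]
                continue
            elif op == "jnz" and regs[args[0]] != 0:
                pc = args[1]
                continue
            pc += 1
        return regs

    # Example program: add register 1 into register 0, then halt.
    add = [
        ("jnz", 1, 2),  # 0: if r1 != 0, jump to the loop body
        ("jmp", 5),     # 1: otherwise halt (jump past the end)
        ("dec", 1),     # 2: r1 -= 1
        ("inc", 0),     # 3: r0 += 1
        ("jmp", 0),     # 4: repeat
    ]
    print(run(add, [3, 2]))  # -> [5, 0]

The machinery running both snippets is the same; what differs is that the interpreter's repertoire is open-ended in the way the proposal takes to be special about human-like minds.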

[Image: The motherboard to a Sony PSone video game console. By Evan-Amos (own work), public domain, via Wikimedia Commons.]

The correlation between sapience and sentience seems plausible among Earth's animals, but does it hold in general? Nick Bostrom argues that it doesn't have to. In his book Superintelligence (2014), Bostrom explains:

We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today -- a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.

I tend to differ with Bostrom on this. I think if we dissolve our dualist intuitions and see consciousness as flavors of computation, then a highly intelligent and complex society is necessarily conscious -- at least, with a certain flavor of consciousness. That flavor may be very different from what we have experience with, and so I can see how many people would regard it as not real consciousness. Maybe I would too upon reflection. But the question is to some extent a matter of taste.

Imagine robotic aliens visiting Earth. They would observe a mass of carbon-based tissue that performs operations that parts of it find reinforcing. The globs of tissue migrate across the Earth and engage in lots of complex behaviors. The tissue globs change Earth's surface dramatically, much like a bacterial colony transforming a loaf of bread. But the tissue globs don't have alien-consciousness. Hence, the aliens view Earth as a wasteland waiting to be filled with happy alien-children.

Note that my view does not equate "consciousness" with "goodness". I think many forms of consciousness are intrinsically bad, and I would prefer for the universe to contain less consciousness on the whole. That said, we have to know the enemy to fight the enemy.

Is The Rite of Spring classical music?

On 29 May 1913, the opening of Igor Stravinsky's The Rite of Spring in Paris caused an uproar among the audience:

As a riot ensued, two factions in the audience attacked each other, then the orchestra, which kept playing under a hail of vegetables and other objects. Forty people were forcibly ejected.

The reason:

It's more likely that the audience was appalled and disbelieving at the level of dissonance, which seemed to many like sheer perversity. "The music always goes to the note next to the one you expect," wrote one exasperated critic.

At a deeper level, the music negates the very thing that for most people gives it meaning: the expression of human feelings. [...]

There's no sign that any of the creatures in the Rite of Spring has a soul, and there's certainly no sense of a recognisable human culture. The dancers are like automata, whose only role is to enact the ritual laid down by immemorial custom.

Arguing over whether an abstract superintelligence is conscious is similar to early-20th-century musicians arguing whether The Rite of Spring is classical music, except maybe that the former contrast is even more stark than the latter. Abstract machine intelligence would be a very different flavor of consciousness, so much so that we can't do it justice by trying to imagine it. But I find it parochial to assume that it wouldn't be meaningful consciousness.

Of course, sometimes being parochial is good. If you don't favor some things over others, you don't favor anything at all. It's completely legitimate to care about some types of physical processes and not others if that's how you feel. I just personally incline toward the view that complex machine consciousness of any sort has moral standing.

Consciousness is like life

I think the concept "consciousness" is a lot like the concept "life" in terms of its complexity and fuzziness. Perhaps this is unsurprising, because as John Searle correctly observes, consciousness is a biological process.

But aren't the boundaries of life relatively clear? No, I don't think so. Biologists have agreed on certain properties that define life by convention, but the properties of life taught in biology class are just one arbitrary choice out of many possible choices regarding where to draw a line between the biological and abiological.

Viruses are one classic example of the fuzziness of "life":

Opinions differ on whether viruses are a form of life, or organic structures that interact with living organisms. They have been described as "organisms at the edge of life", since they resemble organisms in that they possess genes, evolve by natural selection, and reproduce by creating multiple copies of themselves through self-assembly. Although they have genes, they do not have a cellular structure, which is often seen as the basic unit of life. Viruses do not have their own metabolism, and require a host cell to make new products. They therefore cannot naturally reproduce outside a host cell – although bacterial species such as rickettsia and chlamydia are considered living organisms despite the same limitation. Accepted forms of life use cell division to reproduce, whereas viruses spontaneously assemble within cells. They differ from autonomous growth of crystals as they inherit genetic mutations while being subject to natural selection.

Definitions become even hazier when we imagine extraterrestrial life, which may not use the same mechanics as life on Earth. Carol Cleland: "Despite its amazing morphological diversity, terrestrial life represents only a single case. The key to formulating a general theory of living systems is to explore alternative possibilities for life. I am interested in formulating a strategy for searching for extraterrestrial life that allows one to push the boundaries of our Earth-centric concepts of life."

There are some "joints" in the space of life-like processes that are more natural to carve things up at than others. The current biology-textbook definition of life may represent one such "joint". In the case of consciousness, I could imagine a similar "joint" being "living things that have neurons", which I think would include most, but not all, animals. (This page says: "Not all animals have neurons; Trichoplax and sponges lack nerve cells altogether.") But this definition is clearly arbitrary, as neurons are but one way to transmit information. Likewise, the requirement that a system must be organized into cells is an arbitrary cutoff in the standard definition of life, since "having cells" is just one form of the more general property of "having an organized structure".

Delineating consciousness based on possession of (biological) neurons would also exclude artificial computer minds from being counted as conscious, in a similar way as the standard biological definition of life excludes artificial life, even when artificial life forms satisfy most of the other criteria for life.

And I think that even examples normally seen as paragons of lifelessness, like rocks, have some of life's properties. For example, rocks are organized into regular patterns, absorb and release energy from their surroundings, change in size with age (such as shrinking through weathering), "respond" to the environment by moving away when pushed with enough force by wind or water, and can "reproduce" into smaller rocks when split apart. And some rocks, like certain light-activated crystals, are even more lifelike: "The particles aren’t truly alive — but they’re not far off, either. Exposed to light and fed by chemicals, they form crystals that move, break apart and form again."

If I cared about life as a source of intrinsic moral value, I would probably be a hylozoist for similar reasons as I'm a panpsychist: Every part of physics shows at least traces of the kinds of properties that we normally think should define life and consciousness.

How not to think about panpsychism

This essay has defended a sort of panpsychism, in which we can think of all computational systems as having their own sorts of conscious experiences. This is one particular kind of panpsychism, which should be distinguished from other variants.

Pathetic fallacy

Panpsychism should not commit the pathetic fallacy of seeing full-fledged minds in even simple systems.

Once I was using a Ziploc bag to carry flies stuck inside a window to the outside. I asked myself whimsically: "Is this what it feels like to be a proton pump -- transporting items to the other side of a membrane?" And of course the answer is "no", because the cognitive operations that constitute "how it feels to remove flies" (visual appearance, subjective effort, conceptual understanding, etc.) are not present in a proton pump. Such pumps would need tons of extra machinery to implement this functionality. The pathetic fallacy is only possible for dualist conceptions of mind, according to which elaborate thoughts can happen without corresponding physical processing.

On the flip side, it's mainly dualist theories of consciousness that allow a functionalist kind of panpsychism not to be true. If physics represents everything going on, then there must indeed be traces of mind-like operations in physics, depending on how "mind" is defined. In contrast, if mind is another substance or property beyond the physical, then it could not be present in simple physical systems.

Mind dust

In "Why panpsychism doesn't help explain consciousness" (2009), Philip Goff presents panpsychism as a theory that the universe's "physical ultimates" are intrinsically conscious. He then argues that if we imagine a person named Clare:

Even if the panpsychist is right that Clare's physical ultimates are conscious, the kind of conscious experience had by Clare's ultimates will presumably be qualitatively very different to the kind of conscious experience pre-theoretical common sense attributes to Clare on the basis of our everyday interactions with her [...]. (p. 290)

I find this objection misguided, because my version of panpsychism doesn't propose that whole-brain consciousness is constituted from lots of little pieces of consciousness (what some call "mind dust"). Rather, the system of Clare as a whole has its own kind of consciousness, because the system as a whole constitutes its own kind of computation, at the same time that subcomponents of the system have their own, different kinds of consciousness corresponding to different computations, and at the same time that Clare is embedded in larger systems that once again have their own kinds of consciousness. Mine is a "functionalist panpsychism" focused on system behavior rather than on discrete particles of consciousness. On p. 298, Goff admits that functionalists would not agree with his argument. On p. 304, Goff considers a panpsychism similar to mine, in which functional states of the whole organism determine experiential content. He rejects this because he conceives of consciousness as a separate thing (reification fallacy). In contrast, I believe that "consciousness" is just another way of regarding the functional behavior of the system. In other words, I'm defending a kind of poetic panpsychism, in which we think about systems as being phenomenal, without trying to turn phenomenality into a separate object.

And if you do insist on regarding consciousness as an object, why can't we see a dynamic system itself as an object? Mathematicians and computer scientists are familiar not just with manipulating points but also with manipulating functions and other complex structures. Functions can be seen as points in their own vector spaces. Some programming languages treat functions as first-class citizens. I wonder how much intuitions on philosophy of mind differ based on one's academic department.
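For example, here is how Python (like many languages) treats functions as first-class citizens: a process can be composed, stored in a data structure, and passed around like any other object.

    # Functions as first-class citizens: processes handled as objects.
    def compose(f, g):
        # Build a *new* function out of two existing ones.
        return lambda x: f(g(x))

    double = lambda x: 2 * x
    increment = lambda x: x + 1

    pipeline = compose(double, increment)       # a dynamic process, yet an object
    processes = [double, increment, pipeline]   # stored like any other values

    print(pipeline(3))                 # (3 + 1) * 2 = 8
    print([p(10) for p in processes])  # [20, 11, 22]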

Marvin Minsky regards concepts like "consciousness" as "suitcases" -- boxes that we put complicated processes into. "This in turn leads us to regard these as though they were 'things' with no structures to analyze."

In a 2012 lecture, Goff proposed a kind of panpsychism in which each particle in his mind contains his whole subjective experience, so his mind occupies many locations at once within his brain. This again is misguided, because it reifies a whole subjective experience into a fundamental object. Rather, subjective experience is the collective behavior of one's whole brain; it's not a separate thing that can live in a single particle.

I would be okay with a "mind dust" picture if instead of conceiving of each particle as having a complete phenomenal experience, we picture each particle as constituting a little sliver of computation that can combine with other slivers of computation to form more complete computational patterns. As William Seager explains: "Presumably the same way that physical complexity grows, there will be a kind of matching or mirroring growth in mental complexity." Our subjective experiences are holistic systems composed of many computational pieces, each of which can poetically be thought of as having its own simple, incomprehensible-to-us form of mentality.

Combination problem

Some panpsychist and panprotopsychist philosophers believe that the "quiddities" of physical reality may be conscious in a basic way (panpsychism) or may contain the building blocks of consciousness in a sense beyond embodying structural/functional properties (panprotopsychism). David Chalmers toys with a view of this kind, but as he notes, it leads to the "combination problem": How do these smaller parts combine to yield macrophenomenal consciousness like our own? Note that this sounds an awful lot like the regular mind-body problem: How do physical parts combine to yield phenomenal experience like ours? I suspect that Chalmers finds the panpsychist question less puzzling because at least the panpsychist problem already has phenomenal experience to start with, so phenomenal parts just need to be put together rather than appearing out of nowhere.

I think this whole project is wrongheaded. First of all, why should we believe in quiddities? Why should we think there's more to something than how it behaves structurally and functionally? What would it mean for the additional "essence" to be anything? If there were such an essence, either it would have structural/functional implications, in which case we've already accounted for them by structural/functional characterization, or it doesn't have any structural/functional implications, in which case the quiddity is wholly unnecessary to any explanation of anything physical. Quiddities face the same problems as a non-interacting dualist soul. On the other hand, could the same argument be leveled against the existence of physics too? One could say that the "existence" of physics is an additional property over and above the (logical but not actual) structure or function of mathematical descriptions of physical systems. I don't know whether I endorse out-and-out eliminativist Ontic Structural Realism (relations without relata), and I'm more confused about this topic than about consciousness. Still, it seems weird to "squeeze in" extra statements about the relata (beyond that they exist and that they have particular structures/functions), like that they have phenomenal character. It's true that we sometimes need to expand the ontology of physics to accommodate new phenomena, but physics has always been structural/functional, so expanding it to include phenomenal properties would be unlike any past physical revolutions.

Anyway, let's say we have quiddities of physics. What does it mean to say they have a phenomenal character? I have no idea what such a state of affairs would look like. Sure, I can conjure up images of little balls of sensation or feeling or whatever, but that act of mental imagination doesn't appear to describe anything more coherent than imagining little particles of good luck being emitted by discovered four-leaf clovers. I mean, where would that mental stuff come from? What is it? The hard problem of consciousness would remain as fierce as ever, just pushed back to the level of explaining why the consciousness primitive exists.

Panpsychism is about ethics

Augustine Lee rejects panpsychism by suggesting an analogy with a car: A whole car can drive, but that doesn't mean a steering wheel by itself has a "drive"-ness to it. Likewise, consciousness involves complicated brain structures, and simple physics by itself needn't have those same types of structure. This is a valid point, and it suggests that we may want to rein in the extent to which we attribute consciousness to fundamental physical operations.

But what's important to emphasize is that panpsychism is always an attribution on our part -- as I say, a kind of poetry. How much "mind" we see in simple physics depends on our intuitions about how broad we want our definitions to be. We can fix definitions anywhere, but the most helpful way to set the definition for consciousness is based on our ethical sentiments -- i.e., we say that process X is conscious to degree Y if we feel degree Y of moral concern about X. So, for instance, if we regarded driving as morally important, we would decide how much (if at all) a steering wheel on its own mattered, and then would set the amount of "drive"-ness of the steering wheel at that value.

For what it's worth, I think the operations we consider as "consciousness" are more multifarious and fundamental than what we typically consider "driving", which suggests that "consciousness" will have broader definitional boundaries than "driving".

Panpsychism vs. unconscious sleep?

While we can speculate about some kind of consciousness existing in all entities, it might be objected that we already have firsthand experience with the possibility of non-consciousness -- namely, our own non-REM (NREM) sleep. Doesn't this prove that panpsychism can't be true, because we can see for ourselves that our sleeping brains aren't conscious? Following are some points in reply.

  • It's worth noting that we can be conscious during NREM sleep, such as with hypnagogia during stage 1 and dreams during various NREM stages. Night terrors typically occur during stage 3 of NREM sleep. So the strict delineation of REM as "conscious" and NREM as "unconscious" is too simple. But it still seems that during some parts of sleep, we are not conscious.
  • One might hold that NREM sleeping brains are indeed conscious but with a very different kind of consciousness -- one that looks mostly empty. Maybe we have an extremely low degree of consciousness during NREM sleep, which we call unconscious. If so, we wouldn't reject panpsychism, but we would see that different computational systems may have very different degrees of importance. While conscious experience involves high-frequency brain oscillations (e.g., gamma waves around 40 Hz), slow-wave sleep involves delta waves often less than 1 Hz. So even if there is conscious activity during NREM sleep, it may be vastly slower than during waking consciousness or dreaming.
  • A more speculative response is to suggest that maybe we are conscious during NREM sleep, but our memories don't store the experiences the way they do our waking conscious experiences. Many of our dreams during sleep are forgotten, and dreams are considered "conscious", so it might not be a stretch to suppose that less pronounced NREM activity would be forgotten even more. It seems one could investigate this possibility further by exploring whether the mechanisms that inhibit memory formation are active during NREM sleep. I have no data on this, so right now this is just a (perhaps unlikely) supposition.
  • Even if our brain-wide mind is absent during NREM sleep, smaller subsystems within that brain might still be "conscious" to themselves in some alien way. Sleep seems to resemble the more general question of whether subcomponents of oneself can be considered conscious even if one's explicit, verbal thinking can't access them.

Panpsychism does not imply environmentalism

David Skrbina argues that panpsychism

has implications for, e.g., environmentalism. So if we see mind in things in nature -- whether it's animals or plants or even rocks and rivers and streams and so forth -- this has a definite ethical component that I think is very real and has a pragmatic kind of aspect.

Elsewhere he suggests:

Arguably, it is precisely this mechanistic view -- which sees the universe and everything in it as a kind of giant machine -- that lies at the root of many of our philosophical, sociological, and environmental problems. Panpsychism, by challenging this worldview at its root, potentially offers new solutions to some very old problems.

Freya Mathews moves from a panpsychist outlook, combined with the Taoist idea of wu wei ("non-action"), to the position that

The focus in environmental management, development and commerce should be on “synergy” with what is already in place rather than on demolition, replacement and disruption.

She writes:

from a panpsychist point of view it is not enough merely to conserve energy, unilaterally extracting and transforming it here and storing it there. One has to allow planetary energies to follow their own contours of flow, contours which reveal local and possibly global aspects of a larger world-purpose.

There seems to be much in common between panpsychism and deep ecology / other forms of environmental ethics. But there's no necessary connection, and indeed, one can make the opposite case. There are several problems with the leap from panpsychism to environmentalism:

1. Ecosystems may matter less than animals

If the welfare of an ecosystem as a whole conflicts with that of individual animals within the ecosystem, which takes priority? Unless the ecosystem matters more than many animals, the animals may still dominate the calculations. The highly developed and emotion-rich consciousness of a single mammal or bird brain seems far more pronounced than the crude shadows of sentience that we see in holistic ecosystems. Maybe ecosystems get more weight because they're bigger and more intricate than an animal brain, but I doubt I'd count an ecosystem's welfare more than that of, say, 10 or 100 individual animals.

2. Not clear if the environment wants to be preserved or changed

Suppose we grant the Earth as a whole some nontrivial ethical weight relative to animal feelings. Who's to say that changing the environment is against Earth's wishes? Maybe it concords with Earth's wishes.

One argument for conservation might be that the Earth tries to rebound from certain forms of destruction. For instance, if we cut a forest, plants grow back. Typically an organism resists damage, so growing back vegetation may be the Earth's way of recovering from the harm inflicted by humans. But then what should we make of cases where Earth seems to go along with human impacts? For instance, positive greenhouse-gas feedback loops might be the Earth's way of saying, "I liked how you added more CO2 to my atmosphere, so I'm going to continue to add greenhouse gases on my own accord." In any case, it's also not clear that vegetation isn't like the Earth's hair or toenails -- something it's glad to have cropped even though it keeps coming back. Maybe the Earth created us with the ultimate purpose of keeping it well shaved. The first photosynthesizers also tampered with the Earth when they oxygenated the atmosphere. Was that likewise an assault on the Earth's goals?

The language I'm using here is obviously too anthropomorphic, but it's a convenient way of talking about ultimately more abstract and crude quasi-preferences that the Earth's biosphere may imply via its constitution and behavior. And it's probably wrong to think of the Earth as having a single set of quasi-preferences. There are many parts to what the Earth does, each of which might suggest its own kinds of desires, in a similar way as human brains contain many subsystems that can want different things.

Finally, who's to say that ecosystems are more valuable subjects of experience than their replacements, such as cities, factories, highways, and the like? Are environmentalists guilty of ecocentrism -- discrimination against industrial and digital systems? Luciano Floridi makes a similar point and argues for replacing biocentrism with "ontocentrism".

3. Ecosystems may experience net suffering

If forests, streams, and the whole Earth do have quasi-feelings, who's to say they're feelings of happiness? They might just as easily be feelings of frustration. These systems are always adapting -- and so perhaps are always restless, never satisfied. Maybe it would be better if this discomfort didn't have to be endured. That is, maybe ecosystems would be better off not existing, even purely for their own sakes. This is particularly clear for those who consider reducing suffering more urgent than creating pleasure. So maybe panpsychism leads to an anti-environmental ethic. Of course, whatever replaces an ecosystem will itself suffer. But hopefully parking lots and solar radiation not converted to energy by plants are on balance less sentient (and hence suffer less) than ecosystems.

Personal spirituality does not imply universal joy

I think part of why panpsychism often elicits intuitions of nature's goodness is that the experience of imagining oneself as part of a larger, conscious cosmos is often beautiful and serene. We feel at peace with the universe when thinking such thoughts, and then we project those good feelings onto what we're thinking about -- forgetting how awful it may actually "feel" to be the universe. To her credit, Freya Mathews acknowledges the importance of suffering: "The path of awakened intersubjectivity, Mathews cautions in conclusion, is nonetheless far from universally joyous: on the contrary, it renders the pain of more than human others more salient for us, even while we find delight in our surprise encounters with them."

Spiritual/panpsychist experiences are elevated by certain types of drug use:

For example, a recent study found that about 60% of volunteers in an experiment on the effects of psilocybin, who had never before used psychedelic drugs, had a “complete mystical experience” characterised by experiences such as unity with all things, transcendence of time and space, a sense of insight into the ultimate nature of reality, and feelings of ineffability, awe, and profound positive emotions such as joy, peace, and love (Griffiths, Richards, McCann, & Jesse, 2006).

[...] Psychedelic drug users endorsed more mystical beliefs (such as in a universal soul, no fear of death, unity of all things, existence of a transcendent reality, and oneness with God, nature and the universe).

I wouldn't be surprised if weaker versions of these brain processes are triggered naturally when people think spiritual thoughts. But we shouldn't mistake the bliss we feel in these moments as being what the other entities in the universe themselves feel.

(Note: I never have and never intend to try psychedelic drugs, both because they're illegal and because messing with my brain seems risky. But I think it's quite edifying to learn about the effects of such drugs.)

Entropy and sentience

A friend of mine sometimes asks why there's always so much badness in the world. I reply: "It could be worse." Indeed, the second law of thermodynamics is in some sense a great gift to suffering reducers, because it implies that (complex) suffering can only last so long (within a given Hubble volume at least). We just have to wait it out until the universe's negentropy is used up.

It's often observed that a characteristic of life is that it has extremely low entropy, and correspondingly that life is very efficient (though not necessarily maximally efficient) at increasing the entropy of the outside environment. This might lead us to wonder whether there's some relationship between "sentience" and "entropy production". If these two things were identical, then we would face a sharp constraint on efforts to reduce the net sentience of our region of the universe, since a given quantity of entropy must be produced as the universe evolves forward.

However, I don't think the two quantities are exactly equal. For example:

  • Your neurons are probably not significantly more effective at generating entropy than, say, your muscle cells, yet your brain has much higher sentience than your muscles.
  • Reversible computing may allow for a high level of sentience with minimal increases in entropy compared against irreversible computing.

So presumably suffering reducers would prefer systems with fewer neuron-like operations and more irreversible computations, which have a lower ratio of sentience per unit entropy increase.
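For a rough sense of the numbers behind the reversible-computing point, Landauer's principle states that irreversibly erasing one bit must dissipate at least k_B T ln 2 of energy. Here is a back-of-the-envelope sketch (room temperature and the 20-watt figure, roughly a human brain's power budget, are assumptions for illustration):

    import math

    # Landauer's principle: erasing one bit costs at least k_B * T * ln(2).
    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # assumed room temperature, K
    e_per_bit = k_B * T * math.log(2)
    print(f"{e_per_bit:.2e} J per erased bit")  # ~2.9e-21 J

    # A hypothetical 20 W machine erasing bits right at the Landauer limit:
    print(f"{20.0 / e_per_bit:.2e} bit erasures per second")  # ~7.0e21

    # Reversible computing sidesteps this floor by (mostly) not erasing
    # bits, which is why it could in principle deliver more computation --
    # and so perhaps more sentience -- per unit of entropy produced.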

Also note that "amount of sentience" is not identical to "amount of suffering". It's better to increase entropy with happy minds rather than agonized ones.

We might also wonder whether sentience is proportional to mass+energy. If so, then the law of conservation of mass+energy would imply that we can't change the amount of sentience. However, I find it implausible that sentience would be strictly proportional to mass/energy. For instance, a lot of energy can be stored in molecular bonds, which are pretty stable and so don't seem to qualify as a particularly sentient system compared with other systems that contain the same amount of energy in the form of organisms moving around. A stick of butter contains enough food energy to power a person for 5-10 hours, but there seems to be more sentience in a system in which the butter powers the person than a system in which the butter sits idle alongside a person who just died, even though both of these systems have the same amount of mass+energy.
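As a sanity check on the butter arithmetic (810 kcal is the standard value for a US stick of butter; the 100 W and 200 W power draws are round-number assumptions for a resting vs. moderately active person):

    # How long can one stick of butter power a person?
    butter_kcal = 810             # one US stick (113 g) at ~7.2 kcal/g
    joules = butter_kcal * 4184   # ~3.4 MJ
    for watts in (100, 200):      # assumed resting vs. active power draw
        hours = joules / watts / 3600
        print(f"{watts} W: {hours:.1f} hours")
    # -> about 9.4 hours at rest and 4.7 hours when active,
    #    in line with the essay's "5-10 hours".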

Acknowledgments

Among many inspirations for this piece were conversations with Joseph Kijewski and Ruairí Donnelly.