Consciousness May Not Require a Brain

By Annaka Harris, New York Times bestselling author of Conscious: A Brief Guide to the Fundamental Mystery of the Mind. She is an editor and consultant for science writers, specializing in neuroscience and physics, and her work has appeared in the New York Times. This article was originally published by the Institute of Art and Ideas.

Our intuitions have been shaped by natural selection to quickly provide life-saving information, and these evolved intuitions can still serve us in modern life. For example, we have the ability to unconsciously perceive elements in our environment in threatening situations that in turn deliver an almost instantaneous assessment of danger — such as the intuition that we shouldn’t get into an elevator with someone, even though we can’t put our finger on why.

But our guts can deceive us as well, and “false intuitions” can arise in any number of ways, especially in domains of understanding — like science and philosophy — that evolution could never have foreseen. An intuition is simply the powerful sense that something is true without having an awareness or understanding of the reasons behind this feeling — it may or may not represent something true about the world.

And when we inspect our intuitions about consciousness itself — how we judge whether or not an organism is conscious — we discover that what once seemed like obvious truths are not so straightforward. I like to begin this exploration with two questions that at first glance appear deceptively simple to answer. Note the responses that first occur to you, and keep them in mind as we explore some typical intuitions and illusions.

  1. In a system that we know has conscious experiences — the human brain — what evidence of consciousness can we detect from the outside?
  2. Is consciousness essential to our behavior?

These two questions overlap in important ways, but it’s informative to address them separately. Consider first that it’s possible for conscious experience to exist without any outward expression at all (at least in a brain). A striking example of this is the neurological condition called locked-in syndrome in which virtually one’s entire body is paralyzed but consciousness is fully intact. This condition was made famous by the late editor-in-chief of French Elle, Jean-Dominique Bauby, who ingeniously devised a way to write about his personal story of being “locked in.” After a stroke left him paralyzed, Bauby retained only the ability to blink his left eye. Amazingly, his caretakers noticed his efforts to communicate using this sole remnant of mobility, and over time they developed a method whereby he could spell out words through a pattern of blinks, thus revealing the full scope of his conscious life. He describes this harrowing experience in his 1997 memoir The Diving Bell and the Butterfly, which he wrote in about two hundred thousand blinks.

Another example of bodily imprisonment is a condition called “anesthesia awareness,” in which a patient anesthetized for a surgical procedure experiences only the paralysis without losing consciousness. People in this condition must live out the nightmare of feeling every aspect of a medical procedure, sometimes as drastic as the removal of an organ, without the ability to move or communicate that they are fully awake and experiencing pain. These examples seem to come straight out of a horror movie, but we can imagine other, less disturbing instances in which a conscious mind might lack a mode of expression — scenarios involving artificial intelligence (AI), for example, in which advanced systems become conscious but have no way of convincingly communicating this to us. But one thing is certain: It’s possible for a vivid experience of consciousness to exist undetected from the outside.

Now let’s go back to the first question and ask ourselves: what might qualify as evidence of consciousness? For the most part, we believe we can determine whether or not an organism is conscious by examining its behavior. Here is a simple assumption most of us make, in line with our intuitions, which we can use as a starting point: People are conscious; plants are not conscious. Most of us feel strongly that this statement is correct, and there are good scientific reasons for believing that it is. We assume that consciousness does not exist in the absence of a brain or a central nervous system. But what evidence or behavior can we observe to support this claim about the relative experience of human beings and plants? Consider the types of behavior we usually attribute to conscious life, such as reacting to physical harm or caring for others. Research reveals that plants do both in complex ways — though, of course, we conclude that they do so without feeling pain or love (i.e. without consciousness). But some behaviors of people and plants are so alike that this similarity in fact poses a challenge to our using certain behavior as evidence of conscious experience.

In his book What a Plant Knows: A Field Guide to the Senses, biologist Daniel Chamovitz describes in fascinating detail how stimulation of a plant (by touch, light, heat, etc.) can cause reactions similar to those in animals under analogous conditions. Plants can sense their environments through touch and can detect many aspects of their surroundings, including temperature, by other modes. It’s actually quite common for plants to react to touch: a vine will increase its rate and direction of growth when it senses an object nearby that it can wrap itself around; and the infamous Venus flytrap can distinguish between heavy rain or strong gusts of wind, which do not cause its blades to close, and the tentative incursions of a nutritious beetle or frog, which will make them snap shut in one-tenth of a second.

Chamovitz explains how the stimulation of a plant cell causes cellular changes that result in an electrical signal — similar to the reaction caused by the stimulation of nerve cells in animals — and “just like in animals, this signal can propagate from cell to cell, and it involves the coordinated function of ion channels including potassium, calcium, calmodulin, and other plant components.”1 He also describes some of the mechanisms shared by plants and animals down to the level of DNA. In his research, Chamovitz discovered which genes are responsible for a plant’s ability to determine whether it’s in the dark or the light, and these genes, it turns out, are also part of human DNA. In animals, these same genes also regulate responses to light and are involved in “the timing of cell division, the axonal growth of neurons, and the proper functioning of the immune system.” Analogous mechanisms exist in plants for detecting sounds, scents, and location, and even for forming memories. In an interview for Scientific American, Chamovitz describes how different types of memory play a role in plant behavior:

‘[I]f memory entails forming the memory (encoding information), retaining the memory (storing information), and recalling the memory (retrieving information), then plants definitely remember. For example a Venus Fly Trap needs to have two of the hairs on its leaves touched by a bug in order to shut, so it remembers that the first one has been touched . . . Wheat seedlings remember that they’ve gone through winter before they start to flower and make seeds. And some stressed plants give rise to progeny that are more resistant to the same stress, a type of transgenerational memory that’s also been recently shown in animals.’2

The ecologist Suzanne Simard conducts research in forest ecology, and her work has produced breakthroughs in our understanding of inter-tree communication. In a 2016 TED Talk, she described the thrill of uncovering the interdependence of two tree species in her research on mycorrhizal networks — elaborate underground networks of fungi that connect individual plants and transfer water, carbon, nitrogen, and other nutrients and minerals. She was studying the levels of carbon in two species of tree, Douglas fir and paper birch, when she discovered that the two species were engaged “in a lively two-way conversation.” In the summer months, when the fir needed more carbon, the birch sent more carbon to the fir; and at other times, when the fir was still growing but the birch needed more carbon because it was leafless, the fir sent more carbon to the birch — revealing that the two species were in fact interdependent. Equally surprising were the results of further research led by Simard in the Canadian National Forest, showing that the Douglas fir “mother trees” were able to distinguish between their own kin and a neighboring stranger’s seedlings. Simard found that the mother trees colonized their kin with bigger mycorrhizal networks, sending them more carbon below ground. The mother trees also “reduced their own root competition to make room for their kids,” and, when injured or dying, sent messages through carbon and other defense signals to their kin seedlings, increasing the seedlings’ resistance to local environmental stresses.3 Conversely, by spreading toxins through these underground fungal networks, plants are also able to harm threatening species. Because of the vast interconnections and functions of these mycorrhizal networks, they have been referred to as “Earth’s natural Internet.”4

Still, we can easily imagine plants exhibiting the behaviors described above without there being something it is like to be a plant, so complex behavior doesn’t necessarily shed light on whether a system is conscious or not. We can probe our intuitions about behavior from another angle by asking, does a system need consciousness to exhibit certain behaviors? For instance, would an advanced robot need to be conscious to give its owner a pat on the back when it witnessed her crying? Most of us would probably say the answer is “Not necessarily.” At least one tech company is creating computerized voices indistinguishable from human ones.5 If we design an AI that one day begins saying things like, “Please stop — it hurts when you do that!” should we take this as evidence of consciousness, or simply of complex programming in which the lights are off? We assume, for example, that an entirely non-conscious algorithm is behind Google’s growing ability to accurately guess what we are searching for, or behind Microsoft Outlook’s ability to make suggestions about whom we might want to cc on our next email. We don’t think our computer is conscious, much less that it cares about us, when it flashes Uncle John’s contact, reminding us to include him in the baby announcement. The software has obviously learned that Uncle John usually gets included in emails to Dad and Cousin Jenny, but we never have the impulse to say, “Hey, thanks — how thoughtful of you!” It’s conceivable, however, that future deep-learning techniques will enable these machines to express seemingly conscious thoughts and emotions (giving them increased powers to manipulate people). The problem is that both conscious and non-conscious states seem to be compatible with any behavior, even behavior associated with emotion, so the behavior itself doesn’t necessarily signal the presence of consciousness.


This article was originally published by the Institute of Art and Ideas and is partially reproduced here without the permission of the author, who is not affiliated with this website or its views.

  1. Daniel Chamovitz, What a Plant Knows: A Field Guide to the Senses (New York: Farrar, Straus & Giroux, 2012), pp. 68–69.
  2. Gareth Cook, “Do Plants Think?” Scientific American, June 5, 2012.
  3. Suzanne Simard, “How Trees Talk to Each Other,” TED Talk, 2016, ted.com/talks/suzanne_simard_how_trees_talk_to_each_other
  4. “Plants Have a Hidden Internet,” BBC Earth, November 11, 2014, bbc.com/earth/story/20141111-plants-have-a-hidden-internet; Paul Stamets, “6 Ways Mushrooms Can Save the World,” TED Talk, ted.com/talks/paul_stamets_on_6_ways_mushrooms_can_save_the_world
  5. Lauren Goode, “How Google’s Eerie Robot Phone Calls Hint at AI’s Future,” Wired, May 8, 2018; Bahar Gholipour, “New AI Tech Can Mimic Any Voice,” Scientific American, May 2, 2017.

