The Philosophy of ‘Her’

By Susan Schneider, originally published in The New York Times.

Set in the not-too-distant future, Spike Jonze’s film “Her” explores the romantic relationship between Samantha, a computer program, and Theodore Twombly, a human being. Though Samantha is not human, she feels the pangs of heartbreak, intermittently longs for a body and is bewildered by her own evolution. She has a rich inner life, complete with experiences and sensations.

“Her” raises two questions that have long preoccupied philosophers. Are nonbiological creatures like Samantha capable of consciousness — at least in theory, if not yet in practice? And if so, does that mean that we humans might one day be able to upload our own minds to computers, perhaps to join Samantha in being untethered from “a body that’s inevitably going to die”?

This is not mere speculation. The Future of Humanity Institute at Oxford University has released a report on the technological requirements for uploading a mind to a machine. A Defense Department agency has funded a program, Synapse, that is trying to develop a computer that resembles a brain in form and function. The futurist Ray Kurzweil, now a director of engineering at Google, has even discussed the potential advantages of forming friendships, “Her”-style, with personalized artificial intelligence systems. He and others contend that we are fast approaching the “technological singularity,” a point at which artificial intelligence, or A.I., surpasses human intelligence, with unpredictable consequences for civilization and human nature.

Is all of this really possible? Not everyone thinks so. Some people argue that the capacity to be conscious is unique to biological organisms, so that even superintelligent A.I. programs would be devoid of conscious experience. If this view is correct, then a relationship between a human being and a program like Samantha, however intelligent she might be, would be hopelessly one-sided. Moreover, few humans would want to join Samantha, for to upload your brain to a computer would be to forfeit your consciousness.

This view, however, has been steadily losing ground. Its opponents point out that our best empirical theory of the brain holds that it is an information-processing system and that all mental functions are computations. If this is right, then creatures like Samantha can be conscious, for they have the same kind of minds as ours: computational ones. Just as a phone call and a smoke signal can convey the same information, thought can have both silicon- and carbon-based substrates. Indeed, scientists have produced silicon-based artificial neurons that can exchange information with real neurons. The neural code increasingly seems to be a computational one.

You might worry that we could never be certain that programs like Samantha were conscious. This concern is akin to the longstanding philosophical conundrum known as the “problem of other minds.” The problem is that although you can know that you yourself are conscious, you cannot know for sure that other people are. You might, after all, be witnessing behavior with no accompanying conscious component.

In the face of the problem of other minds, all you can do is note that other people have brains that are structurally similar to your own and conclude that since you yourself are conscious, others are likely to be conscious as well. When confronted with a high-level A.I. program like Samantha, your predicament wouldn’t be all that different, especially if that program had been engineered to work like the human brain. While we couldn’t be certain that an A.I. program genuinely felt anything, we can’t be certain that other humans do, either. But it would seem probable in both cases.

If the Samanthas of the future will have inner lives like ours, however, I suspect that we will not be able to upload ourselves to computers to join them in the digital universe. To see why, imagine that Theodore wants to upload himself. Imagine, furthermore, that uploading involves (a) scanning a human brain in such exacting detail that it destroys the original and (b) creating a software model that thinks and behaves in precisely the same way as the original did. If Theodore were to undergo this procedure, would he succeed in transferring himself into the digital realm? Or would he, as I suspect, succeed only in killing himself, leaving behind a computational copy of his mind — one that, adding insult to injury, would date his girlfriend?

Ordinary physical objects follow a continuous path through space over time. For Theodore to transfer his mind into a computer program, however, his mind would not follow a continuous trajectory. His brain would be destroyed when the scan was made, and the information about his precise brain configuration would be sent to a computer, which could be miles away.

Furthermore, if Theodore were to truly upload his mind (as opposed to merely copy its contents), then he could be downloaded to multiple other computers. Suppose that there are five such downloads: Which one is the real Theodore? It is hard to provide a nonarbitrary answer. Could all of the downloads be Theodore? This seems bizarre: As a rule, physical objects and living things do not occupy multiple locations at once. It is far more likely that none of the downloads are Theodore, and that he did not upload in the first place.

Worse yet, imagine that the scanning procedure doesn’t destroy Theodore’s brain, so the original Theodore survives. If he survives the scan, why conclude that his consciousness has transferred to the computer? It should still reside in his brain. But if you believe that his mind doesn’t transfer if his brain isn’t destroyed, then why believe that his mind does transfer if his brain is destroyed?

It is here that we press up against the boundaries of the digital universe. It seems there is a categorical divide between humans and programs: Humans cannot upload themselves to the digital universe; they can upload only copies of themselves — copies that may themselves be conscious beings.

Does this mean that uploading projects should be scrapped? I don’t think so, for uploading technology can benefit our species. A global catastrophe may make the world inhospitable to biological life forms, and uploading may be the only way to preserve the human way of life and thinking, if not actual humans themselves. And uploading could facilitate the development of brain therapies and enhancements that can benefit humans and nonhuman animals. Furthermore, uploading may give rise to a form of superintelligent A.I., and an A.I. that is descended from us may have a greater chance of being benevolent toward us.

Finally, some humans will understandably want digital backups of themselves. What if you found out that you were going to die soon? A desire to create a backup copy of yourself might outweigh your desire to spend a few more days on the planet. Or you might wish to leave a copy of yourself to communicate with your children or complete projects that you care about. Indeed, the Samanthas of the future might be uploaded copies of deceased humans we have loved deeply. Or perhaps our best friends will be copies of ourselves, but tweaked in ways we find insightful.



6 Responses to The Philosophy of ‘Her’

  1. Although this article takes the empirical position that consciousness has a physical origin in the brain/mind, I find it interesting that the “problem of other minds” is being extended to technology.

  2. The Vedic literature tells us that there is life even on the Sun. How could it be biological life in the earthly sense? So life is not limited to carbon-based forms. Modern computers and robotics are only two decades old. What will they become in 200 years? It is hard to fathom. I would recommend watching the movie Artificial Intelligence (released in 2001) to all who are interested in this subject matter.
    Why would a soul be able to function in this world through a biological body but not through a very sophisticated mechanical/electronic body?

    • Kula-pavana, it is interesting to think that science may one day create a body that could be inhabited by a jiva. However, I think the hard problem of consciousness would remain a problem. After all, the problem describes our inability to be empirically certain that anything has consciousness aside from ourselves!

      • The hard problem is actually a challenge to the idea of a priori or reductive physicalism – i.e., the assumption that consciousness is identical to the brain/neurons or whatever. If we are to accept this idea, then the reductive physicalist needs to show a conceptual identity between neurons/brain/whatever and consciousness such that one can’t exist without the other.

        If reductive physicalism is true, then conscious AI will be impossible, since this view precludes consciousness arising on different substrates (e.g., silicon). This is one reason why reductive physicalism is now out of favour.

        The article discusses the more popular idea of non-reductive physicalism, which says that consciousness “supervenes” on the physical. Supervenience means we can’t have a change in consciousness without a corresponding change in the physical substrate. If this is true, it would mean we could have different substrates that produce consciousness, just as we can build our computers out of different physical materials and still run software on them.

        But the issue of not being certain about the existence of other minds is only a point of epistemology in the debate – that is, how we know about our own mind and therefore infer the existence of other minds. This is part of the problem of explaining consciousness, but it is not really the issue the hard problem addresses.

        The author of this article assumes we infer consciousness in other creatures by studying their brains, but this isn’t really what we do. We generally infer it from behaviour. If a creature is injured and winces, cries, etc., we can reasonably assume it feels pain. We don’t need to be certain of this; it just becomes the most reasonable inference.

      • Since consciousness is an observable reality in us and others, I’m not sure I would think of it as a ‘hard problem’ to explain or define. I’m also pretty sure that you can be empirically certain that someone else has consciousness – all it takes is a bit of interaction with our own consciousness, like conversation, observation of body language, and physical reactions to outside effects such as heat, cold, and pinching.
        We can even easily determine whether an animal is conscious or not – that is why veterinarians do not perform invasive surgery on fully conscious animals, but give them anesthetics to make them unconscious. When your dog is conscious, you can easily see that he is happy to see you when you get home.

  3. Then the question becomes: can the subtle body, which includes the mind et al., be transferred to an artificial ‘yantra’? The technical questions may be addressed with enough time and technological prowess, but why would anybody need to go through that methodology? By understanding the subtle body and/or going through the aṣṭāṇga-yoga system, one can enter many other forms of bodies from around the universe.

    Then there is this, “A global catastrophe may make the world inhospitable to biological life forms, and uploading may be the only way to preserve the human way of life and thinking, if not actual humans themselves.”

    This article raises the question of whether there is any need to preserve the human species. If we are all just ‘biological-brain beings’ that just happened to appear by chance, then why is there a need to save the human mind, or the genetic sequence for that matter? There is no meaning in being aware or not. The philosophical and existential enigma of existence/consciousness has no bearing in this worldview. Why struggle? At least a real Buddhist will agree that suffering is worth trying to extinguish.
