All Too Human
Recently, at a wedding reception, I polled some friends about immortality. Suppose you could upload your brain tomorrow and live forever as a human-machine hybrid, I asked an overeducated couple from San Francisco, parents of two young daughters. Would you do it? The husband, a 42-year-old M.D.-Ph.D., didn’t hesitate before answering yes. His current research, he said, would likely bear fruit over the next several centuries, and he wanted to see what would come of it. “Plus, I want to see what the world is like
10,000 years from now.” The wife, a 39-year-old with an art history doctorate, was also unequivocal. “No way,” she said. “Death is part of life. I want to know what dying is like.”
I wondered if his wife’s decision might give the husband pause, but I diplomatically decided to drop it. Still, the whole thing was more than simply dinner-party fodder. If you believe the claims of some futurists, we’ll sooner or later need to grapple with these types of questions, because we are heading toward a postbiological world in which death is passé—or at least very much under our control.
The most fanciful version of this transcendent future is Ray Kurzweil’s. In his 2005 best-selling book The Singularity Is Near, Kurzweil predicted that artificial intelligence would soon “encompass all human knowledge and proficiency.” Nanoscale brain-scanning technology will ultimately enable “our gradual transfer of our intelligence, personality, and skills to the nonbiological portion of our intelligence.” Meanwhile billions of nanobots inside our bodies will “destroy pathogens, correct DNA errors, eliminate toxins, and perform many other tasks to enhance our physical well-being. As a result, we will be able to live indefinitely without aging.” These nanobots will create “virtual reality from within the nervous system.” Increasingly, we will live in the virtual realm, which will be indistinguishable from that anemic universe we might call “real reality.”
Based on progress in genetics, nanotechnology and robotics and on the exponential rate of technological change, Kurzweil set the date for the singularity—when nonbiological intelligence so far exceeds all human intelligence that there is “a profound and disruptive transformation in human capability”—at 2045. Today a handful of singularitarians still hold to that date, and progress in an aspect of artificial intelligence known as deep learning has only encouraged them.
Most scientists, however, think that any manifestation of our cyborg destiny is much, much farther away. Sebastian Seung, a neuroscientist and artificial intelligence researcher at Princeton University, has argued that uploading the brain may never be possible. Brains are made up of 100 billion neurons, connected by synapses; the entirety of those connections makes up the connectome, which some neuroscientists believe holds the key to our identities. Even by Kurzweilian standards of technological progress, that is a whole lot of connections to map and upload. And the connectome might be only the beginning: neurons can also interact with one another outside of synapses, and such “extrasynaptic interactions” could turn out to be essential to brain function. If so, as Seung argued in his 2012 book Connectome: How the Brain’s Wiring Makes Us Who We Are, a brain upload might also have to include not just every connection, or every neuron, but every atom. The computational power required for that, he wrote, “is completely out of the question unless your remote descendants survive for galactic timescales.”
Still, the very possibility of a cyborg future, however remote or implausible, raises concerns important enough that legitimate philosophers are debating it in earnest. Even if our technology fails to achieve the full Kurzweilian vision, augmentation of our minds and our bodies may take us part of the way there—raising questions about what makes us human.
I ask David Chalmers, a philosopher and co-director of the Center for Mind, Brain and Consciousness at New York University who has written about the best way to upload your brain to preserve your self-identity, whether he expects he will have the opportunity to live forever. Chalmers, who is 50, says he doesn’t think so—but that “absolutely these issues are going to become practical possibilities sometime in the next century or so.”
Ronald Sandler, an environmental ethicist and chair of the department of philosophy and religion at Northeastern University, says talking about our cyborg future “puts a lot of issues in sharp relief. Thinking about the limit case can teach you about the near-term case.”
And, of course, if there is even the remote possibility that those of us alive today might ultimately get to choose between death and immortality as a cyborg, maybe it’s best to start mulling it over now. So putting aside the question of feasibility, it is worth pausing to consider more fundamental questions. Is it desirable? If my brain and my consciousness were uploaded into a cyborg, who would I be? Would I still love my family and friends? Would they still love me? Would I, ultimately, still be human?
One of the issues philosophers think about is how we treat one another. Would we still have the Golden Rule in a posthuman world? A few years ago Sandler co-authored a paper, “Transhumanism, Human Dignity, and Moral Status,” arguing that “enhanced” humans would retain a moral obligation to regular humans. “Even if you become enhanced in some way, you still have to care about me,” is how he puts it. Which seems hard to argue with—and harder still to believe would come to pass.
Other philosophers make a case for “moral enhancement”—using medical or biomedical means to give our principles an upgrade. If we’re going to have massive intelligence and power at our disposal, we need to ensure Dr. Evil won’t be at the controls. Our scientific knowledge “is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process,” philosophers Julian Savulescu and Ingmar Persson wrote recently. “We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.”
In an op-ed this past May in the Washington Post entitled “Soon We’ll Use Science to Make People More Moral,” James Hughes, a bioethicist and associate provost at the University of Massachusetts Boston, argued for moral enhancement, saying it needs to be voluntary rather than coercive. “With the aid of science, we will all be able to discover our own paths to technologically enabled happiness and virtue,” wrote Hughes, who directs the Institute for Ethics and Emerging Technologies, a progressive transhumanist think tank. (For his part, Hughes, 55, a former Buddhist monk, says in our interview that he would like to stay alive long enough to achieve enlightenment.)
There is also the question of how we might treat the planet. Living forever, in whatever capacity, would change our relationship not just to one another but to the world around us. Would it make us more or less concerned about the environment? Would the natural world be better or worse for it?
The singularity, Sandler pointed out to me, describes an end state. To get there will involve a huge amount of technological change, and “nothing changes our relationship with nature more quickly and robustly than technology.” If we are at the point where we can upload human consciousness and move seamlessly between virtual and non-virtual reality, we will already be engineering nearly everything else in significant ways. “By the time the singularity would occur, our relationship with nature would be radically transformed already,” Sandler said.
Although we would like to believe otherwise, in our current mere mortal state we remain hugely dependent on—and vulnerable to—natural systems. But in this future world, those dependencies would change. If we didn’t need to breathe through lungs, why would we care about air pollution? If we didn’t need to grow food, we would become fundamentally disconnected from the land around us.
Similarly, in a world where the real was indistinguishable from the virtual, we might derive as much benefit from digitally created nature as from the great outdoors. Our relationship to nature would be altered. It would no longer be sensory, physical. That shift could have profound impacts on our brains, perhaps even the silicon versions. A growing body of research shows that interacting with nature affects us deeply—for the better. A connection to nature, even at an unconscious level, may be a fundamental quality of being human.
If our dependence on nature falls away, and our physical ability to commune with nature diminishes, then “the basis for environmental concern will shift much more strongly to these responsibilities to nature for its own sake,” Sandler says. Our capacity for solving environmental problems—engineering the climate, say—will be beyond what we can imagine today. But will we still feel that nature has intrinsic value? If so, ecosystems might fare better. If not, other species and the ecosystems they would still rely on might be in trouble.
Our relationship to the environment also depends on the question of timescales. From a geologic perspective, the extinction crisis we are witnessing today might not matter. But it does matter from the timeline of a current human life. How might vastly extended life spans “change the perspective from which we ask questions and think about the nonhuman environment?” Sandler asks. “The timescales really matter to what reasonable answers are.” Will we become more concerned about the environment because we will be around for so long? Or will we care less because we will take a broader, more geologic view?
“It’s almost impossible to imagine what it will be like,” Sandler says, “but we can know that the perspective will be very, very different.”
Talk to experts about this stuff for long enough, and you fall down a rabbit hole; you find yourself having seemingly normal conversations about absurd things. “If there were something like an X-Men gene therapy, where they can shoot lasers out of their eyes or take over your mind,” Hughes says to me at one point, then people who want those traits should have to complete special training and obtain a license.
“Are you using those examples to make a point, or are they actual things you believe are coming?” I ask him. “In terms of how much transhumanists talk about these things, most of us try not to freak out newbies too much,” he replies obliquely. “But once you’re past shock level 4, you can start talking about when we’re all just nanobots.”
When we’re all just nanobots, what will we worry about? Angst, after all, is arguably one of our defining qualities as humans. Does immortality render angst obsolete? If I no longer had to stress about staying healthy, paying the bills, and how I’ll support myself when I’m too old and frail to travel around writing articles, would I still be me? Or would I simply be a placid, overly contented … robot? For that matter, what would I daydream about? Would I lose my ambition, such as it is? I mean, if I live forever, surely that Great American Novel can wait until next century, right?
Would I still be me? Chalmers believes this “is going to become an extremely pressing practical, not just philosophical, question.”
On a gut level, it seems implausible that I would remain myself if my brain was uploaded—even if, as Chalmers has prescribed, I did it neuron by neuron, staying conscious throughout, becoming gradually 1 percent silicon, then 5, then 10 and onward to 100. It’s the old saw about Theseus’s ship—replaced board by board with newer, stronger wood. Is it or isn’t it the same afterward? If it’s not the same, at what point does the balance tip?
“A big problem,” Hughes says, “is you live long enough and you’ll go through so many changes that there’s no longer any meaning to having lived longer. Am I really the same person I was when I was five? If I live for another 5,000 years, am I really the same as now? In the future, we will be able to share our memories, so there will be an erosion of the importance of personal identity and continuity.” That sounds like kind of a drag.
Despite the singularity’s utopian rhetoric, it carries a tinge of fatalism: this is the only route available to us; merge with machines or fade away, or worse. What if I don’t want to become a cyborg? Kurzweil might say that it’s only my currently flawed and limited biological brain that prevents me from seeing the true allure and potential of this future. And that the choices available to me (any type of body, any experience in virtual reality, limitless possibilities for creative expression, the chance to colonize space) will make my current biological existence seem almost comically trivial. And anyway, what’s more fatalistic than certain death?
Nevertheless, I really like being human. I like knowing that I’m fundamentally made of the same stuff as all the other life on Earth. I’m even sort of attached to my human frailty. I like being warm and cuddly and not hard and indestructible like some action-film super-robot. I like the warm blood that runs through my veins, and I’m not sure I really want it replaced by nanobots.
Some ethicists argue that human happiness relies on the fact that our lives are fleeting, that we are vulnerable, interdependent creatures. How, in a human-machine future, would we find value and meaning in life?
"To me, the essence of being human is not our limitations...it's our ability to reach beyond our limitations," Kurzweil writes. It's an appealing point of view. Death has always fundamentally been one of those limitations, so perhaps reaching beyond death makes us deeply human?
But once we transcend it, I’m not convinced our humanity remains. Death itself doesn’t define us, of course (all living things die), but our awareness and understanding of death, and our quest to make meaning of life in the interim, are surely part of the human spirit.
September 2016, Scientific American.