These are excerpts from an article by Bill Joy, a famous and influential computer scientist. The article's main theme is that technologies now being developed - genetics, nanotechnology, and self-replicating robots - carry at least the danger, and perhaps the overwhelming likelihood, of wiping out human beings. (The paragraphs are selections widely separated in the text, so don't be surprised by the abrupt transitions.)
The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.
Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.
I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.
Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science's quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.
Given the incredible power of these new technologies, shouldn't we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn't we proceed with great caution?
But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.
We should have learned a lesson from the making of the first atomic bomb and the resulting arms race. We didn't do well then, and the parallels to our current situation are troubling...
It's important to realize how shocked the physicists were in the aftermath of the bombing of Hiroshima, on August 6, 1945. They describe a series of waves of emotion: first, a sense of fulfillment that the bomb worked, then horror at all the people that had been killed, and then a convincing feeling that on no account should another bomb be dropped. Yet of course another bomb was dropped, on Nagasaki, only three days after the bombing of Hiroshima.
In November 1945, three months after the atomic bombings, Oppenheimer stood firmly behind the scientific attitude, saying, "It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences."
What is striking is how this effort continued so naturally after the initial impetus was removed. In a meeting shortly after V-E Day with some physicists who felt that perhaps the effort should stop, Oppenheimer argued to continue. His stated reason seems a bit strange: not because of the fear of large casualties from an invasion of Japan, but because the United Nations, which was soon to be formed, should have foreknowledge of atomic weapons. A more likely reason the project continued is the momentum that had built up - the first atomic test, Trinity, was nearly at hand.
As Thoreau said, "We do not ride on the railroad; it rides upon us"; and this is what we must fight, in our time. The question is, indeed, Which is to be master? Will we survive our technologies?
And yet I believe we do have a strong and solid basis for hope. Our attempts to deal with weapons of mass destruction in the last century provide a shining example of relinquishment for us to consider: the unilateral US abandonment, without preconditions, of the development of biological weapons. This relinquishment stemmed from the realization that while it would take an enormous effort to create these terrible weapons, they could from then on easily be duplicated and fall into the hands of rogue nations or terrorist groups.
This is the introduction to the Letters to the Editor section of the magazine that printed Joy's article, followed by a few of the letters.
Bill Joy, cofounder and chief scientist of Sun Microsystems, is one of the most influential figures in computing: a pioneer of the Internet, godfather of Unix, architect of software systems such as Java and Jini. That a scientist of his stature had chosen to confront with such candor the threats accompanying the benefits of 21st-century technologies made for more than a slew of headlines. It sparked a dialogue that has already been joined by business and technology leaders, members of Congress and President Clinton's inner circle, Nobelists and theologians, educators, artists, and schoolchildren.
What Bill Joy started in Wired's pages is one of the most essential conversations of this new century. On the Net and in the policy arena, in universities and communities large and small, the debate he provoked has been vigorous - and occasionally contentious. Which is as it should be.
Alex Vella, applications engineer, Protocol: Bill Joy has created the most profound, thought-provoking document in recent history. If this isn't Nobel Prize thinking, then there isn't any meaning in the Nobel Prize. The reading of "Why the Future Doesn't Need Us" should be made mandatory in universities throughout the world.
Doug Ellis, communications director, Aspen Research Group: More than fear, I felt inspired that one of our country's greatest minds was sounding the alarm, and drawing on inspiration from as far afield as Henry David Thoreau and the Dalai Lama in the process. To me, it was a harbinger of the future synthesis of science and spirit.
Ed Lazowska, chair of computer science, University of Washington: Extraordinary article.
Scott McNealy, CEO, Sun Microsystems: Bill Joy has helped create some of the top networking technologies on the planet. He has made a habit of predicting and inventing the future. Nearly 20 years before almost anyone else, he knew the world would be built around the Internet. Given his track record, maybe we all need to spend more time thinking about the issues he addressed in Wired.