Digital Soul

Intelligent Machines and Human Values

By Thomas Georges


Should the day come when intelligent machines not only make computations but also think and experience emotions as humans do, how will we distinguish the “human” from the “machine”? This introduction to artificial intelligence — and to its potentially profound social, moral, and ethical implications — is designed for readers with little or no technical background. In accessible, focused, engaging discussions, physicist and award-winning science writer Thomas Georges explores the fundamental issues: What is consciousness? Can computers be conscious? If machines could think and even feel, would they then be entitled to “human” rights? Will machines and people merge into a biomechanical race? Should we worry that super-intelligent machines might take over the world? Even now we continue to put increasingly sophisticated machines in control of critical aspects of our lives in ways that may hold unforeseen consequences for the human race. Digital Soul challenges all of us, before it’s too late, to think carefully and rationally about the kind of world we will want to live in — with intelligent machines ever closer by our sides.

Excerpt

Preface
Most people have a short list of books that they discovered at pivotal points in their life and that changed the way they think in crucial ways. In a sense, this book began in the 1960s, when I picked up a copy of Dean Wooldridge's book The Machinery of the Brain.1 In this and his two succeeding books, The Machinery of Life and Mechanical Man: The Physical Basis of Intelligent Life, Wooldridge laid out an astounding idea: that human beings function entirely according to the laws of physics.2 This idea was both amazing and disturbing to me because it upset a lifetime of Catholic doctrine about an immortal soul and the spiritual nature of the human mind. In place of mysticism, it offered the possibility that the whole of human experience is not only understandable but reproducible. Could it possibly be true? I had to find out more.
My curiosity led me to books such as Huston Smith's The Religions of Man, B. F. Skinner's Beyond Freedom and Dignity, Edward O. Wilson's On Human Nature, Pamela McCorduck's Machines Who Think, Douglas Hofstadter's Gödel, Escher, Bach, and Robert Wright's The Moral Animal, as well as the 1988 Public Broadcasting Service series Joseph Campbell and the Power of Myth.3
I found out that the idea of man as a mechanism is an ancient one, but that this idea has been eclipsed by mystical and spiritual views of man promoted by the world's religions. I learned that science is discovering that the most complex human behaviors, including ethical and moral reasoning, are rooted in basic biological imperatives and that brain research is revealing more and more mental functions, like emotions and consciousness, as the workings of a wonderfully complex information processor.
Adding to the evidence for physical explanations of intelligent behavior are the great strides that have been made since the 1950s toward developing machines with artificial intelligence, or AI. As a result, machines that can learn are taking over more and more functions that we have always assumed to be uniquely human, such as playing championship chess. Given the present rate of growth of computing power, some scientists seriously predict that machines will become more intelligent and aware than we are in just thirty to forty years—outperforming humans in every important way, and perhaps even forming a global brain!
What are we to make of all this? Although we are still far from creating an artificial human being, it might be wise to start thinking about how future intelligent machines could change the way we see ourselves and even alter humankind's future. A world populated with intelligent artifacts that have minds and feelings of their own raises many tricky questions, such as these:
• How would humans react on discovering that the club of sentient beings is not as exclusive as they thought?
• How would that knowledge change our moral and ethical values?
• How would it affect our notions of freedom and dignity?
• How would it affect our beliefs in God?
• Can computers be conscious?
• Can they have emotions?
• If so, what are their rights and responsibilities?
• If we make something that is indistinguishable from a person, should we treat it like one?
• Should we be worried that superintelligent machines will somehow take over the world?
• Are intelligent machines the next step in evolution?
• Will humans and machines somehow merge?
• Will humans become extinct? Or immortal?
• And most important . . . How much control do we have over the process?
This book is not for experts in computers or artificial intelligence. My audience is ordinary people who are curious enough to ask questions like these, and who want to be able to make informed decisions about the course of AI's development. It is even for those who think that all this is rubbish—that no matter how "smart" machines get, they will never truly think, have a soul, or be self-aware in the same sense we are. Whatever your leanings, it should do no harm to explore, poke around, ask questions, and try to find out what makes machines so smart, what their inherent limitations might be, and where the boundary between human and artificial intelligence might lie. If these questions make you vaguely uncomfortable, it may be because they challenge the very foundations of all our social, legal, and religious institutions.
Because there are already lots of books about the mechanics of AI, as well as works of fiction and nonfiction that speculate about future worlds inhabited by thinking machines, it is reasonable to ask what new territory I hope to cover here. We will explore an unfamiliar land along the boundary between science and human values. There, we will seek out the logical structure of human feelings, consciousness, and morality that would make it possible for machines to possess them as well. Then we will delve into the social, moral, ethical, and religious consequences of creating thinking and feeling artifacts.
What sort of consequences? As any science-fiction fan knows, the big problem with intelligent machines, going back as far as Frankenstein, is loss of control. The public probably had its first taste of the moral and ethical aspects of computer behavior in Stanley Kubrick's 1968 movie 2001: A Space Odyssey. The HAL 9000 computer in charge of running the Discovery spaceship suddenly turned on its crew, murdering all but one. This sort of "psychotic" behavior should come as no surprise when machines designed to be our servants become so complex that we can no longer understand how they work. If we put them in charge of critical aspects of our lives, then what is to stop them from taking over completely and pursuing goals that we can barely comprehend? And if they did so, how would superintelligent machines ultimately regard lowly humans? Would they even stoop to communicate with us?
How could we avoid such a grim future? Only by knowing what our choices are and by carefully thinking about the kind of future we want to live in can we hope to influence the science and technology policies that will take us there.
 
I wish to thank my friends and colleagues who let me pester them with my strange questions and who shared their insights with me. I also thank those who read the manuscript and made valuable suggestions, particularly my wife, Julianne Cassady; Bob and Janet Evans; Jay Palmer; Patricia Boyd; and Marilynn Breithart.



1
Artificial Intelligence—That's the Fake Kind, Right?

Pay no attention to that man behind the curtain!
THE WIZARD OF OZ

Just about everyone has an opinion about the prospects of creating artificial intelligence and artificial life. These opinions range from "Are you crazy?" to "Why not?" Many regard the idea as blasphemous and accuse scientists of playing God. Others confine it to the realm of science fiction—entertaining but not to be taken seriously. Still others think it can be done but question whether it should be done. Most people believe that there is something intangible or spiritual about the way that minds work that can never be captured in silicon circuitry. The very term artificial intelligence suggests that machine intelligence will always fall short of real intelligence. Whatever your beliefs, you probably hold them with religious conviction. The mere mention of machines that are conscious, have feelings, or could have rights usually generates such heat and emotion as to preclude rational debate.
Why does the idea of AI stir up such emotion? Some people get nervous about AI when their egos won't allow them to recognize any kind of intelligence except the human kind. Their idea of human dignity depends on a natural superiority over all other creatures (and even over some other humans). Recent movies like AI and Bicentennial Man reinforce the idea that, although we might someday create machines that act human in many respects, they will always lack (and secretly long for) that intangible quality that would make them truly human.
Others might be called carbon chauvinists. They scoff at AI because they believe that only living things made from flesh and blood can exhibit intelligence. When they look at the "smart" machines of today, they seize upon their lapses and say, "See, I told you that a mere machine can never be as smart as a living creature."
Such emotions are hard to set aside, especially since they form the basis for many of our moral and ethical beliefs. Yet they also form a barrier to rational scientific inquiry. To investigate objectively whether AI is really possible, we must put aside the insecurities that lie at the root of such feelings and keep an open and curious mind. Does this mean that we have to check our dignity at the door if we want to ask questions like those posed in the preface? If we derive our dignity from a natural superiority over others, then the answer is probably yes. But this is the kind of dignity that we lost when Copernicus displaced Earth from the center of the universe, and when Darwin suggested that humans evolved from lower life-forms. Intellectual honesty is incompatible with a need to be at the center of the universe. It also requires us to ask questions without fear of where the answers may lead us.
The questions we want to ask challenge religious traditions as well as New Age thinking. Both assert that the human mind—soul, spirit, what have you—lies beyond the limits of scientific examination. If you ask why it can't be studied scientifically, they say, "It just can't!" "There are just some things that you can't analyze!" and "Subjective experience lies beyond the reach of scientific inquiry!" Besides begging the question, such answers all say essentially the same thing: there are certain questions that shouldn't be asked. Whenever someone doesn't want you poking into something, it's generally because the answers would threaten cherished beliefs or entrenched power structures. That, at least, is an honest reason. But there's a big difference between saying that something can't be studied and saying that it shouldn't be.
We will proceed in the belief that ignorance doesn't solve any problem, that mysticism explains nothing, and that there is nothing that can't be studied. We won't find all the answers, but this will not stop us from raising the questions.

Catching Up

The emergence of electronic computers since the 1950s has prompted an explosion of books and articles about "thinking machines" and their relation (if any) to the workings of the human mind.1 But speculation and debate about the mechanical nature of the human mind are not new. They've been raging since the days of Aristotle and Plato. Automata and thinking machines of all sorts permeate human history.2 The Egyptians had mechanical statues of gods that spoke, gestured, and prophesied. Today, no respectable science-fiction epic is complete without robots like R2-D2 and C-3PO of Star Wars and superintelligent computers like HAL in 2001: A Space Odyssey. Machines that think and act like humans are firmly established in our mythology.
How long will it take our science and technology to catch up with those fantasies? It may happen gradually or suddenly, in twenty years or two hundred. It may be happening already. Intelligent machines may assume a form more or less as we imagine them now, or one entirely unexpected. In any case, we have already set off down that road, and there seem to be no exits. We are headed inexorably toward a future with intelligent machines doing more and more things that we now think of as uniquely human. Is it just a matter of time before there are machines that outperform humans in every important way?
Technology always perturbs our values and moral codes—the set of rules that guide the way we interact with each other. Fire, the wheel, the printing press, atomic energy, and genetic engineering—each made some of those rules obsolete. It is just as foolish to think that our moral and ethical codes are written in stone as it is to imagine that knowledge of any of these technologies could somehow be suppressed or ignored. Citing our abuses of atomic energy, some say that we shouldn't even be thinking about intelligent machines, because we aren't morally and ethically ready to deal with the consequences of such an inquiry. And they would be right. We never are. But this is no reason to suppress scientific knowledge. Besides, even if we wanted to stop the development of intelligent artifacts, we couldn't. It's one of those ideas "whose time has come." But does this mean that we are powerless? That we have to watch passively as we become slaves to our technology? Of course not. It just means that once again we will be forced to rethink and overhaul outdated values and moral codes. This time, however, the changes will be so massive that they will upset the very way we think about ourselves.

Flawed Tools

We are handicapped in our inquiry by the tools available for describing our subject. Our very language of subjective experience, which is built on the notion of having a self, is full of loaded words that constrain and muddy our thinking. The pronouns I and you create images of autonomous agents. Linguistic traditions force us to think of body and mind as separate and distinct entities. Everyday notions like free will and moral responsibility contain underlying contradictions. Language also uses definitions and forms of the verb to be in ways that force us to think of classes of things as clearly defined (Is a fetus a human being or not?), when in fact every classification scheme has fuzzy boundaries and continuous gradations. In this book, I will argue that the distinction between "artificial" and "real" intelligence is merely a linguistic trap to be avoided.
One widely used tool for understanding intelligence is called introspection. At first, it seems to offer insights into mental processes by examining how they "feel" to us. This tool is flawed because the observing instrument itself is a part of what we are trying to explain. Such an instrument would most likely fool itself. What we are "conscious" or "aware" of at any given moment tells us nothing at all about how consciousness works. The feeling of consciousness, though "subjectively obvious," defies detection and analysis by conventional "objective" means, except that we know that it is strongly correlated with electrical activity in a certain part of the brain stem. The same can be said for feelings that we subjectively experience as anger, love, happiness, grief, intuition, or creativity. These labels could well be comfortable metaphors for complex genetic programs that create illusions, like the Wizard of Oz, for their own procreative purposes.
Another tool, used by brain researchers, maps the parts of the brain that are electrically or metabolically active when we feel certain emotions. Such maps, in their present state, are useful in diagnosing brain damage, but they are not the same as explaining how emotions and consciousness work, any more than an array of flashing lights on a computer panel tells how information is being processed inside.
How shall we proceed, then, in the absence of a suitable, objective vocabulary? We will muddle through with the everyday language of subjective experience, but with the implicit understanding that when we use a word like anger or happiness, we mean, not a thing, but "the mental state that we subjectively experience as anger or happiness."
My "explanations" of these experiences will consist mostly of plausible mechanical analogies. These analogies may or may not model how the brain actually produces these experiences, but I do not regard this as important. What is important is that a plausible mechanical model can be constructed.
Once a logical structure for our own thinking is spelled out in detail, creating a physical realization of it by reverse engineering would seem just a problem in technology. But the logical structure of our own thinking patterns is as opaque to us as ever! Early optimism about understanding human thought from an information-processing point of view has been replaced with more restrained language. We are beginning to understand how deeply intertwined our thought patterns are with the accidents of our biological and social evolution and our environmental experiences. We may find it impractical, even undesirable, to replicate in detail all the human mental functions that nature took eons to develop. To be sure, many of nature's steps along the evolutionary trail are responsible for behaviors that cause us a great deal of trouble.
The study of intelligent behavior will therefore likely diverge along two different paths. One will explore the origins and structure of human thinking patterns from the point of view of the environment in which they evolved and the genetic purpose they served; this path has been called evolutionary psychology. The other will pursue the evolution of machine intelligences less and less like our own, with utterly different awareness, priorities, and goals. As James Hogan put it in his book Mind Matters, "Evolution produced flying things that have feathers and chirp, but engineering a flying machine gets you a 747."3

Losing Control

The consequences of an explosion of machine intelligence reach into every nook and cranny of our social fabric. Even today, the evening news regularly reports mishaps caused by the failure of complex technologies. A preview of the kind of social problem that "intelligent" computing systems pose has been called the software crisis: the large and complex computer programs that control and monitor so many facets of our lives are less and less comprehensible to humans. (The Windows 2000 operating system is said to contain 40 million lines of code.) The adaptive and self-modifying ("intelligent") programs in our future will be even less comprehensible and will produce less predictable results.
The potential for widespread social and economic disruption precipitated by failures of critical computer systems can hardly be overstated. When a NORAD (North American Air Defense Command) computer mistook the rising moon for a flight of attacking Soviet missiles, nuclear war could have been the result! The failure of software designers to foresee the arrival of the year 2000 prompted a panic to avert disruption of critical systems when the new millennium arrived. To avoid having to understand and fix the offending programs, many users simply replaced them.
Such catastrophes are the natural result of underestimating the chances that our machines can fail. One "obvious" solution is more care and discipline in engineering, testing, and maintaining programs, but that is not enough. It will also be essential to build into complex programs more intelligent self-monitoring and self-correcting capabilities. We will see that such capabilities are the ingredients of a rudimentary kind of consciousness. Indeed, our own consciousness can probably be traced to the survival value of such capabilities.
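To give a flavor of what such self-monitoring might look like in practice, here is a minimal sketch in Python; the speed computation, the plausibility bound, and the fallback value are invented for illustration, not taken from any real system. The program checks its own result against an invariant and substitutes a safe value instead of passing an implausible one downstream.

    # A minimal self-monitoring sketch: the program sanity-checks its own
    # output instead of blindly trusting the computation. The task, the
    # plausibility bound, and the fallback are invented for illustration.

    def compute_speed(distance_m, time_s):
        return distance_m / time_s

    def monitored_speed(distance_m, time_s, max_plausible=340.0):
        """Run the computation, then check the result against an invariant."""
        try:
            speed = compute_speed(distance_m, time_s)
        except ZeroDivisionError:
            return 0.0, "corrected: zero time interval"
        if not 0.0 <= speed <= max_plausible:
            # Self-correction: reject an implausible reading rather than
            # letting systems downstream act on it.
            return 0.0, f"corrected: implausible speed {speed:.1f} m/s"
        return speed, "ok"

    for d, t in [(100.0, 9.58), (100.0, 0.0), (1e6, 1.0)]:
        speed, status = monitored_speed(d, t)
        print(f"d={d} m, t={t} s -> {speed:.2f} m/s ({status})")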
Perhaps we have very little control, on the average, over the course of evolution of intelligent machines—no more control than fourteenth-century Europeans had over the progress of the Black Death. If the victims of the bubonic plague had understood sanitation and the ways that pathogens are transmitted, the effects of the plague would have been minimized. But they didn't understand—any more than we understand today about how rapidly and inevitably intelligence evolves and takes on new and unpredictable forms. Creating machines that we cannot completely control, and then putting them in charge of critical aspects of our lives, poses risks whose consequences we may not have the luxury of contemplating afterward.

Survival

There is still time for clear, careful, rational thinking about where we are going, where we want to go, what we have to gain, and at what cost. As with human cloning, we are inclined to ask such questions only when the technologies are upon us—too late to deal with them effectively. So the sooner we understand, the more control we are likely to have. But if we fail to understand and adapt to the new world in which we are about to find ourselves, we may face extinction, or at least assimilation.4
Our place in that world will be largely determined by the choices we make, both as individuals and as a species. To survive our technological adolescence and to preserve even a facade of human dignity, we may have to lose some of our self-destructive evolutionary baggage. Before we can learn to live with intelligent machines, we may first have to learn how to live with each other. If we continue dealing with each other and with twenty-first-century technology, equipped only with rigid moral, ethical, and religious beliefs that haven't changed significantly in most of the world since the Middle Ages, then the machines will surely triumph. Instead of asking what we will do with intelligent machines, we may well be asking what they will do with us. If we're lucky, they may keep us as pets!
Is there anything we can do to keep this creepy scenario from playing out? Calls for global awareness of the problem, and for global agreement to regulate scientific inquiry into certain fields, do not paint an optimistic picture. Only in the last century have we developed technologies that could wipe out civilization, so we have not had much practice dealing with global-scale crises. Now we are talking about turning the scientific paradigm on its head! Yet, as odd as it may sound, there are still steps that we as individuals can take that may have some collective effect.



2
What Makes Computers So Smart?

The way in which information is stored is of no importance. All that matters is the information itself.
ARTHUR C. CLARKE

Once upon a time, the word computer described a human being who did arithmetic for a living. In those days, everyone agreed that computing required intelligence. Today, we are not so sure. Our electronic computers are vastly more capable than their human namesakes—performing such advanced cognitive tasks that we commonly call them smart machines and electronic brains. Yet, in a linguistic quirk, most of us still refuse to call them intelligent. Although our electronic machines seem to be getting smarter and smarter, we reserve the term intelligent for the shrinking number of tasks that only human beings perform.
How is it that "mere machines" can perform tasks so complex that we compare them with human brains? What goes on inside their silicon circuitry that seems so much like thinking? Are we ourselves merely organic machines executing programs coded in our genes and shaped by our environment? Is human thought just a fancy kind of computing? If so, will machines eventually get so smart that they will be able to do anything a person can do? If not, where is the gap that no machine will ever be able to bridge? Are there any limits to what machines whose capacities are not confined by organic vessels might be able to do?

In Our Image

The main reason we call some machines smart is that we've created them in our own image, that is, to perform the same kinds of mental tasks that we do—only faster and more accurately. Early humans first created mechanical computers (like Stonehenge) to keep track of the time, the seasons, the positions of heavenly bodies, and the rise and fall of the tides. These machines served as a kind of memory aid that followed the cycles of nature, helping the first farmers decide when to plant and the first sailors navigate the seas. After the invention of currency, mechanical calculators like the abacus made arithmetic and trade easier and faster. A third human function that machines help us with is making decisions. Make decisions? Since when can machines make decisions? The first crude examples may have been tossing a coin or throwing dice to let supposedly random forces (or gods) decide future courses of action. More sophisticated devices, like a tail on a kite, a governor on a steam engine, or the thermostat on your furnace, make second-to-second decisions that automatically regulate and stabilize machines so that humans don't have to continuously adjust them.
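To make that kind of machine "decision" concrete, here is a minimal thermostat sketch in Python; the temperatures, setpoint, and hysteresis band are invented for illustration. The rule compares each measurement against a target and switches the furnace on or off—exactly the sort of second-to-second regulation that spares a human from continuous adjustment.

    # A minimal thermostat sketch: a feedback rule that repeatedly compares
    # a measurement against a setpoint and "decides" whether to heat.
    # All values here are invented for illustration.

    def thermostat_step(current_temp, setpoint, heating, hysteresis=0.5):
        """Return True if the furnace should run during the next interval."""
        if current_temp < setpoint - hysteresis:
            return True       # too cold: turn (or keep) the furnace on
        if current_temp > setpoint + hysteresis:
            return False      # too warm: turn (or keep) the furnace off
        return heating        # within the comfort band: leave it alone

    heating = False
    for temp in [19.2, 19.4, 20.1, 20.8, 21.3, 20.9]:
        heating = thermostat_step(temp, setpoint=20.5, heating=heating)
        print(f"{temp:4.1f} C -> furnace {'ON' if heating else 'OFF'}")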
So mechanisms have been around for centuries to help people remember, calculate, and make decisions. If you think about it, you will recognize that most computers in use today still serve these three basic functions, just in more elaborate combinations.
Today, we continue to develop machines to replicate more advanced human abilities—sensors that interact with their surroundings, speech, vision, language, and locomotion—even the ability to learn from their experiences to better achieve their goals. For example, many kinds of robotic pets now commercially available interact with their world, make seemingly autonomous decisions about their next actions, and communicate with people in ways that seem to express emotions like joy, anger, and anxiety.
Other learning machines emulate the part of human behavior that amasses knowledge and keeps track of the relations between things. These expert systems often know more about their specialty than do their human counterparts. Expert systems already compile and extend the experience of the world's specialists by dispensing medical diagnoses, legal analyses, and psychological advice, and by forecasting the weather, locating natural resources, and designing vehicles, structures—and even new computers!
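As a rough illustration of the machinery inside classic rule-based expert systems, here is a toy sketch in Python; the rules and facts are invented, and real systems encode thousands of rules elicited from human specialists. The engine simply applies if-then rules over a set of known facts until no rule adds a new conclusion (forward chaining).

    # A toy forward-chaining inference engine in the spirit of classic
    # expert systems. The rules and facts are invented for illustration.

    RULES = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "see_doctor"),
        ({"rash"}, "allergy_suspected"),
    ]

    def infer(facts):
        """Fire rules repeatedly until no new conclusion appears."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # the rule fires; its conclusion becomes a fact
                    changed = True
        return facts

    print(infer({"fever", "cough", "short_of_breath"}))
    # -> includes "flu_suspected" and then "see_doctor"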
Consequently, our efforts to supplement and enhance human intelligence have already paid off with machines that actually exceed human thinking capacity in many specialized areas: calculating, decision making, remembering, learning, creating, making plans, pursuing goals, and—who knows?—perhaps even enjoying it! A reasonable extrapolation of present capabilities would predict the connection and cooperation of computing systems that will increase the breadth of machine intelligence as well. Joined by networks like the Internet, machines might even combine and integrate specialized capabilities and eventually evolve their own communal and social rules akin to ethics and morality.

On Sale
Oct 13, 2004
Page Count
296 pages
Publisher
Basic Books
ISBN-13
9780786752645

Thomas Georges

About the Author

Thomas M. Georges is a former research scientist at the National Bureau of Standards, the Institute for Telecommunication Sciences, and the National Oceanic and Atmospheric Administration. He lives in Boulder, Colorado.
