Artificial intelligence

2001: a disappointment?
Dec 20th 2001
From The Economist print edition



Machines are not as intelligent as Kubrick's film imagined. But they are more life-like than ever

THE scene is prehistoric earth. Strauss's timpani crash out their epic chorus, and the sun rises on the dawn of man: apes, mastering primitive tools for the first time. Cut to 2001. A space station orbits the earth. A human colony inhabits the moon. And an intelligent computer, called Hal, guides a manned expedition to Jupiter. Man has evolved.

Not, though, to the technological heights imagined by Stanley Kubrick in 1968, when he released the movie “2001: A Space Odyssey”. This year, we can now say from the safety of its end, did not bring us a Hal, or anything like it. Computers can play a pretty good game of chess, transcribe speech and recognise handwriting and faces. But their intelligence does not touch our own, and the prevailing scientific wisdom seems to be that it never will. There will be no David, the boy-android of “AI: Artificial Intelligence”, Steven Spielberg's Kubrick-inspired film of last summer. After half a century of frustrations and dead-ends, AI research has become famous not for success, but for failure.

Kubrick, of course, had larger designs than crass futurology. Intellectuals saw his film as a satire about the failure of language. For some souls, it became a religious experience. At a screening in Los Angeles, one member of the audience looked at the weird star-child in mysterious orbit about the earth at the film's end, ran down the aisle and crashed through the screen shouting “It's God! It's God!”

One of Kubrick's clearer preoccupations was the evolution of technology. Humans had evolved, from dawn-of-man apes, and so had technology, from bone clubs to sentient machines. Machines may not yet be intelligent. But they are clearly evolving, and in a way that, more and more, seems intimately related to the evolutionary mechanisms of life.

Our language hints at a connection between life and machines. “Viruses” and “worms” have begun to infect computers, replicate their code and proliferate thousands of digital copies across the World Wide Web, many times faster than any natural pathogen. Computer scientists “breed” programs, not in vitro but in silico, in virtual laboratories inside their computers. Poorly-performing computer code is killed off. Superior code is spliced with sibling programs and bred again. Computer languages evolve, generation succeeding generation.
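
The flavour of that breeding can be had from a few lines of Python. The sketch below is only an illustration, with every name and number invented: the "programs" are bare bit strings and the goal is trivial, but the loop follows the recipe just described, killing off the worst performers, splicing the survivors and breeding them again.

```python
import random

TARGET = [1] * 20                    # an invented goal: a string of twenty 1s
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 60, 0.02

def fitness(genome):
    # Score a candidate "program" (here just a bit string) against the goal.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Splice two sibling programs at a random point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome):
    # Occasional random copying errors keep variation flowing.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Kill off the poorly performing half of the population...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...then splice survivors with their siblings and breed again.
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(max(fitness(g) for g in population), "of", len(TARGET))
```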

Peer more closely, and the intimacy deepens. At its core, safely sheathed inside the protective metabolising cell, life is an information processor, a spiral double-helix molecule called DNA. DNA is both the self-replicating instructions for life and the computer which carries them out. Watching the fluffy, downy seeds float down from the willow tree at the bottom of his garden inspired this from Richard Dawkins, a British biologist:

It is raining instructions out there; it's raining programs; it's raining tree-growing, fluff-spreading algorithms. That is not a metaphor, it is the plain truth. It couldn't be any plainer if it were raining floppy discs.

Biological viruses replicate themselves inside cells, fooling their hosts into lending them the metabolic machinery of life which they themselves lack. Software viruses use the machinery of computers in much the same way, exploiting their ability to copy instructions quickly and at high fidelity. For inspiration in the design of computer “immune systems”, software engineers are increasingly turning to biology.
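
The text does not say which of biology's tricks the engineers borrow. One widely discussed candidate, offered here purely as an illustration, is "negative selection": detectors are generated blindly, any that match the system's normal ("self") behaviour are discarded, and whatever the survivors later match is, by construction, foreign. The Python sketch below uses an invented alphabet and invented notions of normal behaviour.

```python
from itertools import product

ALPHABET = "abc"
self_patterns = {"aba", "abc", "bca"}      # invented "normal" behaviour

# Negative selection: generate every candidate detector, then throw away
# any that matches normal behaviour. What remains can only match intruders.
detectors = {"".join(p) for p in product(ALPHABET, repeat=3)} - self_patterns

def looks_infected(observed):
    # Anything a surviving detector matches cannot be part of "self".
    return any(pattern in detectors for pattern in observed)

print(looks_infected(["aba", "abc"]))   # False: all behaviour is normal
print(looks_infected(["aba", "ccc"]))   # True: "ccc" was never part of self
```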




Life evolves by building on its own complexity. Technology evolves by building on past knowledge. As David Ackley of the University of New Mexico points out*, the evolution of computer technology claims a special kinship with life. Life processes information, and it advances by evolving more powerful information-processing techniques. Evolutionary breakthroughs have come with breakthroughs in life's software.

Life began with direct coding on bare, carbon-chemistry hardware, like amino acids and proteins. Higher programming languages, like DNA and RNA, evolved gradually. Computers began in a similar fashion, with programmers coding on to the bare machinery of their circuits. Higher programming languages have followed, each generation more powerful than the last.

There is, of course, one big difference between biological and machine evolution. It took life billions of years to evolve the information-processing skills that lie behind the evolution of the human brain. Computers have made giant strides in half a century.


Life in silico

Computers sprang from our minds. More than that, they are models of how we think—or thought—that our minds work. Alan Turing, a British mathematician popularly remembered for his wartime code-cracking work on the German Enigma machine, is cherished among geeks for earlier, theoretical work on a “universal computer”. This machine, which laid much of the intellectual groundwork for modern computing, works like a plodding human logician, advancing steadily and discretely, step by deliberate step, until its task is complete.
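
The plodding style is easy to caricature in Python. The rule table below is an invented example, not Turing's own: a single state, a tape of symbols, and one deliberate step at a time until the machine halts.

```python
# (state, symbol read) -> (symbol to write, head movement, next state)
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # blank cell: nothing left to do
}

def run(tape_string):
    tape, head, state = list(tape_string) + ["_"], 0, "scan"
    while state != "halt":
        # One deliberate step: read, write, move, change state.
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("10110"))   # -> "01001": every bit flipped, one step at a time
```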

As the vacuum tubes of the first, serial computers fizzed and crackled for the first time, their designers must have recognised in them a shared kinship. Here were electronic minds, capable of the same deliberate, problem-solving steps that governed their own thoughts. Giant electronic brains were sure to follow. Amid high expectations, the research field of artificial intelligence was born.

By the late 1960s, however, it was dawning on AI scientists that rational thought had its limits. Computer-controlled robots could not even reason themselves from one end of a room to the other. Scientists tried to cut corners, replacing the baffling complexity of the real world with one made only of bare walls, doors and simple geometric shapes. But these defeated the computers too. Shakey, a 1.5m-tall contraption that rattled around the Stanford Research Institute in the early 1970s, did celebrate the odd success. But most of the time, Shakey was floored by his infantile world, unable to keep track of where he was or what he was doing. Shakey was seriously stupid.




Amid disillusion, AI's second age was born. Central, top-down control lost currency. There was no pilot in the cockpit of the brain, directing body and mind. Intelligence was rediscovered as the subtle interaction of many scattered parts. By itself, each part was stupid. Working together, they achieved profound results. A single ant is not God's brightest creature. But as colonies, ants engage in food cultivation, temperature regulation, mass communication (using scent trails) and bloody, organised warfare. Ant colonies run themselves with an efficiency that outstrips human society. But no single über-ant manages the show.
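
A toy simulation, with every number invented, shows how such mindless local rules can add up to a sensible collective decision. Each simulated ant below chooses between a short and a long route in proportion to the scent already on it; shorter trips are reinforced more often, scent evaporates, and the colony converges on the better path with nothing in charge.

```python
import random

pheromone = {"short": 1.0, "long": 1.0}   # scent on each route (invented units)
LENGTH = {"short": 1, "long": 3}          # the long route takes three times as long
EVAPORATION = 0.02

for step in range(5000):
    # Each ant follows the stronger scent with proportionally higher probability...
    total = pheromone["short"] + pheromone["long"]
    route = "short" if random.random() < pheromone["short"] / total else "long"
    # ...and lays fresh scent in inverse proportion to the route's length.
    pheromone[route] += 1.0 / LENGTH[route]
    # Trails fade unless they are reinforced.
    for r in pheromone:
        pheromone[r] *= 1 - EVAPORATION

print(pheromone)   # the short route ends up carrying nearly all the scent
```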

Peering into the grey matter of the brain, you discover the same phenomenon at work. There is no Intel inside, a single processing unit cycling through a program called “humanmind.exe”. In its place are 100 billion neurons. By itself, each neuron performs simple, seemingly trivial, calculations. Yet wired together in the obscure wetware of the brain, these neurons somehow give rise to the mind.

All too soon, however, the hopes kindled by AI's second age dimmed as well. Using chips and computer programs, scientists built artificial neural nets that mimicked the information-processing techniques of the brain. Some of these networks could learn to recognise patterns, like words and faces. But the goal of a broader, more comprehensive intelligence remained far out of reach. There was plenty of processing power. By the end of the 20th century, computer speeds were doubling every year, maintaining an exponential rate of growth that began with the appearance of the very first mechanical calculating devices 100 years before. The problem was what to do with it.
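
The flavour of those pattern-recognisers can be had from a single artificial neuron. The Python sketch below is a bare perceptron with an invented task, telling a crude "T" of pixels from an "L": weighted inputs, a threshold, and a rule that nudges the weights a little after every mistake.

```python
T_SHAPE = [1, 1, 1,  0, 1, 0,  0, 1, 0]   # a 3x3 "T", flattened into a list
L_SHAPE = [1, 0, 0,  1, 0, 0,  1, 1, 1]   # a 3x3 "L"
examples = [(T_SHAPE, 1), (L_SHAPE, 0)]

weights, bias, LEARNING_RATE = [0.0] * 9, 0.0, 0.1

def predict(pixels):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if bias + sum(w * p for w, p in zip(weights, pixels)) > 0 else 0

for epoch in range(50):
    for pixels, label in examples:
        error = label - predict(pixels)
        # Nudge each weight in the direction that reduces the error.
        weights = [w + LEARNING_RATE * error * p for w, p in zip(weights, pixels)]
        bias += LEARNING_RATE * error

print(predict(T_SHAPE), predict(L_SHAPE))   # -> 1 0 once training has converged
```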

The sort of parallel processing that goes on in the brain seemed to need software (or hardware wiring) of impossible complexity. It was not a simple problem of scale. Computer chips, for instance, have made great strides in scale. A 1971-vintage Intel chip contained 2,300 transistors. The company's latest products pack in more than 10m. But chips have not become more complex. The basic patterns of wiring, though vastly miniaturised, remain the same.

The problem was rooted instead in the top-down, controlling way in which software was designed to work. Top-down control requires top-down organisation. However, the bigger and the more complicated the task, the more difficult it becomes to organise it from the top, even for relatively simple, human-built systems. Already, for instance, engineers struggle to maintain the software that runs telephone exchanges.

These networks are made up of tens of millions of connections. Each of the brain's 100 billion neurons connects itself to 10,000 others. Inside a single human head are 1,000,000,000,000,000 connections—enough for 100m modern telephone exchanges. The scattered architecture of intelligence would have to run on a new sort of software, working from the bottom up and organising itself, like life does, into greater and greater complexity. The software crisis had arrived.
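
The back-of-the-envelope arithmetic behind those figures is easy to check:

```python
neurons = 100e9                     # 100 billion neurons
links_per_neuron = 10_000           # each wired to 10,000 others
brain_connections = neurons * links_per_neuron
exchange_connections = 10e6         # "tens of millions" per telephone exchange

print(f"{brain_connections:.0e} connections in one head")             # 1e+15
print(f"{brain_connections / exchange_connections:.0f} exchanges")    # 100000000, i.e. 100m
```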

And so dawned the third age of AI. Its boosters abandoned hopes of designing the information-processing protocols of intelligence, and tried to evolve them instead. No one wrote the program which controls the walking of Aibo, a $1,500 robotic dog made by Sony, the Japanese consumer-electronics giant. Aibo's genetic algorithms were grown—evolved through many generations of ancestral code in a Sony laboratory.


The Cambrian era

On one level, this arc of scientific endeavour describes a journey towards a more sophisticated understanding of the mind. As a young man, the roboticist Hans Moravec worked on the sort of theorem-proving reasoning programs that controlled Shakey. By 1988, Mr Moravec was writing:

The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100,000 years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.

On another level, though, the three ages of AI chart not human progress, but the evolution of the computer itself. First came serial computing and rational thought. At fast enough speeds, serial computers were capable of feats of intellectual brilliance, like playing chess. But they were hopelessly bad at more complex tasks, such as the motor skills we take for granted. Faster central processors (simulating neural networks) and the growth of computer networks (linking many machines together) created the parallel-processing architecture of AI's second age, in which distributed societies of machines crunched numbers together, achieving feats of intelligence beyond simple serial processors. The first flickering signs of digital life that flashed across these networks inspired the third age of AI. Computers were by now looking more recognisably like the stuff of life, and scientists had begun to borrow directly from its book.

There is, of course, nothing to say that computers and computer software will continue to evolve in ways that replicate the patterns of life. There may be some hidden, basic difference between carbon and silicon that limits (or, alternatively, enhances) the evolutionary potential of machines, damning them forever to patterns of life that are no more exalted than the humble virus. Recent research, however, hints at greater potential.

Thomas Ray, an American biologist, wants to create, in a virtual world of evolving software, the same explosion in complexity that happened during earth's Cambrian age, 570m years ago, when multicellular life made its first appearance. A dummy run in 1990 of Mr Ray's virtual world, called Tierra, produced interesting results.

Mr Ray “seeded” his world with simple, self-replicating software programs. As he watched them evolve, he saw parasites develop—shorter bits of machine code which had found a way to latch on to other programs and borrow their code in order to reproduce. But although the overall ecology had become more complex, the programs themselves had not. On the contrary, Tierra bred shorter, more efficient bits of code.
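
That result can be caricatured in a few lines of Python. Everything below is invented (real Tierra creatures are hand-written machine code, not single numbers): each digital creature is reduced to its length in instructions, shorter creatures finish copying themselves more often per slice of processor time, and the average length duly falls.

```python
import random

soup = [80] * 50        # fifty ancestors, each 80 "instructions" long
CAPACITY = 200          # the memory "soup" holds only so many creatures

for cycle in range(200):
    offspring = []
    for length in soup:
        # Shorter creatures complete a copy of themselves more often per cycle.
        if random.random() < 40 / length:
            # Copying is sloppy, so a child's length can drift up or down.
            offspring.append(max(10, length + random.choice([-2, 0, 2])))
    soup.extend(offspring)
    # When memory fills up, the "reaper" removes creatures at random.
    if len(soup) > CAPACITY:
        soup = random.sample(soup, CAPACITY)

print(sum(soup) / len(soup))   # mean length falls well below the ancestral 80
```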

A second experiment, conducted over networked computers, hopes to make amends. This time, each of Mr Ray's seed programs has both the ability to reproduce and to migrate from computer to computer in search of hospitable habitats. Mr Ray hopes that, in this more complex environment, his programs will evolve more complex behaviour, like migrating around the world to keep to the shadows of night-time.

Early results have been hard to decipher: humans have a hard time making sense of evolved machine code. But Mr Ray thinks that he may have found the first signs of “cell differentiation”, which was the key innovation in the Cambrian age that produced multicellular life. The bits of code that allow Network Tierra programs to migrate began as a single task, or cell. In some evolved programs, this cell has split in two.

Outside the laboratory, meanwhile, computer software continues its relentless evolution. Mr Ackley suspects that this process may, by itself, be drawing closer to some sort of Cambrian explosion in evolutionary potential. Gradually the computer algorithm, a finite set of steps that takes you from A to B, is being replaced by the notion of distributed computing, in which autonomous programs, each designed for a specific task and running on different machines sprinkled around the Internet, combine and interact with each other to perform complex calculations.
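
A minimal sketch of that distributed style, with local Python processes standing in for autonomous programs on machines scattered around the Internet, and a deliberately trivial calculation (a sum of squares) standing in for the complex ones:

```python
from concurrent.futures import ProcessPoolExecutor

def worker(chunk):
    # Each autonomous worker sees only its own slice of the problem.
    return sum(n * n for n in chunk)

def distributed_sum_of_squares(numbers, workers=4):
    chunks = [numbers[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partial_results = pool.map(worker, chunks)
    # No single program holds the whole problem; the answer emerges by combination.
    return sum(partial_results)

if __name__ == "__main__":
    print(distributed_sum_of_squares(range(1, 1001)))   # 333833500
```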

Single, monolithic algorithms evolve in the opposite direction to life, from complexity to greater simplicity and efficiency. Distributed computing, on the other hand, seems to capture some of the bottom-up characteristics of life's complex, self-organising software. Computer software may evolve its own solutions to the software crisis, not in the lab, but in the rich ecology of the global economy, across the electronic wilderness of the world's computer networks.


Then, alas, gigadeath

To what end do our computers silently code themselves? Western culture is full of apocalyptic warnings of a clash of rival species. Hugo de Garis, an Australian roboticist, predicts “gigadeath”, our imminent genocide. Less encumbered by Darwinism and its intellectual fellow travellers, eastern culture imagines a more co-operative, symbiotic future with evolving machines: for example, the curvaceous cyborgs of Masamune Shirow's manga comic books.

Biological life suggests both futures are possible. Life competes fiercely for the limited resources that allow it to metabolise, and so propagate itself. When it suits self-replication better, however, life also co-operates, in loose alliances or seamless symbiogenesis. On the shores of Brittany, relates Lynn Margulis, an American biologist, can be found a strange sort of seaweed that turns out not to be seaweed at all. Under the lens of the microscope the seaweed becomes green worms. They are green because they are packed with photosynthesising algae, which live, reproduce and die inside the bodies of the worms. Obligingly, the algae produce the food that the worm “eats”—the worm's mouth is entirely redundant.

The architecture of every cell in the human body—and the body of all plants and animals, for that matter—was made possible by an earlier, symbiogenetic fusing of simpler single-celled creatures. Bacteria live in the human gut, in eyelashes and other startling places, making us massive, walking colonies of micro-organisms. In the same way, an impartial observer might conclude, humans have become symbiotically intertwined with their machines. Humans work to further the replication of computer code, while computers help to propagate human code, supporting the highly-evolved economic and financial infrastructure that sustains human society. The two are not so intimately related as flatworm and algae. But neither, realistically, can do without the other.

The emergence of evolved software and hardware that is grown, but not understood, deepens human dependence. Perhaps the physical relationship will deepen as well, with chip implants in brains, or human minds dubbed and downloaded into machines. Alternatively, man and machine might dispense with each other and part ways. The tree of life grows in on itself, says Mrs Margulis, but it branches as well.

Maybe, in the end, humans will never know the larger truth. Hal was alien to his human crew—a red eye, behind which lurked an unfathomable intelligence. Since 1968, the computer has become more alien still. Kubrick had no inkling of the networked computer, with its potential for massively distributed intelligence. Perhaps humans will stay ignorant of the grander design, just as a single ant has no comprehension of the intelligence of the colony. When 2001 really does arrive, we may never know it.


* “Real Artificial Life: Where We May Be”, David Ackley, Department of Computer Science, University of New Mexico














