AS WE LOOK FORWARD to see where the technology race leads, we should ask three questions. What is possible, what is achievable, and what is desirable?
First, where hardware is concerned, natural law sets limits to the possible. Because assemblers will open a path to those limits, understanding assemblers is a key to understanding what is possible.
Second, the principles of change and the facts of our present situation set limits to the achievable. Because evolving replicators will play a basic role, the principles of evolution are a key to understanding what will be achievable.
As for what is desirable or undesirable, our differing dreams spur a quest for a future with room for diversity, while our shared fears spur a quest for a future of safety.
These three questions - of the possible, the achievable, and the desirable - frame an approach to foresight. First, scientific and engineering knowledge form a map of the limits of the possible. Though still blurred and incomplete, this map outlines the permanent limits within which the future must move. Second, evolutionary principles determine what paths lie open, and set limits to achievement - including lower limits, because advances that promise to improve life or to further military power will be virtually unstoppable. This allows a limited prediction: If the eons-old evolutionary race does not somehow screech to a halt, then competitive pressures will mold our technological future to the contours of the limits of the possible. Finally, within the broad confines of the possible and the achievable, we can try to reach a future we find desirable.
Prognosticators often guess at the times and costs required to harness new technologies. When they reach beyond outlining possibilities and attempt accurate predictions, they generally fail. For example, though the space shuttle was clearly possible, predictions of its cost and initial launch date were wrong by several years and billions of dollars. Engineers cannot accurately predict when a technology will be developed, because development always involves uncertainties.
But we have to try to predict and guide development. Will we develop monster technologies before cage technologies, or after? Some monsters, once loosed, cannot be caged. To survive, we must keep control by speeding some developments and slowing others.
Though one technology can sometimes block the dangers of another (defense vs. offense, pollution controls vs. pollution), competing technologies often go in the same direction. On December 29, 1959, Richard Feynman (now a Nobel laureate) gave a talk at an annual meeting of the American Physical Society entitled "There's Plenty of Room at the Bottom." He described a non-biochemical approach to nanomachinery (working down, step by step, using larger machines to build smaller machines), and stated that "the principles of physics do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but, in practice, it has not been done because we are too big.... Ultimately, we can do chemical synthesis.... put the atoms down where the chemist says, and so you make the substance." In brief, he sketched another, nonbiochemical path to the assembler. He also stated, even then, that it is "a development which I think cannot be avoided."
As I will discuss in Chapters 4 and 5, assemblers and intelligent machines will simplify many questions regarding the time and cost of technological developments. But questions of time and cost will still muddy our view of the period between the present and these breakthroughs. Richard Feynman saw in 1959 that nanomachines could direct chemical synthesis, presumably including the synthesis of DNA. Yet he could foresee neither the time nor the cost of doing so.
In fact, of course, biochemists developed techniques for making DNA without programmable nanomachines, using shortcuts based on specific chemical tricks. Winning technologies often succeed because of unobvious tricks and details. In the mid-1950s physicists could see that basic semiconductor principles made microcircuits physically possible, but foreseeing how they would be made - foreseeing the details of mask-making, resists, oxide growth, ion implantation, etching, and so forth, in all their complexity - would have been impossible. The nuances of detail and competitive advantage that select winning technologies make the technology race complex and its path unpredictable.
But does this make long-term forecasting futile? In a race toward the limits set by natural law, the finish line is predictable even if the path and the pace of the runners are not. Not human whims but the unchanging laws of nature draw the line between what is physically possible and what is not - no political act, no social movement can change the law of gravity one whit. So however futuristic they may seem, sound projections of technological possibilities are quite distinct from predictions. They rest on timeless laws of nature, not on the vagaries of events.
It is unfortunate that this insight remains rare. Without it, we stumble in a daze across the landscape of the possible, confusing mountains with mirages and discounting both. We look ahead with minds and cultures rooted in the ideas of more sluggish times, when both science and technological competition lacked their present strength and speed. We have only recently begun to evolve a tradition of technological foresight.
Through most of history, people had little understanding of evolution. This left philosophers thinking that sensory evidence, through reason, must somehow imprint on the mind all human knowledge - including knowledge of natural law. But in 1739, the Scottish philosopher David Hume presented them with a nasty puzzle: he showed that observations cannot logically prove a general rule, that the Sun shining day after day proves nothing, logically, about its shining tomorrow. And indeed, someday the Sun will fail, disproving any such logic. Hume's problem appeared to destroy the idea of rational knowledge, greatly upsetting rational philosophers (including himself). They thrashed and sweated, and irrationalism gained ground. In 1945, philosopher Bertrand Russell observed that "the growth of unreason throughout the nineteenth century and what has passed of the twentieth is a natural sequel to Hume's destruction of empiricism." Hume's problem-meme had undercut the very idea of rational knowledge, at least as people had imagined it.
In recent decades, Karl Popper (perhaps the scientists' favorite philosopher of science), Thomas Kuhn, and others have recognized science as an evolutionary process. They see it not as a mechanical process by which observations somehow generate conclusions, but as a battle where ideas compete for acceptance.
All ideas, as memes, compete for acceptance, but the meme system of science is special: it has a tradition of deliberate idea mutation, and a unique immune system for controlling the mutants. The results of evolution vary with the selective pressures applied, whether among test tube RNA molecules, insects, ideas, or machines. Hardware evolved for refrigeration differs from hardware evolved for transportation, hence refrigerators make very poor cars. In general, replicators evolved for A differ from those evolved for B. Memes are no exception.
Broadly speaking, ideas can evolve to seem true or they can evolve to be true (by seeming true to people who check ideas carefully). Anthropologists and historians have described what happens when ideas evolve to seem true among people lacking the methods of science; the results (the evil-spirit theory of disease, the lights-on-a-dome theory of stars, and so forth) were fairly consistent worldwide. Psychologists probing people's naive misconceptions about how objects fall have found beliefs like those that evolved into formal "scientific" systems during the Middle Ages, before the work of Galileo and Newton.
Galileo and Newton used experiments and observations to test ideas about objects and motion, beginning an era of dramatic scientific progress: Newton evolved a theory that survived every test then available. Their method of deliberate testing killed off ideas that strayed too far from the truth, including ideas that had evolved to appeal to the naive human mind.
This trend has continued. Further variation and testing have forced the further evolution of scientific ideas, yielding some as bizarre-seeming as the varying time and curved space of relativity, or the probabilistic particle wave functions of quantum mechanics. Even biology has discarded the special life-force expected by early biologists, revealing instead elaborate systems of invisibly small molecular machines. Ideas evolved to be true (or close to the truth) have again and again turned out to seem false - or incomprehensible. The true and the true-seeming have turned out to be as different as cars and refrigerators.
Ideas in the physical sciences have evolved under several basic selection rules. First, scientists ignore ideas that lack testable consequences; they thus keep their heads from being clogged by useless parasites. Second, scientists seek replacements for ideas that have failed tests. Finally, scientists seek ideas that make the widest possible range of exact predictions. The law of gravity, for example, describes how stones fall, planets orbit, and galaxies swirl, and makes exact predictions that leave it wide open to disproof. Its breadth and precision likewise give it broad usefulness, helping engineers both to design bridges and to plan spaceflights.
The scientific community provides an environment where such memes spread, forced by competition and testing to evolve toward power and accuracy. Agreement on the importance of testing theories holds the scientific community together through fierce controversies over the theories themselves.
Inexact, limited evidence can never prove an exact, general theory (as Hume showed), but it can disprove some theories and so help scientists choose among them. Like other evolutionary processes, science creates something positive (a growing store of useful theories) through a double negative (disproof of incorrect theories). The central role of negative evidence accounts for some of the mental upset caused by science: as an engine of disproof, it can uproot cherished beliefs, leaving psychological voids that it need not refill.
In practical terms, of course, much scientific knowledge is as solid as a rock dropped on your toe. We know Earth circles the Sun (though our senses suggest otherwise) because the theory fits endless observations, and because we know why our senses are fooled. We have more than a mere theory that atoms exist: we have bonded them to form molecules, tickled light from them, seen them under microscopes (barely), and smashed them to pieces. We have more than a mere theory of evolution: we have observed mutations, observed selection, and observed evolution in the laboratory. We have found the traces of past evolution in our planet's rocks, and have observed evolution shaping our tools, our minds, and the ideas in our minds - including the idea of evolution itself. The process of science has hammered out a unified explanation of many facts, including how people and science themselves came to be.
When science finishes disproving theories, the survivors often huddle so close together that the gap between them makes no practical difference. After all, a practical difference between two surviving theories could be tested and used to disprove one of them. The differences among modern theories of gravity, for instance, are far too subtle to trouble engineers who are planning flights through the gravity fields of space. In fact, engineers plan spaceflights using Newton's disproved theory because it is simpler than Einstein's, and is accurate enough. Einstein's theory of gravity has survived all tests so far, yet there is no absolute proof for it and there never will be. His theory makes exact predictions about everything everywhere (at least about gravitational matters), but scientists can only make approximate measurements of some things somewhere. And, as Karl Popper points out, one can always invent a theory so similar to another that existing evidence cannot tell them apart.
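The point about Newton's theory being "accurate enough" can be made concrete with a back-of-the-envelope calculation (the numbers below are standard physical constants, not from the original text): the dimensionless quantity GM/(rc²) sets the rough scale of general-relativistic corrections to Newtonian gravity, and for a spacecraft in low Earth orbit it is vanishingly small.

```python
# Rough scale of Einstein's correction to Newton's gravity for a
# spacecraft in low Earth orbit: the dimensionless ratio GM / (r * c^2).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of Earth, kg
c = 2.998e8          # speed of light, m/s
r_leo = 6.771e6      # low-Earth-orbit radius (~400 km altitude), m

correction = G * M_earth / (r_leo * c**2)
print(f"Relativistic correction scale in LEO: {correction:.1e}")
```

The result is on the order of parts per billion, far below the other uncertainties of mission planning, which is why engineers can plan spaceflights with Newton's "disproved" theory.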
Though media debates highlight the shaky, disputed borders of knowledge, the power of science to build agreement remains clear. Where else has agreement on so much grown so steadily and so internationally? Surely not in politics, religion, or art. Indeed, the chief rival of science is a relative: engineering, which also evolves through proposals and rigorous testing.
Though engineers often tread uncertain ground, they are not doomed to do so, as scientists are. They can escape the inherent risks of proposing precise, universal scientific theories. Engineers need only show that under particular conditions, particular objects will perform well enough. A designer need know neither the exact stress in a suspension bridge cable nor the exact stress that will break it; the cable will support the bridge so long as the first remains below the second, whatever they may be.
Though measurements cannot prove precise equality, they can prove inequality. Engineering results can thus be solid in a way that precise scientific theories cannot. Engineering results can even survive disproof of the scientific theories supporting them, when the new theory gives similar results. The case for assemblers, for example, will survive any possible refinements in our theory of quantum mechanics and molecular bonds.
Predicting the content of new scientific knowledge is logically impossible because it makes no sense to claim to know already the facts you will learn in the future. Predicting the details of future technology, on the other hand, is merely difficult. Science aims at knowing, but engineering aims at doing; this lets engineers speak of future achievements without paradox. They can evolve their hardware in the world of mind and computation, before cutting metal or even filling in all the details of a design.
Scientists commonly recognize this difference between scientific foresight and technological foresight: they readily make technological predictions about science. Scientists could and did predict the quality of Voyager's pictures of Saturn's rings, for example, though not their surprising content. Indeed, they predicted the pictures' quality while the cameras were as yet mere ideas and drawings. Their calculations used well-tested principles of optics, involving no new science.
Because science aims to understand how everything works, scientific training can be a great aid in understanding specific pieces of hardware. Still, it does not automatically bring engineering expertise; designing an airliner requires much more than a knowledge of the sciences of metallurgy and aerodynamics.
Scientists are encouraged by their colleagues and their training to focus on ideas that can be tested with available apparatus. The resulting short-term focus often serves science well: it keeps scientists from wandering off into foggy worlds of untested fantasy, and swift testing makes for an efficient mental immune system. Regrettably, though, this cultural bias toward short-term testing may make scientists less interested in long-term advances in technology.
The impossibility of genuine foresight regarding science leads many scientists to regard all statements about future developments as "speculative" - a term that makes perfect sense when applied to the future of science, but little sense when applied to well-grounded projections in technology. Most engineers, though, share similar leanings toward the short term. They too are encouraged by their training, colleagues, and employers to focus on just one kind of problem: the design of systems that can be made with present technology or with technology just around the corner. Even long-term engineering projects like the space shuttle must have a technology cutoff date after which no new developments can become part of the basic design of the system.
In brief, scientists refuse to predict future scientific knowledge, and seldom discuss future engineering developments. Engineers do project future developments, but seldom discuss any not based on present abilities. Yet this leaves a crucial gap: what of engineering developments firmly based on present science but awaiting future abilities? This gap leaves a fruitful area for study.
Imagine a line of development which involves using existing tools to build new tools, then using those tools to build novel hardware (perhaps including yet another generation of tools). Each set of tools may rest on established principles, yet the whole development sequence may take many years, as each step brings a host of specific problems to iron out. Scientists planning their next experiment and engineers designing their next device may well ignore all but the first step. Still, the end result may be foreseeable, lying well within the bounds of the possible shown by established science.
Recent history illustrates this pattern. Few engineers considered building space stations before rockets reached orbit, but the principles were clear enough, and space systems engineering is now a thriving field. Similarly, few mathematicians and engineers studied the possibilities of computation until computers were built, though many did afterward. So it is not too surprising that few scientists and engineers have yet examined the future of nanotechnology, however important it may become.
Leonardo lived five hundred years ago, his life spanning the discovery of the New World. He made projections in the form of drawings and inventions; each design may be seen as a projection that something much like it could be made to work. He succeeded as a mechanical engineer: he designed workable devices (some were not to be built for centuries) for excavating, metalworking, transmitting power, and other purposes. He failed as an aircraft engineer: we now know that his flying machines could never be made to work as described.
His successes at machine design are easy to understand. If parts can be made accurately enough, of a hard enough, strong enough material, then the design of slow-moving machines with levers, pulleys, and rolling bearings becomes a matter of geometry and leverage. Leonardo understood these quite well. Some of his "predictions" were long-range, but only because many years passed before people learned to make parts precise enough, hard enough, and strong enough to build (for instance) good ball bearings - their use came some three hundred years after Leonardo proposed them. Similarly, gears with superior cycloidal teeth went unmade for almost two centuries after Leonardo drew them, and one of his chain-drive designs went unbuilt for almost three centuries.
His failures with aircraft are also easy to understand. Because Leonardo's age lacked a science of aerodynamics, he could neither calculate the forces on wings nor know the requirements for aircraft power and control.
Can people in our time hope to make projections regarding molecular machines as accurate as those Leonardo da Vinci made regarding metal machines? Can we avoid errors like those in his plans for flying machines? Leonardo's example suggests that we can. It may help to remember that Leonardo himself probably lacked confidence in his aircraft, and that his errors nonetheless held a germ of truth. He was right to believe that flying machines of some sort were possible - indeed, he could be certain of it because they already existed. Birds, bats, and bees proved the possibility of flight. Further, though there were no working examples of his ball bearings, gears, and chain drives, he could have confidence in their principles. Able minds had already built a broad foundation of knowledge about geometry and the laws of leverage. The required strength and accuracy of the parts may have caused him doubt, but not their interplay of function and motion. Leonardo could propose machines requiring better parts than any then known, and still have a measure of confidence in his designs.
Proposed molecular technologies likewise rest on a broad foundation of knowledge, not only of geometry and leverage, but of chemical bonding, statistical mechanics, and physics in general. This time, though, the problems of material properties and fabrication accuracy do not arise in any separate way. The properties of atoms and bonds are the material properties, and atoms come prefabricated and perfectly standardized. Thus we now seem better prepared for foresight than were people in Leonardo's time: we know more about molecules and controlled bonding than they knew about steel and precision machining. In addition, we can point to nanomachines that already exist in the cell as Leonardo could point to the machines (birds) already flying in the sky.
Projecting how second-generation nanomachines can be built by protein machines is surely easier than it was to project how precise steel machines would be built starting with the cruder machines of Leonardo's time. Learning to use crude machines to make more precise machines was bound to take time, and the methods were far from obvious. Molecular machines, in contrast, will be built from identical prefabricated atomic parts which need only be assembled. Making precise machines with crooked machines must have been harder to imagine then than molecular assembly is now. And besides, we know that molecular assembly happens all the time in nature. Again, we have firmer grounds for confidence than Leonardo did.
In Leonardo's time, people had scant knowledge of electricity and magnetism, and knew nothing of molecules and quantum mechanics. Accordingly, electric lights, radios, and computers would have baffled them. Today, however, the basic laws most important to engineering - those describing normal matter - seem well understood. As with surviving theories of gravity, the scientific engine of disproof has forced surviving theories of matter into close agreement.
Such knowledge is recent. Before this century people did not understand why solids were solid or why the Sun shone. Scientists did not understand the laws that governed matter in the ordinary world of molecules, people, planets, and stars. This is why our century has sprouted transistors and hydrogen bombs, and why molecular technology draws near. This knowledge brings new hopes and dangers, but at least it gives us the means to see ahead and to prepare.
When the basic laws of a technology are known, future possibilities can be foreseen (though with gaps, or Leonardo would have foreseen mechanical computers). Even when the basic laws are poorly known, as were the principles of aerodynamics in Leonardo's time, nature can demonstrate possibilities. Finally, when both science and nature point to a possibility, these lessons suggest that we take it to heart and plan accordingly.
As nanotechnology advances, there will come a time when assemblers become an imminent prospect, backed by an earnest and well-funded development program. Their expected capabilities will have become clear.
By then, computer-aided design of molecular systems - which has already begun - will have grown common and sophisticated, spurred by advances in computer technology and the growing needs of molecular engineers. Using these design tools, engineers will be able to design second-generation nanosystems, including the second-generation assemblers needed to build them. What is more, by allowing enough margin for inaccuracies (and by preparing alternative designs), engineers will be able to design many systems that will work when first built - they will have evolved sound designs in a world of simulated molecules.
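The "margin for inaccuracies" strategy above can be sketched in code. Everything here is a hypothetical illustration (the `works` criterion, the threshold, and the 15 percent uncertainty are invented for the example): instead of requiring a design to work only at its nominal simulated parameters, the designer requires it to work across the whole range of plausible model error.

```python
# Illustrative sketch of "design-ahead" under model uncertainty
# (hypothetical criterion and numbers): accept a design only if it
# works across the full range of plausible parameter error, not just
# at the nominal simulated value.

def works(stiffness):
    """Hypothetical design criterion: the simulated part functions
    if its stiffness stays above a threshold."""
    return stiffness >= 400.0

nominal = 500.0
uncertainty = 0.15  # assume the simulation may err by +/-15%

# Check the worst-case corners of the uncertainty range.
robust = all(works(nominal * f) for f in (1 - uncertainty, 1 + uncertainty))
print(robust)  # works at 425.0 and 575.0 -> True
```

Designs that pass only at the nominal value would be held back or paired with alternative designs, which is why some systems could be expected to work when first built.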
Consider the force of this situation: under development will be the greatest production tool in history, a truly general fabrication system able to make anything that can be designed - and a design system will already be in hand. Will everyone wait until assemblers appear before planning how to use them? Or will companies and countries respond to the pressures of opportunity and competition by designing nanosystems in advance, to speed the exploitation of assemblers when they first arrive?
This design-ahead process seems sure to occur; the only question is when it will start and how far it will go. Years of quiet design progress may well erupt into hardware with unprecedented suddenness in the wake of the assembler breakthrough. How well we design ahead - and what we design - may determine whether we survive and thrive, or whether we obliterate ourselves.
Because the assembler breakthrough will affect almost the whole of technology, foresight is an enormous task. Of the universe of possible mechanical devices, Leonardo foresaw only a few. Similarly, of the far broader universe of future technologies, modern minds can foresee only a few. A few advances, however, seem of basic importance.
Medical technology, the space frontier, advanced computers, and new social inventions all promise to play interlocking roles. But the assembler breakthrough will affect all of them, and more.