Science, Nanotechnology, and Responsibility
By Chris Phoenix - October 2002
One Sentence Summary
It is too early to predict which aspects of molecular nanotechnology
might be dangerous; to limit nanotech research at this point would only
increase our ignorance and reduce the possible benefits, without
decreasing risk.
Abstract:
Despite several scary ideas such as "gray goo", nanotechnology is still at the level of basic science. No one, not even the scientists involved, can predict what mix of good and bad will come from any basic science discovery. Nanotechnology researchers are already taking responsibility for avoiding foreseeable dangers. Unexpected dangers cannot be avoided without forbidding all scientific research--a clear impossibility. When Fleming discovered that a certain mold killed bacteria, how could he know whether he had found an antibiotic or a weapon? Dangerous discoveries cannot be prevented by regulating scientific activity, since we do not know what to regulate; we would pay a high price in ignorance, and be unprepared for the dangerous technologies that less responsible people will be developing.
Unpredictable Outcomes of Basic Science
Should scientists be held responsible for the fruits of their research? This question must not be taken lightly. Science has reshaped--even redefined--every aspect of our lives, including family, politics, death, trade, and psychology. It is tempting to say that such a powerful force must be managed and controlled, and that those who wield it must do so with caution and humility. It is tempting, and seems at first quite suitable, to insist that scientists consider the consequences of their work, and restrict their own research in order to prevent new and horrific excesses. If they don't, the obvious alternative is to regulate science from outside. However, both of these solutions can easily cause new problems, and must be applied with care. In some cases, one or the other is appropriate; in other instances, either "solution" will do more harm than good.
Radioactivity and X-rays were discovered within a year of each other--X-rays in 1895 and radioactivity in 1896. In 1898, long before the term "science fiction" was invented, H. G. Wells included a "death ray" in War of the Worlds. Surely X-rays would have been a prime candidate for scientific caution, even more so than radioactivity. However, a cautious approach to X-rays would merely have delayed their medical applications. It was radioactivity, and not X-rays, that led to the atomic bomb. X-rays found a use in medicine as a diagnostic tool, and when radium was discovered in 1898, it was used to cure cancer. The atomic bomb would not become even a theoretical possibility until 1939, when uranium fission was discovered by German scientists.
The atomic bomb is the archetypal example of irresponsible science. Scientists, engineers, and technicians worked for years to produce a weapon of terrible destructive force. According to Richard Feynman, at the outset of the project everyone was convinced that it was necessary to develop a bomb before Nazi Germany could do so. When Germany was defeated, most of them never questioned whether the project should continue: "There was, however, I think, an error in my thought in that after the Germans were defeated .... I didn't stop; I didn't even consider that the motive for originally doing it was no longer there."(1) Oppenheimer did reconsider after the war, and his opposition to further weapons development--particularly the hydrogen bomb--cost him his career. Teller, by contrast, went on to develop the hydrogen bomb, and remains convinced that his work has reduced wartime deaths.
Perhaps if Feynman and others had joined Oppenheimer we would not have seen the Arms Race and the Cold War, and the current risk of Indian, Pakistani, and terrorist use of atomic weapons. On the other hand, the Soviet Union might have developed the bomb unilaterally and used it to subjugate Europe and America; Stalin would likely have killed tens of millions of people in the new Soviet territories, as he did in Siberia. With the Soviet and Nazi scientists subject to the coercive demands of their governments, it cannot be argued with certainty that Western scientists were wrong to develop it.
But let's assume that the A-bomb was a bad thing to develop. Where should we place the responsibility for it? At what point should a scientist have called a halt to the research? Perhaps Einstein and Feynman were wrong in their initial decisions that developing the A-bomb was necessary. We can claim, with hindsight, that if they had been more "ethical" they would have opposed it. But when we look even twenty years earlier in the chain of discovery, we find a very different picture. Marie Curie was one of the earliest researchers in radioactivity, famous for discovering radium, and winner of not one but two Nobel prizes. Speaking in 1921 about her discovery, she had this to say (2):
"But we must not forget that when radium was discovered no one knew that it would prove useful in hospitals. The work was one of pure science. And this is a proof that scientific work must not be considered from the point of view of the direct usefulness of it. It must be done for itself, for the beauty of science, and then there is always the chance that a scientific discovery may become like the radium a benefit for humanity."
At this point, radium had been used to cure cancer and the A-bomb was not yet a scientific possibility. Marie Curie could not have foreseen the perils of radioactivity in 1921--still less in the 1800's when she was isolating radium. Her description of her work as "pure science" is no less than the truth.
Just as pure science cannot be "considered from the point of view of the direct usefulness of it," pure science also cannot be evaluated based on its direct danger. When Alexander Fleming first noticed that bacteria could not live around a certain kind of mold, how was he to know whether he had found an antibiotic (penicillin) or a deadly weapon? (3) There is no way to tell, when a discovery is first made, whether it will turn out to be beneficial, like X-rays--or dangerous--or both. Pure scientists cannot be held responsible for the consequences of the discoveries they make, because those consequences can hardly ever be predicted without a great deal of further research. It is when the science is applied--turned into technologies and products--that we must decide whether or not the product is desirable. In many cases, even the products will be value-neutral: a computer can be used for education or war, and the responsibility must rest with the user.
Nanotechnology
What does this say about nanotechnology research? First, we must keep in mind that there are two kinds of nanotechnology. One kind, which I will call "structural nanotechnology," is concerned with very small structures, such as nanocrystals and complicated molecules. This is the focus of the National Nanotechnology Initiative, and of the majority of nanotechnology researchers today. The other kind, which Eric Drexler named "molecular nanotechnology" several years ago, is concerned with very small machines: robots, engines, and computers built atom-by-atom, smaller than a cell. This is the kind of nanotech that has raised hopes of free manufacturing and fears of environmental destruction.
Structural nanotechnology became an accepted field of research only a few years ago, but has already divided into a huge number of different threads of research. It is driven largely by commercial applications: as soon as it is found that a nanocrystal emits light with certain properties, it is immediately adapted by biotech, computer, or other companies. There is certainly pure research going on, but it is largely hidden behind the avalanche of products and possibilities created by the new properties of nano-scale materials and structures. Structural nanotechnology, for the most part, will help us to do what we're already doing--better. We will have faster computers, more effective medicines, stronger materials, and more efficient engines. These products, for the most part, are value-neutral: they can be used for good or bad, and the responsibility rests with the user. When a scientist or engineer does face a moral or ethical dilemma, as when asked to develop a new weapon or put a new drug through clinical trials, the dilemma will be familiar, and will be resolvable with procedures and institutions already in place.
Molecular nanotechnology poses a different set of problems. First of all, it doesn't exist yet! Today we can build robots the size of insects, but not the size of cells. We don't know what could be done with such robots--how fast they could run, what kinds of chemicals they could process, or how easy it will be to adapt them to any of the millions of tasks we might apply them to. We cannot even say whether unconstrained research and development would produce, on balance, more good than harm. Could nanotech actually produce a "gray goo" that risks "eating" the environment, or will such things be easy to prevent and contain? We cannot know at this point. How many lives could be saved and extended by subcellular medical robots? Can we even agree on whether a 150-year lifespan would be a good thing or not? The questions quickly spiral out of control.
Many of the scariest scenarios related to nanotech are based on the possibility of self-replication: a machine that can make copies of itself. This idea was invented in the 1940's by John von Neumann. We've already had five decades to get used to it, and we still can't agree what it's good for or whether it's even worth putting into practice. Molecular nanotechnology was first described by Richard Feynman in 1959 (4). Four decades later, the possibility is still not widely accepted. We have not yet been able to build a single device described in Nanosystems (5), which was published a decade ago. A description of gray goo may sound scary: a machine that eats the biosphere and turns everything into copies of itself. But such a machine would require going far beyond Nanosystems; it would be far more complex than the "assemblers" that are the traditional workhorses of molecular nanotechnology, and even a simple assembler is well beyond the scope of Nanosystems. We simply don't know how to do it yet. "Gray goo" is no more real today than H. G. Wells' "death ray" was in 1898.
Leonardo da Vinci did his best to invent airplanes five centuries ago, and his descriptions were more detailed than any descriptions we have of gray goo. But it wasn't until 1903 that the first human was carried by the first powered airplane. Powered flight had to wait for the invention of the internal combustion engine to provide a lightweight power source. Gray goo is as unrealistic today as powered flight was in 1500, and if it ever exists, it will be as different from today's ideas as a modern airplane is from da Vinci's designs. We have no way of knowing what inventions or discoveries--if any--can make gray goo possible, any more than da Vinci could imagine the internal combustion engine. If we tried to prevent the development of gray goo by preventing the necessary discoveries, we could not do so without stopping the entire body of scientific research.
Policy Recommendations
It should be clear by now that dangerous inventions cannot be prevented by regulating basic science. Is there any point at which they can be regulated? As implied above, they can always be regulated--at least in theory--at the level of products and applications. This may be somewhat successful in the case of technologies that are inherently difficult to produce and hard to use, such as nuclear weapons. It is much less likely to succeed for technologies that are relatively simple, portable, and desirable to individuals, such as psychoactive drugs. In its initial stages, nanotech will be difficult, but it will quickly become easier; we can expect it to advance at least as fast as computers and genomics.
What about regulation at some intermediate stage? Perhaps basic science cannot be regulated, but applied science might be. But even here, we run into trouble. The internal combustion engine was patented in 1867. Thirty-six years later, it was used in an airplane. And only six years after that, in 1909, the first military airplane was purchased by the United States. At what point should we have regulated airplanes in order to prevent the pointless firebombing of Dresden in 1945? The firebombing of Dresden could only have been prevented at a political or military level; science and technology had nothing to do with it.
The discovery of uranium fission, which enabled the bomb, was made by German scientists working under a Nazi regime. If not for a suicidal war, they might well have been able to develop an atomic bomb. We cannot make plans based on the fantasy that all scientists are both ethical and free to choose. Any weapon that can be developed probably will be--somewhere in the world. And even research into weapons as horrific as biological agents may be useful. The United States renounced biological weapons, while the Soviet Union retained an active program. Now the Soviet Union is gone, and the scientists have dispersed; biological weapons are loose in the world, and we know very little about them. If we had not canceled our bioweapons research program, we probably still would not have used the weapons--and we would be much better prepared for a terrorist attack that does use them.
Many countries around the world are actively engaged in nanotech research, including France, Germany, China, Japan, and of course the U.S. Attempting to suppress certain research directions at this point would be counterproductive. First, we cannot know which directions are dangerous. Second, the benefits of nanotech will decrease in direct proportion to the suppression, while the risks will be largely unaffected or even increased. Third, unless we can impose a ban on nanotech-related research in every region of the globe, we will simply guarantee that the technology will be developed somewhere else, and that we will not know how to deal with it. And a ban on all nanotech-related research would require banning research into computers, medicine, and even basic materials science.
The best thing we can do, then, is to continue nanotech research, both practical and theoretical, and learn as much as we can about the possibilities inherent in the idea. Researchers who discover dangers inherent in their work should not stop; instead, they should continue to investigate, quantify the danger, and explore ways of dealing with it. It is too early to invent any nanotech weapons; we can only imagine them, in the same way H. G. Wells imagined death rays and atomic bombs. This imagination should inspire us to greater efforts to understand the technology, so that when the time comes that our enemies can build nanotech weapons, we will have some idea how to deal with them.
Preliminary efforts at understanding nanotech's potential for harm are already underway. Robert Freitas has written a long and well-researched paper about what it would take to stop a gray goo infestation (6). Foresight Institute and the Institute for Molecular Manufacturing have produced a set of guidelines (7) designed to prevent dangerous laboratory accidents with self-replicating machines; such guidelines will not be relevant for many years, yet responsible scientists are already writing and adopting them. At least in some nations, scientists are recognizing the dangers and dealing with them appropriately, and making strong efforts to communicate the issues to our political and military leaders.
In short, researchers here are already doing the best they can. They are doing everything they can think of to minimize the risks, which includes doing their best to study, understand, and--yes--invent the pieces of nanotechnology as quickly as possible. Trying to restrict their research, or asking them to restrict themselves because of the products that might eventually come from their work, will only make things worse by increasing our level of ignorance. No scientist today is designing a nanotech-based weapon, and no scientist will be able to do so for years to come. When it does happen, those scientists will probably be acting under duress and their political masters will not be susceptible to regulation. We must prepare for that time by doing as much nanotech research as possible, and by addressing the military and political problems that could cause the misuse of the technology.
References
(1) "The Pleasure of Finding Things Out" p. 231 Format Paperback, 288pp. ISBN 0738203491. Perseus Publishing. Pub. Date September 2000
(2) http://www.fordham.edu/halsall/mod/curie-radium.html "Modern History Sourcebook: Marie Curie (1867-1934): On the Discovery of Radium"
(3) http://www.pbs.org/wgbh/aso/databank/entries/dm28pe.html "A Science Odyssey: People and Discoveries: Fleming discovers penicillin"
(4) http://www.zyvex.com/nanotech/feynman.html "Feynman's Talk"
(5) http://www.zyvex.com/nanotech/nanosystems.html "NANOSYSTEMS" Format Paperback, 556pp. ISBN: 0471575186. Wiley, John & Sons
Pub. Date: September 199
(6) http://www.foresight.org/NanoRev/Ecophagy.html "Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations"
(7) http://www.foresight.org/guidelines/current.html "Foresight Guidelines on Molecular Nanotechnology"