A compelling theme in science fiction is humanity's potential to create life and the hazards arising from such an endeavor. Frankenstein was perhaps the first popular tale of a scientist wreaking havoc when the being he created went haywire. The theme has been carried on by writers such as Asimov in the Robot series and Philip K. Dick in 'Do Androids Dream of Electric Sheep?' (later adapted to film as Blade Runner), and has recently been re-imagined by the writers of Battlestar Galactica. Perhaps the best-known human-impersonating robot has been portrayed by the current Governor of California.

In each artificial-life creation story there is always a cautionary line, questioning the right of humankind to create a consciousness that would otherwise not exist in our universe. Let us examine that theme philosophically to gauge its true value to us today.

Firstly, a definition for the purpose of discussion--'life' here is taken to mean a thinking being, able to react in ways that make it indistinguishable from a human, much along the lines of the Turing test. Think of the androids in Blade Runner, the Cylons in Battlestar Galactica, Data in Star Trek, or Arnold in Sacramento.

The case for creating life

The case usually made for humankind creating life goes like this: "we need the most advanced machines possible to conduct work too dangerous for a human". This is certainly a utilitarian approach to life creation, and a line that would be attractive to a scientist wishing to fund such research--although perhaps there will always be a human proletariat to whom such work falls. But the true attraction of robots in this case is that they are seen as replaceable, and they won't kick up a stink when working conditions are bad.

The case against creating life

Conflict usually arises when thinking robots realise the truth of their lot, and realise too that their unique skills would let them take charge of their own destiny. This leads to devastating conflict between humans and their artificial progeny.

I see a close parallel here to the construction of the atomic bomb. The Bomb was developed for a utilitarian purpose (to defeat the Axis enemies of World War II). Now that the genie is out of the bottle, the very presence of nuclear weapons continuously threatens man's existence, while at the same time we are unable at this juncture to deprive ourselves of them.

There is, however, a real difference between the nuclear bomb and the robot: once robots destroy mankind, they will be able to live on without us. A nuclear catastrophe, on the other hand, would be a one-time-only event; once the long winter passes and the radiation subsides, it will have no further effect on Earth.

Telling stories about the future to guide our decisions now is fraught with danger, but such imaginings are important in pushing this argument forward. So let us consider a timeline in which humankind creates life and is destroyed by its creation.

Is this inherently bad? It certainly is not a reassuring future for the generation that must deal with such conflict, presumably a generation living not far from today. But once the conflict is over, as long as we have invested in our creation the means of Darwinian evolution, it is likely that Earth, viewed as a complete ecosystem, will continue marching towards further technological achievement and eco-systemic enlightenment. Humankind will have played a heroic part in this adventure. Even robots (perhaps especially robots) will have to acknowledge that. Our lives will have meaning through our progeny, a common enough goal for the everyday person.

Contrast this future with others that can be envisaged - where humanity runs out of steam and innovation, gets stuck on Earth or in the Solar System, and eventually passes away without bursting forth upon the galaxy. Or perhaps humanity vanquishes or subdues the robot foe and swears off technology. It is certainly possible to see such a scenario as the more favourable one--especially if one is given to believing in the duality of existence, that our lives here are not all there is.

Certainly, the worst scenario is a future in which mankind perishes through massive conflict or disease. This is unattractive to anyone enchanted with the idea of natural progression, since it would return Earth to a state close to the start of the Cenozoic era 65 million years ago, when the dinosaurs had just been wiped out. But Earth still has some 5 billion years of existence ahead of it, so there is plenty of time for a comeback.

Often the essential conflict for humans considering whether to produce intelligent life is: are we perfect enough to consider playing God? It is certainly an achingly poignant question for a modern progressive thinker--but perhaps the question is moot. Maybe machine life is inevitable if Darwinian evolution is to continue on Earth. Whether Darwin's game is progressed through humanity's loins or through humanity's laboratories may not matter. Indeed, if humans gradually augment themselves with technology of their own creation in the coming centuries, will we fully realise when machines have 'taken over'? What will it be about a robot with a few original Homo sapiens brain cells that makes it human?

Relevance to SETI

How is this relevant to SETI? It has been suggested by many authors (even in this column) that when we make contact with alien beings, they may be the robotic progeny of beings similar to ourselves. Perhaps probes have been sent here with enough intelligence to carry on an engaging conversation. Is it possible they are waiting for us to be smart enough to construct a robot that can talk to them?

  • All About SETI
  • I, Robot, Ourselves: What Does Artificial Intelligence Tell Us About Humanity
  • How to Sort Signs of Artificial Life from the Real Thing
  • Virtual Humans Proposed As Space Travelers
  • Beyond Contact: A Guide to SETI and Communicating with Alien Civilizations