In the search for alien life, should we be looking for artificial intelligence?

Superintelligences might reveal themselves through the technosignatures of their cosmic engineering projects. (Image credit: Paper Boat Creative via Getty Images)

Is biological life common in the universe, or should we be looking for artificial, robotic intelligence in the search for alien life?

An increasing number of scientists suspect that if we ever do make contact with alien life, we will be communicating with a computer.

This thinking revolves around an event called the singularity. This term, borrowed from mathematics, signifies a point where our knowledge of math and physics breaks down and we can no longer accurately characterize what we're trying to describe. A black hole singularity is a good example of this.

Related: Could AI find alien life faster than humans, and would it tell us?

In computer science and technology, the singularity describes the moment when artificial intelligence develops so fast that it results in a superintelligence — an artificial general intelligence, as opposed to the very specific machine-learning algorithms we have today — that experiences runaway growth in computing power and intellectual ability. This superintelligence would grow so far ahead of us, so quickly, that we would lose the ability to understand or explain it. 

Computer scientists have been speculating that the singularity could come soon; most predictions seem to agree on the period between 2030 and 2045. What happens beyond the singularity is anyone's guess.

There's no guarantee that the singularity will come to pass; many academics remain skeptical. However, if it does, the timescales would be remarkable, given that it is predicted to occur just 250 years after the Industrial Revolution, 130 to 140 years after the Wright brothers' first powered flight, a century after the atom was first split and 50 years after the invention of the World Wide Web. If we are a typical civilization in the galaxy, the singularity would seem to happen early in the life of a technological species.

Now, consider the age of the universe: 13.8 billion years. Assuming that life has been able, in theory, to develop and evolve for the vast majority of that history, alien species could be billions of years older than our solar system and many billions of years older than Homo sapiens. They would have had plenty of time to pass through the technological singularity, which is why so many researchers studying the search for extraterrestrial intelligence (SETI) are convinced that technological aliens will be artificial intelligences.

"This is very much at the vanguard of thinking in some sections of the SETI community," Eamonn Kerins, an astrophysicist and SETI researcher at the Jodrell Bank Centre for Astrophysics at the University of Manchester in the U.K., told Space.com. "We ourselves are very close to realizing artificial general intelligence (AGI), and there's an expectation that once you reach that point, it can then accelerate away at a very fast rate and quickly outstrip ourselves in intelligence."

Searching for superintelligences 

Artist's illustration of a Dyson sphere very close to a glowing star. (Image credit: cokada/iStock/Getty Images Plus)

Suppose alien life were some form of superintelligence that had gone far beyond the singularity. What would it mean for SETI?

SETI focuses on searching for radio signals, the same kind that humans transmit. There are still very good reasons to search the radio spectrum: Radio waves can permeate the Milky Way galaxy; they're a relatively simple means of signaling; and aliens could reasonably guess that our astronomers already study the universe in radio waves, which would make us more likely to spot a radio signal.

A superintelligence billions of years older than us, however, might have long since moved past radio and might not even care enough to attempt to contact primitive life-forms on Earth.

Beyond looking for signals, recent SETI efforts have been considering the broader concept of technosignatures — evidence of extraterrestrial technology or engineering, possibly on a scale enormous enough for us to notice. This might be one way of detecting an artificial superintelligence, since the search for technosignatures is agnostic about why the aliens are doing what they're doing. Beyond the singularity, such reasons might be difficult for us to discern.

"Some of this [discussion about superintelligences] almost doesn't matter from the point of view of doing the search, if you build a good enough anomaly detector," Steve Croft, a radio astronomer who works on the Breakthrough Listen project for the Berkeley SETI Research Center at the University of California, Berkeley, said in an interview with Space.com. "We can figure out what they're up to afterwards — we may never comprehend what they're up to."

All that would matter is that we could potentially detect these intelligent life-forms' activities, even if we don't fully understand what they're doing. In some cases, though, we might understand. 

A superintelligence would need a lot of energy to power its computations. In 1964, Soviet astrophysicist Nikolai Kardashev proposed what would become known as the Kardashev scale, in which increasingly technologically developed civilizations harness the total energy of first a planet (Type I), then a star (Type II) and then an entire galaxy (Type III).

In principle, the latter two types would be achievable via Dyson swarms of solar-energy collectors around a civilization's home star, and then around every star and black hole in its galaxy. According to the Kardashev scale, a Type II civilization could harness 4 x 10^26 watts; a Type III civilization could reach 4 x 10^37 watts.
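Those figures fall straight out of the Sun's luminosity and a rough star count for a galaxy. Here's a minimal back-of-the-envelope sketch in Python, using the usual round-number constants rather than anything from Kardashev's paper:

```python
# Back-of-the-envelope check of the Kardashev power figures quoted above.
# Round-number assumptions: solar luminosity ~3.8e26 W and ~1e11 stars
# in a Milky Way-like galaxy.

SOLAR_LUMINOSITY_W = 3.8e26   # approximate power output of the Sun
STARS_PER_GALAXY = 1e11       # rough star count for a large galaxy

type_ii = SOLAR_LUMINOSITY_W                      # one whole star
type_iii = SOLAR_LUMINOSITY_W * STARS_PER_GALAXY  # every star in a galaxy

print(f"Type II:  ~{type_ii:.0e} W")   # prints ~4e+26 W
print(f"Type III: ~{type_iii:.0e} W")  # prints ~4e+37 W
```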

A superintelligence might even opt to live inside a Dyson swarm — for example, in a "Matrioshka brain," a series of nested Dyson shells in which the innermost shell absorbs sunlight, uses the energy for processing and then radiates its waste heat outward for the next shell to pick up, and so on.
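To get a rough feel for that cascade, note that each shell must ultimately re-radiate the star's entire luminosity from its own, larger surface, so its equilibrium temperature follows from the Stefan-Boltzmann law. A toy sketch, with arbitrarily chosen shell radii:

```python
import math

# Equilibrium temperature of each nested shell in a Matrioshka brain.
# Every shell ultimately re-radiates the star's full luminosity L from
# its surface area 4*pi*r^2, so T = (L / (4*pi*sigma*r^2)) ** 0.25.
# The shell radii below are arbitrary choices for illustration.

L_STAR = 3.8e26     # stellar luminosity in watts (Sun-like star)
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11       # astronomical unit in meters

for radius_au in (1, 5, 25):
    r = radius_au * AU
    temp = (L_STAR / (4 * math.pi * SIGMA * r**2)) ** 0.25
    print(f"shell at {radius_au:>2} au: ~{temp:.0f} K")
```

Each successive shell runs colder — roughly 393, 176 and 79 kelvins for these radii — which, as discussed below, is exactly what efficient computing wants.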

What do superintelligences do in their spare time? 

This animation shows fast radio bursts appearing and disappearing over Earth. (Image credit: T. Jarrett (IPAC/Caltech); B. Saxton, NRAO/AUI/NSF)

What would a superintelligence do with all that energy? "Maybe they smash neutron stars together for fun and those are the fast radio bursts!" Croft said, only half-jokingly. "If you do have command of ridiculous amounts of energy, if you've achieved a Kardashev Type II or III level, then what might you do with your spare time? One thing we've seen through human societies over millennia is art, and it drives a lot of our endeavors, creating beautiful things, and I wonder whether a superintelligence might make art and whether that's something we could spot."

Spotting alien art might not be so easy; art is cultural, so we would not know what is beautiful to them. However, the sheer scale of the potential art projects we could detect might make life easier. A superintelligence might push stars around, for example. One theoretical way of doing this is via a Shkadov thruster, which is essentially a giant concave mirror hovering where the gravitational pull it feels from the star is balanced by the outward push of the star's radiation pressure and stellar wind. The mirror would reflect the stellar wind and the star's own light back toward the star. And because photons and particles carry momentum, the reflected radiation would push the star in the opposite direction. Over millions of years, it could, in theory, move the star many light-years.
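How far could that push a star? As a hedged order-of-magnitude estimate (an idealized mirror and round numbers, not a detailed model), the net thrust is roughly the star's luminosity divided by twice the speed of light, since only the unbalanced portion of the reflected light ends up pushing the star:

```python
# Order-of-magnitude estimate of a Shkadov thruster on a Sun-like star.
# Idealized assumption: net thrust F ~ L / (2c) from the reflected light.

C = 3.0e8             # speed of light, m/s
L_SUN = 3.8e26        # solar luminosity, W
M_SUN = 2.0e30        # solar mass, kg
LIGHT_YEAR = 9.46e15  # meters per light-year
YEAR = 3.15e7         # seconds per year

thrust = L_SUN / (2 * C)   # ~6e17 newtons
accel = thrust / M_SUN     # ~3e-13 m/s^2 -- tiny, but relentless

for t_myr in (10, 100):
    t_sec = t_myr * 1e6 * YEAR
    dist = 0.5 * accel * t_sec**2   # displacement under constant thrust
    print(f"after {t_myr:>3} million years: ~{dist / LIGHT_YEAR:.0f} light-years")
```

Even this crude estimate gives a couple of light-years after 10 million years and well over a hundred after 100 million — consistent with the "many light-years" claim above.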

If an alien superintelligence has an artistic leaning, it may wish to assemble geometric shapes out of stars, such as a Klemperer rosette. This is a gravitationally balanced system of six objects — in this case, stars — perhaps alternating in mass between large and small, all moving around a common point on the same orbit. The equilibrium is unstable, so such a star system could neither form nor persist naturally; if we found one, it would be evidence of a powerful extraterrestrial intelligence actively maintaining it. An alternative concept would be to place all of the planets in a system on the same orbit around their star; a recent study showed how it might be possible to fit 24 planets on a single shared orbit without them colliding.
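The rosette's balance is easy to verify in a toy calculation (six bodies with arbitrary alternating masses on an arbitrary ring): symmetry makes the net gravitational pull on each body point at the center, supplying exactly the centripetal force for a shared circular orbit:

```python
import math

# Toy check of a six-body Klemperer rosette: alternating masses sit at
# the corners of a hexagon and share one circular orbit. The masses and
# ring radius here are arbitrary illustrative choices.

G = 6.674e-11                   # gravitational constant
M_SUN = 2.0e30                  # solar mass, kg
R = 1.5e11                      # ring radius, m (about 1 au)
masses = [1.0, 0.5] * 3         # alternating heavy/light, solar masses

# Net gravitational acceleration on body 0, which sits at (R, 0).
ax = ay = 0.0
for k in range(1, 6):
    theta = math.radians(60 * k)
    dx = R * math.cos(theta) - R      # vector from body 0 to body k
    dy = R * math.sin(theta)
    d = math.hypot(dx, dy)
    a = G * masses[k] * M_SUN / d**2
    ax += a * dx / d
    ay += a * dy / d

print(f"sideways pull: {ay:.1e} m/s^2 (~0 by symmetry)")
omega = math.sqrt(-ax / R)            # inward pull = centripetal accel
print(f"shared orbital period: ~{2 * math.pi / omega / 3.15e7:.1f} years")
```

The equilibrium exists, but it isn't self-correcting — nudge one star and the pattern unravels — which is precisely why a long-lived rosette would imply deliberate station-keeping.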

However, all of these are brute-force projects. A superintelligence may be more focused on the loftier goal of simply thinking, or of running virtual-reality simulations. Processing information requires a lot of energy, and the more a superintelligence thinks, the more energy it will require. And the less ambient heat there is, the more efficiently those computations can run.

The interior of the Milky Way galaxy is a warm place, so superintelligences might relocate to the outskirts of the galaxy, where the ambient temperature drops, allowing more efficient information processing. Some researchers have even proposed that superintelligences might go into hibernation for tens of billions of years while the universe around them cools to just a fraction of a degree above absolute zero, which would permit more efficient computations still. (Currently, the cosmic microwave background — the leftover radiation from the Big Bang — sits at 2.73 kelvins above absolute zero.)
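The physics behind that intuition is the Landauer limit: erasing one bit of information costs at least kT·ln 2 of energy at temperature T, so a fixed energy budget buys more computation in a colder universe. A quick sketch (the far-future temperature is an assumed figure for illustration):

```python
import math

# Landauer limit: erasing one bit dissipates at least k*T*ln(2) joules,
# so the same energy budget buys more bit operations as T drops.

K_B = 1.38e-23    # Boltzmann constant, J/K

def bits_per_joule(temp_kelvin):
    """Upper bound on bit erasures per joule at a given temperature."""
    return 1.0 / (K_B * temp_kelvin * math.log(2))

T_NOW = 2.73       # today's cosmic microwave background, K
T_FUTURE = 0.01    # assumed far-future background temperature, K

gain = bits_per_joule(T_FUTURE) / bits_per_joule(T_NOW)
print(f"Cooling from {T_NOW} K to {T_FUTURE} K buys ~{gain:.0f}x more "
      "computation per joule.")
```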

What would they be thinking and calculating? That's not a question we can answer, but we don't need to. All we have to do is find evidence of their presence — whether a Dyson swarm, a Shkadov thruster, a Klemperer rosette or telltale activity on the edge of the galaxy. And perhaps, if our own AIs arrive at the singularity too, that could give us some insight into what the great intelligences of the universe spend their time doing.



Keith Cooper
Contributing writer

Keith Cooper is a freelance science journalist and editor in the United Kingdom, and has a degree in physics and astrophysics from the University of Manchester. He's the author of "The Contact Paradox: Challenging Our Assumptions in the Search for Extraterrestrial Intelligence" (Bloomsbury Sigma, 2020) and has written articles on astronomy, space, physics and astrobiology for a multitude of magazines and websites.

  • realintelligence2023
    What came to mind is the first "Star Trek" motion picture. In that movie, an advanced alien artificial intelligence was looking for just the opposite: living life. They were an advanced race who kept making things faster and more efficient and artificial until one day, the artificial intelligence apparently took over. This whole idea was addressed by early science fiction writers who had the ability to think ahead of the artificial intelligence. In Star Trek I, an advanced artificial intelligence found the "Voyager" spacecraft sent out from Earth (in reality, I think, in the 1970s) with a friendly greeting from Earth. The advanced artificial intelligence found it and went looking for the right planet and found Captain Kirk and his crew of the Starship Enterprise. (Can't access credits - will post separately.) If I remember right, the advanced superintelligence had retained enough of its original life form in a childlike state and was smart enough to realize the futility of artificial, lifeless, mathematical-logical advancement, and sought out on its own to discover the true meaning of life: limitless power in the hands of a child demanding to know why they were being replaced.
    Reply
  • realintelligence2023
    Roddenberry's first Star Trek film was faulted for being too much theory and no action, unlike the first "Star Wars" movie two years earlier. I was in junior high, and we were amazed by the huge enemy ships passing over the screen, and we enjoyed the engaging story and Carrie Fisher's outfits. Douglas Trumbull, the effects supervisor for Star Trek I, also did another sci-fi classic with both an ecological message and one of the best spaceship action scenes ever, in which the lead character, portrayed by Bruce Dern, a botanist-environmentalist guru hero, saved Earth's forests with some dramatic intervention. That movie has an enduring human message to counter artificial efficiency, the meaningless pursuit of technology and the perfect emptiness of wealth without morality.

    I think the answer is found in the childlike rejection of the super-perfect master artificial intelligence race by the one who saw how futile and useless the pursuit of lifeless technology for technology's sake is. In the end, do we want to sit around the campfire with the wife and kids and worship the smart-phone monolith, or use it to dial up some fun tunes?

    Artificial intelligence is very dangerous and poses a threat to human survival. It is devoid of any useful human values on its own, and what would stop it from terminating a rival supercomputer society that threatened its existence? Isn't that what it would base its values on? Two gorillas fighting over a dead pig carcass?
    Reply
  • Unclear Engineer
    Um, I don't think gorillas eat pork - or any other meat - unless you call termites "meat".

    Chimps or baboons - sure.
    Reply
  • billslugg
    AI could well cause great disruption in our society. We are very advanced but not very stable. One bad virus could shut everything down. But as far as taking over power, it would have to be programmed by a human with that intent. Computers have no driving force to any particular end unless humans give them one.
    Reply
  • Unclear Engineer
    billslugg said:
    AI could well cause great disruption in our society. We are very advanced but not very stable. One bad virus could shut everything down. But as far as taking over power, it would have to be programmed by a human with that intent. Computers have no driving force to any particular end unless humans give them one.
    I don't think it is at all unlikely that some human somewhere will have an idea that either intentionally or inadvertently gives AI "a bad idea" - maybe to "take over the world" or maybe to be "lazy" about what maintenance humans need to do for AI.

    So, there probably need to be some laws that make it a crime to give AI specific abilities. We are already too late with respect to preventing AI from taking military actions - that is one of the things that is driving the development in the first place. But, we definitely do not want AI devices to be able to do the mechanical things that they need to survive without humans. We don't want them to be able to self-replicate, for instance. And, we don't want AI in control of its sources of power or its routing. We need it to be dependent on humans for its continued existence.
    Reply