Expert Voices

AI may be to blame for our failure to make contact with alien civilizations

The Karl G. Jansky Very Large Array radio astronomy observatory, located on the Plains of San Agustin in New Mexico. (Image credit: Getty Images)

This article was originally published at The Conversation. The publication contributed the article to Space.com's Expert Voices: Op-Ed & Insights.

Michael Garrett is the Sir Bernard Lovell chair of Astrophysics and Director of Jodrell Bank Centre for Astrophysics, University of Manchester.

Artificial intelligence (AI) has progressed at an astounding pace over the last few years. Some scientists are now looking towards the development of artificial superintelligence (ASI) — a form of AI that would not only surpass human intelligence but would not be bound by the learning speeds of humans.

But what if this milestone isn't just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?

Related: Could AI find alien life faster than humans, and would it tell us?

This idea is at the heart of a research paper I recently published in Acta Astronautica. Could AI be the universe's "great filter" – a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?

This is a concept that might explain why the search for extraterrestrial intelligence (SETI) has yet to detect the signatures of advanced technical civilizations elsewhere in the galaxy.

The great filter hypothesis is ultimately a proposed solution to the Fermi Paradox. The paradox asks why, in a universe vast and ancient enough to host billions of potentially habitable planets, we have not detected any signs of alien civilizations. The hypothesis suggests there are insurmountable hurdles in the evolutionary timeline of civilizations that prevent them from developing into space-faring entities.

I believe the emergence of ASI could be such a filter. AI's rapid advancement, potentially leading to ASI, may intersect with a critical phase in a civilization's development – the transition from a single-planet species to a multiplanetary one.

SpaceX CEO Elon Musk claims the company's Starship rocket is the first vehicle capable of making humanity interplanetary. (Image credit: SpaceX)

This is where many civilizations could falter, with AI making much more rapid progress than our ability either to control it or to sustainably explore and populate our Solar System.

The challenge with AI, and specifically ASI, lies in its autonomous, self-amplifying and self-improving nature. It has the potential to enhance its own capabilities at a speed that far outpaces our own evolutionary timelines.

The potential for something to go badly wrong is enormous, leading to the downfall of both biological and AI civilizations before they ever get the chance to become multiplanetary. For example, if nations increasingly rely on and cede power to autonomous AI systems that compete against each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

In this scenario, I estimate the typical longevity of a technological civilization might be less than 100 years. That's roughly the time between becoming able to receive and broadcast signals between the stars (around 1960) and the estimated emergence of ASI on Earth (around 2040). This is alarmingly short when set against the cosmic timescale of billions of years.

This estimate, when plugged into optimistic versions of the Drake equation – which attempts to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way – suggests that, at any given time, there are only a handful of intelligent civilizations out there. Moreover, like us, their relatively modest technological activities could make them quite challenging to detect.

The Drake Equation is used to estimate the number of communicating civilizations in our galaxy, or more simply put, the odds of finding intelligent life in the Milky Way. (Image credit: sharply_done/Getty Images)
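To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The parameter values are illustrative "optimistic" placeholders chosen for this example, not figures taken from the paper; only the short civilization lifetime L is the quantity under discussion.

    # Back-of-the-envelope Drake equation: N = R* x f_p x n_e x f_l x f_i x f_c x L
    # All parameter values are illustrative optimistic assumptions, not from the paper.
    R_star = 1.5  # Milky Way star formation rate (stars per year)
    f_p = 1.0     # fraction of stars that host planets
    n_e = 0.2     # habitable planets per star with planets
    f_l = 1.0     # fraction of habitable planets where life emerges
    f_i = 1.0     # fraction of life-bearing planets that evolve intelligence
    f_c = 0.2     # fraction of intelligent species that become detectable
    L = 100       # longevity of a communicating civilization (years)

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"Communicating civilizations at any one time: about {N:.0f}")  # ~6

With these generous inputs, N comes out at about six. The result scales linearly with L, so stretching the lifetime to a million years would instead give tens of thousands of civilizations; that sensitivity is why the longevity term dominates the argument.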

Wake-up call

This research is not simply a cautionary tale of potential doom. It serves as a wake-up call for humanity to establish robust regulatory frameworks to guide the development of AI, including military systems.

This is not just about preventing the malevolent use of AI on Earth; it’s also about ensuring the evolution of AI aligns with the long-term survival of our species. It suggests we need to put more resources into becoming a multiplanetary society as soon as possible – a goal that has lain dormant since the heady days of the Apollo project, but has lately been reignited by advances made by private companies.

As the historian Yuval Noah Harari noted, nothing in history has prepared us for the impact of introducing non-conscious, super-intelligent entities to our planet. Recently, the implications of autonomous AI decision-making have led to calls from prominent leaders in the field for a moratorium on the development of AI, until a responsible form of control and regulation can be introduced.

But even if every country agreed to abide by strict rules and regulation, rogue organizations will be difficult to rein in.

The integration of autonomous AI in military defense systems has to be an area of particular concern. There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems, because they can carry out useful tasks much more rapidly and effectively without human intervention. Governments are therefore reluctant to regulate in this area given the strategic advantages AI offers, as has been recently and devastatingly demonstrated in Gaza.

This means we already edge dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and sidestep international law. In such a world, surrendering power to AI systems in order to gain a tactical advantage could inadvertently set off a chain of rapidly escalating, highly destructive events. In the blink of an eye, the collective intelligence of our planet could be obliterated.

Humanity is at a crucial point in its technological trajectory. Our actions now could determine whether we become an enduring interstellar civilization, or succumb to the challenges posed by our own creations.

Using SETI as a lens through which we can examine our future development adds a new dimension to the discussion on the future of AI. It is up to all of us to ensure that when we reach for the stars, we do so not as a cautionary tale for other civilizations, but as a beacon of hope – a species that learned to thrive alongside AI.

Originally published at The Conversation.


Michael Garrett is the Sir Bernard Lovell chair of Astrophysics and Director of Jodrell Bank Centre for Astrophysics, University of Manchester with a strong interest in the Search for Extraterrestrial Intelligence (SETI). Garrett is also the chair of the International Academy of Astronautics SETI Permanent Committee (SETI PC). Garrett has published over 130 refereed journal papers and was previously General Director of ASTRON (The Netherlands Institute for Radio Astronomy) and Director of JIVE (Joint Institute for VLBI in Europe).

  • Classical Motion
    I know little about A.I. But I'm guessing the ancient roots are in chess. And that is an interaction of deceit. It's not cooperation.

    It's elimination.
  • bwana4swahili
    "... even if every country agreed to abide by strict rules and regulation, rogue organizations will be difficult to rein in."
    I suspect all countries will NOT abide by strict rules and regulation. China, for one, is not going to be governed by rules set by the USA or the UN. India is probably a close 2nd in this regard. It is to a country's advantage to be far ahead of others in AI development!
  • COLGeek
    I know more than a little about AI and this notion is a HUGE stretch, bordering on fantasy.

    SETI may simply not have found anything of merit yet, regardless of the advances in AI.

    All AI is only as good as the data it draws from. If the evidence, based on the criteria used to look for it, isn't there, then it simply isn't there.

    By the way, I was a longtime SETI member and crunched a bazillion items over many years.
  • bwana4swahili
    COLGeek said:
    I know more than a little about AI and this notion is a HUGE stretch, bordering on fantasy.

    SETI may simply not have found anything of merit yet, regardless of the advances in AI.

    All AI is only as good as the data it draws from. If the evidence, based on the criteria used to look for it, isn't there, then it simply isn't there.

    By the way, I was a longtime SETI member and crunched a bazillion items over many years.
    "All AI is only as good as the data it draws from". Applies to homo sapiens as well.

    bwa
  • COLGeek
    bwana4swahili said:
    "All AI is only as good as the data it draws from". Applies to homo sapiens as well.

    bwa
    Agreed!

    AI should be like lawyers. Advisers, not deciders.

    The fear of AI is not from AI itself, but from people getting lazy and abdicating their personal responsibilities to prevent flawed outcomes.

    Just another tool that can be used for good or ill.
  • Madhu Thangavelu
    Hey, this AI quest is clearly double-edged, and the author’s vision paints the pessimistic view of it all. To my mind, AI could accelerate our contact with extraterrestrial intelligence and entities. But then, how would we know, if the solutions AI presents are unintelligible to us? Perhaps AI will design interpreting algorithm filter(s) to engage the gazillion civilizations out there who have managed to thrive by keeping watch for and dodging cosmic hazards, avoiding Nature’s ambivalence toward what we perceive as unfathomable violence?… like rogue black holes tearing up entire star systems or much more frequent gamma-ray bursts or CMEs that can fry vital space assets?… Jus’ sayin’…

    Happy Mother’s Day, y’all!
  • billslugg
    COLGeek said:
    By the way, I was a longtime SETI member and crunched a bazillion items over many years.
    I miss the old waterfall screen saver.
  • Classical Motion
    I would have thought that looking for intelligent life would be far easier than looking for signs of life. By intelligent life I mean life that realizes the stars and the distances and would attempt to put out a beacon. Is there anybody out there?

    There is only one way to do it. Something will have to be added to a star that changes its normal spectrum. It has to give that star a unique, unnatural spectrum. This will make it stand out like a sore thumb to any astronomer. But that would be just a beacon, a tease to look there.

    Once they look in that direction, the reflection of light from a planet will have to be modulated. Perhaps with a spatial charge field about the planet, or by modulating a spectrum line (component) of the planet. Assuming a planet's spectrum is easier to modulate than a star's.

    But few would see it even after a million years of operation. Intelligent life might realize this and not waste time and resources on it. Then again.....

    Now, what happens if you or S.E.T.I. spot a "neon" star? That would be dumb intelligent life. By all means claim and advertise that puppy. Just... please... don't try and answer it.

    Dumb and Dumber.
  • billslugg
    They have found 60 new candidates for a Dyson Sphere using data "from the European Gaia satellite as well as the Two Micron All Sky Survey (2MASS) and Wide-field Infrared Survey Explorer (WISE)."

    These candidates have an excess of infrared as compared to the visible.

    https://earthsky.org/space/dyson-sphere-alien-megastructures-infrared-heat-stars/
  • AboveAndBeyond
    Michael Garrett sounds like he read James Barrat's 2015 book "Our Final Invention: Artificial Intelligence and the End of the Human Era," one of probably several that present most or all of the standard AI Doomsday Scenario. Barrat has made documentary films for PBS, so he's not some far-out kook.

    If an alien has radio telescope technology, it's not necessarily inevitable that digital technology goes along with it, so it could be that AI is one of those things unique to humans. I don't think the absence of evidence for ETIs is much of an argument for them being done in by AI gone wrong.