Deep space missions will test astronauts' mental health. Could AI companions help?

A solitary astronaut stands on the moon, gazing at Earth.
(Image credit: Getty Images)

World Space Week 2023 is here and Space.com is looking at the current state of artificial intelligence (AI) and its impact on astronomy and space exploration as the space age celebrates its 66th anniversary. Here, John Loeffler discusses how AI companions might help keep astronauts on deep space missions mentally healthy.

In one of the more light-hearted scenes of Christopher Nolan's otherwise tension-filled film "Interstellar," the four Endurance astronauts are lifting off on the movie's mission to save humanity. Riding along with them is a quippy AI named TARS that jokes that it is looking forward to using them all as servants on its robot colony and wishes Matthew McConaughey's character the best of luck getting back to the ship once TARS blows him out the airlock for talking back.

Told that TARS has been programmed with a humor algorithm for the benefit of the humans on board, McConaughey's Cooper asks TARS what its humor level is set to and promptly commands the AI to scale it back a bit.

As with much of "Interstellar," Nolan went to great lengths to envision what the future of deep space exploration would look like, and AI companions for human astronauts are as important to that vision as the film's spectacular black hole set piece, Gargantua, even becoming significant characters in their own right.

Back on Earth, NASA, the European Space Agency, and a wide assortment of private space companies are all looking at artificial intelligence as a key part of future space missions like the upcoming Artemis moon missions and, eventually, the first crewed missions to Mars. But as humans push deeper into space, these AI systems may not simply be tools to help carry out operational tasks; they might also provide important emotional and mental health support for crew members facing some of the most extreme social isolation human beings have ever experienced.

Related: AI is already helping astronomers make incredible discoveries. Here's how

The unique mental health challenges of deep space

Space, famously, is a very lonely place, and the unique environment of even low Earth orbit is enough to dramatically affect a space traveler's mental health. When William Shatner, Star Trek's Captain James T. Kirk, rode a Blue Origin rocket into space in 2021, he said he expected to feel an "ultimate catharsis," but instead was rocked by an intense sorrow.

"It was among the strongest feelings of grief I have ever encountered," Shatner wrote in Variety a year after his trip. "The contrast between the vicious coldness of space and the warm nurturing of Earth below filled me with overwhelming sadness."

Other astronauts have described similar experiences. Apollo 11 astronaut Buzz Aldrin described the surface of the moon as a "magnificent desolation" in a 2014 Reddit AMA.

"Because I realized what I was looking at, towards the horizon and in every direction, had not changed in hundreds, thousands of years," Aldrin wrote. "Beyond me I could see the moon curving away - no atmosphere, black sky. Cold. Colder than anyone could experience on Earth when the sun is up  — but when the sun is up for 14 days, it gets very, very hot. No sign of life whatsoever.

"That is desolate. More desolate than any place on Earth."

The human mind is not built for this kind of environment, but adapting to it is not impossible, as countless space travelers to the ISS and beyond can attest. But the mental health challenges of space travel are as important as, if not more important than, the physical ones.

"Deep space travel will pose unique challenges to crew, challenges that are inherently different from those currently experienced on orbit," Alexandra Whitmire, element scientist with the Behavior Health and Performance Element of NASA's Human Research Program, told Space.com. 

As humanity pushes farther into space, we'll need to find ways to keep astronauts from suffering from the effects of isolation while millions of miles away from Earth. (Image credit: NASA)

While there have been very few reported mental health issues among astronauts during space missions, they do happen. A 2016 NASA report on the psychological effects of space shuttle missions found 34 notable instances of "behavioral signs or symptoms" among 208 crew members over 89 missions, an overall incidence rate of 0.11 for a 14-day mission. The most commonly reported symptom was "anxiety or annoyance."

Extrapolate that out to a two-year round trip to Mars with a small crew, and you're looking at an all-but-guaranteed environment of at least some degree of interpersonal conflict and stress.
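To put rough numbers on that extrapolation, here is a minimal back-of-the-envelope sketch in Python. It assumes, purely for illustration, that the reported 0.11 figure is a per-crew-member rate for a 14-day mission, that the risk scales linearly with mission duration, and that a Mars crew numbers four; none of those assumptions comes from the NASA report itself.

```python
# Back-of-the-envelope extrapolation of the shuttle-era symptom rate
# to a notional Mars mission. The linear scaling with duration and the
# crew size are illustrative assumptions, not figures from the report.

SHUTTLE_RATE = 0.11    # reported incidence per crew member, 14-day mission (assumed interpretation)
SHUTTLE_DAYS = 14      # nominal shuttle mission length, in days

MARS_DAYS = 2.5 * 365  # ~2.5-year round trip, per Whitmire's estimate
CREW_SIZE = 4          # low end of the four-to-six crew she describes

# Scale the per-person rate linearly with mission duration...
per_person = SHUTTLE_RATE * (MARS_DAYS / SHUTTLE_DAYS)
# ...then sum across the crew for an expected incident count.
crew_total = per_person * CREW_SIZE

print(f"Expected incidents per crew member: {per_person:.1f}")  # ~7.2
print(f"Expected incidents across the crew: {crew_total:.1f}")  # ~28.7
```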

Which is understandable. Ask anyone who's been on a road trip with family for more than a few hours and they'll tell you how quickly tempers can flare.

"Given the distance of Mars, for example, the duration of such a mission will last around 2.5 years. The size of the vehicle will be relatively small, suggesting that the crew of four or six will live and work for a period of two and a half years, confined in a small habitat," Whitmire said.

A road trip through a cold, lifeless void that is one loose seal away from sucking you out into certain doom? Astronauts need all the help they can get to stay mentally healthy.

Can empathetic AIs help keep space travelers mentally healthy?

While most of us might be tempted to write off the value of an AI in deep space as a mental health tool for astronauts (an AI cannot replace a person, after all), such systems do have serious potential to support the emotional well-being of those tasked with living on a moon base or even Mars.

Naturally, no one is proposing that these explorers journey alone, and not just for safety reasons. We are social animals: close contact with other humans is an indispensable part of our mental well-being, and it's unlikely that even a sophisticated artificial intelligence can replace human-to-human connection.

Still, NASA and the ESA have been looking into flying AI "crew" as a form of stress relief for a while now. Back in 2018, Airbus and IBM partnered with the German Aerospace Center (DLR) on a floating AI for the International Space Station called the Crew Interactive Mobile Companion (CIMON). Results were mixed, to say the least.

CIMON's biggest shortcoming was its general lack of empathetic responses, which made it feel more like a floating Alexa smart speaker than a true companion. Other AI firms are now looking to build this empathy element into future systems in hopes of bridging that gap.

NASA, meanwhile, is actively investigating whether such an AI "companion" would be useful for astronauts on future moon and Mars missions, but Whitmire stresses that the work must be guided by the evidence.

"Research is under way to help inform mitigation strategies needed to support astronauts in the context of these future missions — including missions to the Moon and to Mars," she said. "AI as a digital 'companion' is a potential area of interest, but more research is needed to understand methods through which this type of support could be granted and to what extent, etc., as well as potential pitfalls, before recommendations are made for AI as a behavioral health countermeasure."

But an artificial intelligence doesn't need to replace a human companion for it to be beneficial. Just as journaling can be an important mental health exercise, interacting with an artificial intelligence can serve much the same purpose or prove even more useful if it's able to provide specific prompts to help guide astronauts who are struggling with some of the deleterious mental health effects of deep space isolation.

"Given the prolonged and extreme isolation of a future Mars mission, an AI social support tool, if proven to be effective, could serve as part of a toolkit of countermeasures available to future crew venturing on a mission to Mars," Whitmire said. "It's possible that for some crew, having an AI 'companion' offers a safe sounding board. For many however, the ability to connect with family through audio and visual loops, and the maintenance of team cohesion of the crew on the mission, will serve as key methods to support their behavioral health. The goal is to offer an array of evidence-based mitigations to support crew health and performance, and if AI companions prove to be an effective and meaningful countermeasure, then there could be a role for them in a toolkit of countermeasures."

Still, in the end there is no replacement for human connection, something NASA is keenly aware of.

"From my perspective, while AI can potentially serve as a tool to support future crews, I think that it will be just that — a support tool- that cannot replace the need for contact with loved ones back home, and the need to support the cohesion of the crew on a mission," Whitmire said. "Nothing convinced me more of this than going through COVID quarantine, as we all became more reliant on the use of technology to keep us more connected—but we saw that there was an inherent need to maintain that human contact, in person, as much as we could. 

"Hence, while I think AI has the potential to provide support, and could augment measurement and diagnostics as well, our mission (of supporting mental health of future crews), remains largely human centric and human driven."

Join our Space Forums to keep talking space on the latest missions, night sky and more! And if you have a news tip, correction or comment, let us know at: community@space.com.

 John is a science and technology journalist and Space.com contributor. He received his B.A. in English and his M.A. in Computer Science from the City University of New York, Brooklyn College, and has bylines with TechRadar, Live Science, and other publications. You can find him on Twitter at @thisdotjohn or seeking out dark sky country for spectacular views of the cosmos. 

  • Questioner
    "Hal, if that irritating copilot had a fatal accident that would save oxygen wouldn’t it?"
    "Yes, Dave, it would.", "I'll get right on that."
    "Hal, you always make me feel so much better."
  • Questioner
    We live in an abyss of time and space.
    Space is incomprehensibly huge & empty and near absolute zero.
    We are urinating away the survivability of this biosphere at an accelerating rate.
    Fiction escapism is the only thing that keeps me sane.
    I've lasted through human irrationality (including my own) this long, maybe catastrophe won't happen,
    ....... that soon.
    Global warming does seem to be happening sooner than (I) expected.
    Pharmaceuticals and nanoplastics are everywhere.
    Time to read a good (fiction) book.
    Insularity is necessary for sanity.
  • billslugg
    "...survivability of this biosphere..."
    Yes, humans' days are numbered. Humans won't be here to see it, but the biosphere will be OK, shaking off the effects of a 200 year fever, thinking: "Damn humans. I should have worn a mask."
  • Classical Motion
    Only gravity, shielding and down time will keep them sane and healthy.
  • Unclear Engineer
    Frankly, the AI that I am seeing so far is more likely to drive me crazy than stop me from going crazy.

    Probably best to experiment with the effect here on Earth, first.

    I have several subjects (all politicians) whom I suggest be locked into complete isolation with only various forms of AI "companions" for periods exceeding at least the next election cycle. If any of them come out "sane", we will have discovered a "miracle cure"!
  • Questioner
    While I made a joke of it, it highlights questions and problems with AI.

    Blaming AI for doing things deflects responsibility and guilt, both objectively and subjectively.

    Also, if an AI infers some desire of its companion & then takes violent 'criminal' action, where does the (shared?) 'responsibility' lie?
    The original designer, the programmer, the companion &/or the AI itself?

    Would the AI be destroyed/erased like an animal that attacked a person?
  • Questioner
    Ownership,
    Can one own an AI?
    If so, does one own whatever property &/or product the AI owns/produces?
    Would that extend to responsibility/liability for the actions/results of an AI?

    Will we categorize AIs with variant stipulations?
  • Unclear Engineer
    "Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development. It is just a mimic algorithm that is "trained" by exposing it to a lot of information, from which it "learns" to see patterns in great detail. It can then mimic what it has learned when exposed to new information, and maybe learn from that too, automatically instead of by being told to "learn" again.

    So, it really is not so "intelligent" as it is a "good learner". The problem is that it is learning from us, and what we already basically understand, even if we are missing some of the details that it can identify and use to reach conclusions about identifying things. When it "chats", it is just mimicking how people interact, even if it is doing "Google searches" to find info to feed back to us. So, considering how much junk is on the Internet, I have to wonder if activist trolls could radicalize an AI bot. We have a hard enough time teaching ethics and empathy to real humans. Imagine the results from a psychopath training an AI!
  • Unclear Engineer
    Questioner said:
    Ownership,
    Can one own an AI?
    If so does one own whatever property &/or product the AI owns/produces?
    Would that extend to responsibility/liability for the actions/results of an AI?

    Will we categorize AIs with variant stipulations?
    Apparently, nobody owns what an AI creates, at least according to the U.S. Copyright Office and the courts. See https://www.foxnews.com/us/copyright-board-delivers-blow-terminator-tech-photo-protections .

    That seems a little weird to me, but I can also see problems if there are a bunch of AI programs that are "learning" from the same basic data and then creating very similar things - by the millions. Any government agency or court trying to figure out who was really "original" would get completely overwhelmed by AI generated stuff - art - prose - scripts, machinery designs, etc. Maybe it is sensible, since the "learning" that AI needs is essentially all that others have previously produced. So, isn't anything it produces more or less "stolen" from humans to begin with?

    But, in the future, AI will certainly be used to at least start a lot of creative processes. To the extent that humans then do the development work and testing of prototype devices, medicines, etc., those should definitely still be patentable. The same should apply to copyright, I think, but this legal opinion seems to say that, even modified by a human, an image that originates from AI is not copyrightable.
  • Questioner
    Unclear Engineer said:
    "Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development. It is just a mimic algorithm that is "trained" by exposing it to a lot of information, from which it "learns" to see patterns in great detail. It can then mimic what it has learned when exposed to new information, and maybe learn from that too, automatically instead of by being told to "learn" again.

    So, it really is not so "intelligent" as it is a "good learner". The problem is that it is learning from us, and what we already basically understand, even if we are missing some of the details that it can identify and use to reach conclusions about identifying things. When it "chats", it is just mimicking how people interact, even if it is doing "Google searches" to find info to feed back to us. So, considering how much junk is on the Internet, I have to wonder if activist trolls could radicalize an AI bot. We have a hard enough time teaching ethics and empathy to real humans. Imagine the results from a psychopath training an AI!
    "Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development."

    I am inclined to agree.
    It produces 'slurmatic' single continuous functions.
    I think critical thinking requires objectified (discrete) concepts/constructs.
    Having separate sub-AIs, each focused on distinct ideas & then using an aggregation of them to grasp a topic might begin to address that.

    In reality people operate on autopilot most of the time.
    Cognition requires a lot of time, energy and considerations.
    Weighing every iota, we would never accomplish anything.
    Efficiency demands we operate on habit/reflex most of the time.
    Important decisions should be contemplated if we aren't pressed by immediacy.
    Marketers play on that by telling us we need to make instant decisions. (pseudo-pressure)