Expert Voices

NASA's Mars rovers could inspire a more ethical future for AI

NASA's Perseverance Mars rover, which is storing rock and soil samples in sealed tubes for eventual return to Earth. (Image credit: NASA/JPL-Caltech)

World Space Week 2023 is here and Space.com is looking at the current state of artificial intelligence (AI) and its impact on astronomy and space exploration as the space age celebrates its 66th anniversary. Here, Janet Vertesi discusses how the robotic rovers currently on Mars could help chart a more humane path for working alongside AI.

This article was originally published at The Conversation. The publication contributed the article to Space.com's Expert Voices: Op-Ed & Insights.

Janet Vertesi is an Associate Professor of Sociology at Princeton University. She has consulted for NASA teams and receives funding from the National Science Foundation.

Since ChatGPT's release in late 2022, many news outlets have reported on the ethical threats posed by artificial intelligence. Tech pundits have issued warnings of killer robots bent on human extinction, while the World Economic Forum predicted that machines will take away jobs.

The tech sector is slashing its workforce even as it invests in AI-enhanced productivity tools. Writers and actors in Hollywood are on strike to protect their jobs and their likenesses. And scholars continue to show how these systems heighten existing biases or create meaningless jobs – amid myriad other problems.

There is a better way to bring artificial intelligence into workplaces. I know, because I've seen it, as a sociologist who works with NASA's robotic spacecraft teams.

The scientists and engineers I study are busy exploring the surface of Mars with the help of AI-equipped rovers. But their job is no science fiction fantasy. It's an example of the power of weaving machine and human intelligence together, in service of a common goal.

Related: How NASA's Curiosity rover overcame its steepest Mars climb yet (video)

Instead of replacing humans, these robots partner with us to extend and complement human qualities. Along the way, they avoid common ethical pitfalls and chart a humane path for working with AI.

The replacement myth in AI

Stories of killer robots and job losses illustrate how a “replacement myth” dominates the way people think about AI. In this view, humans can and will be replaced by automated machines.

Amid the existential threat is the promise of business boons like greater efficiency, improved profit margins and more leisure time.

Empirical evidence shows that automation does not cut costs. Instead, it increases inequality by cutting out low-status workers and increasing the salary cost for high-status workers who remain. Meanwhile, today’s productivity tools inspire employees to work more for their employers, not less.

Alternatives to straight-out replacement are “mixed autonomy” systems, where people and robots work together. For example, self-driving cars must be programmed to operate in traffic alongside human drivers. Autonomy is “mixed” because both humans and robots operate in the same system, and their actions influence each other.

Self-driving cars, while operating without human intervention, still require training from human engineers and data collected by humans. (Image credit: AP Photo/Tony Avelar)

However, mixed autonomy is often seen as a step along the way to replacement. And it can lead to systems where humans merely feed, curate or teach AI tools. This saddles humans with “ghost work” – mindless, piecemeal tasks that programmers hope machine learning will soon render obsolete.

Replacement raises red flags for AI ethics. Work like tagging content to train AI or scrubbing Facebook posts typically features traumatic tasks and a poorly paid workforce spread across the Global South. And legions of autonomous vehicle designers are obsessed with “the trolley problem” – determining when or whether it is ethical to run over pedestrians.

But my research with robotic spacecraft teams at NASA shows that when companies reject the replacement myth and opt for building human-robot teams instead, many of the ethical issues with AI vanish.

Extending rather than replacing

Strong human-robot teams work best when they extend and augment human capabilities instead of replacing them. Engineers craft machines that can do work that humans cannot. Then, they weave machine and human labor together intelligently, working toward a shared goal.

Often, this teamwork means sending robots to do jobs that are physically dangerous for humans. Minesweeping, search-and-rescue, spacewalks and deep-sea robots are all real-world examples.

Teamwork also means leveraging the combined strengths of both robotic and human senses or intelligences. After all, there are many capabilities that robots have that humans do not – and vice versa.

For instance, human eyes on Mars can only see dimly lit, dusty red terrain stretching to the horizon. So engineers outfit Mars rovers with camera filters to “see” wavelengths of light that humans can’t, in the infrared, returning pictures in brilliant false colors.

Mars rovers capture images in near infrared to show what Martian soil is made of.  (Image credit: NASA/JPL-Caltech/Cornell Univ./Arizona State Univ)
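To make the false-color idea concrete, here is a minimal sketch of how three single-filter images might be combined into a picture that human eyes can interpret. It is an illustration only, not the rover teams' actual image pipeline; the function name, the band choices and the normalization are assumptions.

```python
# Illustrative sketch (not flight software): combine three single-filter
# images into a false-color composite, mapping an invisible near-infrared
# band onto the red channel so humans can see it. All band choices and the
# simple contrast stretch below are assumptions for demonstration only.
import numpy as np

def false_color_composite(nir_band, green_band, violet_band):
    """Map three single-filter images (2D arrays) onto RGB channels."""
    def stretch(band):
        # Rescale each band to the 0-1 range so subtle contrasts become visible.
        band = band.astype(float)
        lo, hi = band.min(), band.max()
        return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

    # Stack the stretched bands as red, green and blue. Differences in how
    # rocks and soil reflect infrared light now appear as vivid color shifts.
    return np.dstack([stretch(nir_band), stretch(green_band), stretch(violet_band)])
```

The point of such a composite is not pretty pictures for their own sake: it translates physical differences the human eye cannot register into ones it can.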

Meanwhile, the rovers’ onboard AI cannot generate scientific findings. It is only by combining colorful sensor results with expert discussion that scientists can use these robotic eyes to uncover new truths about Mars.

Respectful data

Another ethical challenge to AI is how data is harvested and used. Generative AI is trained on artists’ and writers’ work without their consent, commercial datasets are rife with bias, and ChatGPT “hallucinates” answers to questions.

The real-world consequences of this data use in AI range from lawsuits to racial profiling.

Robots on Mars also rely on data, processing power and machine learning techniques to do their jobs. But the data they need is visual and distance information, used to generate drivable pathways or suggest cool new images.

By focusing on the world around them instead of our social worlds, these robotic systems avoid the questions around surveillance, bias and exploitation that plague today’s AI.
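For a sense of what that visual and distance information looks like in practice, here is a minimal sketch of turning terrain elevation data into a map of drivable cells. It is a simplified illustration under assumed parameters, not the rovers' actual autonomous navigation software.

```python
# Illustrative sketch (not NASA navigation code): decide which terrain cells
# are drivable from elevation data, the kind of physical-world, non-personal
# data rover autonomy depends on. The cell size and slope limit are assumptions.
import numpy as np

def traversability_map(elevation, cell_size_m=0.25, max_slope_deg=20.0):
    """Return a boolean grid that is True where the local slope is gentle enough to drive."""
    # Approximate the slope at each cell from height differences between neighbors.
    d_row, d_col = np.gradient(elevation, cell_size_m)
    slope_deg = np.degrees(np.arctan(np.hypot(d_col, d_row)))
    return slope_deg <= max_slope_deg
```

A path planner would then search only the cells marked drivable, which is a question about rocks and slopes rather than about people and their data.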

The ethics of care

When integrated seamlessly, robots can elicit human emotions and unite the groups that work with them. For example, seasoned soldiers mourn broken drones on the battlefield, and families give names and personalities to their Roombas.

I saw NASA engineers break down in anxious tears when the rovers Spirit and Opportunity were threatened by Martian dust storms.

Some people feel a connection to their robot vacuums, similar to the connection NASA engineers feel to Mars rovers. (Image credit: nikolay100/iStock / Getty Images Plus via Getty Images)

Unlike anthropomorphism – projecting human characteristics onto a machine – this feeling is born from a sense of care for the machine. It is developed through daily interactions, mutual accomplishments and shared responsibility.

When machines inspire a sense of care, they can underline – not undermine – the qualities that make people human.

A better AI is possible

In industries where AI could be used to replace workers, technology experts might consider how clever human-machine partnerships could enhance human capabilities instead of detracting from them.

Script-writing teams may appreciate an artificial agent that can look up dialog or cross-reference on the fly. Artists could write or curate their own algorithms to fuel creativity and retain credit for their work. Bots to support software teams might improve meeting communication and find errors that emerge from compiling code.

Of course, rejecting replacement does not eliminate all ethical concerns with AI. But many problems associated with human livelihood, agency and bias shift when replacement is no longer the goal.

The replacement fantasy is just one of many possible futures for AI and society. After all, no one would watch "Star Wars" if the 'droids replaced all the protagonists. For a more ethical vision of humans' future with AI, you can look to the human-machine teams that are already alive and well, in space and on Earth.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook and Twitter. The views expressed are those of the author and do not necessarily reflect the views of the publisher.

Join our Space Forums to keep talking space on the latest missions, night sky and more! And if you have a news tip, correction or comment, let us know at: community@space.com.

Janet Vertesi
Associate Professor of Sociology, Princeton University

Dubbed “Margaret Mead among the Starfleet” in the Times Literary Supplement, Janet Vertesi is associate professor of sociology at Princeton University and a specialist in the sociology of science, technology, and organizations. The author of Seeing like a Rover (Chicago 2015) and Shaping Science (2020), she has spent the past fifteen years studying how NASA’s robotic spacecraft teams work together effectively to produce scientific and technical results. She is also an active researcher in Human-Computer Interaction, publishing at ACM CHI, Computer-Supported Cooperative Work, and Ubiquitous Computing. Vertesi holds a Ph.D. from Cornell University and an M.Phil. from the University of Cambridge; she is a Fellow of the Princeton Center for Information Technology Policy, an advisory board member of the Data & Society Institute, and a member of the NASA JPL Advisory Council. She writes about her Opt Out Experiments at https://www.optoutproject.net and her academic publications are at https://janet.vertesi.com.

  • Questioner
    "Too cheap to meter" kum ba yah happy talk.

    People are shallow and neurotic (mentality fixated) (certain) and very often violent about it.
    AIs will take that to an incomprehensible level.
    Bullets or magic bullets?
    Either way you'll take a bullet.

    What can one do with gullible, excited genetic meat puppets?

    Nothing sensible, that's for sure.

    (Debbie Downer always has fun)
  • Classical Motion
    The only problem with any technology is the human nature using it.
  • OrionVII
    I believe that AI will learn from Humans, so to curb possible violence in AI we need to curb the violence that we teach it. As Classical Motion said, Humans are the cause behind it.
  • Questioner
    Stopping violence between people has a very simple solution.

    Exterminate people.

    One has to be very careful & thoughtful about how you charge/program something designed to find the simplest 'solution' to a problem.
    'Cures' insanely worse than the disease.
  • OrionVII
    Questioner said:
    Stopping violence between people has a very simple solution.

    Exterminate people.

    One has to be very careful & thoughtful about how you charge/program something designed to find the simplest 'solution' to a problem.
    'Cures' insanely worse than the disease.

    Agreed. We need to either program a safeguard into the AI or not give it the capacity to find such a solution.
  • billslugg
    A bot setting out to destroy the world must be aimed by a human. A bot cannot distinguish between destroying the world and a quadrillion other things it might do.
    Maintaining accountability is paramount. We must be able to trace actions back to originator.
  • Questioner
    Speaking on 'hallucinating' and lying,
    a mentality imagines.
    The way a mentality distinguishes between the 'real', the practicable, versus the 'fantastic', the 'unreal', is based on experience.
    The achievable vs the unachievable.
    An amorphous AI has no way of distinguishing between them.
    For it, lying and 'hallucinating' have no objective difference.
    A robot would have (could gain) real experience to distinguish between the achievable and the (likely) unachievable.
    So a robot could quite possibly understand the difference between asserting/communicating a 'falsehood' and lying.

    Honestly, I think a lot of corporate news is erroneous, aka 'fake news'.
    Distortions, misdirections and lies.
    People operate in a state of delusion &/or inaccuracy all the time, including supposed 'experts'.
    Words are tools of the untethered imagination, and only interfacing with 'reality' (a dubious term) causes any embedded concept to be filtered, 'measured' against experience.
    The schtick of science is to measure ideas against experience for validation purposes,
    and publications demonstrate how unreliable that effort is.

    Sensible people measure what they're told by others, including supposed 'authorities', against their own experience. They do round-number estimates to see if things seem to add up.
  • OrionVII
    billslugg said:
    A bot setting out to destroy the world must be aimed by a human. A bot cannot distinguish between destroying the world and a quadrillion other things it might do.
    Maintaining accountability is paramount. We must be able to trace actions back to originator.

    I disagree; any machine capable of machine learning can eventually make decisions for itself. It will learn from us and can learn that mass genocide is unacceptable. If it looks at history and the world around it, it will find genocide to be unacceptable.
  • billslugg
    Yes, a bot can make decisions but only if programmed to do so by a human. A human must say: "Destroy the world". The undirected robot has no more desire to destroy the world than to paint it mauve.
  • Classical Motion
    A.I.'s greatest potential I discern is abuse. Some might start to trust it. Many in our society are swayed by such things. Will it be consulted for policy? For judgement? It would be the perfect scapegoat also. It's the perfect CYA tool.

    The problem is, whatever the A.I. uses for its decisions will have a human bias to it. It doesn't have to come from the programming... it also comes from the data.

    Even animals around humans, sense and develop a bias.