How Supersmart Spacecraft Could Work (With Help from IBM's Watson)

IBM's Watson, shown here during a presentation at the CeBIT 2011 trade show in Hanover, Germany, could one day help astronauts explore space using supersmart spacecraft. (Image credit: Caroline Seidel/Alamy)

HOUSTON — Computer systems like IBM's "Jeopardy!"-winning Watson, which can become a widely read, quick-reacting expert on any topic, could play a crucial role advising pilots, driving Mars robots or navigating from within the walls of a spacecraft on its way to deep space.

Watson's chief scientist, Grady Booch, virtually joined IBM distinguished engineer Chris Codella at SpaceCom 2017 here yesterday (Dec. 7) to talk about how their computer system is preparing for a journey to Mars.

IBM computers played important roles in the Gemini and Apollo programs, guided the Saturn rockets and were integrated into the space shuttles. Those systems were traditional, programmed computers, whereas Watson, which defeated two leading human champions at "Jeopardy!" in 2011, uses an approach called "cognitive computing."

Watson excelled at "Jeopardy!" because it can read huge amounts of data written for humans — like books, papers and reports, or, in the case of "Jeopardy!," Wikipedia — and organize that material into a framework from which to draw conclusions and form hypotheses. That process lets Watson act as an expert consultant, pulling up the most relevant data from an entire field.

"The main objective for cognitive systems is to amplify human abilities," Codella said during the presentation. "It's doing algorithmically only that which can be done better algorithmically; that is, read all that stuff and find out what's relevant to what a human user is asking for."
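The retrieval step Codella describes — scanning a large corpus and surfacing what's most relevant to a user's question — can be illustrated with a minimal TF-IDF ranking sketch. This is a generic illustration of relevance scoring, not Watson's actual pipeline, and the tiny corpus and query below are invented:

```python
import math
from collections import Counter

# Toy corpus standing in for a knowledge base of "books, papers and reports".
# All documents and the query are invented for illustration.
DOCS = [
    "pitot sensors can freeze and report false airspeed readings",
    "orion spacecraft life support systems for deep space missions",
    "robots on the mars surface assemble habitats before crews arrive",
]

def _idf(n_docs, df):
    # Smoothed inverse document frequency: rarer terms carry more weight.
    return math.log((1 + n_docs) / (1 + df)) + 1.0

def tfidf(tokens, n_docs, df):
    # Weight each term by its count times its corpus-wide rarity.
    tf = Counter(tokens)
    return {t: c * _idf(n_docs, df.get(t, 0)) for t, c in tf.items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_relevant(query, docs):
    # Tokenize, compute document frequencies, then rank docs against the query.
    toks = [d.lower().split() for d in docs]
    df = Counter(t for doc in toks for t in set(doc))
    vecs = [tfidf(doc, len(docs), df) for doc in toks]
    qv = tfidf(query.lower().split(), len(docs), df)
    scores = [cosine(qv, v) for v in vecs]
    return scores.index(max(scores))

print(most_relevant("airspeed sensors freeze", DOCS))  # best match: document 0
```

Real cognitive systems layer natural-language parsing, evidence scoring and hypothesis ranking on top of retrieval, but the core idea — "read all that stuff and find out what's relevant" — starts with ranking like this.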

Codella described two test cases for the technology that apply to outer space: what could be called the "discovery" and "decision" use cases. For discovery, IBM worked with researchers at NASA's Langley Research Center to develop Watson as a research assistant, instantly pulling up all relevant information about aerospace topics from its ever-increasing stores of knowledge.

On the decision side, they envisioned using Watson as a decision-maker in airline or space control rooms, or even at the side of a pilot who needs immediate information without extraneous detail.

"The idea here is to monitor all parameters of the flight as it's evolving, taking into account context, and also have a knowledge base based on the practice of aviation, airport information, aircraft information, anything that's relevant to the situation, and that it be nonintrusive in the cockpit," Codella said. "That is, provide information just as it's needed, only when it's needed, and only when it's relevant."

During a proof-of-concept experiment based on a real-life scenario raised in a discussion with NASA's Ames Research Center, Watson successfully diagnosed frozen sensors and advised the pilot accordingly. In the real incident, the pilot ended up guiding the plane up and down for nearly 20 minutes before understanding the problem.

Langley is involved in helping to develop a cockpit interface for the system, Codella said, and IBM has moved on to developing a lexicon so the system could understand radio reports during flight.

"Let's go up in altitude," Booch suggested. To use an artificial intelligence like Watson in space, "one has to not rely on Mission Control on the Earth — you need to take Mission Control with you in the walls of the Orion spacecraft."

"Similarly," he added, "if you look at some of the mission profiles being considered, the idea is to put robots on the surface of Mars before the humans arrive, both to put together facilities as well as, perhaps, to later on serve as assistants in the science team."

In both cases, the AI would not only need the cognitive approach and wide knowledge base Watson has, but it would also have to be "embodied" — "the organism is in and of the world, and it can sense, react and act on that world," he said.

Current artificial intelligences are getting increasingly good at pattern recognition, but they often lack the ability to explain their own thought processes or take action on their own, Booch said. Booch's new open-source framework, called Project Intu, provides a way for a system like Watson to apply its knowledge to itself and its surroundings. Rather than just acting as an adviser, the spacecraft and robots of the future will be able to take action when needed, and be able to explain exactly why they took that action.



Sarah Lewin
Associate Editor

Sarah Lewin started writing for in June of 2015 as a Staff Writer and became Associate Editor in 2019. Her work has been featured by Scientific American, IEEE Spectrum, Quanta Magazine, Wired, The Scientist, Science Friday and WGBH's Inside NOVA. Sarah has an MA from NYU's Science, Health and Environmental Reporting Program and an AB in mathematics from Brown University. When not writing, reading or thinking about space, Sarah enjoys musical theatre and mathematical papercraft. She is currently Assistant News Editor at Scientific American. You can follow her on Twitter @SarahExplains.