Deep learning could help robotic rovers figure out their location on the moon and Mars
A Mars rover roaming the red planet cannot whip out a smartphone to check its location based on GPS. Instead, the robotic explorer must take panoramic pictures of the surrounding landscape so that a human back on Earth can painstakingly compare the ground images with Mars satellite maps taken from above by orbiting spacecraft.
Locating a Mars mission after it first touches down, using that manual process of scrutinizing landscape features and comparing images, can take up to 24 hours. What’s more, it still takes at least 30 minutes to confirm a rover’s updated location once it’s on the move. But a new AI approach that trains deep learning algorithms to perform the necessary image comparisons could reduce the localization process to mere seconds. A team of space scientists and computer scientists gathered during the 2018 NASA Frontier Development Lab event to develop that potential path forward for future space missions.
“If we go to more planets or another moon or the asteroids, the goal is to be able to use this to localize ourselves in GPS-free environments,” says Benjamin Wu, an astrophysicist at the National Astronomical Observatory of Japan and a member of the team that tackled this challenge.
During an eight-week workshop from 25 June to 17 August, Wu and three other researchers puzzled over how they could train deep learning algorithms to solve the navigation issue for planetary rovers. The workshop was organized by the NASA Frontier Development Lab, an AI research accelerator created by NASA and the SETI Institute. The development lab’s purpose is to harness the latest AI technologies for tackling space exploration challenges. Tech giants such as Intel and Google provided additional experts and resources for the event, which took place at the NASA Ames Research Center in California.
The navigational problem may seem strange to Earth-bound smartphone owners who can rely on GPS signals from orbiting satellites to pinpoint their location to within several meters almost instantaneously. But the lack of similar GPS satellites orbiting the moon or Mars means that robotic or human missions to those destinations must rely on a process not unlike a person reading a map and eyeballing the surrounding landscape.
From the start, Wu and his colleagues faced the challenge of finding enough lunar surface data to train their deep learning algorithms. Deep learning has proven incredibly successful at recognizing patterns in data when trained on thousands or even millions of examples. But previous missions to the moon have not generated enough high-resolution ground images to put together a training dataset for the rover localization task.
The team initially wanted to make their own virtual moon based on the real thing using elevation maps of the moon created by past lunar orbiter missions, Wu says. But given the time crunch for the FDL workshop, they ended up using an off-the-shelf moon simulation built in the Unreal game engine. The simulation mimics lunar terrain features but does not exactly match the real lunar landscape.
During the training process, the team took pictures facing north, south, east and west from specific lunar surface locations within the synthetic moon simulation. Those four images were then combined and reprojected to create a crude top-down view similar to a satellite’s viewpoint from above—a process that transformed 2.4 million ground images into approximately 600,000 reprojected images. That set the stage for training the deep learning algorithm to recognize certain synthetic lunar surface features and match them against locations on an orbital map of the moon simulation.
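The four-view reprojection step can be sketched as a crude inverse-perspective mapping: assuming flat terrain, each pixel below the horizon in a ground image corresponds to a ground distance, and the four views are stacked around the camera position to approximate a satellite’s overhead viewpoint. The camera height, field of view, and one-meter-per-pixel scale below are illustrative assumptions, not parameters from the team’s actual simulation pipeline.

```python
import numpy as np

def ground_to_topdown(views, cam_height=1.5, fov=np.pi / 2, out_size=64):
    """Crude inverse-perspective reprojection: stack four ground views
    (keys 'N', 'E', 'S', 'W', each an HxW grayscale array) into a single
    top-down mosaic, assuming flat terrain, a known camera height, and
    one meter per output pixel. All parameters are illustrative."""
    top = np.zeros((out_size, out_size))
    cx = cy = out_size // 2
    headings = {'N': (0, -1), 'E': (1, 0), 'S': (0, 1), 'W': (-1, 0)}
    for key, img in views.items():
        h, w = img.shape
        dx, dy = headings[key]
        for r in range(h // 2 + 1, h):              # image rows below the horizon
            pitch = (r - h / 2) / (h / 2) * (fov / 2)
            dist = cam_height / np.tan(pitch)       # ground distance for this row
            for c in range(w):
                yaw = (c - w / 2) / (w / 2) * (fov / 2)
                lat = dist * np.tan(yaw)            # lateral offset within the view
                # rotate (lat, dist) into this view's compass heading
                x = int(cx + dx * dist - dy * lat)
                y = int(cy + dy * dist + dx * lat)
                if 0 <= x < out_size and 0 <= y < out_size:
                    top[y, x] = img[r, c]
    return top

# Toy demo: four random 32x32 "camera views" around one surface location.
rng = np.random.default_rng(1)
views = {k: rng.random((32, 32)) for k in 'NESW'}
top = ground_to_topdown(views)
```

In a real pipeline the reprojected mosaics, not the raw ground shots, become the training inputs matched against the orbital map, which is consistent with the 4-to-1 ratio of ground images to reprojected images described above.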
The final result turned out to be a proof-of-concept for how an AI assistant could help long-suffering human handlers pinpoint the locations of their robotic rovers. Once trained, the deep learning algorithm usually pointed to the correct location within its first five guesses, which made the localization task much easier for humans to verify.
“You’re reducing the search space to looking within these few tens of meters instead of hundreds of kilometers,” Wu says.
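The search-space reduction Wu describes amounts to a nearest-neighbor lookup: compare a feature vector computed from the rover’s reprojected view against vectors for every tile of the orbital map, and hand only the closest handful of candidates to a human for verification. The 128-dimensional random features and tile grid below are hypothetical stand-ins for whatever descriptors a trained network would actually produce.

```python
import numpy as np

def top_k_candidates(query, tile_features, k=5):
    """Return indices of the k orbital-map tiles whose feature vectors
    lie closest (Euclidean distance) to the rover's query vector."""
    dists = np.linalg.norm(tile_features - query, axis=1)
    return np.argsort(dists)[:k]

# Toy example: 10,000 map tiles, each described by a 128-dim feature vector.
rng = np.random.default_rng(0)
tiles = rng.normal(size=(10_000, 128))
true_tile = 4242
query = tiles[true_tile] + rng.normal(scale=0.1, size=128)  # noisy rover view
candidates = top_k_candidates(query, tiles, k=5)
```

Even with thousands of candidate tiles, a human only has to check the five returned matches, which is the "few tens of meters instead of hundreds of kilometers" reduction Wu describes.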
Training the AI assistant on a synthetic moon rather than on a virtual landscape that accurately reflects the real lunar surface may sound less than practical. But the research team believes that the deep learning algorithm would still perform well if challenged with real lunar surface images and orbital maps, even without undergoing retraining.
“I don’t think using the virtual moon makes as much of a difference for the final solution,” says Philippe Ludvig, a computer scientist on the FDL team and a researcher at a Japanese startup called ispace. “Essentially what we trained the deep learning neural network to do was compare source images on the ground with satellite images.”
One possible next step could involve training a deep neural network from scratch specifically for recognizing moon or Mars landscapes. For their proof-of-concept demonstration, the researchers took a shortcut by modifying a deep neural network called ResNet-50 that had been pretrained to classify common Internet images.
An even bigger step could involve training a deep learning algorithm on real images from a planetary body. The technique could probably work with the images available from NASA missions that surveyed the surface of Mars, Ludvig says. But he cautions that the lower-resolution ground and orbital snapshots available from past moon missions would likely pose a greater challenge.
Still, that lack of high-quality moon imagery does not entirely rule out AI-assisted navigation for future moon missions. One approach might involve having a rover take high-resolution pictures of the surrounding landscape in order to create a map for that smaller region of the lunar surface.
“At the end of the day, if your resolution from the satellite image isn’t as good, maybe you can make up for it with higher resolution images on the ground,” Ludvig says.