The moon has a branch office in Bremen. The slope of the artificial crater is nine meters long; five and a half meters of elevation must be scaled from the foot of the depression to its rim. Those wishing to climb it must overcome inclines of 25 to 40 degrees. People are generally spectators here, though, for this moonscape was designed as a training ground for astronauts of steel: In the space exploration hall at the German Research Center for Artificial Intelligence (DFKI), robots practice independent exploratory missions on Earth’s satellite. The choice of terrain is no accident: Craters and their surroundings are among the most interesting places on moons and planets because their slopes contain sediment layers from different eras, as well as traces of material from across the solar system. Their walls also provide information about the origins of moons and planets.
The creator of the climbing robot is Professor Frank Kirchner, who heads the Robotics Innovation Center at DFKI on the outskirts of Bremen and works on mechanical astronauts with his team. His creatures are often biologically inspired, such as the four-legged walking robot Charlie, which looks like a monkey, or Mantis, a contraption with six extremities that looks like its namesake from the animal kingdom. At present, Coyote III, a gray-and-orange rover with star-shaped wheels and a flattish silhouette, is navigating the artificial moonscape.
Intelligent and autonomous robots are indispensable for space exploration because they require no food and no oxygen. And once the mission is done, they don’t need a return journey to Earth. They do, however, have to be able to hold their own, to some extent, on unfamiliar moons and planets. The artificial crater in Bremen provides an opportunity to see how well they manage. The crater was built by a company that ordinarily builds indoor climbing walls. “The template was photographs taken by Apollo astronauts of a crater at the Moon’s south pole,” explains Kirchner, one of the world’s foremost experts on autonomous space and underwater robots.
Autonomous submarine for Jupiter’s moon Europa
Outer space and the underwater world have more in common than one might assume at first glance. One of the most interesting places in the solar system, after all, is Jupiter’s moon Europa, under whose ice sheet a vast ocean of liquid water has been postulated—a place, in other words, in which life could have developed.
“For me, an autonomous vehicle is a robot I can drive” Professor Frank Kirchner
So the Bremen-based robotics experts have also built an eight-meter-deep water tank in which they can test the Europa Explorer, among other things: The pipe-shaped drill named Teredo is designed to penetrate the 3- to 15-kilometer-thick ice sheet on the moon’s surface and then launch the underwater vehicle Leng to explore the ocean beneath. Because control signals from Earth would take 33 to 53 minutes to arrive, the torpedo-shaped submarine would have to be able to operate autonomously.
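The quoted latency follows directly from the geometry of the solar system. As a rough plausibility check (the distances below are approximate textbook values for the Earth–Jupiter separation, not figures from DFKI), dividing the distance by the speed of light reproduces the stated range:

```python
# Back-of-the-envelope check of the one-way signal delay to Jupiter's moon Europa.
# The distances are approximate orbital extremes, used here purely for illustration.
SPEED_OF_LIGHT_KM_S = 299_792.458   # km/s
MIN_DISTANCE_KM = 588e6             # roughly the Earth-Jupiter distance at opposition
MAX_DISTANCE_KM = 968e6             # roughly the distance near conjunction

for label, distance_km in [("closest", MIN_DISTANCE_KM), ("farthest", MAX_DISTANCE_KM)]:
    delay_minutes = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"One-way signal delay ({label} approach): {delay_minutes:.0f} minutes")
# Prints roughly 33 and 54 minutes, matching the range quoted in the text.
```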
It’s no wonder, then, that the research group in the “space city” of Bremen has been working intensively on topics such as sensor technology, actuator technology, and artificial intelligence. But the results they achieve do not benefit aerospace applications alone: Kirchner also sets great store by their transfer to other fields, for instance to robots that have to maneuver independently in dangerous environments. He is also following the development of autonomous driving avidly, from his own very particular perspective. “For me, an autonomous vehicle is a robot I can drive,” says Kirchner.
And there are, indeed, many commonalities. Both autonomous vehicles and robots on distant celestial bodies must perceive and analyze their surroundings and use that information to make intelligent decisions. Of course, on the moon and Mars there is no road traffic with traffic lights, traffic signs, and cars and pedestrians suddenly popping out of nowhere. Nevertheless, even Kirchner’s robots have to deal with dynamic conditions such as dust storms and dust devils on Mars or starkly changing light conditions on the moon.
Orientation without maps
In contrast to autonomous vehicles, however, the robots have no maps of the terrain for their missions. “At one meter, the resolution of satellite images is still too poor,” explains Kirchner. “As such, the robots must build their own maps of their environment and locate themselves within it.” To cope with that reality, the researchers developed SLAM algorithms (Simultaneous Localization and Mapping), probability-based methods for orientation in unknown terrain. “It all started with navigation in sewage canals,” recalls Kirchner. “It was a very simple environment, which allowed us to test the new approach there very effectively.” From the mid-1990s, SLAM algorithms were also used in open terrain and in buildings. The first applications for the self-localization of autonomous vehicles emerged about 15 years ago.
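To give a flavor of what “probability-based orientation” means in practice, here is a deliberately minimal sketch (my own illustration, not DFKI’s code) of the predict-and-update cycle at the heart of such methods. It localizes a robot on a one-dimensional strip of cells using noisy landmark measurements and noisy motion; a full SLAM system would additionally estimate the landmark map itself rather than assume it is known:

```python
import numpy as np

# Minimal 1D histogram filter illustrating the probabilistic core of SLAM-style
# localization (illustrative only). The robot moves along discrete cells and
# senses whether the current cell contains a landmark; the map is assumed known
# here to keep the sketch short, whereas real SLAM estimates it simultaneously.

world = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])   # 1 = landmark in this cell
belief = np.full(len(world), 1.0 / len(world))      # uniform prior over positions

P_HIT, P_MISS = 0.9, 0.1        # sensor model: probability of a correct reading
P_MOVE_OK, P_STAY = 0.8, 0.2    # motion model: chance the commanded step succeeds

def sense(belief, measurement):
    """Bayes update: reweight each cell by how well it explains the measurement."""
    likelihood = np.where(world == measurement, P_HIT, P_MISS)
    posterior = belief * likelihood
    return posterior / posterior.sum()

def move(belief, step):
    """Prediction: shift the belief by the commanded step, blurred by motion noise."""
    return P_MOVE_OK * np.roll(belief, step) + P_STAY * belief

# The robot senses a landmark, drives one cell, senses empty ground, drives again.
for measurement, step in [(1, 1), (0, 1)]:
    belief = sense(belief, measurement)
    belief = move(belief, step)

print("Most likely cell:", int(np.argmax(belief)))
```

The same predict/update logic carries over to two or three dimensions and to particle- or graph-based formulations; only the motion and sensor models grow more elaborate.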
Autonomous vehicles should continue to learn while in use
The basis for the SLAM algorithms is object recognition in dynamic situations, which was also an early focus of the robotics experts. The challenge: the technology must function reliably even when the camera is moving and the ambient conditions vary with weather and shifting light, factors that apply as much on Mars as on Earth. “In robotics, object recognition has gained a great deal in terms of maturity and robustness,” says Kirchner. “The underlying mathematics is the same as in cars today.” But the transfer is by no means a one-way street. Robot developers have benefited significantly from the smartphone boom of recent years, which has made inexpensive video cameras commonplace. And thanks to the continued exponential performance gains of microprocessors (a phenomenon known as Moore’s Law), with considerable impetus from the automotive industry, their creations are becoming increasingly intelligent.
Based on his own research, Kirchner knows how complicated it is to steer a car through road traffic without human intervention. He has himself ridden in two test vehicles and was “very impressed.” As a highly engaged observer of the development, he naturally has a few ideas of his own on the subject. “Autonomous vehicles should learn during their use phase,” he suggests. “One buys a vehicle with basic experience and it continues to develop itself along with the other vehicles on the road.” It would be a collective learning experience, just as with the collaborative robots that are now gaining a foothold in industrial manufacturing processes: They have to get along with a variety of different people and therefore share their individual experiences with each other, for instance via a cloud.
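One common way to realize such collective learning is federated averaging: each vehicle improves a small model on its private experience, only the model parameters travel to the cloud, and the averaged result is pushed back to the fleet. The sketch below works under my own assumptions (the linear model, synthetic data, and fleet size are invented for illustration and are not Kirchner’s proposal in detail):

```python
import numpy as np

# Hedged sketch of fleet-wide "collective learning" via federated averaging.
# Everything below (linear model, synthetic data, number of vehicles) is an
# invented illustration of the idea described in the text, not an actual system.

def local_update(weights, features, targets, learning_rate=0.5):
    """One gradient step of a linear model on a single vehicle's own experience."""
    residual = features @ weights - targets
    gradient = features.T @ residual / len(targets)
    return weights - learning_rate * gradient

def federated_average(fleet_weights):
    """Cloud side: combine the individually learned parameter vectors."""
    return np.mean(fleet_weights, axis=0)

rng = np.random.default_rng(0)
true_weights = np.array([1.0, -2.0, 0.5])   # behavior the fleet should converge on
shared_weights = np.zeros(3)                 # the "basic experience" at purchase

for sync_round in range(10):                 # ten cloud synchronization rounds
    fleet_weights = []
    for _ in range(10):                      # ten vehicles, each with private data
        features = rng.normal(size=(20, 3))
        targets = features @ true_weights + rng.normal(0.0, 0.1, size=20)
        fleet_weights.append(local_update(shared_weights, features, targets))
    shared_weights = federated_average(fleet_weights)

print("Shared model after collective learning:", np.round(shared_weights, 2))
```

The appeal of this pattern is that raw sensor data never leaves the vehicle; only the distilled experience, in the form of parameters, is pooled.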
“Today, with autonomous driving we pay too much attention to the individual algorithms—but we’ve known them for a long time already, in some cases since the 1950s,” says Kirchner. “What’s more important is the organization of knowledge. The key is to network the individual components of knowledge with each other—for instance through collective learning. The vehicle must be a system that learns throughout its life.” And to do so, it should also dream from time to time: Kirchner’s team is working on the EU project Dreams4Cars, whose mission is to improve the safety of autonomous vehicles. Like mind’s-eye pictures or dreams, the control software continually replays real traffic situations in a simulation environment, testing alternative reactions and thereby preparing itself for exceptional circumstances; a rough sketch of that loop follows below. It will be interesting to see what ideas from the Bremen-based robotics experts eventually make their way from the moon to Earth.
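In spirit, such offline “dreaming” can be pictured as a simple search loop: replay a logged situation in simulation, try many candidate reactions, and remember whichever one scores best. The sketch below is my own loose illustration of that idea with invented scenarios and a toy safety score; it is not code or data from the Dreams4Cars project:

```python
import random

# Toy "dreaming" loop: logged traffic situations are replayed offline, alternative
# reactions are sampled, and the best-scoring one is kept for future use.
# Scenarios, reaction parameters, and the scoring rule are all invented.

def simulate(scenario, reaction):
    """Toy safety score: a larger remaining gap to the obstacle scores higher,
    and unnecessary swerving is mildly penalized."""
    gap_after = scenario["gap_m"] - scenario["closing_speed_ms"] * reaction["brake_delay_s"]
    return gap_after - 0.5 * reaction["swerve_m"]

logged_scenarios = [
    {"gap_m": 30.0, "closing_speed_ms": 8.0},   # e.g. a vehicle cutting in ahead
    {"gap_m": 12.0, "closing_speed_ms": 5.0},   # e.g. a pedestrian stepping off the curb
]

random.seed(0)
learned_reactions = {}
for index, scenario in enumerate(logged_scenarios):
    candidates = [{"brake_delay_s": random.uniform(0.1, 1.5),
                   "swerve_m": random.uniform(0.0, 2.0)} for _ in range(50)]
    learned_reactions[index] = max(candidates, key=lambda r: simulate(scenario, r))

for index, reaction in learned_reactions.items():
    print(f"Scenario {index}: brake after {reaction['brake_delay_s']:.2f} s, "
          f"swerve {reaction['swerve_m']:.1f} m")
```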
Info
Prof. Frank Kirchner is one of the world’s leading experts on autonomous space and underwater robots. He is the campus spokesman for DFKI Bremen and heads the Robotics Innovation Center with its more than 100 employees.
Text: Christian Buck
Photos: Cosima Hanebeck; DFKI
Text first published in the Porsche Engineering Magazine, issue 01/2019