Greetings, robot car!


A driving humanoid? Probably not. The Ars Electronica Futurelab and Mercedes-Benz are exploring the future of human-machine interaction. Image: Mercedes-Benz

Fans of the TV series “Knight Rider” are already familiar with the concept of an autonomous vehicle. But it won’t be long before a TV screen isn’t the only place you’ll see robot cars; motorists will soon be sharing the road with them on a daily basis. In cooperation with Mercedes-Benz, the Ars Electronica Futurelab is investigating ways to enable us to communicate effectively with the self-driving vehicles that are coming our way. Here, Martina Mara and Christopher Lindinger discuss why it’s important for mobile robots to talk to us in plain, simple language, and what role the Spaxels are playing in the initial interaction experiments.

You are investigating how we’ll be able to communicate with robotic cars. But why is that such a pressing matter now, if it’ll still be another 10-15 years before fully autonomous vehicles are out on our highways and byways?

Christopher Lindinger: Now is exactly the right time for this. The Ars Electronica Futurelab has been dealing with radical innovations for quite some time now, and in doing so we’ve observed that technologies sometimes develop faster than our understanding of how to use them. So even if self-driving cars won’t be going into serial production for years, it’s imperative that we give some thought now to how we want to interact with these new motorists. Only then can these considerations influence the design decisions made on the way to bringing these products to market.

Martina Mara: Over the next decade, robots will be playing a role in more and more aspects of everyday life—for instance, as household assistance systems, in various medical fields, and in the shipping & transportation industry. So it’s extraordinarily important to initiate a discussion about how these intelligent machines ought to be engineered so that we human beings feel comfortable around them—and it’s especially important to do so in collaboration with carmakers. On one hand, this has to do with vehicles’ exterior design; on the other, it entails developing a functional language for communication between humans and robots.

Gesture-based communication between humans and autonomous cars. Image: Mercedes-Benz

Your work with Mercedes-Benz focuses on how an autonomous automobile can interact with its surroundings, with pedestrians, cyclists, or other vehicles …

Christopher Lindinger: These are very fundamental issues. I mean, if I’m a pedestrian who’s about to cross the street and a car approaches the crosswalk, the driver usually gives a signal—either by eye contact or a hand gesture—to let me know that I have the right of way. What occurs is nonverbal communication that’s natural and governed by conventions. But what happens when I step out into the path of an autonomous robot? How can I be sure that it’s recognized me and will stop in time? What role could light signals, gesture-based controls or other forms of interaction play in this situation? These are the questions we’re discussing with Mercedes-Benz, and we’re delighted to have a partner that’s the innovation leader in this industry and that thinks and acts beyond the confines of its core competencies.

Martina Mara during the interaction experiments at the Mercedes-Benz Future Talk “Robotics”. Photo: Mercedes-Benz

Martina, your research at the Futurelab has to do with the psychology of human-robot interaction. From your perspective, what are the key aspects of communication with robotic automobiles?

Martina Mara: First of all, we get along better with robots when they unmistakably remain machines, when we don’t get the feeling that they’re excessively competing with us humans. In the case of robot cars, it’s also highly relevant that we avoid potential uncertainties in this communication process—which is to say, the feeling of a loss of control. When the majority of our fellow motorists are intelligent robots that interact with each other as if by magic, using means that are incomprehensible to us to reach decisions like, for instance, who has the right of way at an intersection, whether to turn left or right, and how fast to drive, then it’s certainly understandable that human beings could find this pretty creepy. That’s why robotic cars have to act as proactively as possible—both internally and externally—in signaling their current status and their next move, even if this is totally unnecessary from a purely technical point of view. They have to give a clear indication of their intentions that is easily understood by all. For example, a little kid has to be able to immediately recognize the difference between a self-driving vehicle and a conventional car driven by a human being.
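To make that idea concrete, here is a minimal sketch of what such proactive status signaling could look like in code. The states, colors, and light patterns below are illustrative assumptions for this article, not part of any actual Mercedes-Benz or Futurelab system:

```python
from enum import Enum

class VehicleState(Enum):
    """Hypothetical driving states an autonomous car might expose."""
    CRUISING = "cruising"
    YIELDING = "yielding"          # pedestrian detected, car will stop
    BRAKING = "braking"
    TURNING_LEFT = "turning_left"
    TURNING_RIGHT = "turning_right"

# Illustrative mapping of internal states to easily readable external
# light signals -- the kind of "plain language" discussed in the interview.
EXTERNAL_SIGNALS = {
    VehicleState.CRUISING: ("steady", "white"),
    VehicleState.YIELDING: ("slow_pulse", "green"),   # "I see you, go ahead"
    VehicleState.BRAKING: ("fast_blink", "red"),
    VehicleState.TURNING_LEFT: ("sweep_left", "amber"),
    VehicleState.TURNING_RIGHT: ("sweep_right", "amber"),
}

def announce(state: VehicleState) -> str:
    """Return the external light signal for a state, emitted even when
    signaling is technically unnecessary, so bystanders always know
    what the vehicle will do next."""
    pattern, color = EXTERNAL_SIGNALS[state]
    return f"{color} {pattern}"

print(announce(VehicleState.YIELDING))  # -> "green slow_pulse"
```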

Christopher Lindinger and Martina Mara testing gesture-based communication with the Spaxels. Photo: Mercedes-Benz

You’re just back from Mercedes-Benz’s Future Talk conclave in Berlin, where this year’s theme was robotics. Tell us a little about what went on there.

Christopher Lindinger: At Future Talk, we spent three days with journalists from Germany and throughout the world discussing desirable features of a future language for communication between human beings and autonomous automobiles. The process of interdisciplinary exchange was fascinating. The participants included Mercedes-Benz’s own experts—futurist Alexander Mankowsky; Herbert Kohler, Vice President Group Research and Sustainability and Chief Environmental Officer at Daimler AG; and Vera Schmidt from the Mercedes-Benz Advanced Design Center in Sunnyvale—as well as prominent guests such as Ellen Fricke, a scholar who specializes in gesture research. One of the issues we scrutinized was how a car is supposed to recognize whether I’m using a gesture to issue it a command or just naturally gesticulating in the course of a conversation. Another was whether there exists a repertoire of gestures that is understood worldwide.

Martina Mara: At Future Talk, Christopher and I discussed the current state of development in robotics and showed a few interesting works of art dealing with robots—for example, the Oribots designed by Matthew Gardiner, one of our Futurelab colleagues. But our top priority was to set up an experimentation zone in which Future Talk attendees could experience for themselves what it’s like to interact with autonomous robots.

From left to right: Ellen Fricke (Professor of German Linguistics at the Chemnitz University of Technology and Co-founder of the Berlin Gesture Center), Martina Mara (Key Researcher, Ars Electronica Futurelab), Alexander Mankowsky (Futures Studies & Ideation, Daimler AG), Ralf Lamberti (Director Telematics and Infotainment, Group Research and Advanced Engineering, Daimler AG), Vera Schmidt (Senior Manager, Advanced UX Design, Daimler AG), Christopher Lindinger (Director of Research and Innovations, Ars Electronica Futurelab), Koert Groeneveld (Head of Research & Development Communications, Daimler AG). Photo: Mercedes-Benz

I’ve heard that the experimentation you staged on human-robot interaction also included an aerial performance by the Spaxels, the Ars Electronica Futurelab’s swarm of LED-equipped quadcopters …

Martina Mara: Exactly. We created an approximately eight-by-eight-meter shared space in which three of our quadcopters performed computer- and sensor-controlled maneuvers and communicated externally via light signals. When I enter this interaction space, I can interact with the quadcopters via gestures, verbal commands, and a “magic” key object. For instance, I can summon a quadcopter by raising my arm, make it stop with a halt gesture, and control its altitude. The quadcopters, in turn, react to me—for example, they flash variously colored LED codes to acknowledge my presence or to signal that they’re about to brake.
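For readers who want a feel for how such a gesture-to-signal loop might be wired up, here is a small sketch. The gesture names, LED codes, and the Quadcopter class are assumptions made for illustration; they are not the Futurelab’s actual Spaxel software:

```python
from dataclasses import dataclass

@dataclass
class Quadcopter:
    """Minimal stand-in for one Spaxel; the real units carry RGB LEDs."""
    name: str
    altitude: float = 0.0

    def set_led(self, color: str, pattern: str) -> None:
        # In the real system this would drive the onboard LEDs.
        print(f"{self.name}: LED {color} ({pattern})")

def handle_gesture(drone: Quadcopter, gesture: str) -> None:
    """Map a recognized gesture to an action plus a visible LED response."""
    if gesture == "raise_arm":            # summon
        drone.set_led("blue", "steady")   # acknowledge the person's presence
        drone.altitude = 1.5
    elif gesture == "halt":               # stop
        drone.set_led("red", "blink")     # signal that it is braking
    elif gesture == "palm_up":            # climb
        drone.altitude += 0.5
        drone.set_led("green", "pulse")
    else:
        drone.set_led("white", "steady")  # unknown gesture: neutral state

carl = Quadcopter("Carl")
for g in ["raise_arm", "palm_up", "halt"]:
    handle_gesture(carl, g)
```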

[youtube=http://www.youtube.com/watch?v=ORPBbhzex6A&w=610]

Christopher Lindinger: Thus, with the help of the quadcopters, we can experiment with various interaction possibilities between human beings and autonomous robots to ascertain which forms of communication are suitable, intuitive, and clearly understandable in which situations. In contrast to virtual simulation environments, our Spaxels serve as haptic proxies for autonomous vehicles—units that are physically palpable, that exert kinetic energy and generate real gusts of wind.

Experimental research on human-robot communication with Martin Mörth, Peter Holzkorn, Andreas Jalsovec, Martina Mara, Christopher Lindinger and the Spaxels Carl, Gottlieb and Mercédès. Photo: Mercedes-Benz

And what exactly is Quad-Pong?

Martina Mara: Besides doing R&D on interaction scenarios for future vehicular traffic, we occasionally use our experimentation zone to play ping-pong with one of our quadcopters via gesture control. The game gives you a really immediate, hands-on experience of interacting with a machine. And, after all, sometimes you just have to take a playful approach!