Why trust robots?

When humans and robots work side by side, it's not always easy.


Widespread skepticism and a lack of communication paradigms will create new challenges in the workplace of the future.
How can the working world of the future be designed with people in mind? How do you create trust and acceptance? And how do you deal with a colleague who’s just a gripper arm?

In the CoBot Studio of the LIT Robopsychology Lab at Johannes Kepler University Linz, researchers are developing new standards for successful teamwork between humans and robots. And in Deep Space 8K at the Ars Electronica Center, a unique mixed-reality environment will simulate future forms of work with collaborative robots. From robotics to psychology and virtual reality to nonverbal communication, this joint research by the LIT Robopsychology Lab (JKU), the Ars Electronica Futurelab, the Center for Human-Computer Interaction (University of Salzburg), Joanneum Robotics (Joanneum Research), Polycular OG, Blue Danube Robotics, and the OFAI (Austrian Research Institute for Artificial Intelligence) draws on a wide range of disciplines.

Martina Mara, Professor of Robopsychology at the Linz Institute of Technology (LIT) at Johannes Kepler University Linz, and Roland Haring, Director of the Ars Electronica Futurelab, talk about how their joint findings could help humans and machines work together in harmony.

What are the difficulties when humans and machines work together?

Martina Mara: Robots have been used in industrial companies for a long time: in production, manufacturing and other repetitive tasks. But industrial robots of that kind were anything but collaborative. For safety reasons, the heavy machines had to operate behind barriers, far away from humans. They were “lone workers,” and close contact with their human colleagues had to be avoided at all costs. Robots were therefore associated with danger for a long time.
CoBots, on the other hand, are a very recent achievement and are just now being put into practice. These are robots that are designed to collaborate with humans. They’re lightweight and safe because they’re equipped with sensors that register the human environment and allow the robot to respond. It’s now possible for humans and robots to work together even in confined spaces.

What about their acceptance in everyday work environments? What role does the CoBot Studio play in building it?

Martina Mara: A good relationship and mutual acceptance between interacting partners is always built on trust. It’s the glue of all social relationships — whether at home or at work. If you don’t trust your counterpart, you feel insecure.
In psychology, there’s a large body of theoretical and empirical literature on trust from which we draw our basic principles. Often, a distinction is made between two routes for establishing trust: When trust is established via how friendly, kind, and well-meaning a counterpart appears to me, it’s referred to as the affective trust route. In contrast, the cognitive route builds trust based on how competent, reliable, and predictable I assess my counterpart to be.

The cognitive trust domain is what we draw on for our research in CoBot Studio. We don’t want to develop cute, kindly robots for the workplace. After all, we don’t want to create blind trust – we want to foster an appropriate degree of informed trust in robotic work partners. We know from psychological theory that comprehensibility and predictability of behavior can be important foundations of cognitive trust formation. So that’s exactly what we’re building on in the CoBot Studio research project: We’re investigating how robots can act in a way that’s more understandable and predictable for humans in industrial collaboration. Because if CoBots are designed in such a way that their human partners can understand what the robot can and can’t do, or what it plans to do next, that’s likely to be an important factor for successful, trusting, and safe collaboration.

What impact does the research in the CoBot Studio have on mutual understanding between humans and their future colleagues?

Martina Mara: We study nonverbal communication and signals of intent that make it easier to assess a counterpart’s intentions. These are factors that have been under-considered in robot design. A lot of work goes into enabling CoBots to interpret human signals using sensors and machine learning. But conversely, we still don’t give people enough opportunities to learn to interpret the machines. As a result, robots remain a kind of black box for humans.
Research in the CoBot Studio, a collaboration between the Ars Electronica Futurelab and the JKU, starts at the interface between psychology, robotics, nonverbal communication, game design and virtual reality. It creates new knowledge as a basis for improved communication between humans and CoBots, the collaborative robots.

Using a game-like virtual reality (VR) research environment that simulates a collaborative setting – for example, an industrial setting – we conducted a study last year to investigate how the intelligibility of different robot signals relates to trust, a sense of safety and acceptance. We’re examining different light signals as well as non-verbal pointing gestures in industrial robots, which should enable humans to recognize which object the robot wants to grab next or where in space it’s moving to.

Later, in the Ars Electronica Center’s Deep Space 8K, the test subjects will encounter a real CoBot in a shared virtual game environment and try to interpret its signals. The answers we hope to get from this mixed-reality research project should provide us with important information about how to equip collaborative robots in the future so that humans and machines can understand each other better.

So our research in the CoBot Studio is built on the assumption that machines and humans could communicate better than they do today. Increasing informed trust should significantly improve collaboration in the future. And of course, it affects not only human well-being in a work environment with robots, but also the success of the collaboration and performance.

“Hopefully, teamwork between humans and machines will benefit from the insights gained, because our overall goal with CoBot Studio is to align the working world of the future more strongly to human needs.” – Martina Mara, Professor of Robopsychology at Johannes Kepler University Linz (JKU)

What are your other goals?

Roland Haring: Of course, we are interested in investigating human behavior in relation to collaborative robots. How is trust established and what’s the best way for robots and humans to communicate? How do we make it so that people are not afraid of a robot?

But it’s also about finding out: what are the limitations and possibilities of the robotic system and how do we design interactions and possible scenarios? So one goal of the research project is to find out what the methodological possibilities of the VR system are — especially in light of the fact that we interact with it in a virtual world, not a real one.

“Complex research environments are a major problem in scientific practice. If research can only be conducted in real or realistic environments, there’s a risk that financial constraints will prevent research from progressing.” – Roland Haring, Director of the Ars Electronica Futurelab

After all, research needs to be done on how interaction with multiple robots works: How does a human behave when outnumbered by machines? It can be very difficult to recreate research questions like that in laboratory experiments. In a virtual world, we can simulate them much more easily and cost-effectively. Deep Space 8K offers us a great advantage right now: it provides a hybrid encounter space, a space where people can act in a real environment and a virtual reality at the same time. But the ambience of Deep Space 8K is also very different from the environment in an industrial hall — expectations around safety are different, for example. We’re aware of that, of course. But it’s a problem every laboratory study has to reckon with.

There are still very few comparative studies looking at how to actually use VR in research. There’s a risk that people will behave differently when interacting with virtual robots than when interacting with real machines. After all, in a virtual world, people tend to lull themselves into a false sense of security. Real proportions and physical touch may have an impact on how people respond and interact.

We have a very similar basic study design in both experiments, which probably allows for a good comparison of the data. In principle, we can filter out a wide variety of parameters for measuring performance. Then when we compare the data sets, we see where the differences in behavior lie and we can identify certain trends. It would be great if it turned out that the different combinations of scenarios in the virtual or real world had little or no influence on how participants behave. Because that would mean the environment doesn’t hugely affect the test result. So we’re also asking methodological questions that are very exciting and we hope working in the CoBot Studio will give us better insight into the extent to which virtual reality can complement or even replace elaborate physical research environments from a methodological perspective.

By the way: The exciting new virtual reality study of the LIT Robopsychology Lab at JKU is looking for test subjects in July 2021. Together with a virtual robot, you can playfully try out for yourself how it feels to communicate with a machine in the VR underwater lab of CoBot Studio. The common mission: freeing the virtual sea from plastic waste. The trial will take place at the LIT Open Innovation Center at JKU – Johannes Kepler University Linz. Places are limited.
You can register as a participant at this link:

Learn more about humans, robots and the future of work on the Ars Electronica Blog, and about CoBot Studio and Roland Haring’s key research on Co-Immersive Spaces on the Ars Electronica Futurelab website. New perspectives on the future of humanity together with robots can be found in Humanity and Robotinity, Episode 4 from the Lab’s 25th Anniversary Series:

Martina Mara studied communication sciences in Vienna and received her PhD in psychology from the University of Koblenz-Landau under Prof. Markus Appel on user acceptance of human-like machines. After many years of research work in non-university settings, including at the Ars Electronica Futurelab, she was recruited as Professor of Robopsychology at the Linz Institute of Technology (LIT) at JKU in April 2018. Her work focuses on psychological conditions of human-centered technology development and interdisciplinary research strategies. Together with partners from science and industry, she investigates, among other things, effects of simulated emotionality in machine agents and communication designs of autonomous vehicles and collaborative robots. Mara is a member of the Austrian Council for Robotics and Artificial Intelligence (ACRAI). As a newspaper columnist, she regularly comments on current technological events for a wide audience. In 2018, she was awarded the BAWAG Women’s Prize as well as the Futurezone Award in the category “Women in Tech”.

Roland Haring studied Media Technology and Design at Hagenberg University of Applied Sciences. Since 2003, he has been a member of the Ars Electronica Futurelab and one of the driving forces behind the lab’s R&D efforts. His activities include research and development in several large R&D projects with academic, artistic and commercial partners and collaborators.
Currently, Roland Haring is Technical Director of Ars Electronica Futurelab and co-responsible for its general management, content conception and technical development. With his many years of experience in the (software) technical management of large-scale, research-intensive projects, he is an expert in the design, architecture and development of interactive applications.