Converge @ Futurelab Night 2024; photo: Bettina Gangl

Converge

Converge is an interactive work for Deep Space 8K by the Ars Electronica Futurelab – based on SHARESPACE, a large European R&D project on using avatars in social situations. Converge uses tracking for up to ten on-site participants and a motion capture suit for one off-site participant. All must collaborate to progress, communicating with the remote participant only via movement & body language. 

SHARESPACE is a European research and development consortium focused on the ethically sound future use of digital avatars for social interactions in mixed human and automated settings. The project, with 14 partner institutions across eight countries, runs from January 2023 to December 2025. The Futurelab is mainly responsible for developing art scenarios, allowing for experimental, creative approaches that differ from the more application-driven health and sports scenarios developed by the other partners.

Central to the art scenarios is the Deep Space 8K at the Ars Electronica Center, an immersive experience space featuring 16×9-meter stereoscopic projections on both wall and floor – built and maintained by the Futurelab. Its pharus laser tracking system, another Futurelab technology, enables interactive experiences by detecting the positions of people and objects on the floor, making digital artworks more engaging.

Converge merges Deep Space with SHARESPACE technology for a new kind of experience, connecting on-site with off-site participants to solve scenarios, guided only by subtle visual cues. The minimalistic style highlights movements and interactions; the experience culminates in the participants herding crowds of AI characters.

Artistic Perspective

Converge was created by Futurelab Lead Designer & Artist Patrick Berger. It focuses on exploring the idea of embodied collaboration – non-verbal communication via movement and body language to reach a common goal, much as in a team sport. It was especially interesting to consider how the players would communicate when they cannot talk to the remote player, and how the remote player, in turn, can only communicate with the people in Deep Space 8K through body language.

The situations the players are confronted with are intentionally designed to be solvable only together, creating an environment of shared responsibility and social connectedness. The intention was to do without any external verbal moderation, which meant the situations had to be designed so that the group could understand and solve them on their own. This would have required more time per session than was available at the Ars Electronica Festival, so we resorted to helping the group along with verbal hints when they were needed.

The style is intentionally kept minimalistic to draw more attention to the silhouettes of the characters. Initial ideas revolved strongly around recognition based on movement: identifying a human in a crowd of NPCs, amplifying individual gestures, and so on. The plan was therefore a high-contrast environment, omitting nearly everything except the silhouette and movement aesthetic.

But since play testing in Deep Space 8K showed that going too “high contrast” comes at the price of losing the illusion of perspective, a slightly shaded style was chosen instead, allowing light and shadow, grey tones and textures, while maintaining high contrast by sticking to a black, white, and grey color scheme.

In terms of crowd interaction and minimalistic aesthetics, the Converge creator was heavily inspired by Mario von Rickenbach’s “KIDS”. Concerning the simplicity of interaction needed to create an engaging experience in Deep Space 8K, another great source of inspiration was the set of mechanics used in various examples of Gerhard Funk’s “Cooperative Aesthetics”. Additional artistic inspiration came from Gilbert Garcin and Sean Mundy.

Technology Used 

The project uses several advanced systems to bring these ideas to life. Firstly, the remote player is motion captured in real time, using an OptiTrack optical tracking system combined with a Noitom motion capture system. The remote player wears optical markers on the head, back, elbows, hands, knees, and feet. The position data of the individual markers is compiled by Noitom into a skeleton. This skeleton data is then sent as a BVH (Biovision Hierarchy) stream over the network to the PCs in Deep Space 8K (Wall PC and Floor PC) and in the studio. Each PC runs an instance of our custom application made in Unreal Engine, and these instances use the skeleton data to render the remote avatar correctly.
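To give an idea of what travels over that stream: each BVH frame is simply a list of channel values (positions and rotations) for every joint of the skeleton, in hierarchy order. The following is a minimal sketch of decoding one such frame; the reduced joint list and the sample values are hypothetical and only for illustration.

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// One joint as declared in the BVH HIERARCHY section. The root typically
// carries 6 channels (position + rotation), all other joints 3 (rotation).
struct Joint {
    std::string name;
    int channelCount;
};

// Decode one line of the BVH MOTION section (one frame of space-separated
// floats) into per-joint channel values, in hierarchy order.
std::vector<std::vector<float>> decodeFrame(const std::string& line,
                                            const std::vector<Joint>& skeleton) {
    std::istringstream in(line);
    std::vector<std::vector<float>> frame;
    for (const Joint& joint : skeleton) {
        std::vector<float> channels(joint.channelCount);
        for (float& value : channels) in >> value;
        frame.push_back(channels);
    }
    return frame;
}

int main() {
    // Hypothetical reduced skeleton; the real stream carries the full body.
    std::vector<Joint> skeleton = {{"Hips", 6}, {"Spine", 3}, {"Head", 3}};
    auto frame = decodeFrame(
        "0.0 90.0 0.0 0.0 15.0 0.0 0.0 5.0 0.0 0.0 10.0 0.0", skeleton);
    std::cout << skeleton[2].name << " rotation channels: "
              << frame[2][0] << " " << frame[2][1] << " " << frame[2][2] << "\n";
}
```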

Additionally, the remote player wears a virtual reality headset, the Meta Quest Pro, allowing them to see the virtual environment, including the other players and their own body, from the perspective of the character they are controlling. The visuals in the headset are streamed over a local Wi-Fi network from a separate instance of the application running on a PC in the studio directly to the headset.

The PC rendering the wall content in Deep Space 8K is configured as a server, replicating the scene transitions, the hole size, and other global parameters to the client PCs (Floor PC and Studio PC).
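As a rough illustration of this server/client setup, such global parameters could be exposed through Unreal Engine's standard property replication, as sketched below. The class and property names are hypothetical and do not reflect the actual Converge implementation.

```cpp
// ConvergeGlobalState.h - hypothetical replicated state actor (sketch only)
#pragma once
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "ConvergeGlobalState.generated.h"

UCLASS()
class AConvergeGlobalState : public AActor
{
    GENERATED_BODY()
public:
    AConvergeGlobalState();

    // Global parameters driven on the server (Wall PC) and mirrored to clients.
    UPROPERTY(Replicated)
    int32 SceneIndex = 0;

    UPROPERTY(Replicated)
    float HoleSize = 1.0f;

    virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override;
};

// ConvergeGlobalState.cpp
#include "ConvergeGlobalState.h"
#include "Net/UnrealNetwork.h"

AConvergeGlobalState::AConvergeGlobalState()
{
    bReplicates = true;  // spawned on the server, replicated to the Floor and Studio clients
}

void AConvergeGlobalState::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);
    DOREPLIFETIME(AConvergeGlobalState, SceneIndex);
    DOREPLIFETIME(AConvergeGlobalState, HoleSize);
}
```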

The players’ positions in Deep Space 8K are tracked using the Futurelab’s LIDAR laser tracking system, pharus. The tracking data from the lasers is used to calculate position data in real time on a separate PC. This position data is replicated from the server instance (Wall PC) to the clients (Floor PC and Studio PC) so that all instances render the local players’ characters at the correct positions.
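A simplified sketch of the coordinate mapping involved: pharus delivers 2D floor positions, which each rendering instance has to transform into the coordinate space of the virtual scene. The normalization and origin offset below are assumptions for illustration; only the 16×9 m extent corresponds to the Deep Space 8K floor projection.

```cpp
#include <iostream>

// 2D position reported by the tracking system, assumed here to be
// normalized to [0,1] across the tracked floor area.
struct TrackedPosition { int id; float x; float y; };

// Position in the virtual scene, in meters.
struct ScenePosition { float x; float y; };

// Map a normalized tracking position onto the projected floor area,
// with the scene origin assumed to sit at the center of the floor.
ScenePosition toScene(const TrackedPosition& p) {
    const float floorWidth = 16.0f;  // meters
    const float floorDepth = 9.0f;   // meters
    return { p.x * floorWidth - floorWidth * 0.5f,
             p.y * floorDepth - floorDepth * 0.5f };
}

int main() {
    TrackedPosition player{7, 0.25f, 0.5f};
    ScenePosition s = toScene(player);
    std::cout << "Player " << player.id << " at (" << s.x << ", " << s.y << ") m\n";
}
```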

The animation of the local players is handled on the rendering PCs. The characters simply follow the real positions of the players using a pathfinding algorithm. They use pre-defined idle, walk, and run animations and a blend space that morphs between those animations based on the character’s speed.
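The speed-based blending can be pictured roughly as follows: for the character's current speed, a weight is computed for each of the idle, walk, and run animations, and the poses are mixed accordingly. The speed thresholds below are hypothetical; in the actual project this logic lives inside the Unreal Engine animation blend space.

```cpp
#include <algorithm>
#include <iostream>

// Relative weights of the three animations for a given movement speed (m/s).
struct BlendWeights { float idle; float walk; float run; };

// Assumed thresholds: idle below 0.1 m/s, fully walking at 1.5 m/s,
// fully running at 4.0 m/s; in between, neighboring animations are blended.
BlendWeights blendForSpeed(float speed) {
    const float walkSpeed = 1.5f;
    const float runSpeed  = 4.0f;
    if (speed <= 0.1f) return {1.0f, 0.0f, 0.0f};
    if (speed < walkSpeed) {
        float t = speed / walkSpeed;  // blend idle -> walk
        return {1.0f - t, t, 0.0f};
    }
    float t = std::min((speed - walkSpeed) / (runSpeed - walkSpeed), 1.0f);  // walk -> run
    return {0.0f, 1.0f - t, t};
}

int main() {
    BlendWeights w = blendForSpeed(2.0f);
    std::cout << "idle " << w.idle << ", walk " << w.walk << ", run " << w.run << "\n";
}
```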

This project was part of the Open Futurelab at the Ars Electronica Festival 2024.

Credits

Ars Electronica Futurelab: Patrick Berger, Daniel Rammer, Raphael Schaumburg-Lippe, Cyntha Wieringa, Arno Deutschbauer
PARTNER: SHARESPACE Consortium

This work has been developed in the framework of the SHARESPACE project.

SHARESPACE has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101092889.