2017

MANUACT

Credit: Ars Electronica Futurelab / Michael Mayr

How can gesture research be presented so that it is comprehensible to everyone? How do you identify natural gestures for specific applications? What are the origins of such gestures, and how can they best be used in future interfaces? The Ars Electronica Futurelab has been collaborating with Chemnitz University of Technology to get to the bottom of these and other questions.
The background of this collaborative effort was the interdisciplinary research project Hands and Objects in Language, Culture and Technology: Manual Actions at Workplaces between Robotics, Gesture, and Product Design (MANUACT), which Chemnitz University of Technology commissioned in conjunction with an R&D assignment. The Ars Electronica Futurelab was brought on board as a scientific associate to support the university research group by developing specially designed installations and exhibits about gesture research.

The further we move into the Digital Age, the clearer it becomes that gestures will have a growing influence on everyday human-machine communication. Many jobs that used to be performed strictly by hand increasingly entail machine control, in which a gesture is the means by which a worker assigns a task to a machine. This calls for new interfaces that keep the process as intuitive as possible for the user. That objective can be attained either by utilizing gestures derived from the physical act of performing the task manually, or by assigning new functions and meanings to alternative (conventional, everyday) gestures in this particular context.



The Futurelab’s collaborative research with Chemnitz University of Technology has, among other results, produced a gesture glossary, created means of controlling virtual worlds via gestures, and developed installations that convey the findings and parameters of gesture research in a playful way.

The resulting research findings and interactive exhibits were put on display together with works by such noted international artists as Daniel Rozin, Golan Levin and Jennifer Crupi at an exhibition entitled Gesten – gestern, heute, übermorgen [Gestures – in the past, present, and future] that ran from November 2017 to March 2018 at the Industrie Museum Chemnitz. From April to September 2019, the Museum for Communication Berlin showed the exhibition, and from September 2019 to February 2020 it was on view at the Museum for Communication Frankfurt. Currently, from April 2 to November 5, 2023, the exhibition is being shown at the Kulturzentrum Festung Ehrenbreitstein | Landesmuseum Koblenz. All the works shown were conceived to enable the general public to grasp the fundamental transformation of communication and its background from an artistic, scientific and technological perspective.

Read more in the interview with Marianne Eisl of the Ars Electronica Futurelab.

Credits

Ars Electronica Futurelab: Roland Aigner, Marianne Eisl, Peter Freudling, Roland Haring, Anna Kuthan, Christopher Lindinger, Maria Mayr, Michael Mayr, Otto Naderer, Johannes Pöll, Erwin Reitböck, Clemens F. Scharfen