“mirror_0.2” is an interactive installation that seeks to replace the installation visitor’s reflection with that of the artist. The visitor stands before a monitor that looks like a mirror. The background is a 3-D scan that resembles the background of the actual space behind the visitor. The virtual space is also slightly distorted, as a real mirror’s reflection would be, to strengthen the illusion of a mirror image. A Kinect installed above the monitor screen registers the position and facial expression of the person standing in front of it. If that person changes his/her expression, posture or distance from the mirror, this is registered by the installation’s software, which then replaces the visitor’s image with the corresponding image of the artist. Instead of seeing their own reflection in the mirror, installation visitors see the artist imitating them.
For this installation, the artist shot approximately 100,000 images of himself with different facial expressions, his head tilted in various directions, and in diverse postures. Software determines which of these photos most closely resembles the appearance of the person standing in front of the mirror and displays that particular photo on the monitor screen.
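The matching step can be pictured as a nearest-neighbor lookup. The following is a minimal sketch of that idea, not the project’s actual code: each stored self-portrait is assumed to be tagged with a feature vector (head rotation, mouth shape, and so on, as a Kinect might report them), and the live reading is matched to the photo whose vector lies closest. All filenames, feature names, and values here are invented for illustration.

```python
import math

# Hypothetical feature vectors: (yaw, pitch, roll, smile, mouth_open),
# each normalized to [0, 1]. Names and values are illustrative only.
portraits = {
    "neutral.jpg":   (0.5, 0.5, 0.5, 0.0, 0.0),
    "smile.jpg":     (0.5, 0.5, 0.5, 0.9, 0.1),
    "tilt_left.jpg": (0.2, 0.5, 0.4, 0.1, 0.0),
}

def closest_portrait(live):
    """Return the filename whose feature vector is nearest to the live reading."""
    return min(portraits, key=lambda name: math.dist(portraits[name], live))

# A visitor smiling with their head held straight maps to the smiling portrait.
print(closest_portrait((0.5, 0.5, 0.5, 0.8, 0.1)))  # → smile.jpg
```

With roughly 100,000 entries instead of three, the principle stays the same; only the search strategy has to become cleverer, which is the optimization problem the artist describes later in the interview.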
In this interview, Gregor Woschitz explains why he’s been working on this project for seven years now, how he managed to snap 100,000 self-portraits exhibiting the entire spectrum of facial expressions, and why “mirror_0.2” nevertheless has problems with bearded guys wearing glasses.
What inspired you to create “mirror_0.2”? What’s the idea behind it?
Gregor Woschitz: The first inklings that led to this installation occurred to me in 2009-10. In technical school, a teacher screened a video for us showing how you could photograph people three-dimensionally using an infrared camera and a few LEDs. This inspired me to work interactively. So, in 2011 when I enrolled in Linz Art University, I already had a concept, albeit a rather naïve one. Right from the start, my aim was to acquire the know-how necessary to do this project.
Originally, I wanted to create “mirror” as the ultimate self-portrait. I wanted to enable spectators to see what I see. Indeed, I share this motivation with many other artists. But while the work was in progress, I further developed the project conceptually. At this point, it began to raise questions such as: What remains of a person once his identity has been digitized? Can machines convey empathy? Will we carry on conversations with digital avatars in the future? Some people even interpreted “mirror” as poking fun at selfies—a mirror in which you can’t see yourself.
Was there also a “mirror_0.1” and how did it differ from the current version?
Gregor Woschitz: Yeah, there was also a Version 0.1. And there must have been about 100 other versions before that! The versions differ, above all, in the number of photographs and in how they are displayed. And I’ve repeatedly enhanced the sound, the internal logic and the computational efficiency. Over time, you simply get better at dealing with the computer processor, and that’s reflected—in the truest sense of the word—in this work.
You took about 100,000 photos of yourself with various facial expressions, head tilts and postures. How long did you need for this and how did you go about it?
Gregor Woschitz: Imagine this: I’m sitting alone in front of the camera, looking into the lens, and trying not to move my upper body. I tilt my head in all directions in an attempt to cover all possible combinations of facial expressions and axis rotations. Doing this intimately acquaints you with your facial muscles … and after doing it for a while, the muscles at the corners of your mouth start to cramp up and twitch. But since smiling in the “mirror” isn’t supposed to come across as forced or cramped, I repeatedly had to interrupt the sessions. So this went on day after day.
Credit: Gregor Woschitz
Yeah, there are countless facial expressions! Does “mirror_0.2” really recognize every one?
Gregor Woschitz: No. Although “mirror_0.2” does now recognize a broad spectrum of expressions, the human face nevertheless has a more multifaceted repertoire of movements than the machine can recognize at this point. I also conduct experiments in order to teach the computer special forms of mimicry. Micro-expressions, for example, are perceived by the human subconscious and are an important element of nonverbal communication, but this is a little too much to ask of the “mirror.” The hardware and software also occasionally have difficulties with bearded men who wear glasses. Every additional facial expression input into the project not only doubles the expenditure of effort by man and machine, but also expands what the installation can do. So, there are fewer facial expressions than I had originally planned, but still more than most installation visitors expect.
What was the most difficult thing about this project?
Gregor Woschitz: No doubt, optimizing the software. Back when I was still a novice, I programmed the first functional prototypes during a train trip. But my initial euphoria was soon overwhelmed by reality. You can’t just create a huge database and expect that the computer will be able to query it n-dimensionally in real time … unless you have a quantum computer, of course. That’s how I figured out that I had to work with greater precision and set up the sequences in the program in a way that’s more multidimensional and exact. The more precise the commands are, the faster the computer can execute them.
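The bottleneck he describes — querying a huge database n-dimensionally in real time — can be sketched in a hedged way. This is my own illustration, not the project’s code: a naive linear scan touches every photo on every frame, while a precomputed grid index narrows each lookup to the query’s cell and its immediate neighbors. The dimensions, grid size, and data here are all assumptions.

```python
import math
import random

random.seed(0)

# 100,000 hypothetical photos, each tagged with a 3-D pose vector in [0, 1).
photos = [(f"img_{i:06d}.jpg", (random.random(), random.random(), random.random()))
          for i in range(100_000)]

# Naive approach: scan every entry per frame -- far too slow at ~30 fps.
def nearest_scan(query):
    return min(photos, key=lambda p: math.dist(p[1], query))[0]

# Indexed approach: pre-bucket the photos into a coarse grid once, then
# search only the query's cell and its 26 neighbors at lookup time.
GRID = 10
index = {}
for name, vec in photos:
    cell = tuple(min(int(v * GRID), GRID - 1) for v in vec)
    index.setdefault(cell, []).append((name, vec))

def nearest_indexed(query):
    cx, cy, cz = (min(int(v * GRID), GRID - 1) for v in query)
    candidates = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                candidates += index.get((cx + dx, cy + dy, cz + dz), [])
    return min(candidates, key=lambda p: math.dist(p[1], query))[0]

q = (0.3, 0.7, 0.2)
assert nearest_scan(q) == nearest_indexed(q)  # same answer, ~97% fewer comparisons
```

The design point mirrors what he says about precision: the work isn’t in the distance formula, it’s in structuring the data up front so the per-frame query touches as little of the database as possible.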
TIME OUT .06 opens on Wednesday, June 8th at 6:30 PM in the Ars Electronica Center Linz: https://ars.electronica.art/center/eroeffnung-time-out-06/