Speculative Artificial Intelligence
Birk Schmithüsen (DE)

The work consists of a series of aesthetic experiments designed to make processes of artificial neural networks perceptible to humans through audiovisual translation. Exp. #1 examines inner behavior during the prediction and learning process. Exp. #2 questions an AI’s capacity for empathy and purpose while communicating with a second AI.

Amadeus Code
Taishi Fukuyama (JP)

Amadeus Code is a songwriting assistant powered by artificial intelligence.

Critical Cartography: Unauthorized Blue Prints
Vladan Joler (RS)

This map is based on a five-year internet monitoring process and over 400 different cases of violations documented and analyzed by the Share Foundation. Though the methods represented in this map were observed in our local context, we believe they are also being used worldwide in similar forms. The map is an attempt to interconnect most of those issues into one narrative, one possible reading of those processes.

Computers that Learn to Listen
Institute of Computational Perception, Johannes Kepler University (JKU) Linz (AT)

A set of eight short demo videos on the latest results from the world of scientific research: computers learn to listen to and “understand” music; an autonomous drum robot recognizes beat and rhythm; musical computer companions turn music pages for pianists, listen to orchestras, provide synchronized scores to concert audiences, and accompany soloists.

Voices from AI in Experimental Improvisation
Tomomi Adachi (JP), Andreas Dzialocha (DE), Marcello Lussana (IT)

Through machine learning, computers can recognize patterns in a variety of sound documents. But can they also learn to improvise musically? The software “Tomomibot” explores this question by interacting in real time with the sound artist Tomomi Adachi, using deep learning techniques. The artificial intelligence (AI) “learns” the artist’s individual style from voice recordings and directly confronts him with the newly generated material. Their joint performance shows how interactive technology and AI can influence a (vocal) style. However, this dialogue also makes clear that the artist will always be more creative and unpredictable than his mechanical counterpart.

TechiEon
Corea Impact (KR)

TechiEon is a Taekkyeon performance that reimagines the scale and biodiversity of eon in AI-based augmented reality and soundscape.

Reeps One x Dadabots ft. Second Self AI
Reeps One (UK)

Second Self is an art and science collaboration between Reeps One, Dadabots, and the E.A.T. program at Nokia Bell Labs. The collaboration is a live performance piece designed to integrate machine learning, the human voice, and generative audio as a practical artistic tool, and to raise awareness of machine learning beyond academic, technological, and engineering audiences via the medium of film and performance.

UngenauBot
Ilmar Hurkxkens (NL), Fabian Bircher (CH)

UngenauBot combines highly developed robot technology with an everyday rubber glove performing banal activities. By deliberately exploiting empirical errors in robotic systems and artificial intelligence, this work demonstrates the limits of technology when things don’t go according to plan.

Humanity (Fall of the Damned)
Scott Eaton (US/UK)

The work comprises one thousand hand-drawn figures, “painted” with Eaton’s Bodies neural network. The composition of tumbling, intertwined figures embodies the visceral human experience and humanity’s ongoing struggle with its own nature and its consequences.

Co-Thinking the Renewal of Fashion

FRI 6.9. | 13:30 – 15:00
In this panel, artists and scientists involved in the Re-FREAM project will share their perspectives on future developments in the fashion industry, talking about responsive fashion, future materials, and the fashion production system.