The first AI x Music Festival, organized by Ars Electronica and the European Commission as part of the STARTS initiative, is dedicated to the encounter between human creativity and technical perfection. From September 6 to 8, 2019, Ars Electronica will be gathering musicians, composers, cultural historians, technologists, scientists and AI developers from all over the world in Linz to discuss the interaction between humans and machines through concerts and performances, conferences, workshops and exhibitions.
Renowned personalities from the world of art, such as Hermann Nitsch (AT), Christian Fennesz (AT), Markus Poschner (DE), Dennis Russell Davies (US/AT), Maki Namekawa (JP/AT), Memo Akten (TR), Anthony Moore (UK/FR), and Sophie Wennerscheid (DE), and from the world of science, such as Josef Penninger (AT), Siegfried Zielinski (DE), and Ludger Brümmer (DE) will be present. Other participants include personalities such as Matthias Röder (DE) from the Karajan Institute, the author, theologian, editor, filmmaker, and presenter Renata Schmidtkunz (AT), and Amanda Cox (US) from the New York Times’s data journalism section “The Upshot.” In addition, there will be internationally leading developers from the Yamaha R&D Division AI Group and the Glenn Gould Foundation, from Google’s Magenta Studio, Sony CSL, IRCAM, and Nokia Bell Labs, as well as from various start-ups.
The venues of the “AI x Music Festival” are the Anton Bruckner Private University, the Ars Electronica Center, the Linz Donaupark, POSTCITY, and the St. Florian Monastery. The latter is the undisputed hotspot of the AI x Music Festival: Whether it is the marble hall, church, crypt, or tomb—the impressive rooms of this spiritual site are a perfect context for reflecting on the future role of intelligent machines and our self-image as human beings.
In four panel discussions on Saturday, experts will address topics such as creativity, “bio art” and gaming in relation to the use and research of artificial intelligence in music.
Artificial intelligence is changing our understanding of music. Starting with a series of talks, the AIxMusic Matinée invites universities and institutions to share their research and present an overview of what is happening today in these incubators. The lecture series also addresses the massive changes in the music industry triggered by developments in AI, as well as the latest trends in the music market.
Conversations with Renata Schmidtkunz
Renata Schmidtkunz (DE)
Renata Schmidtkunz hosts four panel discussions in the summer refectory, prominently featuring Josef Penninger, Sophie Wennerscheid, Oliviero Toscani, Amanda Cox, Markus Poschner, and others. The panels are dedicated to the role of science and research. Social acceptance of current AI research will be discussed, as will the new artistic possibilities opening up through AI applications.
Jean Beauve (FR), 0W1 Audio / Oleg Stavitsky (RU), Endel / Florian Richling (AT), Fortunes / Ivan Turkalj (HR/AT), Music Traveler / Taishi Fukuyama (JP), Amadeus Code
SUN 8.9. | 16:15 – 17:30
AIxMusic Cultural Organizations
Gerald Wirth (AT), Vive Kumar (IN), Veronika Liebl (AT), Matthias Röder (DE)
SUN 8.9. | 15:00 – 16:00
AIxMusic Industry: Application-Oriented Research
Vittorio Loreto (IT), François Pachet (FR), Akira Maezawa (JP)
SUN 8.9. | 13:15 – 14:45
Hugues Vinet (FR), Philippe Esling (FR), Daniele Ghisi (IT), Jérôme Nika (FR), Ludger Brümmer (DE), Christine Bauer (AT), Peter Knees (AT), Koray Tahiroğlu (FI/TR), Nick Bryan-Kinns (UK)
SUN 8.9. | 10:00 – 13:00
Dialogue VII: Dear Glenn, – Yamaha AI Project
Akira Maezawa (JP), Brian M. Levine (CA), Norbert Trawöger (AT), Francesco Tristano (LU)
SAT 7.9. | 19:00 – 19:30
Dialogue V: Overview of the AI and Music scene in the Bay Area
Clara Blume (AT/US) & Naut Humon (US)
SAT 7.9. | 17:00 – 17:30
Dialogue I: Composition, Interpretation, Reproduction – 3 Shades of Creativity
Markus Poschner (DE) & Ali Nikrang (AT)
SAT 7.9. | 15:00 – 15:30
Panel IV: What is Creativity?
Renata Schmidtkunz (DE), Amanda Cox (US), Hermann Vaske (DE)
Numerous theoreticians, artists, and lately also neuroscientists have tried to unlock the secrets of creativity, and in our new economy it has become a much sought-after ingredient for commercial success. So what is it, where does it come from, and could it also be delivered by AI systems?
Panel III: Deep Journalism, Information and Misinformation in the age of Artificial Intelligence
Renata Schmidtkunz (DE), Walter Ötsch (AT), Marta Peirano (ES)
What potentials and risks does the increasing automation of information processing entail? Can we develop sensible strategies for handling our data in digital space?
Panel II: AI, more than a technology
Renata Schmidtkunz (DE), Markus Poschner (DE), Douglas Eck (US), François Pachet (FR)
AI is expected to open up many new possibilities for creators, not replacing them but assisting and supporting their work. There are equally big expectations for businesses involved in the distribution of music. What are the consequences and implications? What kind of new business models can we expect? How will this affect artists?
Panel I: Homo Deus
Renata Schmidtkunz (DE), Josef Penninger (AT), Sophie Wennerscheid (DE)
The panel discussions are dedicated to the role of science and research, which initially had to confirm a religious view of the world, was then subordinated to economic rationality, and now, in the dawning age of AI, is reorienting itself again. Social acceptance of current AI research will be discussed. Another focal point of the panels will be the new artistic possibilities opening up through AI applications, which also lead to a variety of novel business models and to questions of copyright regulation.
We Revolutionize Music Education: The Neuromusic Education Simulator (NES)
Gerald Wirth (AT), Wiener Sängerknaben / Vive Kumar (IN), Athabasca University (CA)
In cooperation with developmental psychologists and pedagogues, Professor Gerald Wirth developed his engagement-centric teaching methodology – the Wirth method – which aims at constantly high student attention. Through the neuronal networks activated when movement is used to support teaching, and through repetitions with variations, content is stored sustainably in long-term memory. NES, based on the Wirth method and applying VR & AR, allows teachers and students, in addition to personal tuition, to practice, gain experience, and receive feedback.
ACIDS: Artificial Creative Intelligence
Philippe Esling (FR)
The Artificial Creative Intelligence and Data Science (ACIDS) team at IRCAM seeks to model musical creativity by targeting the properties of audio mixtures. It studies the intersection between symbolic (score) and signal (audio) representations in order to understand and control the manifolds of musical information.
Automatic Music Generation with Deep Learning – Fascination, challenges, constraints
Ali Nikrang (AT)
In recent years, there has been a great deal of academic interest in applying Deep Learning to creative tasks such as generating text, images, or music, with fascinating results. Technically speaking, Deep Learning models can only learn the statistics of their training data. In doing so, however, they often capture relationships in the data that human observers have not been aware of, and can therefore serve as a new source of inspiration for human creativity. This workshop focuses on current technical approaches to automatic music generation.
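The idea that a generative model “learns the statistics of the data” can be illustrated with a deliberately simple sketch (a toy Markov chain, not one of the Deep Learning systems discussed in the workshop): transition counts from a training melody are the learned statistics, and sampling from them produces new, statistically plausible material.

```python
# Toy illustration of "learning the statistics of the data":
# a first-order Markov chain counts note-to-note transitions in a
# training melody, then samples new material from those statistics.
import random
from collections import defaultdict

def train(melody):
    """Count which notes follow each note in the training melody."""
    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody from the learned transition statistics."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions[note])  # statistically plausible next note
        out.append(note)
    return out

training_melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "C"]
model = train(training_melody)
print(generate(model, "C", 8, seed=1))
```

Every note pair the sketch emits has been observed in the training data, which is the Markov-chain analogue of the point above: the model can only reproduce the statistics it was shown, yet its recombinations may still surprise a human listener.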
Creating interactive audio systems with Bela
Andrew McPherson (UK)
The workshop will provide an introduction to Bela, an open-source embedded hardware platform for creating interactive audio systems. Participants will get a hands-on introduction to building circuits and programming with Bela, following a series of example projects that introduce the basics of building real-time audio systems.
Recommenders and Intelligent Tools in Music Creation: Why, Why Not, and How?
Christine Bauer (AT), Peter Knees (AT), Richard Vogl (AT), Hansi Raber (AT)
This workshop will highlight the role of Artificial Intelligence, Machine Learning-supported composition, and Recommender Systems in the process of music creation. We discuss their reception and prevalent image among professional music producers and creators, including the potential threats these technologies pose to artistic originality. We contrast this view by emphasizing the power of AI technology to democratize music making by lowering the entry barrier to music creation.
Digital Musical Interactions
Koray Tahiroğlu (FI/TR)
Today, digital technologies and advanced computational features such as machine learning and artificial intelligence (AI) tools are shaping our relationship with music and enabling new possibilities for new musical instruments and interfaces. In this workshop, we ask: what does our relationship with music and musical instruments look like today?
Computer Music Design and Research – IRCAM Workshop
Jérôme Nika (FR), Daniele Ghisi (IT)
Computer music designer, musician, and researcher Jérôme Nika (FR) will present the DYCI2 generative agents / software instruments, which he develops in collaboration with IRCAM and in interaction with expert improvisers. These agents offer a continuum of strategies, from pure autonomy to meta-composition, thanks to an abstract “scenario” structure.
The Art of Intelligent Interruption and Augmented Relationships
Harry Yeff (UK) & Domhnaill Hernon (IE), Nokia Bell Labs
Nokia Bell Labs develops disruptive research for the next phase of human existence. What are the narratives that will allow the world to embrace Augmented Intelligence, and do artists offer an answer? Harry Yeff walks us through his portfolio of interactive installations, creative uses of machine learning, and vocal performance to explore the concepts of intelligent interruption and augmented relationships.
Dear Glenn, – Yamaha AI Project
Pianist: Francesco Tristano (LU), Flutist: Norbert Trawöger (AT), Violinist: Maria Elisabeth Köstler (AT/DE), Researcher: Akira Maezawa (JP; Yamaha Corporation)
Yamaha Corporation, together with the support of the Glenn Gould Foundation and pianists, is pursuing the development of the world’s first AI piano solution capable of analyzing and playing in the style of a human pianist while interacting with human musicians in a music ensemble. Yamaha will demonstrate the AI through a concert performance at the St. Florian monastery on September 7.
The Vienna Acousmonium
Thomas Gorbach (AT)
Acousmatics (acousma is Greek for “aural cognition”) is the cognitive science of listening; a listening to listening. To make this possible, unheard sounds and compositions are projected through an orchestra of loudspeakers: the Acousmonium.
Johann Sebastian Bach: Suites for unaccompanied cello
Yishu Jiang (AT)
The Bach cello suites played in the performance are structured in six movements each: prelude, allemande, courante, sarabande, two minuets or two bourrées, and a final gigue. The Bach cello suites are considered to be among the most profound of all classical music works.
Cumulus – Stratus
Volkmar Klien (AT)
Volkmar Klien lets St. Florian’s bells swing and produce aural shapes in the sky around the abbey with the assistance of AI-based pattern-recognition and interpolation. The shapes emerge and morph to melt back into homogenous sound fields covering everything within earshot. They then subside, to give way to distinct sonic formations from above.
AIxMusic Workshops (Sunday)
In recent years, the academic interest in applying deep learning to creative tasks such as generating text, images or music has drastically increased. These workshops offer everyone the opportunity to try out the AI systems used for making and playing music.
Joep Beving, Arvo Pärt, Bach-Kurtág at St. Florian
Maki Namekawa (JP), Dennis Russell Davies (AT/US)
At this year’s festival the renowned pianist Maki Namekawa will perform several pieces by different composers solo as well as together with her husband Dennis Russell Davies.
Bach Hauer Scelsi Cage
Weiping Lin (AT/TW)
Weiping Lin (violin) presents four different compositional approaches by composers who, in their individual ways, reflected on questions of musical order and its relation to the wider contexts of human existence.
Martina Claussen (DE)
Voice and sound recordings, together with sound objects, weave a “sound carpet” which provides the basis for an electroacoustic journey. These textures act as a sort of humus for voices, from which they repeatedly emerge in fragmented form. Associations of the most diverse kinds and unexpected connections are evoked.
The tenor duets of Claudio Monteverdi
Ensemble Vivante (AT)
Ensemble Vivante presents the dramatically charged vocal music of a contemporary of Kepler, offering works whose texts reflect their time’s turbulence, innovation and discovery through their depictions of nature and humanity.
WM_EX10 TCM_200DV TP-VS500 MS-201 BK26 MG10
Stefan Tiefengraber (AT)
Unexpected and uncontrollable analogue signals are altered and bent by the artist to create an audio/video noise-scape. Pre-recorded (installation) or live audio signals, audible through speakers, are sent directly to CRT monitors mounted on the speakers, visualizing the signal in flickering and abstract shapes and lines in black and white to create a time-based sculpture.
Roberto Paci Dalò (IT)
A solo concert for clarinet (and bass clarinet) that works with the very special acoustics and reverbs of Sankt Florian’s Marmorsaal and evokes different musical styles from Gregorian to Monteverdi and Gesualdo da Venosa. Sometimes it makes a timbral memory appear, borrowed from practices and memories of electronic musical culture. Tenebrae (Latin for “darkness”) is a religious service of Western Christianity.
GRAND JEU 2
Wolfgang Mitterer (AT)
Electronics are operated live and represent a second organ with multiple possibilities when coupled with a keyboard and controllers. This results in a massive expansion of sound in every direction, from Baroque to Bruckner, and beyond.
Ali Nikrang (AT), Michael Lahner (AT)
We generated several short sequences that are played as input in a loop. As a result, the application will continue to create new outputs despite receiving the same input.
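The principle behind the demo — identical input, ever-new output — can be sketched in a few lines (hypothetical toy code, not the actual application): a generator whose continuations are random draws produces a different result on every pass through the loop, even though the input sequence never changes.

```python
# Minimal sketch of the demo's principle (assumed mechanics, not the
# actual application): a stochastic generator fed the same input in a
# loop still yields new output on every pass.
import random

def continue_sequence(input_notes, n_new, rng):
    """Append n_new notes; each choice is a fresh random draw, so
    repeated calls with identical input give different continuations."""
    scale = ["C", "D", "E", "F", "G", "A", "B"]
    return list(input_notes) + [rng.choice(scale) for _ in range(n_new)]

rng = random.Random()           # unseeded: every pass differs
loop_input = ["C", "E", "G"]    # the same short sequence, played in a loop
for _ in range(3):
    print(continue_sequence(loop_input, 4, rng))
```

The input prefix is echoed unchanged each time; only the sampled continuation varies, which is what keeps the output fresh despite the looping input.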
Aleksey Igudesman (DE/AT), Julia Rhee (KR/US), Dominik Joelsohn (DE/AT), Ivan Turkalj (HR/AT)
Music Traveler is a marketplace that connects musicians and centralizes spaces with musical instruments, equipment, and services for the creative industry.
Nokia Bell Labs
Domhnaill Hernon (IE)
An interactive experience fusing music and image. Users’ movements are transformed into a dynamically designed audio-visual experience through the Bell Labs Motion Engine.
Bruckner Percussion plays Xénakis
Leonhard Schmidinger (AT), Fabian Homar (AT), Vladimir Petrov (BG)
Iannis Xénakis (1922–2001) composed Okho for three djembe players. The premiere took place on October 20, 1989, on the occasion of the Paris Autumn Festival. Our interpretation deviates from the original instrumentation and makes use of an extended percussion setup of the kind Xénakis himself used in his solo percussion piece Rebonds B.
Thomas Grill (AT)
Two acoustic agents incarnated by large horn loudspeakers are incessantly exchanging acoustic codes. Based on models of human vocalization, they develop their vocabulary independently from a natural language. In their ongoing discourse, they follow a common goal: to optimize the beauty of their own vocal expression.
The Neuromusic Education Simulator (NES) Project
Wiener Sängerknaben (AT), Gerald Wirth (AT)
In cooperation with developmental psychologists and pedagogues, Professor Gerald Wirth developed his engagement-centric teaching methodology. The use of the Neuromusic Education Simulator, based on the Wirth method and applying VR & AR, allows teachers and students to practice, gain experience and receive feedback, including talent and deficiency detection (ADHD).
SHOJIKI “Play Back” Curing Tapes
Muku Kobayashi (JP), Mitsuru Tokisato (JP)
Rewinding curing tapes with a motor: the performers use a switch to control the motor’s rotation direction and its ON/OFF state. Each time the tape is rewound onto the motor axis, it produces peeling sounds and continuous sounds.
Koray Tahiroglu (FI/TR)
NOISA, the Network of Intelligent Sonic Agents, is an interactive music system that monitors the performer’s actions and provides autonomous and non-intrusive counteractions. In this co-creative music performance, intelligent sonic agents are designed to support the music performance by providing responses that encourage and maintain the communication and motivation of the performer with the NOISA system.
Yasuaki Kakehi (JP), Mikhail Mansion (US), Kuan-Ju Wu (US)
Soundform No.1 is a minimalistic soundscape and kinetic art installation that transforms heat energy into a poetically evolving, spatiotemporal composition. Through modulations of heat, light and motion, the artwork creates an ever-changing atmosphere of Zen-like tonal patterns and visual effects.
Quadrature (DE) in collaboration with Christian Losert (DE)
Via a radio telescope in front of the venue, the noise of the skies is performed by a self-playing organ. Little by little, neural networks take control over the organ and seek out familiar harmonies in the otherworldly noises. Ideas of melodies evolve as the artificial intelligence begins to fantasize about familiar tunes in these alien sounds.
Oleg Stavitsky (RU)
Endel is a technology that creates personalized sound environments for stress reduction, productivity boosting and adjusting mind and body to different tasks and goals. Sound changes on the fly according to various personal inputs like location, time, heart rate and cadence. Endel’s technology is already available as an ecosystem of products; it is also designed to be integrated into various hardware and platforms in mobility, hospitality, retail, workspaces, etc.
Dmitry Morozov / ::vtol:: (RU)
Umbilical Digital is a kind of farm in which a special algorithm devotes itself to raising “digital living creatures” such as Tamagotchis. The system monitors their condition and takes on all tasks that are required for maintaining their “life” and keeping their “spirits” up. The simulation of pressing keys makes the system seem like a person who is taking care of the digital creature: it exists as if it has been “raised” by a human hand.
Friday, September 6, 2019 / POSTCITY, Ars Electronica Center
The AI x Music Festival will start with a series of workshops at POSTCITY. Reeps One (UK) from Nokia Bell Labs will place a focus on disruptive research for the next phase in human history, Jérôme Nika (FR) is working with IRCAM researchers and will be reflecting on the role of human-machine interaction in the context of music, and Daniele Ghisi (IT) will be hosting a workshop titled “La machine des monstres.” Computer music designer, musician, and researcher Koray Tahiroğlu (FI/TR) will present tools for real-time performances of digital music developed at Aalto University. Within the framework of the STARTS program, which is also jointly organized with the European Commission, there will be further lectures dedicated to the interweaving of AI and music.
In the Ars Electronica Center’s Deep Space, pianist Kaoru Tashiro (JP) and digital visual artist OUCHHH (TR) will present a fascinating live performance that fuses music and digital images.
Big Concert Night
In the evening, the “Big Concert Night” of this year’s Ars Electronica will also be the opening concert of the “AI X Music Festival” (and will therefore, for once, take place on Friday rather than Sunday!). “Mahler-Unfinished – Music meets AI” is the title of the ambitious concert in the POSTCITY’s spectacular hall, which will be an encounter between orchestral music and electronic music, human and robotic dancers, and artificial intelligence. Christian Fennesz (AT) will kick the event off with “Mahler Remixed.” The stylistically influential proponent of electronic music in Austria will use samples from Mahler symphonies for his live performance; the visuals will be added by Lillevan (DE). Towards the end of this session, the pianist Markus Poschner (DE) will join in and will create a bridge between electronic and orchestral music through his improvisations. Under the conductor Poschner, the Bruckner Orchestra Linz will then perform Mahler’s Unfinished Symphony No. 10. “Unfinished”—a word that always resonates with the challenge to think ahead and reinterpret. Not to imitate or even improve Mahler, but to test new forms of expression with today’s artistic approaches and technical possibilities. “Mahler Unfinished” was therefore also the creative starting point for Ali Nikrang (IR/AT), pianist, composer, computer scientist, AI developer, and researcher at the Ars Electronica Futurelab. With MuseNet, currently OpenAI’s most powerful AI system for musical applications, Nikrang has reworked the significant viola theme at the beginning of Mahler’s 10th Symphony. Only the first ten notes of the original melody, along with a few stylistic parameters, were entered into the system. The outputs included countless new interpretations, one of which Nikrang selected and then orchestrated by hand. 
The result will be performed by the Bruckner Orchestra Linz during the “Big Concert Night.” As at the last Big Concert Night, Silke Grabinger (AT) will again take part: Using motion tracking, her dance will be transferred to industrial robots, which in turn will make a man-sized puppet dance like a marionette. With “Alive Painting,” Akiko Nakayama (JP) will also contribute an impressive real-time visualization. Last but not least, with his piece “Sonic Robots,” Moritz Simon Geist (DE) will deliver a stirring performance with robotic instruments of his own construction. After this, the Ars Electronica Nightline will begin.
Saturday, September 7, 2019 / Anton Bruckner Private University, St. Florian Monastery
The second day of the AI x Music Festival will start with “Sonic Saturday – Medium Sonorum,” curated by Volkmar Klien (AT) and Andreas Weixler (AT) at the Anton Bruckner Private University. First, Tobias Leibetseder (AT) & Astrid Schwarz (AT), Tania Rubio (MX), and Erik Nyström (SE) will perform; after the break Kaori Nishii (JP) and Angélica Castelló (MX/AT) will stage their performance “Luc Ferrari.”
Lectures, talks, demos, concerts
At noon, the AI x Music Festival will move to the St. Florian Monastery. Throughout the afternoon and evening, moderated lectures, talks, demonstrations, and concerts will be on the program.
To start, Hermann Nitsch (AT) will be giving an organ concert, and he won’t be playing just any instrument; the Bruckner organ of the St. Florian Basilica is considered to be one of the most magnificent in all of Europe.
The first panel discussion, with notable participants, will start immediately afterwards: In conversation with Renata Schmidtkunz (AT), Josef Penninger (AT) and Sophie Wennerscheid (DE) will address the role of science and research, which initially had to confirm a religious view of the world, were then subordinated to economic rationality and now, in the dawning age of the AI, are reorienting themselves. Cellist Yishu Jiang (AT) will take this up with a performance and ask what preconditions and strategies would be necessary for a reflective social discourse on AI.
The next session will deal with AI applications that open up new artistic possibilities. In conversation with Renata Schmidtkunz, François Pachet (FR) and Markus Poschner (DE) will discuss the associated effects, such as new business models or (copyright) legal regulations. Immediately afterwards, Weiping Lin (TW) will play the violin.
The next star-studded panel will continue in this vein: Walter Ötsch (AT) and Marta Peirano (ES) will talk about the social acceptance of current AI research.
The next highlight will be “Calculated Sensations,” an “expanded lecture” with Siegfried Zielinski (DE), Europe’s leading expert on media archaeology and the cultural history of machines, and Anthony Moore (UK), British experimental musician, composer, producer, and co-writer of Pink Floyd songs. The two will invite you on a journey through four millennia of music history. In seven different locations of the St. Florian Monastery, one main theme will be addressed on each occasion—with short texts, experimental sounds, and dialogue.
To follow on, we will then have four “Conversations on AI” and two “Reports and Proceedings”: Vladan Joler (RS) and Vuk Ćosić (SI) will discuss the relationship between the humanities and artificial intelligence in the past and present, while Aza Raskin (US) and Maja Smrekar (SI) will deal with the parallels and similarities between artistic practices in AI and life art. Markus Poschner (DE) and Ali Nikrang (IR/AT) will deal with the issues of composition, interpretation, reproduction, and reception. Marta Peirano (ES) will ask about the tension between information and disinformation in the age of AI, Clara Blume (AT) and Naut Humon (US) will offer insights into the AI & music scene in the Bay Area, and Lynn Hughes (CA) will ask whether the soundtracks of current AI systems meet the demands of the gaming industry.
The next sessions will consist of talks and performances, all of which will deal with current AI applications. First, Dennis Russell Davies (US/AT), Maki Namekawa (JP/AT), and Francesco Tristano (LU) will discuss the tension between originality and authenticity when evaluating AI-generated music. Akira Maezawa (JP) and Brian M. Levine (CA) will then present a research project by Yamaha and the Glenn Gould Foundation, in which an AI system has been trained to exactly emulate Glenn Gould’s interpretation style. Markus Poschner (DE) and Norbert Trawöger (AT) will discuss machine learning applications that are based on statistical methods but whose results still seem spontaneous and original to us. Ali Nikrang (IR/AT) will take this up and show how he has reworked the viola theme of Mahler’s 10th Symphony using OpenAI’s MuseNet.
The next program block consists of performances and demonstrations around the AI-supported creation of music: Tomomi Adachi (JP) will use a performance AI called “Tomomibot,” which learns from musical improvisations to interact live with a human voice improviser. Muku Kobayashi (JP) and Mitsuru Tokisato aka SHOJIKI (JP) represent the completely analog opposite pole and improvise with nothing more than … adhesive strips! Based on the performance “Ultrachunk” Memo Akten (TR) will deal with the role of AI-applications regarding improvisation. The organist Klaus Sonnleitner (AT) invites you to a concert where he’ll be improvising on the organ in the spirit of Anton Bruckner; then, Roberto Paci Dalò (IT) will be playing the (bass) clarinet and working with the very special acoustics of the St. Florian Marble Hall. Rupert Huber (AT) will be taking his seat at the piano and also improvising.
Another program block of the AI x Music Festival entails demos of current AI applications. Ali Nikrang (IR/AT) will discuss historical and current AI systems for the generation of music—he examines what makes musical data special and why it is a great challenge to compose music using AI applications. Thomas Grill (AT) and Martina Claussen (AT) will conduct numerous experiments to improve human-machine communication. Vittorio Loreto (IT) will demonstrate how music and AI merge with Sony CSL to create something new; the musicians of Ensemble Vivante (AT) will present the dramatically charged vocal music of Monteverdi.
The next block will focus on presentations and discussions on current AI research activities. Ludger Brümmer (DE) will talk about transdisciplinary research at the Hertz Lab of the ZKM, while François Pachet (FR) will explain his research on “flow machines” like SKYGGE.
We will continue with a series of workshops. Memo Akten (TR) will show how generative adversarial networks (GANs) can be used as a medium for creative expression and storytelling. Finally, with the project “Anatomy of an AI System,” Vladan Joler (RS) from SHARE Lab will present a meticulous compilation of all the human, data, and natural resources required to build and operate an Amazon Echo.
On Saturday evening, the AI x Music Festival is inviting you to a journey through time from the beginnings of music history to the here and now. Composer and organist Wolfgang Mitterer (AT) will demonstrate the power and effect human actors can unleash on stage, even in times of increasing digitalization. Quadrature (DE), by contrast, receive signals from space, which are interpreted by an AI system and transmitted to a robotic keyboard made of electromagnets that ultimately makes the organ play. The experts from the Yamaha R&D Division AI Group, the Glenn Gould Foundation, Francesco Tristano (LU) and musicians of the Bruckner Orchestra will contribute an AI-based performance. The crowning finale of the evening will be “Heavy Requiem – Buddhist Chant: Shomyo + Electronics.” Keiichiro Shibuya (JP), Eizen Fujiwara (JP), and Justine Emard (FR) will fuse traditional Buddhist music with electronic sounds.
A number of art projects will also be presented in the premises of the monastery: With his installation “Anschwellen – Abschwellen,” Volkmar Klien (AT) explores the question of what makes a machine seem autonomous, reactive, and intelligent. The “Acousmonium” is a unique loudspeaker orchestra for the interpretation of computer-generated music and the creation of ephemeral, dynamically moving sound sculptures. In this way, Thomas Gorbach (AT) invites the audience on a journey from the history of electronic sounds through digital sound generation to AI-based compositions. Stefan Tiefengraber’s (AT) “WM_EX10 TCM_200DV TP-VS500 MS-201 BK26 MG10 [INSTALLATION/PERFORMANCE]” is an installation and performance in which unexpected and uncontrollable analog signals are altered and “bent” to create an audio/video noise image that in turn creates a time-based sculpture. With “Soundform No.1,” Yasuaki Kakehi (JP), Mikhail Mansion (US), and Kuan-Ju Wu (US) form a minimalist sound landscape and kinetic art installation of warmth, light, and movement.
Sunday, September 8, 2019 / Donaupark, POSTCITY
At POSTCITY, Roberto Viola (IT) of the European Commission will open a whole series of talks with his statement. Christine Bauer (AT) from JKU and Peter Knees (AT) from TU Vienna conduct research on music information retrieval and will give an introduction to the field and to the current state of the technology. Philippe Esling (FR), Jérôme Nika (FR), and Daniele Ghisi (IT) will talk about their research at the Institut de Recherche et Coordination Acoustique/Musique at the Centre Pompidou in Paris (IRCAM), which focuses on the power and limits of artificial neural networks. Jérôme Nika (FR), by contrast, focuses on introducing authoring, composition, and control into human-computer music co-improvisation. This work has led to numerous collaborations and musical productions, particularly in improvised music (Steve Lehman, Bernard Lubat, Benoît Delbecq, Rémi Fox) and contemporary music (Pascal Dusapin, Marta Gentilucci). Then we’ll hear from Nick Bryan-Kinns (UK) from Queen Mary University, Koray Tahiroğlu (FI) from Aalto University, and Ludger Brümmer (DE) from ZKM. Next up is application-oriented research on AI in the music industry, with François Pachet (FR) from Spotify, Vittorio Loreto (IT) from Sony CSL Paris, and Akira Maezawa (JP) from Yamaha. The final presenters will be the start-ups Amadeus Code, Endel, Fortunes, and Music Traveler.
Parallel to the talks, there will be workshops with Ali Nikrang (IR/AT) from the Ars Electronica Futurelab, Alex Braga (IT) of “A-MINT,” and Philippe Esling (FR) from IRCAM, as well as with Gerald Wirth (AT) from the Vienna Boys Choir and Vive Kumar (IN) from Athabasca University.
Episode by the River
On Sunday evening, the AI x Music Festival moves to the Donaupark. Under the title “Episode by the River,” Ars Electronica, the Bruckner Orchestra, and the Brucknerhaus will stage an homage to the very first “Klangwolke.” As in 1979, the starting point for this sound journey will be the orchestra concert in the Great Hall of the Brucknerhaus, which will not only be transmitted to the outside world via the Klangwolke’s powerful sound system, but will also provide the sound material for Wolfgang “Fadi” Dorninger (AT), Ali Nikrang (IR/AT), Roberto Paci Dalò (IT), Rupert Huber (AT), Markus Poschner (DE), Sam Auinger (AT), and Fennesz (AT) & Lillevan (DE), who will use it to create new acoustic, analog, and digital sound spaces in the Donaupark.
Exhibitions during the entire AI x Music Festival
In addition to the concerts, performances, conferences, lectures, panels, and workshops, the entire AI x Music Festival can also be experienced through a series of exhibitions and presentations. Founders, CEOs of leading companies, counterculture protagonists, scientists, and artists will be presenting their products, prototypes, and projects at POSTCITY.
EXHIBITION / ART
Domhnaill Hernon (IE), head of Experiments in Art and Technology at Nokia Bell Labs, and artist and beatboxer Reeps One (UK) make it clear that, contrary to the public debate, the development of AI systems is not about replacing people but about complementing and supporting them in a meaningful way—above all in the creative industries.
Alex Braga’s (IT) “A-MINT” is a new kind of adaptive artificial musical intelligence that, for the first time, is able to crack the improvisational code of any musician in real time and improvise with them; music and videos are created during the performance without preset patterns, pitch, or BPM.
The recently reopened Ars Electronica Center, in turn, will be showing an entire exhibition on the subject. “AI x Music” traces the history of music and thus that of the instruments, tools, and apparatus used for its performance, recording, and reproduction. The show spans an arc from the first string and wind instruments of antiquity to today’s digital synthesizers, from the wax cylinders and soot-covered glass plates of the first precursors of the gramophone to the digital streaming services of the internet. All of this ultimately leads to AI and machine learning and, in turn, to new possibilities for creative design that artists all over the world are already using. “AI x Music” makes it clear that this is not just about technological phenomena, but about fundamental questions of the relationship between humans and machines.
EXHIBITION / INDUSTRY, start-ups and established companies
With “Music Traveler,” Dominik Joelsohn (DE/AT), Aleksey Igudesman (DE/AT), and Julia Rhee (KR/US) will be presenting a peer-to-peer platform that helps musicians find and book the next available rehearsal rooms, recording studios, and concert halls quickly and easily.
With “Amadeus Code,” Taishi Fukuyama (JP) will be presenting an AI-based assistant for songwriting. The mobile app offers almost unlimited inspiration for topline melodies via different chord sequences and creates sketches of new musical compositions.
With “Endel,” Oleg Stavitsky (RU) will be presenting a technology that creates personalized soundscapes to reduce stress, increase focus, and improve sleep. All sounds are generated in real time and influenced by location, time of day, heart rate, and cadence, all of which are captured by the smartphone.
“0W1 Audio” by Jean Beauve (FR) is a music start-up that develops IoT audio platforms designed to deliver remarkably natural sound.
EXHIBITION / RESEARCH
Matthias Röder (DE) and the Karajan Institute are constantly pushing the boundaries of new technologies in the creation, dissemination, and reception of music. As part of the AI x Music Festival, he will report on the “AI and Music Hackathon” he initiated for young creatives and tech enthusiasts.
“Computers that Learn to Listen” is a video showing some of the results of scientific research on AI and music conducted at the Institute for Computational Perception of the Johannes Kepler University Linz, directed by Gerhard Widmer (AT). Based on the latest advances in machine learning, computers learn to “listen to” and “understand” music, to recognize beat and rhythm, and to immediately identify pieces of music from a few played notes.
Koray Tahiroglu (FI/TR) of Aalto University presents “NOISA” (Network of Intelligent Sonic Agents), an AI-based interactive music system whose sound agents support music performances. Each sound agent has a machine learning model for predicting and adjusting its performance behavior.