Introduction to Posthuman Aesthetics
Mindaugas Gapševičius (LT/DE)

Installation with devices and videos (2016–2019)
These toolkits invite their users to carry out scientific experiments at a DIY level and to introduce simplified, accessible versions of them to a broader community. The aim is to outline methods for independent research, opening the black box of empirical experimentation to individuals across disciplines. Whether framing discussions of political, economic, or cultural issues, the toolkits question the creativity of non-humans and do not presume humans to be the only creative force at work.

Panel II: AI, more than a technology
Renata Schmidtkunz (DE), Markus Poschner (DE), Douglas Eck (US), François Pachet (FR)

AI is expected to open many new possibilities for creators, not replacing them but assisting and supporting their work. Expectations are even higher for businesses involved in the distribution of music. What are the consequences and implications? What kinds of new business models can we expect? How will this affect artists?

We Revolutionize Music Education: The Neuromusic Education Simulator (NES)
Gerald Wirth (AT), Wiener Sängerknaben / Vive Kumar (IN), Athabasca University (CA)

In cooperation with developmental psychologists and pedagogues, Professor Gerald Wirth developed his engagement-centric teaching methodology, the Wirth Method, which aims at sustained, high-level student attention. Supporting teaching with movement activates additional neuronal networks, and through repetition with variation, content is stored durably in long-term memory. NES, which is based on the Wirth Method and applies VR & AR, allows teachers and students, in addition to personal tuition, to practice, gain experience, and receive feedback.

ACIDS: Artificial Creative Intelligence
Philippe Esling (FR)

The Artificial Creative Intelligence and Data Science (ACIDS) team at IRCAM seeks to model musical creativity by targeting the properties of audio mixtures. This research studies the intersection between symbolic (score) and signal (audio) representations in order to understand and control the manifolds of musical information.

Automatic Music Generation with Deep Learning – Fascination, challenges, constraints
Ali Nikrang (AT)

In recent years, there has been a great deal of academic interest in applying Deep Learning to creative tasks such as generating texts, images, or music, with fascinating results. Technically speaking, Deep Learning models can only learn the statistics of the data. In doing so, however, they often capture relationships in the data that human observers have not been aware of, and can therefore serve as a new source of inspiration for human creativity. This workshop focuses on current technical approaches to automatic music generation.
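
A common way to make "learning the statistics of the data" concrete, offered here only as an illustrative formulation and not necessarily the specific approach covered in the workshop, is autoregressive sequence modelling: a network with parameters $\theta$ is trained to predict each musical event from the events that precede it by maximizing the likelihood of the training corpus,

$$ p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_1, \dots, x_{t-1}), $$

and new material is then generated by sampling events one at a time from the learned conditional distributions.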

Creating interactive audio systems with Bela
Andrew McPherson (UK)

The workshop will provide an introduction to Bela, an open-source embedded hardware platform for creating interactive audio systems. Participants will get a hands-on introduction to building circuits and programming with Bela, following a series of example projects that introduce the basics of building real-time audio systems.
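
To give a flavor of what programming on the platform looks like, here is a minimal sketch based on Bela's public C++ API (an assumed example, not taken from the workshop materials): Bela calls a user-supplied render() function for every block of audio frames, and the code below writes a quiet 440 Hz sine wave to all audio outputs.

#include <Bela.h>
#include <cmath>

float gPhase = 0.0f;        // current oscillator phase in radians
float gFrequency = 440.0f;  // sine frequency in Hz

bool setup(BelaContext *context, void *userData)
{
	return true;  // nothing to initialise for this example
}

void render(BelaContext *context, void *userData)
{
	for(unsigned int n = 0; n < context->audioFrames; ++n) {
		float out = 0.1f * sinf(gPhase);
		gPhase += 2.0f * (float)M_PI * gFrequency / context->audioSampleRate;
		if(gPhase >= 2.0f * (float)M_PI)
			gPhase -= 2.0f * (float)M_PI;
		// write the same sample to every audio output channel
		for(unsigned int ch = 0; ch < context->audioOutChannels; ++ch)
			audioWrite(context, n, ch, out);
	}
}

void cleanup(BelaContext *context, void *userData)
{
}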

Recommenders and Intelligent Tools in Music Creation: Why, Why Not, and How?
Christine Bauer (AT), Peter Knees (AT), Richard Vogl (AT), Hansi Raber (AT)

This workshop will highlight the role of Artificial Intelligence, Machine Learning-supported composition, and Recommender Systems in the process of music creation. We discuss their reception and prevalent image among professional music producers and creators, including the potential threats these technologies pose to their artistic originality. We contrast this view by emphasizing the potential of AI technology to democratize music making by lowering the barrier to entry for music creation.

Digital Musical Interactions
Koray Tahiroğlu (FI/TR)

Today, digital technologies and advanced computational features, such as machine learning and artificial intelligence (AI) tools, are shaping our relationship with music and opening up new possibilities for musical instruments and interfaces. In this workshop we ask: what does our relationship with music and musical instruments look like today?

Computer Music Design and Research – IRCAM Workshop
Jérôme Nika (FR), Daniele Ghisi (IT)

Computer music designer, musician, and researcher Jérôme Nika (FR) will present DYCI2, the generative agents / software instruments he develops in collaboration with IRCAM and in interaction with expert improvisers. These agents offer a continuum of strategies ranging from pure autonomy to meta-composition, thanks to an abstract "scenario" structure.
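
To give a rough intuition of what such a "scenario" can do, the toy C++ sketch below (an illustration of the general idea of scenario-guided navigation, not the DYCI2 algorithm or its API) generates a sequence by walking through an annotated memory of musical events so that the output follows a given sequence of labels, preferring to continue contiguous segments of the memory when possible.

#include <iostream>
#include <string>
#include <vector>

struct Event {
	std::string label;    // annotation, e.g. a chord symbol (hypothetical)
	std::string content;  // placeholder for the musical material itself
};

// For each scenario label, reuse the next memory event if its label matches
// (favoring continuity); otherwise jump to the first matching event.
// An index of -1 marks a label with no match in the memory.
std::vector<int> navigate(const std::vector<Event>& memory,
                          const std::vector<std::string>& scenario)
{
	std::vector<int> path;
	int prev = -1;
	for(const std::string& label : scenario) {
		int next = -1;
		if(prev >= 0 && prev + 1 < (int)memory.size()
		   && memory[prev + 1].label == label) {
			next = prev + 1;  // continue the current segment of the memory
		} else {
			for(int i = 0; i < (int)memory.size(); ++i)
				if(memory[i].label == label) { next = i; break; }  // jump
		}
		path.push_back(next);
		prev = next;
	}
	return path;
}

int main()
{
	std::vector<Event> memory = {
		{"C", "phrase 1"}, {"F", "phrase 2"}, {"G", "phrase 3"}, {"C", "phrase 4"}};
	std::vector<std::string> scenario = {"C", "F", "G", "C", "F"};
	for(int idx : navigate(memory, scenario))
		std::cout << (idx >= 0 ? memory[idx].content : "(no match)") << "\n";
	return 0;
}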

The Art of Intelligent Interruption and Augmented Relationships
Harry Yeff (UK) & Domhnaill Hernon (IE), Nokia Bell Labs

Developing disruptive research for the next phase of human existence. What are the narratives that allow the world to embrace Augmented Intelligence, and do artists offer an answer? Harry Yeff walks us through his portfolio of interactive installations, creative uses of machine learning, and vocal performance to explore the concepts of intelligent interruption and augmented relationships.