Techniques for the automatic generation of music – long focused on systems that operate on the score – are now being deployed at every level of sound representation: signal, gesture, symbol, and form. Sound synthesis with deep networks marks a radical break with conventional modelling approaches. How do composers handle these emerging possibilities?
- Facilitator: Hugues Vinet (FR), Director of Innovation and Research
- With Douglas Eck (US), Principal Scientist, Google Research
- Erin Gee (US), Composer and Professor, Brandeis University
- Giulia Lorusso (IT), Composer / Alessandro Rudi, Researcher, INRIA and École Normale Supérieure
- Jérôme Nika (FR), Researcher
- Alexander Schubert (DE), Musician, Performer, Composer
Douglas Eck (US)
Douglas Eck is a principal scientist at Google Research, currently working in the Paris office. He leads the Magenta Project, a Google Brain effort to create music, video, images and text using deep learning and reinforcement learning. He is also exploring related research in generative models for domains such as open-ended dialog and video generation. Before focusing on generative models for media, Doug worked in areas such as music perception, aspects of music performance, machine learning for large audio datasets, and music recommendation. He completed his PhD in Computer Science and Cognitive Science at Indiana University in 2000 and went on to a postdoctoral fellowship with Juergen Schmidhuber at IDSIA in Lugano, Switzerland. Before joining Google in 2010, Doug was faculty in Computer Science at the University of Montreal (MILA machine learning lab), where he became Associate Professor.
Erin Gee (US)
Cited as one of the most influential composer-vocalists of the 21st century, Erin Gee has been praised in The New Yorker, which wrote of her Mouthpiece series that the “Mouthpieces are so potent, influential, and appealing”. She has been awarded the Teatro Minimo prize from the Opernhaus Zürich, the International Rostrum of Composers award, a Guggenheim Fellowship, and an American Academy in Rome Prize, and has been commissioned by the Radio Symphony Orchestra Vienna, the Los Angeles Philharmonic New Music, Klangforum Wien, and others. The Mouthpiece series comprises more than 30 works devoid of semantic text or language. In these pieces, the articulatory possibilities of the mouth are mapped onto the instruments, mirroring and expanding the vocal sounds to form a “super mouth” that moves beyond the physical limitations of a single vocal tract, merging the voice with the breath and the instruments. Erin Gee is a professor of composition at Brandeis University.
Giulia Lorusso (IT)
Born in Rome in 1990, Giulia Lorusso studied piano and composition at the Conservatory “Giuseppe Verdi” of Milan and in Paris, where she attended the IRCAM Cursus and obtained a Master’s degree in composition at the CNSMDP. Between 2016 and 2018 she received commissions from the Spinola-Banna for the Arts Foundation (Turin, Italy) in co-production with IRCAM-Centre Pompidou, Bludenzer Tage zeitgemäßer Musik (Bludenz, Austria), Radio France, and ProQuartet. Her music has been performed in Italy and abroad (Festival Milano Musica, Festival ManiFeste at IRCAM, Tzlil Meudcan Festival in Tel Aviv, Bludenzer Tage zeitgemäßer Musik, Forum Tactus for young composers in Brussels) by ensembles such as Distractfold Ensemble, Quartetto Prometeo, Divertimento Ensemble, Ensemble Nikel, Ensemble KNM, mdi Ensemble, the Brussels Philharmonic Orchestra, and Ensemble Intercontemporain.
Alessandro Rudi (IT)
Alessandro Rudi is a researcher at INRIA and the École Normale Supérieure, Paris. He received his PhD in 2014 from the University of Genova, after a period as a visiting student at the Center for Biological and Computational Learning at MIT. From 2014 to 2017 he was a postdoctoral fellow at the Laboratory of Computational and Statistical Learning of the Italian Institute of Technology and the University of Genova.
Jérôme Nika (FR)
Jérôme Nika is a researcher in human-computer music interaction and a computer music designer. He holds diplomas from both ENSTA and Télécom Paris as well as an ATIAM master’s degree from IRCAM/Sorbonne University, and also studied music composition. He specialized in the application of computer science and signal processing to digital creation and music, first in his PhD research (“Prix Jeune Chercheur Science/Musique 2015”; “Prix Jeune Chercheur 2016”, Association Française d’Informatique Musicale) and then as a researcher at IRCAM. His research focuses on the notion of “musical memory”: its learning, modelling, and mobilization in a creative context, with particular attention to the composition of interaction and the dialectic between reactivity and planning in creative human-machine interaction. In 2019, he was awarded a research and production residency at Le Fresnoy. His research projects have given rise to numerous artistic collaborations and musical productions, notably in jazz and improvised music (Steve Lehman, Bernard Lubat, Benoît Delbecq, Rémi Fox) and contemporary music (Pascal Dusapin, Ensemble Modern, Marta Gentilucci). His project C’est pour ça, a duet with Rémi Fox, won the CNC’s DICRéAM grant for 2020. As a “digital luthier”, he develops software instruments (the DYCI2 library) in close collaboration with the Musical Representations team at IRCAM, in interaction with expert musicians. More than 60 concerts and artistic performances have used these tools since 2016 (Onassis Center, Athens, Greece; Ars Electronica Festival, Linz, Austria; Frankfurter Positionen festival, Frankfurt; Annenberg Center, Philadelphia, USA; Centre Pompidou, Collège de France, and Le Centquatre, Paris, France; Montreux Jazz Festival; etc.).
Alexander Schubert (DE)
Alexander Schubert studied bioinformatics and composition.
Schubert’s work explores cross-genre interfaces between acoustic and electronic music, combining styles such as hardcore, free jazz, popular electronic music, and techno with contemporary classical concepts. In his youth and early career, Schubert was active in these genres both in groups and as a solo artist. Performance pieces are also a major focus of his work: the use of the body in electronic music and the conveying of additional content through gestures are key features of his pieces, which aim to empower the performer and to achieve a maximum of energy, through the use of both sensors and visual media.
Apart from working as a composer and solo musician, Schubert is also a founding member of ensembles such as “Decoder”. Since 2011, he has taught live electronics at the conservatory in Lübeck. His works have been performed more than 400 times in recent years by numerous ensembles in over 25 countries.
Hugues Vinet (FR)
Hugues Vinet is Director of Innovation and Research Means at IRCAM, where he has been in charge of research and technological development activities since 1994. He is currently coordinator of the H2020 STARTS Residencies European project and curates the Vertigo Forum in the framework of Mutations/Créations at the Centre Pompidou.