The Artificial Creative Intelligence and Data Science (ACIDS) team at IRCAM seeks to model musical creativity by targeting the properties of audio mixtures. This research studies the intersection between symbol (score) and signal (audio) representations in order to understand and control the manifolds of musical information. After introducing the framework of modeling creativity through mathematical probabilities, we will discuss the question of disentangling the factors of variation in audio manifolds. We will detail several models and musical pieces produced by our team that allow us to travel through topological spaces of audio, to work with audio waveforms and scores alike, and to control audio synthesizers with our voice.
Biography:
Philippe Esling (FR)
Philippe Esling received a B.Sc. in mathematics and computer science in 2007, an M.Sc. in acoustics and signal processing in 2009, and a PhD in data mining and machine learning in 2012. He was a post-doctoral fellow in the Department of Genetics and Evolution at the University of Geneva in 2012. He has been an associate professor with tenure at the IRCAM laboratory and Sorbonne Université since 2013. In this short time span, he has authored and co-authored over 20 peer-reviewed papers in prestigious journals. He received a young researcher award for his work in audio querying in 2011, a PhD award for his work in multiobjective time series data mining in 2013, and several best paper awards since 2014. In applied research, he developed and released the first computer-aided orchestration software, called Orchids, commercialized in fall 2014, which already has a worldwide community of thousands of users and has led to musical pieces by renowned composers performed at international venues. He is the lead investigator of machine learning applied to music generation and orchestration, and directs the recently created Artificial Creative Intelligence and Data Science (ACIDS) team at IRCAM.