The Artificial Creative Intelligence and Data Science (ACIDS) team at IRCAM seeks to model musical creativity by focusing on the properties of audio mixtures. To this end, it explores the intersections between symbolic (score) and signal (audio) representations, in order to understand and control the diversity of musical information. After introducing the foundations of modeling creativity through mathematical probability, we will discuss the disentanglement of the manifold factors of audio variation. We will explore several models and musical pieces created by our team, traveling through topological sound spaces, working with audio waveforms and scores, and controlling audio synthesizers with our voice.
Biography:
Philippe Esling (FR)
Philippe Esling received a B.Sc. in mathematics and computer science in 2007, an M.Sc. in acoustics and signal processing in 2009, and a PhD in data mining and machine learning in 2012. He was a post-doctoral fellow in the Department of Genetics and Evolution at the University of Geneva in 2012. Since 2013, he has been a tenured associate professor at the IRCAM laboratory and Sorbonne Université. In this short time span, he has authored and co-authored over 20 peer-reviewed papers in prestigious journals. He received a young researcher award for his work on audio querying in 2011, a PhD award for his work on multiobjective time series data mining in 2013, and several best paper awards since 2014. In applied research, he developed and released Orchids, the first computer-aided orchestration software, commercialized in fall 2014, which already has a worldwide community of thousands of users and has led to musical pieces by renowned composers performed at international venues. He is the lead investigator of machine learning applied to music generation and orchestration, and directs the recently created Artificial Creative Intelligence and Data Science (ACIDS) team at IRCAM.