Between interaction and generation: new perspectives on generative sound environments via localized structured prediction (01:01:53)
- Series: Ircam Talks
- Season 2019-2020: « Entre interaction et génération » by Giulia Lorusso and Alessandro Rudi
- Oct. 22, 2019
- Giulia Lorusso (composer, speaker)
- Alessandro Rudi (researcher, speaker)
- Benjamin Lévy (computer music designer)
- Jean-Louis Giavitto (researcher)
- Gérard Assayag (researcher)
By leveraging state-of-the-art audio signal descriptors and recent developments in generative models for structured prediction and deep learning, this project aims to probe questions of computational creativity, exploring some of the many applications of these technologies in the field of generative sound art.
In particular, the project investigates the integration of environmental sound analysis and recognition techniques with the latest generative machine learning models, in order to build a system that, once trained on a large corpus of suitably selected samples, can discover emerging patterns in a given audio input and transfigure them into something unexpected. This opens new artistic perspectives on the interaction between computer-generated sound systems and the surrounding environment, with the potential for “creative” yet coherent positive feedback loops.
The project will deal with the following core aspects:
- environmental sound analysis and recognition
- audio classification applied to complex sound scenes
- audio features and modeling for environmental sounds
- machine learning for large scale and structured data
- generative framework via structured prediction techniques exploiting (multilevel) local structure in the data
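To make the last point concrete, here is a minimal, self-contained sketch (not the authors' implementation) of a standard structured prediction estimator of the kind this framework builds on: predictions take the form ŷ(x) = argmin over candidate outputs y of Σᵢ αᵢ(x)·loss(y, yᵢ), where the weights α(x) come from kernel ridge regression over the training inputs. All names, the toy data, and the squared-error loss are illustrative assumptions; a real system would use audio features as inputs and sound patterns as outputs.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # Pairwise Gaussian kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_weights(X_train, reg=1e-3, sigma=0.3):
    # Precompute (K + n*reg*I)^-1, used to form the weights alpha(x).
    n = len(X_train)
    K = gaussian_kernel(X_train, X_train, sigma)
    return np.linalg.solve(K + n * reg * np.eye(n), np.eye(n)), sigma

def predict(x, X_train, Y_train, W, sigma, loss):
    # alpha(x) = (K + n*reg*I)^-1 k(x); return the candidate output that
    # minimizes the alpha-weighted empirical loss over training outputs.
    alpha = W @ gaussian_kernel(X_train, x[None, :], sigma)[:, 0]
    scores = [sum(a * loss(y, yi) for a, yi in zip(alpha, Y_train))
              for y in Y_train]
    return Y_train[int(np.argmin(scores))]

# Toy demo: 1-D "feature" inputs mapped to short 3-step "patterns",
# with squared error as the structured loss.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 1))
Y_train = np.stack([np.array([x[0], x[0] ** 2, np.sin(3 * x[0])])
                    for x in X_train])
W, sigma = fit_weights(X_train)
loss = lambda y, z: float(((y - z) ** 2).sum())
y_hat = predict(np.array([0.5]), X_train, Y_train, W, sigma, loss)
print(y_hat)  # a 3-step pattern close to the one generated by x = 0.5
```

The "localized" variants discussed in the talk go further by decomposing inputs and outputs into local parts (e.g. per-frame or per-region) and applying such an estimator part by part, which is what the "(multilevel) local structure" bullet refers to.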