Information

Type
Seminar / Conference
Location
Ircam, Salle Igor-Stravinsky (Paris)
Date
November 7, 2024

The research project led by the ACIDS group at IRCAM aims to model musical creativity by extending probabilistic learning approaches to multivariate and multimodal time series. Our main object of study lies in the properties and perception of musical synthesis and artificial creativity. In this context, we experiment with deep AI models applied to creative materials, aiming to develop artificial creative intelligence. Over the past years, we have developed several objects that embed this research directly as real-time objects usable in Max/MSP. Our team has produced many prototypes of innovative instruments and musical pieces in collaboration with renowned composers. However, the often-overlooked downside of deep models is their massive complexity and tremendous computation cost. This aspect is especially critical in audio applications, which heavily rely on specialized embedded hardware with real-time constraints. Hence, the lack of work on efficient, lightweight deep models is a significant limitation to the real-life use of deep models on resource-constrained hardware. We show how these objectives can be attained through several recent theories (the lottery ticket hypothesis (Frankle and Carbin, 2018), mode connectivity (Garipov et al., 2018), and information bottleneck theory) and demonstrate how our research led to lightweight and embedded deep audio models, namely 1/ Neurorack, the first deep AI-based Eurorack synthesizer; 2/ FlowSynth, a learning-based device that lets you travel the auditory spaces of synthesizers simply by moving your hand; and 3/ RAVE on Raspberry Pi, 48kHz real-time embedded deep synthesis.
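To give a concrete sense of the first of these theories, the sketch below outlines lottery-ticket-style iterative magnitude pruning (Frankle and Carbin, 2018) in PyTorch: train, prune the smallest surviving weights, rewind the survivors to their initial values, and repeat. It is a minimal, hypothetical illustration of the general technique; the `train` callback, pruning fraction, and number of rounds are assumptions, and it is not the ACIDS implementation.

```python
# Minimal sketch of lottery-ticket-style iterative magnitude pruning.
# Assumes a generic PyTorch model and a user-supplied train() routine.
import copy
import torch
import torch.nn as nn

def lottery_ticket_prune(model: nn.Module, train, rounds: int = 3, prune_frac: float = 0.2):
    """Iteratively train, prune the smallest-magnitude weights, and rewind survivors to init."""
    init_state = copy.deepcopy(model.state_dict())  # theta_0: the original initialization
    # One binary mask per weight tensor (biases and 1-D parameters are left unpruned).
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train(model)  # train to convergence; a full implementation would also
                      # re-apply the masks after every optimizer step
        for name, param in model.named_parameters():
            if name not in masks:
                continue
            # Prune the smallest-magnitude weights that are still alive in this layer.
            alive = param[masks[name].bool()].abs()
            k = int(prune_frac * alive.numel())
            if k == 0:
                continue
            threshold = alive.kthvalue(k).values
            masks[name] = masks[name] * (param.abs() > threshold).float()
        # Rewind surviving weights to their initial values (the "winning ticket").
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, param in model.named_parameters():
                if name in masks:
                    param.mul_(masks[name])
    return model, masks
```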
