One of the major challenges for today's synthesizer market and for sound synthesis in general is to propose new forms of synthesis that allow the creation of brand-new sonorities while offering musicians more intuitive, perceptually meaningful controls to help them find the sound they are looking for more easily. Today's synthesizers are powerful tools that offer musicians a wide range of possibilities for creating sound textures, but their parameter controls remain unintuitive and generally require expert knowledge to manipulate. This presentation focuses on machine learning methods for sound synthesis that enable the generation of new, high-quality sounds while providing perceptually relevant control parameters.
The first part of this talk focuses on the perceptual characterization of synthetic musical timbre, highlighting a set of verbal descriptors that musicians use frequently and consensually. The second part explores the use of machine learning algorithms for sound synthesis, in particular several "autoencoder"-type models, for which we carried out an in-depth comparative study on two different datasets. The talk then turns to the perceptual regularization of the proposed model, based on the perceptual characterization of synthetic timbre presented in the first part, to enable (at least partially) perceptually relevant control of sound synthesis. Finally, the last part briefly presents some of the most recent experiments we conducted with newer neural synthesis models.
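To make the idea of "perceptual regularization" concrete, here is a deliberately toy NumPy sketch, not the talk's actual models (which are neural autoencoders trained on audio data). It trains a small linear autoencoder on random feature vectors while adding a penalty that ties one latent dimension to a simulated perceptual descriptor (standing in for, say, a brightness rating); the descriptor, the data, and all dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sound" dataset: 200 samples of 16 features (a stand-in for real
# audio descriptors in the actual work).
n, d, k = 200, 16, 4
X = rng.normal(size=(n, d))
# Hypothetical perceptual descriptor (e.g. a per-sound brightness rating),
# simulated here as a fixed linear projection of the features.
b = (X @ rng.normal(size=d)) * 0.1

W_e = rng.normal(scale=0.1, size=(d, k))  # encoder weights
W_d = rng.normal(scale=0.1, size=(k, d))  # decoder weights
beta, lr = 0.5, 0.05                      # perceptual weight, step size

def losses(W_e, W_d):
    Z = X @ W_e
    X_hat = Z @ W_d
    rec = np.mean((X - X_hat) ** 2)        # reconstruction error
    per = np.mean((Z[:, 0] - b) ** 2)      # tie latent dim 0 to descriptor
    return rec, per

history = []
for _ in range(300):
    Z = X @ W_e
    X_hat = Z @ W_d
    # Gradients of rec = mean((X - X_hat)^2) over all n*d entries.
    g_Xhat = 2.0 * (X_hat - X) / (n * d)
    g_Wd = Z.T @ g_Xhat
    g_Z = g_Xhat @ W_d.T
    # Gradient of the perceptual penalty, acting only on latent dim 0.
    g_Z[:, 0] += beta * 2.0 * (Z[:, 0] - b) / n
    g_We = X.T @ g_Z
    W_e -= lr * g_We
    W_d -= lr * g_Wd
    rec, per = losses(W_e, W_d)
    history.append(rec + beta * per)
```

After training, moving a sound along latent dimension 0 moves it along the chosen perceptual axis, which is the kind of control the regularization is meant to provide; the real models replace the linear maps with deep networks and the simulated descriptor with descriptors gathered from listeners.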
28 November 2024
Institut de Recherche et de Coordination Acoustique/Musique