Information

Type
Seminar / Conference
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
48 min
Date
6 November 2019

The aim of this research project is to model the multivariate information structures inherent in multiple sound signals through different machine learning methods. Here, we consider a structure to be any underlying sequence that constitutes a higher-level abstraction of an original input sequence. In musical audio signals, this includes both the high-level properties of sound mixtures (e.g. chord progressions, key changes, thematic organization) and the resulting audio signal itself (e.g. the emergent timbral properties well known in orchestration).
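
To illustrate what such a higher-level abstraction can look like in practice, the minimal sketch below maps an audio signal to a sequence of chord labels by matching chroma frames against binary triad templates. This is an illustrative assumption only: the use of librosa, the template-matching approach, the major/minor triad vocabulary, and the function names (chord_templates, extract_chord_sequence) are not taken from the talk.

    import numpy as np
    import librosa

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def chord_templates():
        # Binary chroma templates for the 24 major/minor triads (hypothetical vocabulary).
        templates, labels = [], []
        for root in range(12):
            for quality, intervals in (("maj", (0, 4, 7)), ("min", (0, 3, 7))):
                t = np.zeros(12)
                t[[(root + i) % 12 for i in intervals]] = 1.0
                templates.append(t / np.linalg.norm(t))
                labels.append(f"{NOTE_NAMES[root]}:{quality}")
        return np.array(templates), labels

    def extract_chord_sequence(audio_path):
        # Map each chroma frame to the closest triad template (cosine similarity).
        y, sr = librosa.load(audio_path, sr=None, mono=True)
        chroma = librosa.feature.chroma_cqt(y=y, sr=sr)              # shape (12, n_frames)
        templates, labels = chord_templates()
        chroma = chroma / (np.linalg.norm(chroma, axis=0, keepdims=True) + 1e-9)
        scores = templates @ chroma                                  # shape (24, n_frames)
        return [labels[i] for i in scores.argmax(axis=0)]

The resulting frame-level chord labels are exactly the kind of underlying sequence the abstract refers to: a compact, symbolic abstraction of the original audio input.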

Our application case is to develop software that interacts in real time with a musician by inferring expected structures (e.g. a chord progression).
To achieve this goal, we divided the project into two main modules: a listening module and a symbolic generation module. The listening module extracts the musical structure played by the musician, whereas the generative module predicts musical sequences based on the extracted features.
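
As a rough sketch of how the symbolic generation module could consume the listening module's output, the toy predictor below learns first-order chord-transition counts and proposes likely continuations. The Markov formulation and the class name ChordPredictor are illustrative assumptions; the abstract does not specify the generative models actually used.

    from collections import Counter, defaultdict

    class ChordPredictor:
        """Toy first-order Markov predictor over chord labels (illustrative only)."""

        def __init__(self):
            self.transitions = defaultdict(Counter)   # chord -> Counter of next chords

        def fit(self, chord_sequences):
            # Count transitions in a corpus of chord-label sequences.
            for seq in chord_sequences:
                for current, nxt in zip(seq, seq[1:]):
                    self.transitions[current][nxt] += 1

        def predict_next(self, chord, k=3):
            # Return up to k likely continuations with their relative frequencies.
            counts = self.transitions.get(chord)
            if not counts:
                return []
            total = sum(counts.values())
            return [(label, n / total) for label, n in counts.most_common(k)]

    # Toy usage: fit on two short progressions and query a continuation.
    predictor = ChordPredictor()
    predictor.fit([["C:maj", "A:min", "F:maj", "G:maj", "C:maj"],
                   ["C:maj", "F:maj", "G:maj", "C:maj"]])
    print(predictor.predict_next("G:maj"))   # -> [('C:maj', 1.0)]

In a real-time setting, the listening module would stream chord labels into such a predictor, and the predicted continuation would drive the interaction with the musician.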


Tristan Carsault : Structure discovery in multivariate musical audio signals through machine learning

Speakers

Tristan Carsault