The aim of this research project is to model the multivariate information structures inherent in multiple sound signals through different machine learning methods. Here, we consider structure to be any underlying sequence that constitutes a higher-level abstraction of an original input sequence. In musical audio signals, this includes both the high-level properties of sound mixtures (e.g. chord progressions, key changes, thematic organization) and the resulting audio signal (e.g. the emergent timbral properties well known in orchestration).
Our target application is software that interacts in real time with a musician by inferring expected structures (e.g. a chord progression).
To achieve this goal, we divided the project into two main tasks: a listening module and a symbolic generation module. The listening module extracts the musical structure played by the musician, whereas the generative module predicts musical sequences based on the extracted features.
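To make the division of labor concrete, the symbolic generation module can be illustrated with a minimal sketch: a first-order Markov model over chord symbols that, given the chord just extracted by the listening module, infers the most likely continuation. The `ChordPredictor` class and all names here are illustrative assumptions, not the project's actual implementation.

```python
from collections import Counter, defaultdict

class ChordPredictor:
    """Toy first-order Markov model over chord symbols (illustrative sketch)."""

    def __init__(self):
        # transitions[prev][next] = number of times `next` followed `prev`
        self.transitions = defaultdict(Counter)

    def train(self, sequence):
        """Count chord-to-chord transitions in an extracted progression."""
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, chord):
        """Return the most frequently observed successor of `chord`, or None."""
        counts = self.transitions.get(chord)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Usage: train on a chord progression produced by the listening module,
# then query the expected continuation as new chords arrive in real time.
model = ChordPredictor()
model.train(["C", "Am", "F", "G", "C", "Am", "F", "G", "C"])
print(model.predict("F"))  # → G
```

A real system would replace the first-order counts with a richer sequence model, but the interface stays the same: extracted structure in, predicted structure out.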