Jordi BONADA, from Universitat Pompeu Fabra, Barcelona (Music Technology Group), invited by the Analyse et synthèse des sons team (STMS - CNRS/IRCAM/UPMC) to serve on the thesis committee of Luc Ardillon, presents:
"A Neural Parametric Singing Synthesizer Modeling Timbre and Expression from Natural Songs"
ABSTRACT:
We recently presented a new model for singing synthesis based on a modified version of the WaveNet architecture. Instead of modeling the raw waveform, we model features produced by a parametric vocoder that separates the influence of pitch and timbre. This allows conveniently modifying pitch to match any target melody, facilitates training on more modest dataset sizes, and significantly reduces training and generation times. Nonetheless, compared to modeling the waveform directly, ways of effectively handling higher-dimensional outputs, multiple feature streams, and regularization become more important with our approach. In this work, we extend our proposed system to include additional components for predicting F0 and phonetic timings from a musical score with lyrics. These expression-related features are learned together with timbral features from a single set of natural songs. We compare our method to existing statistical parametric, concatenative, and neural network-based approaches using quantitative metrics as well as listening tests.
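As a rough illustration of the kind of architecture the abstract describes, the sketch below shows a WaveNet-style stack of gated, dilated 1-D convolutions that maps conditioning inputs (e.g., phonetic and F0 features) to frames of vocoder parameters rather than raw audio samples. This is a minimal sketch under assumed dimensions and layer counts, not the authors' published configuration; all names and sizes here are illustrative.

```python
# Minimal sketch of a WaveNet-style predictor of vocoder feature frames.
# All layer sizes and feature dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class DilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        # Gated activation unit as in WaveNet: tanh(filter) * sigmoid(gate).
        self.filter = nn.Conv1d(channels, channels, kernel_size=2,
                                dilation=dilation, padding=dilation)
        self.gate = nn.Conv1d(channels, channels, kernel_size=2,
                              dilation=dilation, padding=dilation)
        self.residual = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Crop the right-padded frames so each output frame depends only on
        # the current and past inputs (causal convolution) and the sequence
        # length is preserved for the residual connection.
        t = x.size(-1)
        h = (torch.tanh(self.filter(x)[..., :t])
             * torch.sigmoid(self.gate(x)[..., :t]))
        return x + self.residual(h)


class VocoderFeaturePredictor(nn.Module):
    """Maps conditioning features (phonetic + F0) to vocoder frames."""

    def __init__(self, cond_dim: int = 64, channels: int = 128,
                 out_dim: int = 60, num_layers: int = 8):
        super().__init__()
        self.input_proj = nn.Conv1d(cond_dim, channels, kernel_size=1)
        # Doubling dilations (1, 2, 4, ...) grow the receptive field
        # exponentially with depth, the key idea borrowed from WaveNet.
        self.blocks = nn.ModuleList(
            [DilatedResidualBlock(channels, dilation=2 ** i)
             for i in range(num_layers)]
        )
        self.output_proj = nn.Conv1d(channels, out_dim, kernel_size=1)

    def forward(self, cond: torch.Tensor) -> torch.Tensor:
        # cond: (batch, cond_dim, frames) -> (batch, out_dim, frames)
        h = self.input_proj(cond)
        for block in self.blocks:
            h = block(h)
        return self.output_proj(h)


# Usage: predict 60-dimensional vocoder frames for a 200-frame utterance.
model = VocoderFeaturePredictor()
frames = model(torch.randn(1, 64, 200))
print(frames.shape)  # torch.Size([1, 60, 200])
```

Because the network outputs low-rate vocoder frames instead of raw audio samples, each training example covers far fewer timesteps, which is one way to read the abstract's claims about modest dataset sizes and reduced training and generation times.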