Jérôme Nika defends his doctoral thesis, carried out within the Musical Representations team at Ircam (STMS - CNRS/IRCAM/UPMC), entitled:
"Guiding human-computer music improvisation: introducing authoring and controls with temporal scenarios"
The thesis defense takes place before a jury composed of:
Supervisors:
Gérard Assayag, STMS - CNRS/IRCAM/UPMC, Paris
Marc Chemillier, CAMS - EHESS, Paris
Reviewers:
Myriam Desainte-Catherine, Université de Bordeaux
Shlomo Dubnov, University of California San Diego
Examiners:
Gérard Berry, Collège de France, Paris
Emmanuel Chailloux, Université Pierre et Marie Curie, Paris
George Lewis, Columbia University, New York
Abstract:
This thesis focuses on the introduction of structures, authoring, and controls in human-computer music improvisation through the use of temporal scenarios to guide or compose interactive performances, and addresses the dialectic between planning and reactivity in interactive music systems dedicated to improvisation. This work builds on previous research on machine improvisation, seen as the navigation through a musical “memory” which may consist of an offline corpus or of the continuous capture of the live music played by a human musician co-improvising with the system during a performance. That research was mainly dedicated to free, generally non-pulsed, improvisation; the work presented here focuses on idiomatic music, which generally follows a defined pulse, and extends to the general topic of composed improvisational frames, thus moving beyond the issue of practicing established idioms.
Various repertoires of improvised music rely on a formalized and temporally structured object, for example a harmonic progression in jazz improvisation. In the same way, the models and architecture we developed rely on a formal temporal structure. First, we propose a music generation model guided by a formal sequence called a “scenario”. The musical purpose of the scenario is to address issues of acceptability regarding the stylistic norms and aesthetic values implicitly carried by the musical idiom it refers to, and to introduce anticipatory behaviors in the generation process. Using the formal genericity of the “scenario / memory” pair, we sketch a protocol to compose improvisation sessions or offline musical material at the scenario level. In this framework, musicians for whom the definition of a musical alphabet and of scenarios is part of the creative process can be involved upstream, in the design of the musical language of the machine.

We also present a dynamic architecture embedding such generation processes with formal specifications in order to combine long-term planning, anticipatory behaviors, and reactivity in a context of guided improvisation. In this context, a “reaction” is considered as a revision of mid-term anticipations in the light of external events. This architecture includes an adaptive rendering module that makes it possible to synchronize the improvisations generated by the models with a non-metronomic, fluctuating pulse, and introduces expressive temporal controls.
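To make these ideas concrete, here is a minimal, hypothetical sketch of scenario-guided generation and of a “reaction” as the revision of mid-term anticipations. The names (MemoryEvent, generate, revise_from) and the greedy label-matching strategy are illustrative assumptions for this announcement, not the actual ImproteK model, which relies on richer navigation and anticipation mechanisms.

```python
# Illustrative sketch only: scenario-guided navigation of an annotated "memory",
# with a "reaction" modeled as rewriting mid-term anticipations from a given point.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MemoryEvent:
    label: str       # symbol of the musical alphabet (e.g. a chord label)
    content: object  # musical content attached to that label (phrase, slice, ...)

def generate(scenario: List[str], memory: List[MemoryEvent],
             start: int = 0) -> List[Optional[MemoryEvent]]:
    """Anticipate the remaining scenario by navigating the memory.

    At each scenario step, prefer to continue linearly in the memory
    (anticipatory behavior); otherwise jump to any event carrying the
    required label; otherwise leave a gap (None).
    """
    output, pos = [], None
    for label in scenario[start:]:
        if pos is not None and pos + 1 < len(memory) and memory[pos + 1].label == label:
            pos += 1  # continuity: follow the memory
        else:
            candidates = [i for i, e in enumerate(memory) if e.label == label]
            pos = candidates[0] if candidates else None  # jump, or give up
        output.append(memory[pos] if pos is not None else None)
    return output

def revise_from(anticipations, scenario, memory, reaction_time: int):
    """A 'reaction': keep what has already been played and rewrite the
    mid-term anticipations from the reaction point onward."""
    return anticipations[:reaction_time] + generate(scenario, memory, start=reaction_time)

# Usage: a jazz-like scenario over a tiny memory (offline corpus or live capture).
memory = [MemoryEvent(l, f"phrase-{i}")
          for i, l in enumerate(["Dm7", "G7", "Cmaj7", "Cmaj7", "A7"])]
scenario = ["Dm7", "G7", "Cmaj7", "A7", "Dm7", "G7", "Cmaj7"]
plan = generate(scenario, memory)                               # long-term anticipation
plan = revise_from(plan, scenario, memory, reaction_time=3)     # external event at step 3
```

In this toy version the whole scenario is anticipated in advance, and an external event only triggers a regeneration of the future part of the plan, which is the sense in which reactivity and long-term planning coexist in the architecture described above.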
Finally, this work fully integrated the results of frequent interactions with expert musicians into the iterative design of the models and architectures. The latter are implemented in the interactive music system ImproteK, one of the offspring of the OMax system, which was used on various occasions during live performances with expert improvisers. During these collaborations, work sessions were combined with listening sessions and interviews to gather the musicians' assessments of the models, in order to validate and refine the scientific and technological choices.