information

events
Pavlos Antoniadis
Type
Seminar / Lecture
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
55 min
Date
October 15, 2018

In my PhD thesis I proposed a novel paradigm of pianists' interaction with complex music notation, which I call embodied navigation. Its novelty lies in rethinking the classic notion of interpretation as interaction, and performance itself as a dynamic system. The primacy of performers' embodied experience and the inherent plasticity of music notation are the paradigm's central features: embodiment is shown to constantly shape the comprehension of notation and to transform it in real time.

GesTCom (Gesture Cutting through Textual Complexity), developed at IRCAM since 2014 in collaboration with the Interaction-Son-Musique-Mouvement team, materializes the embodied-navigation paradigm in a dedicated interactive system: a modular, sensor-based environment for the analysis, processing and real-time control of complex piano notation through multimodal recordings. In terms of hardware, it comprises systems for capturing movement, audio, video, MIDI and capacitive data from sensors on the piano keys. In terms of software, it offers modules for the capture, analysis and control of the multimodal data, and modules for the augmentation and interactive control of music notation. Each of these components functions both as a stand-alone tool and as part of the general methodology of embodied navigation.
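To make the multimodal-recording idea concrete, here is a minimal sketch in Python (standard library only) of how such an environment might timestamp samples from independent capture streams on a shared clock, so that movement, MIDI and key-sensor data can later be aligned with the notation. All names are hypothetical illustrations, not the actual GesTCom implementation.

    import time

    class MultimodalRecorder:
        # Hypothetical sketch: timestamp samples from independent capture
        # streams (movement, audio, video, MIDI, key sensors) against a
        # shared monotonic clock so they can be aligned afterwards.
        def __init__(self, streams):
            self.t0 = time.monotonic()
            self.data = {name: [] for name in streams}

        def push(self, stream, sample):
            # Record (elapsed time, sample) pairs per stream.
            self.data[stream].append((time.monotonic() - self.t0, sample))

        def aligned(self):
            # Merge all streams into a single time-ordered event list.
            events = [(t, name, sample)
                      for name, samples in self.data.items()
                      for t, sample in samples]
            return sorted(events, key=lambda e: e[0])

    # Usage: each capture device pushes into its own stream.
    rec = MultimodalRecorder(["movement", "midi", "keys"])
    rec.push("midi", {"note": 60, "velocity": 90})
    rec.push("movement", {"x": 0.12, "y": 0.45})
    for t, stream, sample in rec.aligned():
        print(f"{t:.4f}s  {stream}: {sample}")

The design point is the shared monotonic time base: once every stream is expressed on one clock, any pair of modalities can be cross-referenced, which is what makes gesture data usable as a control signal over the score.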

After an overview of the theoretical framework in embodied cognition and human-machine interaction, the talk will focus on GesTCom. I will present its technical features; its initial goals regarding representation and interaction; its current applications, including performance analysis, embodied interactive learning, contemporary composition, free improvisation, piano pedagogy and score following; and its future directions, including dissemination in selected communities of performers, web-based collaborative learning, further refinement through the integration of machine learning and MIR techniques, application in studies of sensorimotor learning and prediction, and the creation of interactive systems that learn along with the performer in a human, embodied way.


Pavlos Antoniadis presents his thesis, defended in June 2018.

