information

Type
Seminar / Conference
performance location
Ircam, Salle Igor-Stravinsky (Paris)
duration
01 h 07 min
date
September 25, 2018

How do we learn to perform skilled gestures, and how can interactive technologies help us in our learning? This talk examines these questions in three domains: piano, Mandarin Chinese, and theremin.
I first present two projects that support learning the expressive dimension of piano playing: MirrorFugue and Andante. MirrorFugue simulates the presence of a virtual pianist whose reflection appears to play the physically moving keys, encouraging learners to pick up expressive gestures through imitation. Andante presents music as miniature figures that appear to walk and dance on the piano keyboard, helping children understand the expressivity of rhythms and phrasing in terms of familiar bodily movements.
Next, I discuss my current project on learning Mandarin tones using a vocal synthesizer controlled by hand gestures on a graphics tablet. Learning to identify and pronounce tones is one of the greatest difficulties for non-native learners of Mandarin. In collaboration with LAM (Lutheries - Acoustique - Musique) at UPMC, I developed a method that lets learners practice pronunciation by tracing visual guides derived from the frequency curves of native speakers. I present results from preliminary evaluations of this method.
Finally, I share personal findings from my experience of teaching myself to play the theremin over the past 16 months and describe the process of developing gesture vocabularies for intervals, musical motifs, and expressive phrasing. The talk concludes by reflecting on themes common to all three domains, including learning by imitation, the role of multimodality, repurposing movement, and gesture design.


Gesture Learning in Music and Language, by Xiao XIAO

Bio: Xiao XIAO is an inventor, artist, and human-computer interaction researcher. She completed a PhD at the MIT Media Lab, where she created technologies for music learning, drawing on her training in classical piano. Her work has been published at international academic conferences including CHI, TEI, and NIME. Passionate about the art of learning across disciplines, Xiao has co-edited and illustrated a forthcoming book of essays on education by Artificial Intelligence pioneer Marvin Minsky. She is currently a visiting post-doctoral researcher at LAM and a research affiliate at the MIT Media Lab.

speakers
Xiao Xiao
