Invited seminars

Participants
  • Stefano Fasciani (speaker)

In sonic interactive systems, defining user-specific mappings between sensors capturing a performer's gestures and sound-engine parameters can be a complex task, especially when a large network of sensors controls a high number of synthesis variables. Generative techniques based on machine learning can compute such mappings only if users provide a sufficient number of examples embodying an underlying learnable model. Instead, combining automated listening with unsupervised learning techniques can minimize the effort and expertise required to implement personalized mappings, while raising the perceptual relevance of the control abstraction. The vocal control of sound synthesis is presented as a challenging context for this mapping approach.
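A minimal sketch of the idea described in the abstract, not the speaker's actual system: unsupervised learning builds a low-dimensional control space from vocal examples, which is then mapped onto synthesis parameters without labelled training pairs. The choice of MFCC features, PCA, the library calls (librosa, scikit-learn), and the synthesis-parameter names are all illustrative assumptions.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def learn_vocal_mapping(example_audio, sr=22050, n_controls=3):
    """Fit an unsupervised mapping from vocal timbre to n_controls dimensions."""
    # "Automated listening": describe the voice with MFCC timbre features.
    mfcc = librosa.feature.mfcc(y=example_audio, sr=sr, n_mfcc=13).T  # (frames, 13)
    # Unsupervised reduction: no labelled gesture-to-parameter examples needed.
    pca = PCA(n_components=n_controls).fit(mfcc)
    scaler = MinMaxScaler().fit(pca.transform(mfcc))  # normalise controls to [0, 1]
    return pca, scaler

def voice_to_synth_params(frame, sr, pca, scaler):
    """Map one incoming vocal frame to normalised synthesis parameters."""
    mfcc = librosa.feature.mfcc(y=frame, sr=sr, n_mfcc=13).T
    controls = np.clip(scaler.transform(pca.transform(mfcc)).mean(axis=0), 0.0, 1.0)
    # Hypothetical parameter names; any engine exposing [0, 1] inputs would do.
    return {"cutoff": controls[0], "resonance": controls[1], "grain_size": controls[2]}
```

In this sketch the performer would first record a short excerpt exploring their vocal range to fit the mapping, then stream live frames through voice_to_synth_params at control rate; the unsupervised step replaces the hand-crafted or example-labelled mappings the abstract argues against.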