• Séminaires Recherche & Technologie
  • Season 2019-2020 > Auditory Stimulus Reconstruction from High-Resolution fMRI of Music Perception and Imagery
  • March 11, 2020
  • Ircam, Paris
Participants
  • Michael Casey (speaker)

Recent research shows that visual stimulus features corresponding to subjects’ perception of images and movies can be predicted and reconstructed from fMRI via stimulus-encoding models.
We present the first evidence in the auditory domain that listeners could reliably discriminate between stimulus-model reconstructions and null-model reconstructions of target audio stimuli from fMRI images, cross-validated by stimulus.

We model fMRI responses to auditory stimulus features using a multivariate pattern analysis (MVPA) representation, with dimensions corresponding to voxel locations and values corresponding to voxel activations in cortical regions of interest. Auditory stimulus features representing harmony and timbre served as predictor variables and fMRI activations as responses, so the trained models predict the voxel activation patterns evoked by the stimulus features.

Response patterns to a large corpus of novel audio clips were then predicted using the trained stimulus-encoding models, yielding a dataset of predicted fMRI priors and their corresponding audio clips. Using these short prior audio clips, stimuli were reconstructed via concatenative synthesis for the listening tests. The code, stimuli, and high-resolution fMRI data have been publicly released via the OpenfMRI initiative to encourage further development of methods for probing sensory perception and cognition.
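As a rough sketch of the encoding step described above, the fragment below fits a ridge regression from harmony and timbre features to voxel activations, with cross-validation folds grouped by stimulus so that repetitions of the same stimulus never span training and test sets. The feature extractors (chroma for harmony, MFCCs for timbre), the ridge regularizer, and the synthetic data shapes are illustrative assumptions, not the pipeline presented in the talk.

```python
# Illustrative sketch of a stimulus-encoding model (assumptions noted above):
# audio features (harmony + timbre) -> voxel activation patterns.
import numpy as np
import librosa
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold

def clip_features(path, sr=22050):
    """Hypothetical feature extractor: mean chroma (harmony) + MFCCs (timbre)."""
    y, _ = librosa.load(path, sr=sr)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)     # 12-d harmony
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)  # 20-d timbre
    return np.concatenate([chroma, mfcc])

# Synthetic stand-ins so the sketch runs end to end: 20 stimuli x 3 repetitions,
# 32 stimulus features, 500 voxels in auditory-cortex ROIs.
rng = np.random.default_rng(0)
n_stimuli, n_reps, n_feat, n_vox = 20, 3, 32, 500
stimulus_ids = np.repeat(np.arange(n_stimuli), n_reps)
X = rng.normal(size=(n_stimuli * n_reps, n_feat))               # stimulus features
W = rng.normal(size=(n_feat, n_vox))
Y = X @ W + 0.1 * rng.normal(size=(n_stimuli * n_reps, n_vox))  # voxel activations

# Cross-validate by stimulus: folds never split repetitions of one stimulus.
for train, test in GroupKFold(n_splits=5).split(X, Y, groups=stimulus_ids):
    model = Ridge(alpha=1.0).fit(X[train], Y[train])
    Y_hat = model.predict(X[test])  # predicted voxel patterns, held-out stimuli
```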
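The reconstruction stage can likewise be sketched as a nearest-neighbor lookup: each measured response pattern is matched against the model-predicted priors for the corpus clips, and the best-matching clips are concatenated. The cosine-similarity metric and the equal-length-clip handling are assumptions standing in for whatever matching rule the actual concatenative synthesis used.

```python
# Illustrative sketch of reconstruction via concatenative synthesis
# (matching metric and clip handling are assumptions, see text).
import numpy as np

def reconstruct(target_patterns, prior_patterns, prior_audio):
    """target_patterns: (n_windows, n_voxels) measured fMRI patterns.
    prior_patterns:  (n_clips, n_voxels) model-predicted patterns.
    prior_audio:     list of n_clips equal-length waveforms (np.ndarray)."""
    chosen = []
    norms = np.linalg.norm(prior_patterns, axis=1)
    for t in target_patterns:
        # Cosine similarity between the measured pattern and every prior.
        sims = prior_patterns @ t / (norms * np.linalg.norm(t) + 1e-12)
        chosen.append(prior_audio[int(np.argmax(sims))])
    # Concatenate the best-matching prior clips into one reconstructed signal.
    return np.concatenate(chosen)

# A null-model baseline for the listening test could shuffle prior_patterns
# relative to prior_audio before calling reconstruct().
```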

https://faculty-directory.dartmouth.edu/michael-casey
http://cosmos.ircam.fr/?p=1028