Information

Type: Seminar / Lecture
Performance location: Ircam, Salle Igor-Stravinsky (Paris)
Duration: 48 min
Date: March 11, 2020

Recent research shows that visual stimulus features corresponding to subjects’ perception of images and movies can be predicted and reconstructed from fMRI via stimulus-encoding models.
We present the first evidence in the auditory domain that listeners were able to discriminate stimulus-model reconstructions from null-model reconstructions of target audio stimuli derived from fMRI images, cross-validated by stimulus. We model fMRI responses to auditory stimulus features using a multivariate pattern analysis (MVPA) representation, with dimensions corresponding to voxel locations and values corresponding to voxel activations in cortical regions of interest. Auditory stimulus features representing harmony and timbre were used as predictor variables and fMRI activations as responses, so that the models predict the voxel activation patterns evoked by the stimulus features. The trained stimulus-encoding models were then used to predict response patterns to a large corpus of novel audio clips, yielding a dataset of predicted fMRI priors paired with their corresponding audio clips. From these short prior clips, stimuli were reconstructed via concatenative synthesis for the listening tests. The code, stimuli, and high-resolution fMRI data have been publicly released via the OpenfMRI initiative to encourage further development of methods for probing sensory perception and cognition.
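A minimal sketch of the pipeline described above, assuming a ridge-regression encoding model and Euclidean matching for the concatenative step; the array names (X, Y, priors), their shapes, and the helper match_clips are illustrative assumptions, not taken from the talk or its released code.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold

# Synthetic stand-ins for the real data (shapes are illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 40))        # harmony + timbre features per fMRI frame
Y = rng.standard_normal((600, 500))       # voxel activations in the cortical ROI (MVPA vectors)
groups = np.repeat(np.arange(30), 20)     # stimulus ID per frame, for cross-validation by stimulus

# 1. Stimulus-encoding model: predict voxel patterns from audio features,
#    cross-validated so that test stimuli are never seen during training.
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, Y, groups):
    model = Ridge(alpha=1.0).fit(X[train_idx], Y[train_idx])
    Y_pred = model.predict(X[test_idx])   # predicted activation patterns for held-out stimuli
    # ...score per-voxel prediction accuracy here...

# 2. Predicted fMRI "priors": apply the trained encoder to a large corpus of novel clips.
encoder = Ridge(alpha=1.0).fit(X, Y)
X_corpus = rng.standard_normal((2000, 40))    # features of 2000 short corpus clips (assumed size)
priors = encoder.predict(X_corpus)            # one predicted voxel pattern per corpus clip

# 3. Concatenative synthesis: for each frame of a target response, select the corpus
#    clip whose predicted prior is nearest, then concatenate those clips' audio.
def match_clips(Y_target, priors):
    """Index of the best-matching corpus clip for each target fMRI frame."""
    return cdist(Y_target, priors).argmin(axis=1)   # Euclidean match; the metric is an assumption

clip_order = match_clips(Y[:10], priors)   # the audio of these clips forms the reconstruction
```

In this sketch the reconstruction quality rests entirely on how well the encoding model generalizes to the corpus clips, which is why the listening test compares stimulus-model reconstructions against null-model reconstructions rather than against the original audio.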

https://faculty-directory.dartmouth.edu//michael-casey
http://cosmos.ircam.fr/?p=1028
