Information

Type
Seminar / Conference
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
48 min
Date
11 March 2020

Recent research shows that visual stimulus features corresponding to subjects’ perception of images and movies can be predicted and reconstructed from fMRI via stimulus-encoding models.
We present the first evidence in the auditory domain that listeners were able to reliably discriminate between stimulus-model and null-model reconstructions of target audio stimuli generated from fMRI data, cross-validated by stimulus. We model fMRI responses to auditory stimulus features using a multivariate pattern analysis (MVPA) representation, with dimensions corresponding to voxel locations and values corresponding to voxel activations in cortical regions of interest. Auditory stimulus features representing harmony and timbre served as predictor variables and fMRI activations as responses, so that the trained models predict the voxel activation pattern evoked by a given set of stimulus features.

Response patterns to a large corpus of novel audio clips were then predicted with the trained stimulus-encoding models, yielding a dataset of predicted fMRI response patterns (priors) paired with their corresponding audio clips. Using these short prior audio clips, the target stimuli were reconstructed via concatenative synthesis for the listening tests. The code, stimuli, and high-resolution fMRI data have been publicly released via the OpenfMRI initiative to encourage further development of methods for probing sensory perception and cognition.
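As a rough illustration of the pipeline described above, the sketch below fits an encoding model from audio features to voxel activation patterns, predicts patterns for a corpus of novel clips, and reconstructs a target by nearest-neighbour matching followed by concatenation. The specific choices here (librosa chroma and MFCCs standing in for the harmony and timbre descriptors, ridge regression, Euclidean matching, and all synthetic data) are assumptions made for illustration, not the study's actual implementation.

# A minimal, self-contained sketch of the encoding / reconstruction pipeline,
# using synthetic data. Feature extractors, regressor, and matching rule are
# illustrative assumptions, not the exact methods used in the study.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

SR = 22050           # sample rate
CLIP_LEN = SR        # 1-second clips
rng = np.random.default_rng(0)

def features(clip, sr=SR):
    """Harmony (chroma) + timbre (MFCC) descriptors, averaged over the clip."""
    chroma = librosa.feature.chroma_stft(y=clip, sr=sr).mean(axis=1)
    mfcc = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([chroma, mfcc])

def tone(freq, sr=SR, n=CLIP_LEN):
    """Stand-in audio clip: a pure tone (the real stimuli were music excerpts)."""
    return np.sin(2 * np.pi * freq * np.arange(n) / sr).astype(np.float32)

# Synthetic training set: audio clips and (here, random) voxel patterns.
train_clips = [tone(f) for f in rng.uniform(110, 880, size=20)]
n_voxels = 500
Y_train = rng.standard_normal((len(train_clips), n_voxels))  # MVPA patterns

# 1. Encoding model: stimulus features -> voxel activation pattern.
X_train = np.stack([features(c) for c in train_clips])
encoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# 2. Predict voxel patterns for a corpus of novel clips ("fMRI priors").
corpus_clips = [tone(f) for f in rng.uniform(110, 880, size=100)]
X_corpus = np.stack([features(c) for c in corpus_clips])
Y_corpus_pred = encoder.predict(X_corpus)

# 3. Concatenative synthesis: for each measured (here, simulated) response
#    window of the target, pick the corpus clip whose predicted pattern is
#    nearest, then concatenate the selected clips in time.
target_patterns = rng.standard_normal((8, n_voxels))
chosen = [np.argmin(np.linalg.norm(Y_corpus_pred - p, axis=1))
          for p in target_patterns]
reconstruction = np.concatenate([corpus_clips[i] for i in chosen])
print(reconstruction.shape)  # (8 * CLIP_LEN,) samples of reconstructed audio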

https://faculty-directory.dartmouth.edu//michael-casey
http://cosmos.ircam.fr/?p=1028
