Information

Type
Seminar / Conference
Venue
Ircam, Salle Shannon (Paris)
Duration
1 h 24 min
Date
July 1, 2021

Advances in mixed reality and discoveries in perception science offer exciting possibilities for matching human perceptual strengths with vast distributed sensing, AI, and memory resources. In this sphere, audio-based techniques stand to lead the way, thanks both to recent advances in non-occluding, head-tracking headphones and to growing evidence that ambient and peripheral stimuli influence human mood, attention, and cognition.

In this talk, I present my doctoral research moving this vision from the lab into a real-world testbed, focusing on a large-scale environmental sensing system, a massive environmental audio dataset, and the audio AR wearable devices I developed. In our 577-acre wetlands testbed, users can superimpose the past onto the present, observe distant effects of their actions, and explore events from different perspectives. An AI system that perceives sensor data and audio alongside human users can subtly direct users' attention by weighting stimuli rather than explicitly interrupting. In studies, I found that certain digitally produced illusions, arising from the melding of digital sense media with the physical world, resolved into qualitatively new perceptual experiences. With HearThere, a bone-conduction AR system that many users could not distinguish from their natural hearing, intuitive discoveries ranged from self-evident seasonal variations to intricate wildlife migration patterns.

Finally, I will touch on ongoing research here in Paris. We aim to create a human-in-the-loop feedback system combining sensing with concatenative synthesis on a large corpus of field recordings, applied to sleep staging and to meditation and mind-wandering. In its artistic format, this work was recently presented as a live performance system at NIME 2021 and will appear at Ars Electronica and other venues later this year. Through this talk and in conversation, I hope to connect with a broad array of IRCAM's projects and aims: auditory perception, concatenative synthesis, immersive and spatial audio, ability and access, and the broad opportunities at the intersection of audio and human-centered AI.

