Participants
  • Jean-Marc Jot (speaker)

In audio-visual augmented reality applications, computer-generated audio objects are rendered via acoustically transparent earphones to blend with the physical environment heard naturally by the viewer/listener. This requires binaural artificial reverberation processing to match local environment acoustics, so that synthetic audio objects are not readily discriminable from sounds occurring naturally or reproduced over loudspeakers. Approaches involving the measurement or calculation of binaural room impulse responses in consumer environments are limited by practical obstacles and complexity. We exploit a statistical reverberation model enabling the definition of a compact “reverberation fingerprint” for characterization of the local environment and computationally efficient data-driven reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based “audio-augmented reality”, facilitating natural-sounding, externalized virtual 3D audio reproduction of music, movie, or game soundtracks, navigation guides, or alerts.
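
The abstract does not spell out the estimation procedure. As an illustration only, the Python sketch below shows one common way to derive such a compact room characterization: per-octave-band reverberation decay times (T60) estimated from a measured impulse response by Schroeder backward integration. The band edges, the -5 to -25 dB fit range, and the helper names (schroeder_edc_db, decay_time_t60, reverberation_fingerprint) are all assumptions for this sketch, not the speaker's exact method.

import numpy as np
from scipy.signal import butter, sosfilt

def schroeder_edc_db(h):
    """Energy decay curve of impulse response h, in dB (Schroeder backward integration)."""
    edc = np.cumsum(h[::-1] ** 2)[::-1]          # remaining energy at each instant
    return 10.0 * np.log10(edc / edc[0] + 1e-12)

def decay_time_t60(h, fs, lo_db=-5.0, hi_db=-25.0):
    """Fit a line to the EDC between lo_db and hi_db, then extrapolate to -60 dB."""
    edc_db = schroeder_edc_db(h)
    t = np.arange(len(h)) / fs
    mask = (edc_db <= lo_db) & (edc_db >= hi_db)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # decay rate in dB per second
    return -60.0 / slope

def reverberation_fingerprint(h, fs, bands=((125, 250), (250, 500), (500, 1000),
                                            (1000, 2000), (2000, 4000))):
    """Per-octave-band T60 values: a compact summary of the room's decay behavior."""
    fingerprint = {}
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        fingerprint[(lo, hi)] = decay_time_t60(sosfilt(sos, h), fs)
    return fingerprint

if __name__ == "__main__":
    fs = 48000
    # Synthetic test IR: exponentially decaying noise with a known T60 of 0.5 s.
    t = np.arange(int(0.8 * fs)) / fs
    rng = np.random.default_rng(0)
    h = rng.standard_normal(len(t)) * 10.0 ** (-3.0 * t / 0.5)  # -60 dB at 0.5 s
    for (lo, hi), t60 in reverberation_fingerprint(h, fs).items():
        print(f"{lo}-{hi} Hz: T60 = {t60:.2f} s")

In this framing, only a handful of numbers per environment needs to be stored, and the renderer can shape an efficient synthetic late reverberator to match them, rather than measuring, storing, or convolving full binaural room impulse responses for every source position.
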

Jean-Marc Jot leads DTS technology R&D in audio reproduction and fidelity enhancement for consumer electronics. Previously, he led the design and development of Creative Labs’ Sound Blaster audio processing architectures, including the EAX and OpenAL technologies for 3D audio authoring and rendering in games. Before relocating to the US in the late 1990s, he conducted research at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, where he designed and developed the IRCAM Spat software suite for immersive audio composition in computer music creation, performance, and virtual reality. He is a recipient of the Audio Engineering Society (AES) Fellowship Award and has authored numerous patents and papers on spatial audio signal processing and coding.