Séminaires invités

Seminar / Conference
  • Séminaires "Recherche et technologie" > Séminaires invités > Cortical Representation of Musical Timbre
  • June 13, 2012
  • Ircam
Participants
  • Shihab Shamma (speaker)

Complex acoustic signals such as music are usually composed of multiple sound streams emanating from numerous sources that simultaneously change their loudness, timbre, pitch, and rhythm. Humans are able to effortlessly integrate the multitude of acoustic cues arriving at the ears and to derive coherent percepts and judgments about the attributes of these sounds. This facility to analyze an auditory scene is conceptually based on a multi-stage process in which sound is first analyzed in terms of a relatively small set of perceptually significant attributes (the alphabet of auditory perception), followed by higher-level cortical integrative processes that organize and group the extracted attributes according to specific context-sensitive rules (the syntax of auditory perception). In this talk, I shall outline a mathematical model of this process based on physiological and psychoacoustical studies that have revealed a multiresolution representation of sound in the cortex, as well as a variety of adaptive mechanisms that actively organize our perceptual space.
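The two-stage scheme the abstract describes (peripheral frequency analysis followed by a cortical multiresolution decomposition) can be illustrated with a minimal numpy sketch. This is not the speaker's model: it assumes a plain magnitude STFT in place of a cochlear filterbank and separable Gaussian band-pass filters in the modulation domain, and all function names and parameter values are illustrative.

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    # Magnitude short-time Fourier transform: a crude stand-in
    # for the peripheral (cochlear) frequency analysis stage.
    frames = np.stack([x[i:i + n_fft] * np.hanning(n_fft)
                       for i in range(0, len(x) - n_fft + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1))       # shape (time, freq)

def modulation_bank(spec, rates=(2, 4, 8), scales=(1, 2, 4)):
    # Multiresolution stage (illustrative): decompose the spectrogram
    # with separable band-pass filters in the 2-D Fourier (modulation)
    # domain.  "rates" pick temporal modulations (cycles along the time
    # axis); "scales" pick spectral modulations (ripples along the
    # frequency axis).  Both filters are Gaussians centred on the
    # chosen modulation index.
    T, F = spec.shape
    S = np.fft.fft2(spec)
    ft = np.abs(np.fft.fftfreq(T) * T)   # temporal modulation index
    ff = np.abs(np.fft.fftfreq(F) * F)   # spectral modulation index
    bank = {}
    for r in rates:
        Ht = np.exp(-0.5 * ((ft - r) / (0.5 * r)) ** 2)
        for s in scales:
            Hf = np.exp(-0.5 * ((ff - s) / (0.5 * s)) ** 2)
            H = np.outer(Ht, Hf)
            bank[(r, s)] = np.abs(np.fft.ifft2(S * H))
    return bank

# Toy input: an amplitude-modulated 440 Hz tone, one second at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
spec = spectrogram(x)
bank = modulation_bank(spec)   # one filtered spectrogram per (rate, scale)
```

Each entry of `bank` is a version of the spectrogram tuned to one combination of temporal rate and spectral scale, the kind of joint selectivity that multiresolution cortical models attribute to auditory cortical neurons.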

Biography:
Shihab Shamma is a Professor in the Department of Electrical and Computer Engineering and at the Institute for Systems Research, University of Maryland. His research deals with auditory perception, cortical physiology, the role of attention and behavior in learning and plasticity, computational neuroscience, and neuromorphic engineering. One focus has been the computational principles underlying the processing and recognition of complex sounds (speech and music) in the auditory system, and the relationship between auditory and visual processing. Another aspect of the research concerns how behavior induces rapid adaptive changes in neural selectivity and responses, and the mechanisms that facilitate and control these changes. Finally, signal processing algorithms inspired by data from these neurophysiological and psychoacoustic experiments have been developed and applied in a variety of systems, such as speech and voice recognition, diagnostics in industrial manufacturing, and underwater and battlefield acoustics. Other research interests include aVLSI implementations of auditory processing algorithms and the development of robotic systems for the detection and tracking of multiple simultaneous sound sources.