Invited Seminars

Seminar / Conference
  • "Recherche et technologie" seminar series > Invited Seminars
  • Understanding Perception with Audiovisual Resynthesis
  • Feb. 15, 2012
  • Ircam
Participants
  • Parag K. Mital (speaker)

There is a chicken-and-egg problem in perception. In order to identify something, we must first be able to detect it; yet in order to detect it, we must also be able to identify it. How, then, do our auditory and visual systems deal with the problem of representation? I will talk today about a few of the approaches I have taken towards discovering possible solutions to this problem, and how resynthesis in particular helps us understand how such representations could be employed in perception.
Specifically, I will delve into the theory surrounding proto-objects and the approaches I have taken in audition and vision to resynthesizing this process, extending into corpus-based resynthesis for audiovisual mosaicing, as well as approaches to object and source separation in real time. I will also demonstrate a few of these approaches running in real time on an iPhone for the purpose of augmented perception.

Parag K. Mital is an American-born, London-based PhD student in Arts and Computational Technology at Goldsmiths, University of London. As a member of the Embodied AudioVisual Interaction (EAVI) group at Goldsmiths Digital Studios, he explores embodied audiovisual perception by means of augmented realities and audiovisual installations. Through creating such experiences, he questions the processes surrounding auditory and visual perception in terms of the functional roles afforded by environments.
As an installation artist, his work has been exhibited at the London Science Museum, BFI Southbank, Waterman's Art Centre, Kinetica Art Fair, Athens Video Art Festival, Edinburgh International Film Festival, Goethe Institute (Bengaluru), and the Bengaluru Artist Residency 1. He is also a published vision scientist studying the role of neuro-biologically motivated computational models of eye movements in active visual cognition during dynamic scene viewing.