Asterisms is a participatory concert format without a traditional frontal stage, designed to develop the audience's active listening by placing them at the center of the experience. The project is based on distributed sound diffusion throughout the space using a large number of speakers (a Raspberry Pi based system, the participants' phones, and others). Participants may move around or stay still, as they choose, to expand their listening experience within the space. Each member of the audience is unique while being part of a larger movement, like stars connected in constellations. We create a common space that is both immersive and expansive.
Asterisms is currently in an artistic research residency at Ircam. On the occasion of the Forum, we invite participants to take part in an experiment using their smartphones.
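The entry above does not describe the project's actual control protocol, so the following is only a minimal sketch of the general idea of coordinating many distributed speakers: a central machine broadcasts a timestamped playback cue over UDP so that hypothetical Raspberry Pi nodes can start a sound together. The node addresses, port, and JSON message format are assumptions made for illustration.

```python
# Illustrative sketch only: broadcast a "play cue" with a shared start time
# to hypothetical Raspberry Pi speaker nodes over UDP. The addresses, port,
# and JSON message format are assumptions, not Asterisms' actual protocol.
import json
import socket
import time

NODES = [("192.168.1.101", 9000), ("192.168.1.102", 9000)]  # hypothetical node IPs
LEAD_TIME = 0.5  # seconds of headroom so every node receives the cue in time

def broadcast_cue(sound_id: str, gain: float = 1.0) -> None:
    """Send the same timestamped cue to every node; each node waits until
    `start_at` (shared wall-clock time) before triggering playback."""
    cue = {
        "sound": sound_id,
        "gain": gain,
        "start_at": time.time() + LEAD_TIME,
    }
    payload = json.dumps(cue).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for address in NODES:
            sock.sendto(payload, address)

if __name__ == "__main__":
    broadcast_cue("constellation_01", gain=0.8)
```

Participants' phones, mentioned in the entry, would more likely be reached through web technologies (for example WebSockets in a browser) and would also need clock synchronization, which this sketch leaves out.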
CoMo Vox is a web application developed in partnership with Radio France, intended for beginners who want to learn the basic elements of conducting gestures (no prior musical knowledge is required). The application…
March 25, 2022 35 min
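The excerpt above stops short of describing how the application analyses gestures; purely as a hypothetical illustration of one ingredient of such an app, the sketch below detects conducting beats by picking peaks in a stream of accelerometer magnitudes. The sample rate, threshold, and synthetic data are assumptions, not CoMo Vox's actual algorithm.

```python
# Hypothetical beat detection from accelerometer magnitudes, for illustration
# only; CoMo Vox's actual gesture analysis is not described in the excerpt.
import math

SAMPLE_RATE_HZ = 50        # assumed sensor rate
THRESHOLD = 12.0           # m/s^2, assumed; gravity alone is about 9.8
REFRACTORY_S = 0.25        # ignore new peaks for a short time after a beat

def detect_beats(samples):
    """samples: iterable of (ax, ay, az) tuples. Returns beat times in seconds."""
    beats, last_beat = [], -REFRACTORY_S
    for i, (ax, ay, az) in enumerate(samples):
        t = i / SAMPLE_RATE_HZ
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > THRESHOLD and (t - last_beat) >= REFRACTORY_S:
            beats.append(t)
            last_beat = t
    return beats

if __name__ == "__main__":
    # Synthetic data: mostly gravity, with two sharp "downbeat" accents.
    quiet = [(0.0, 0.0, 9.8)] * 25
    accent = [(0.0, 0.0, 16.0)]
    print(detect_beats(quiet + accent + quiet + accent + quiet))
```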
Here we demonstrate newly composed music, a real-time controller, and a data acquisition system for the project EAR Stretch. EAR Stretch aims to improve the reception of contemporary music among non-expert audiences by enhancing embodied temporal…
March 25, 2022 29 min
This presentation outlines a new project offering a novel methodology for developing acoustically optimised auditoria and room shapes based on biometric sensing and evolutionary computation. The work is based on…
March 25, 2022 29 min
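The excerpt does not reveal the project's encoding or fitness function, so the following toy genetic algorithm only illustrates the evolutionary-computation ingredient: it evolves rectangular room dimensions so that the lowest axial room modes are spread as evenly as possible. The fitness proxy, parameter ranges, and GA settings are placeholders; the actual project couples such a search with biometric sensing.

```python
# Toy evolutionary search over rectangular room dimensions, for illustration.
# Fitness rewards evenly spaced low axial modes (a crude acoustic proxy);
# the real project's fitness, encoding, and biometric feedback are not shown here.
import random

C = 343.0  # speed of sound, m/s

def axial_modes(dims, orders=4):
    """First few axial mode frequencies f = c*n/(2*L) for each dimension."""
    return sorted(C * n / (2.0 * L) for L in dims for n in range(1, orders + 1))

def fitness(dims):
    """Negative variance of gaps between successive modes: higher is smoother."""
    modes = axial_modes(dims)
    gaps = [b - a for a, b in zip(modes, modes[1:])]
    mean = sum(gaps) / len(gaps)
    return -sum((g - mean) ** 2 for g in gaps) / len(gaps)

def mutate(dims, sigma=0.3):
    return tuple(min(20.0, max(3.0, d + random.gauss(0, sigma))) for d in dims)

def evolve(generations=200, pop_size=30):
    population = [tuple(random.uniform(3.0, 20.0) for _ in range(3))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]          # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best room (L, W, H in m):", [round(d, 2) for d in best])
```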
The composition is a commentary on Karim Haddad's String Trio "And I have tried to keep them from falling" (2001). "I would prefer a small mountain temple" was written between 2018 and 2021. Both titles refer to Ezra Pound's (1885-1972) poem Canto…
March 25, 2022 26 min
Corpos sonoros is a long-term research project proposed by Thembi Rosa and João Tragtenberg to work with sound-movement interactions using Giromin, an IMU-based wearable Digital Dance and Music Instrument. It is an open space for the…
March 25, 2022 30 min
Since the re-launch of the IRCAM Forum platform in 2019, many improvements have been made to ensure that users can manage their own projects, content, and discussions. The current features will be presented, as well as the roadmap…
March 25, 2022 27 min
My research focuses on listening to electromagnetic energy in everyday urban environments through sound walks, while capturing these experiences with multichannel sensing and recording devices, or "assemblages", and 360…
March 25, 2022 24 min
During the past year, our industry has witnessed an accelerated democratization of several technologies that enable the creation and distribution of audio and music content in new immersive audio formats, now supported in popular streaming…
March 25, 2022 34 min
The presentation will discuss ongoing work implementing various AI-based techniques in om# and OpenMusic for use in musical composition workflows. Human creativity is ill-defined by nature. Achieving "those right kinds of errors" may…
March 25, 2022 34 min
Sasha Wilde, Nicole Bettencourt Coelho, and Yuki Nakayama are a trio of sound nerds who met last year and began bringing their various skills and practices together to form an experimental audio band. They employ wearable gestural…
March 25, 2022 28 min
A System for the Synchronous Emergence of Music Derived from Movement is an immersive audio and visual work whose purpose is to define and explore a relationship between the movement of an artist's hand (brush or pen, etc.) and a generative…
March 25, 2022 26 min
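How the work maps hand movement to generative music is not specified in the excerpt; as one hypothetical mapping, the sketch below turns pen-stroke samples (x, y, time) into note events, deriving pitch from vertical position and loudness from stroke speed. All ranges and scalings are assumptions for illustration.

```python
# Hypothetical gesture-to-note mapping: pitch from the pen's vertical position,
# velocity (loudness) from stroke speed. Not the artwork's actual mapping.
import math

PITCH_LOW, PITCH_HIGH = 48, 84   # assumed MIDI range (C3..C6)
CANVAS_HEIGHT = 1.0              # normalized coordinates

def strokes_to_notes(samples, max_speed=2.0):
    """samples: list of (x, y, t). Returns (time, midi_pitch, velocity) events."""
    events = []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        dt = max(t1 - t0, 1e-6)
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        pitch = PITCH_LOW + round((y1 / CANVAS_HEIGHT) * (PITCH_HIGH - PITCH_LOW))
        velocity = min(127, int(127 * min(speed, max_speed) / max_speed))
        events.append((t1, pitch, velocity))
    return events

if __name__ == "__main__":
    stroke = [(0.0, 0.1, 0.00), (0.1, 0.4, 0.05), (0.3, 0.8, 0.10)]
    for event in strokes_to_notes(stroke):
        print(event)
```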
I propose an AI system, called Neural Ordinary Differential Equations (NODE), for sound synthesis. My method provides a simple and intuitive way to construct new sound objects and new types of sound synthesis by manipulating matrices of maps…
March 25, 2022 18 min
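The author's exact formulation is not included in the excerpt, so the code below is only a generic sketch of the Neural ODE idea applied to audio: it integrates a small neural vector field dy/dt = tanh(W y + b) with fixed-step Euler and reads one coordinate of the state out as a waveform. The weights here are random placeholders standing in for a trained network, and the time step is an arbitrary choice.

```python
# Generic Neural-ODE-style synthesis sketch: integrate dy/dt = tanh(W y + b)
# and listen to one coordinate of the state. Weights are random stand-ins for
# a trained network; this is not the author's specific system.
import numpy as np

STATE_DIM = 16

def make_vector_field(seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.5, size=(STATE_DIM, STATE_DIM))
    b = rng.normal(scale=0.1, size=STATE_DIM)
    return lambda y: np.tanh(W @ y + b)

def synthesize(num_samples=4000, dt=5e-3, seed=0):
    """Each Euler step yields one audio sample; dt controls how fast the
    state (and hence the timbre) evolves -- an arbitrary choice here."""
    f = make_vector_field(seed)
    y = np.zeros(STATE_DIM)
    y[0] = 1.0                      # initial condition acts like an excitation
    out = np.empty(num_samples)
    for i in range(num_samples):    # forward Euler integration of the ODE
        y = y + dt * f(y)
        out[i] = y[0]
    return out / (np.max(np.abs(out)) + 1e-9)   # normalize to [-1, 1]

if __name__ == "__main__":
    signal = synthesize()
    print("rendered", signal.shape[0], "samples, peak", float(np.max(np.abs(signal))))
```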
The Autocoder package is a tool based around a variational autoencoder: a neural network capable of learning a spectral representation of a soundfile and synthesizing a novel output based on the trained model. A spectral representation…
March 25, 2022 27 min
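The Autocoder package's own code is not reproduced in the excerpt; the PyTorch sketch below only illustrates the general approach it describes, a variational autoencoder trained on magnitude-spectrum frames, with encoder, reparameterization, decoder, and a reconstruction-plus-KL loss. Layer sizes, the synthetic training data, and the KL weight are assumptions.

```python
# Minimal spectral VAE sketch (not the Autocoder package's actual code):
# encode magnitude-spectrum frames to a latent vector, decode back, and train
# with reconstruction + KL loss. Data here is synthetic noise spectra.
import torch
import torch.nn as nn

N_BINS, LATENT = 513, 16   # e.g. 1024-point FFT -> 513 magnitude bins (assumed)

class SpectralVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_BINS, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, LATENT)
        self.to_logvar = nn.Linear(256, LATENT)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, N_BINS), nn.Softplus(),  # magnitudes are non-negative
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = torch.mean((recon - x) ** 2)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 1e-3 * kl   # KL weight is an arbitrary choice here

if __name__ == "__main__":
    model = SpectralVAE()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    spectra = torch.rand(64, N_BINS)            # stand-in for real STFT frames
    for step in range(100):
        recon, mu, logvar = model(spectra)
        loss = vae_loss(recon, spectra, mu, logvar)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # "Novel output": decode random latent vectors into new spectral frames.
    with torch.no_grad():
        new_frames = model.decoder(torch.randn(4, LATENT))
    print("generated frames:", tuple(new_frames.shape))
```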
How can rich spatial data from acoustic instruments be applied to situate synthesized sound spatially with dynamic three-dimensional forms? How can machine learning be used to re-embody the spatial presence of live instruments and performer…
March 25, 2022 29 min