Future Perfect will be a concert performance using smartphone virtual-reality technologies and Ambisonic/Wave Field Synthesis sound diffusion. Future Perfect explores the seam between virtual reality as a documentation format for environmental research and as a means of archiving nature, combining the following ideas:
The Future Perfect performance will not have a fixed point of view. Interactive crowd mapping using smartphone beacons will generate personal journeys through the work and determine each audience member's own viewing and listening perspective. The work will draw on IRCAM's deep expertise in Wave Field Synthesis techniques, which, through smartphone tracking, will allow sonic objects to be attached to and follow people within the concert space. Higher-Order Ambisonics rendered with SPAT will create an immersive sound field. Smartphone tracking will locate people within the concert space, using flocking behaviour and spatial spread to drive interactive musical and animation parameters (see the sketch after the project description below).
The work will be made from 360° VR footage shot by Paine in nature preserves in Paris and Karlsruhe, blended with procedural animations derived from plant images and with HOA recordings made by the composer at the same locations. Participants will be able to walk freely through the space, with vector lines drawn between people according to proximity and direction of movement. Other individuals will be indicated in the VR space as outlines, to make movement safe and to help develop a collective consciousness.
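A minimal sketch of how the tracking data described above could drive the interaction, assuming hypothetical 2-D positions from the smartphone beacons: nearby participants are joined by vector lines, and a spatial-spread value is derived from the crowd that could modulate a musical or animation parameter. All names, units, and thresholds here are assumptions, not the project's actual implementation.

```python
import numpy as np

def proximity_pairs(positions: np.ndarray, max_dist: float = 2.0):
    """Return index pairs (i, j) of participants closer than max_dist metres."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    i, j = np.where(np.triu(dists < max_dist, k=1))  # upper triangle: each pair once
    return list(zip(i.tolist(), j.tolist()))

def spatial_spread(positions: np.ndarray) -> float:
    """Mean distance of participants from the crowd centroid."""
    centroid = positions.mean(axis=0)
    return float(np.linalg.norm(positions - centroid, axis=1).mean())

# Example: five participants tracked in a 10 x 10 m concert space.
positions = np.random.default_rng(0).uniform(0.0, 10.0, size=(5, 2))
print(proximity_pairs(positions))   # pairs to join with vector lines
print(spatial_spread(positions))    # value mapped to a synthesis parameter
```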
For around ten years, the SATIS department, then the ASTRAM laboratory, and now the PRISM laboratory (Perception Représentation Image Son Musique) have been developing an online library of ambient sounds entitled…
March 8, 2018 33 min
Since 2011 I have been working on a research and composition project based on the musical and theatrical dialogue between symphonic instruments and electroacoustic sound objects. After the construction of three objects (la fontaine élect…
March 8, 2018 01 h 16 min
The presentation is intended as an introduction to deep learning and its applications in music. The presentation features the use of deep auto-encoders for generating novel sounds from a hidden representation of an audio corpus, audio style transfer…
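A minimal sketch of the generic technique the abstract names: an auto-encoder over spectrogram frames whose latent ("hidden") representation can be perturbed or interpolated and then decoded to produce novel sounds. This is illustrative PyTorch under assumed dimensions, not the presenter's actual model.

```python
import torch
import torch.nn as nn

N_BINS = 513   # e.g. magnitude bins of a 1024-point STFT (assumption)
LATENT = 16    # size of the hidden representation (assumption)

class SpectralAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_BINS, 128), nn.ReLU(),
                                     nn.Linear(128, LATENT))
        self.decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                     nn.Linear(128, N_BINS), nn.Softplus())

    def forward(self, x):
        z = self.encoder(x)          # hidden representation of a corpus frame
        return self.decoder(z), z

model = SpectralAutoEncoder()
frame = torch.rand(1, N_BINS)                          # one spectrogram frame
recon, z = model(frame)                                # reconstruction + latent
novel = model.decoder(z + 0.1 * torch.randn_like(z))   # perturb latent -> novel sound
```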
March 8, 2018 22 min
StreamCaching is a musical project that began in June 2017 in Hamburg. It was launched at the Blurred Edges festival of contemporary music: 10 compositions were commissioned, digitized, tagged with GPS data, and located throughout the city…
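A minimal sketch of the geolocated-playback idea described above: each commissioned piece carries GPS coordinates, and a piece becomes audible once the listener's phone is within some radius of it. The coordinates, radius, and titles below are hypothetical, not StreamCaching's actual data.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Hypothetical pieces placed around Hamburg.
pieces = [("Piece 1", 53.5511, 9.9937), ("Piece 2", 53.5530, 9.9900)]

def audible(listener_lat, listener_lon, radius_m=50.0):
    """Titles of all pieces within radius_m of the listener's position."""
    return [name for name, lat, lon in pieces
            if haversine_m(listener_lat, listener_lon, lat, lon) <= radius_m]

print(audible(53.5512, 9.9940))  # -> pieces within 50 m of the listener
```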
March 8, 2018 33 min
We observe the success of artificial neural networks in simulating human performance on a number of tasks, such as image recognition, natural language processing, etc. However, there are limits to state-of-the-art AI that separate it from…
March 8, 2018 30 min
Within this project, I aim to define a mutual synthesis of sound and the position of borderlines in space. Space, like a word, has its own shape, meaning, and phrasing. How it affects our perception of architectural acoustic experience…
March 8, 2018 26 min
The name comes from the French term "Musique Mixte", which denotes live electronics (or even electronics alone, without real-time interaction) combined with acoustic instruments on stage. It works as a middleware:…
March 8, 2018 26 min
The project explores cross-adaptive processing as a drastic intervention in the modes of communication between performing musicians. Digital audio analysis methods are used to let features of one sound modulate the electronic processing of…
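A minimal sketch of cross-adaptive processing as described above: an analysis feature of one performer's signal (here, block-wise RMS) is mapped onto an effect parameter applied to the other performer's signal (here, a simple ducking gain). The feature, mapping, and effect are assumptions; the project itself uses richer analyses and processors.

```python
import numpy as np

def block_rms(x: np.ndarray, block: int = 512) -> np.ndarray:
    """Block-wise RMS envelope of a mono signal."""
    n = len(x) // block
    frames = x[:n * block].reshape(n, block)
    return np.sqrt((frames ** 2).mean(axis=1))

def cross_adaptive_gain(a: np.ndarray, b: np.ndarray, block: int = 512) -> np.ndarray:
    """Attenuate signal b wherever signal a is loud (ducking-style mapping)."""
    env = block_rms(a, block)
    gain = 1.0 - np.clip(env / (env.max() + 1e-9), 0.0, 1.0)  # loud a -> quiet b
    gain = np.repeat(gain, block)                             # back to sample rate
    return b[:len(gain)] * gain

# Example with two seconds of synthetic audio at 44.1 kHz.
sr = 44100
t = np.arange(2 * sr) / sr
a = np.sin(2 * np.pi * 3 * t) * np.sin(2 * np.pi * 220 * t)   # pulsing tone (analysed)
b = np.sin(2 * np.pi * 440 * t)                               # steady tone (processed)
processed = cross_adaptive_gain(a, b)
```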
March 8, 2018 24 min