Information

Type
Conference series, symposium, congress
Performance location
Ircam, Salle Igor-Stravinsky (Paris)
Duration
27 min
Date
March 25, 2022

The Autocoder package is a tool built around a variational autoencoder: a neural network capable of learning a spectral representation of a sound file and synthesizing novel output from the trained model.

A spectral representation is extracted from an input sound by running it through an STFT analysis, producing a series of spectral frames that describe the sound moment by moment. The software passes these frames through an encoder that compresses the data into a latent layer of eight values, each encoding some aspect of the input. After training, feeding the training data back through the encoder maps each frame to a point in this latent space. Feeding arbitrary values into the latent layer makes the decoder return a new spectral frame representing an unseen point within the spectral space of the training data, which can then be used in any number of ways, e.g. for synthesis or convolution, as an impulse response in a hybrid reverb, or for cross-synthesis.
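The pipeline above can be sketched in a few lines. This is a hypothetical illustration only: the frame size, hop size, and linear maps standing in for the trained encoder and decoder are assumptions, not the Autocoder's actual architecture; only the eight-value latent layer comes from the description.

```python
# Sketch of the STFT -> encoder -> latent -> decoder flow, assuming an
# 8-value latent layer. The trained variational autoencoder is replaced
# here by untrained random linear maps, purely for illustration.
import numpy as np

N_FFT = 1024          # assumed STFT window size
HOP = 512             # assumed hop size
LATENT_DIM = 8        # latent layer size, per the description

def stft_frames(signal, n_fft=N_FFT, hop=HOP):
    """Slice a signal into windowed frames and return magnitude spectra."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)            # shape: (n_frames, n_fft // 2 + 1)

rng = np.random.default_rng(0)
spec_dim = N_FFT // 2 + 1

# Stand-ins for trained encoder/decoder weights.
W_enc = rng.normal(size=(spec_dim, LATENT_DIM)) * 0.01
W_dec = rng.normal(size=(LATENT_DIM, spec_dim)) * 0.01

signal = rng.normal(size=44100)        # one second of noise as dummy input
frames = stft_frames(signal)

latent = frames @ W_enc                # encode: compress each frame to 8 values
recon = latent @ W_dec                 # decode: back to a spectral frame

# Feeding arbitrary latent values yields an unseen spectral frame:
novel_frame = np.ones(LATENT_DIM) @ W_dec
```

The key property this sketch shows is dimensionality: every spectral frame, however large, passes through a bottleneck of eight numbers, and any eight numbers can be decoded back into a full spectral frame.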

The package provides a simple, easily extendable ecosystem for experimenting with and developing sound software and hardware based on the underlying neural network architecture. It is available both as code and in hardware form, and comes with an OSC interface allowing for easy integration with Max/MSP.
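Since the package exposes an OSC interface, latent values could be sent to it from any OSC-capable client. The sketch below builds a raw OSC 1.0 message with only the standard library; the `/latent` address is a hypothetical example, not the Autocoder's documented namespace.

```python
# Build a raw OSC 1.0 message carrying 8 latent values, stdlib only.
# The "/latent" address is an assumption for illustration; consult the
# Autocoder documentation for its actual OSC namespace.
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, floats) -> bytes:
    msg = osc_pad(address.encode("ascii"))
    # Type tag string: "," followed by one "f" per float argument.
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for v in floats:
        msg += struct.pack(">f", v)    # OSC floats are big-endian
    return msg

latent = [0.0, 0.5, -0.5, 1.0, 0.25, -1.0, 0.75, 0.1]
packet = osc_message("/latent", latent)
# To send: socket.socket(AF_INET, SOCK_DGRAM).sendto(packet, (host, port))
```

In practice a client library such as Max/MSP's built-in `udpsend` object would handle this encoding; the point of the sketch is that one short UDP packet suffices to drive the latent layer.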


From the same archive

Workshop CoMo.education: Creating Collective Narratives in Sound and Movement

CoMo Vox is a web application created in partnership with Radio France, aimed at novices who want to learn the basics of conducting gestures (no prior musical knowledge is required). The application

March 25, 2022 35 min

Video

Contemporary Music Embodiment and Perception

Here we demonstrate newly composed music, a real-time controller, and data acquisition system for the project EAR Stretch. EAR Stretch aims to improve Contemporary Music reception in non-expert audiences by enhancing embodied temporal expec

March 25, 2022 29 min

Video

Biometrically Evolved Spaces for Music - Paul Bavister

This presentation outlines a new project that offers a novel methodology for the development of new styles of acoustically optimised auditoria and room shape based on biometric sensing and evolutionary computation. The work is based on e

March 25, 2022 29 min

Video

Computer Assisted Composition

The composition "I would prefer a small mountain temple", written between 2018 and 2021, is a commentary on Karim Haddad's String Trio "And I have tried to keep them from falling" (2001). Both titles refer to Ezra Pound's (1885-1972) poem Canto

March 25, 2022 26 min

Video

Corpos Sonoros

Corpos sonoros is a long-term research project proposed by Thembi Rosa and João Tragtenberg to work with sound-movement interactions using Giromin, an IMU-based wearable Digital Dance and Music Instrument. It is an open space for the develop

March 25, 2022 30 min

Video

Forum platform: current and future collaborative workflows for tech and music projects

Since the re-launch of the IRCAM Forum platform in 2019, many improvements have been made to ensure that users can manage their own projects, content, and discussions. The current features will be presented as well as the roadmap foc

March 25, 2022 27 min

Video

Electro-acoustic Mapping

My research is focused around listening to electromagnetic energy in everyday urban environments through sound walks whilst capturing these experiences through the use of multichannel sensing and recording devices or ‘assemblages’ and 360 v

March 25, 2022 24 min

Video

iZotope

During the past year, our industry has witnessed an accelerated democratization of several technologies that enable the creation and distribution of audio and music content in new immersive audio formats - now supported in popular streaming

March 25, 2022 34 min

Video

Asterismes

Asterisms is a form of participatory concert, without a traditional frontal stage, making it possible to develop the active listening of the public by placing them at the center of the experience. The project is based on a distributed sound

March 25, 2022 32 min

Video

Workshop CoMo.education

March 25, 2022 01 h 09 min

Video

Garcia for Mubone Augmented Trombone: A Sound & Movement Performance with Gesture Following and Granular Synthesis

March 25, 2022 32 min

Video

Artificial Creativity - some AI-based composition techniques

The presentation will discuss ongoing work implementing various AI-based techniques in om# and OpenMusic, for use in musical composition workflows. Human creativity is ill-defined by nature. Achieving "those right kinds of errors" may p

March 25, 2022 34 min

Video

Voice processing, Improvisation, generativity and co-creative interactions

March 25, 2022 25 min

Video

Sound design and processing, Spatialization, Voice processing, Computer Assisted Composition, Improvisation, generativity and co-creative interactions, Musical interfaces, Gestural interactions

Sasha Wilde, Nicole Bettencourt Coelho and Yuki Nakayama are a trio of sound nerds who met last year and began bringing their various skills and practices together to form an experimental audio band. They employ wearable gestural technologi

March 25, 2022 28 min

Video

Improvisation, generativity and co-creative interactions, Musical interfaces, Gestural interactions

A System for the Synchronous Emergence of Music Derived from Movement is an immersive audio and visual work whose purpose is to define and explore a relationship between the movement of an artist’s hand (brush or pen, etc.) and a generative

March 25, 2022 26 min

Video

Neural Differential Equations for Sound Synthesis

I propose an AI system called Neural Ordinary Differential Equations (NODE) for sound synthesis. My method provides a simple and intuitive way to construct new sound objects and a new type of sound synthesis by manipulating matrices of maps. Dif

March 25, 2022 18 min

Video

Quick presentation of Vowelizer by Greg Beller. Conclusions and debates

March 25, 2022 41 min

Video

Corpus-Based Spatial Sound Synthesis on the IKO Compact Spherical Loudspeaker Array

How can rich spatial data from acoustic instruments be applied to situate synthesized sound spatially with dynamic three-dimensional forms? How can machine learning be used to re-embody the spatial presence of live instruments and performer

March 25, 2022 29 min

Video
