Information

Type: Conference series, symposium, congress
Venue: Ircam, Salle Igor-Stravinsky (Paris)
Duration: 27 min
Date: 25 March 2022

The Autocoder package is a tool based around a variational autoencoder: a neural network capable of learning a spectral representation of a sound file and synthesizing novel output from the trained model.
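As a point of reference, the following is a minimal sketch, assuming PyTorch, of the kind of variational autoencoder with an eight-value latent layer described here; it is not the package's actual code, and every layer size other than the latent dimension is an assumption for illustration.

```python
# Illustrative sketch only: a small spectral VAE with an eight-value latent layer.
# Layer sizes other than LATENT_DIM are assumptions, not the package's actual code.
import torch
import torch.nn as nn

LATENT_DIM = 8        # the eight latent values mentioned in the text
FRAME_SIZE = 1025     # assumed: magnitude bins of a 2048-point STFT

class SpectralVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(FRAME_SIZE, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )
        self.to_mu = nn.Linear(64, LATENT_DIM)
        self.to_logvar = nn.Linear(64, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, FRAME_SIZE), nn.Sigmoid(),  # normalized magnitude frame
        )

    def encode(self, frame):
        h = self.encoder(frame)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z):
        # Maps a point of the eight-dimensional latent space to a spectral frame.
        return self.decoder(z)

    def forward(self, frame):
        mu, logvar = self.encode(frame)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decode(z), mu, logvar
```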

A spectral representation is extracted from the input sound by running it through an STFT analysis, producing a series of spectral frames that represent the sound moment by moment. The software passes these frames through an encoder that compresses the data into a latent layer of eight values, each encoding some aspect of the input data. After training, feeding arbitrary values into the latent layer lets the decoder return a new spectral frame representing an unseen point within the spectral space of the training data. This frame can then be used in any number of ways, e.g. for synthesis or convolution, as an impulse response in a hybrid reverb, or for cross-synthesis.
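A rough illustration of this workflow, again assuming PyTorch and torchaudio and reusing the SpectralVAE sketched above: analyze a file into STFT frames, decode an arbitrary eight-value latent vector into a new spectral frame, and resynthesize it. The file names, FFT parameters, and the resynthesis strategy (borrowing phases from the analyzed sound) are assumptions for illustration only, and the training loop is omitted.

```python
# Illustrative pipeline: STFT analysis -> decode an arbitrary latent point -> resynthesis.
import torch
import torchaudio

N_FFT, HOP = 2048, 512

signal, sr = torchaudio.load("input.wav")           # hypothetical input file
window = torch.hann_window(N_FFT)
spec = torch.stft(signal[0], N_FFT, HOP, window=window, return_complex=True)
magnitudes = spec.abs().T                            # one row per spectral frame (training data)

model = SpectralVAE()                                # assume it has already been trained
with torch.no_grad():
    # Arbitrary values fed into the eight-value latent layer.
    z = torch.tensor([[0.3, -1.2, 0.0, 0.8, -0.5, 1.1, 0.2, -0.7]])
    new_frame = model.decode(z)                      # an unseen point in the spectral space

# One simple resynthesis option: repeat the decoded magnitude frame over time,
# borrow phases from the analyzed sound, and run an inverse STFT.
frames = new_frame.T.repeat(1, spec.shape[1])
hybrid = frames * torch.exp(1j * spec.angle())
out = torch.istft(hybrid, N_FFT, HOP, window=window)
torchaudio.save("decoded.wav", out.unsqueeze(0), sr)
```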

The package provides a simple and easily extendable ecosystem to support experimentation and the development of sound software and hardware based on the underlying neural network architecture. It is available in both code and hardware form and comes with an OSC interface that allows easy integration with Max/MSP.
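As a hedged sketch of what driving such an OSC interface might look like from a script, the example below uses the python-osc library; the host, port, and address pattern are assumptions for illustration, not the package's documented values.

```python
# Hypothetical OSC client: sends eight latent values to a receiver
# (e.g. a Max/MSP patch). Host, port, and address pattern are assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8012)   # assumed host and port

# The receiving side would decode these eight values into a spectral frame.
latent = [0.3, -1.2, 0.0, 0.8, -0.5, 1.1, 0.2, -0.7]
client.send_message("/autocoder/latent", latent)
```

On the Max/MSP side, a [udpreceive 8012] object followed by [route /autocoder/latent] would pick up such messages, assuming the same port and address pattern.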

Speakers

Media related to this event

Workshop CoMo.education: Création de narrations collectives en son et en mouvement (Video, 26 September 2022, 00:35:01)
Contemporary Music Embodiment and Perception (Video, 14 June 2022, 00:29:35)
Biometrically Evolved Spaces for Music (Video, 17 May 2022, 00:29:29)
Computer Assisted Composition (Video, 17 May 2022, 00:26:12)
Corpos Sonoros (Video, 17 May 2022, 00:30:01)
Forum platform: current and future collaborative workflows for tech and music projects (Video, 17 May 2022, 00:27:54)
Electro-acoustic Mapping (Video, 17 May 2022, 00:24:05)
iZotope (Video, 14 June 2022, 00:34:21)
Asterismes (Video, 26 September 2022, 00:32:42)
workshop COMO.Education (Video, 4 April 2022, 01:09:21)
Garcia for Mubone Augmented Trombone: A Sound & Movement Performance with Gesture Following and Granular Synthesis (Video, 21 April 2022, 00:32:41)
Artificial Creativity - some AI-based composition techniques (Video, 21 April 2022, 00:34:42)
Voice processing, Improvisation, generativity and co-creative interactions (Video, 17 May 2022, 00:25:41)
Sound design and processing, Spatialization, Voice processing, Computer Assisted Composition, Improvisation, generativity and co-creative interactions, Musical interfaces, Gestural interactions (Video, 17 May 2022, 00:28:18)
Improvisation, generativity and co-creative interactions, Musical interfaces, Gestural interactions (Video, 17 May 2022, 00:26:54)
Neural Differential Equations for Sound Synthesis (Video, 17 May 2022, 00:18:08)
Quick presentation of Vowelizer by Greg Beller. Conclusions and debates (Video, 17 May 2022, 00:41:43)
Corpus-Based Spatial Sound Synthesis on the IKO Compact Spherical Loudspeaker Array (Video, 17 May 2022, 00:29:53)
