information

Type
Conference series, symposium, congress
performance location
Ircam, Salle Igor-Stravinsky (Paris)
duration
20 min
date
March 19, 2021

Neural style transfer applied to images has received considerable interest and has triggered many research activities aiming to use the underlying strategies for the manipulation of music or sound. While the many fundamental differences between sounds and images limit the usefulness of a direct translation, recent research in the Analysis/Synthesis team has demonstrated that an approach rather similar to the one used to manipulate painting style in images allows for quasi-transparent analysis/resynthesis of sound textures. Instead of working on 2D images, in the case of sound textures the convolutional networks work on the complex STFT.
The presentation will introduce the Xtextures command-line software that is available in the Forum and that allows using these techniques not only for the resynthesis of textures but also, in a more creative way, for the texturization of arbitrary sounds.
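To make the idea more concrete, the sketch below illustrates one possible texture loss of this kind: Gram-matrix statistics of CNN features computed over the complex STFT, optimised in the spirit of image style transfer. It is a hypothetical illustration only; the network, loss, and optimisation actually used in Xtextures are not specified here and may differ.

# Minimal sketch of a style-transfer-like texture loss on the complex STFT.
# Assumptions (not Ircam's documented method): a fixed, randomly initialised
# CNN as feature extractor and Gram-matrix statistics as the texture target.
import torch
import torch.nn as nn

N_FFT, HOP = 1024, 256

def complex_stft_features(signal):
    # STFT with real/imag parts stacked as 2 input channels: shape (1, 2, F, T).
    spec = torch.stft(signal, n_fft=N_FFT, hop_length=HOP,
                      window=torch.hann_window(N_FFT), return_complex=True)
    return torch.view_as_real(spec).permute(2, 0, 1).unsqueeze(0)

# Fixed random-weight CNN used only as a feature extractor.
torch.manual_seed(0)
features = nn.Sequential(
    nn.Conv2d(2, 64, kernel_size=(11, 11), padding=5), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=(11, 11), padding=5), nn.ReLU(),
)
for p in features.parameters():
    p.requires_grad_(False)

def gram(feat):
    # Gram matrix of the feature maps: channel-by-channel correlations.
    b, c, f, t = feat.shape
    flat = feat.view(c, f * t)
    return flat @ flat.T / (f * t)

def texture_loss(x, target_grams):
    # Distance between the Gram statistics of x and those of the target texture.
    g = gram(features(complex_stft_features(x)))
    return sum(((g - tg) ** 2).mean() for tg in target_grams)

# Resynthesise a texture from noise by matching its Gram statistics.
target = torch.randn(4 * 16000)          # placeholder: load a real texture here
with torch.no_grad():
    target_grams = [gram(features(complex_stft_features(target)))]

x = torch.randn(4 * 16000, requires_grad=True)   # start from noise
opt = torch.optim.Adam([x], lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = texture_loss(x, target_grams)
    loss.backward()
    opt.step()

In the same spirit, texturization of an arbitrary sound could be approached by starting the optimisation from that sound rather than from noise, although the exact procedure implemented in Xtextures is not documented here.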

From the same archive

From psychoacoustics to deep learning: learning low-level processing of sound with neural networks - Neil Zeghidour

Mel-filterbanks are fixed, engineered audio features which emulate human perception and have been used through the history of audio understanding up to today. However, their undeniable qualities are counterbalanced by the fundamental limita

March 19, 2021 18 min

Video

Deep Learning for Voice processing - Nicolas Obin, Axel Roebel, Yann Teytaut

Deep Neural Networks are increasingly dominating the research activities in the Analysis/Synthesis team and elsewhere. The session will present some of the recent results of the research activities related to voice processing with deep neur

March 19, 2021 32 min

Video

Towards helpful, customer-specific Text-To-Speech synthesis - David Guennec

The subject of automatic speech synthesis began to be popularised as early as the 1990s. Each of us has already had to deal with automatic answering machine voices that made us all suffer in the beginning. Today, however, the progress made

March 19, 2021 29 min

Video

Tools for creative AI and noise - Philippe Esling

We will present the latest creative tools developed by the RepMus team (ACIDS project), enabling real-time audio synthesis as well as music generation and production and synthesizer control, all in open-source code, as well as Max4Live and

March 19, 2021 20 min

Video

AI Round Table: Questions/Discussion

March 19, 2021 30 min

Video

Melodic Scale and Virtual Choir, Max ISiS - Grégory Beller

In this presentation, Greg Beller will present recent developments in the field of voice processing. Melodic Scale is a Max For Live device that automatically modifies a melodic line in real time, changing its

March 19, 2021 26 min

Video

Greg Beller, David Guennec, Nicolas Obin, Axel Roebel, Hugues Vinet. Round table

March 19, 2021 20 min

Video

Session IA - An overview of AI for Music and Audio Generation - Doug Eck

I'll discuss recent advances in AI for music creation, focusing on Machine Learning (ML) and Human-Computer Interaction (HCI) coming from our Magenta project (g.co/magenta). I'll argue tha

March 19, 2021 47 min

Video

Interaction with musical generative agents - Jérôme Nika

The Musical Representations team explores the paradigm of computational creativity using devices inspired by artificial intelligence, particularly in the sense of new symbolic musician-machine interactions. The presentation will focus in pa

March 19, 2021 21 min

Video
