Information

Type: Seminar / Conference
Venue: Ircam, Salle Igor-Stravinsky (Paris)
Duration: 01 h 01 min
Date: October 22, 2019

By leveraging state-of-the-art audio signal descriptors and recent developments in deep learning and generative models for structured prediction, this project aims to investigate questions of computational creativity and to explore some of the many applications of these technologies in the field of generative sound art.

In particular, the project investigates the integration of environmental sound analysis and recognition techniques with the latest generative machine learning models, to build a system that, when trained on a large corpus of suitably selected samples, can discover emergent patterns in a given audio input and transfigure them into something unexpected. This opens new artistic perspectives on the interaction between computer-generated sound systems and the surrounding environment, with the potential for “creative” yet coherent positive feedback loops.
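As a rough illustration of this analysis-then-generation loop, the sketch below extracts a mel-spectrogram descriptor with librosa and "responds" by shaping noise with the analysed envelope. This is only a minimal sketch under assumed names: the `analyse` and `generate` functions are hypothetical stand-ins, and a real system would replace `generate` with a trained generative model rather than envelope-shaped noise.

```python
# Minimal sketch of an analyse-then-generate loop (illustrative only).
# Assumptions not from the source: librosa for audio descriptors, and a
# trivial "generator" that reshapes noise with the analysed mel envelope
# in place of a trained generative model.
import numpy as np
import librosa

SR = 22050
HOP = 512

def analyse(y, sr=SR, n_mels=64):
    """Extract a mel-spectrogram descriptor of the incoming audio."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels, hop_length=HOP)
    return librosa.power_to_db(mel, ref=np.max)

def generate(descriptor, hop_length=HOP):
    """Placeholder generator: shape white noise with the analysed mel
    envelope (a real system would sample a learned model here)."""
    n_samples = descriptor.shape[1] * hop_length
    noise = np.random.randn(n_samples)
    envelope = np.repeat(descriptor.mean(axis=0), hop_length)
    envelope = (envelope - envelope.min()) / (np.ptp(envelope) + 1e-9)
    return noise * envelope

# Simulated environmental input: a tone buried in noise.
t = np.linspace(0, 2.0, int(2.0 * SR), endpoint=False)
environment = 0.3 * np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)

features = analyse(environment)   # discover patterns in the input
response = generate(features)     # "transfigure" them into new audio
print(features.shape, response.shape)
```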

The project will address the following core aspects (a brief classification sketch follows the list):

  • environmental sound analysis and recognition
  • audio classification applied to complex sound scenes
  • audio features and modeling for environmental sounds
  • machine learning for large-scale and structured data
  • a generative framework based on structured prediction techniques that exploit (multilevel) local structure in the data
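To make the classification and feature-modelling items above concrete, here is a toy sketch that is not the project's method: synthetic stand-ins for two environmental sound classes are embedded as time-averaged mel spectrograms and labelled with a nearest-centroid rule, where the project would use learned, far more capable classifiers.

```python
# Toy environmental-sound classification sketch (illustrative only).
# Assumptions not from the source: synthetic "classes" built from a tone
# and from white noise, librosa mel features, and a nearest-centroid rule
# standing in for a trained deep classifier.
import numpy as np
import librosa

SR = 22050

def mel_embedding(y, sr=SR):
    """Time-averaged mel spectrogram: a crude fixed-size descriptor."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=32)
    return librosa.power_to_db(mel, ref=np.max).mean(axis=1)

def make_example(kind, duration=1.0, sr=SR):
    """Synthetic stand-ins for two environmental classes."""
    t = np.linspace(0, duration, int(duration * sr), endpoint=False)
    if kind == "tonal":  # e.g. an alarm- or bird-like tone
        return np.sin(2 * np.pi * 880 * t) + 0.05 * np.random.randn(t.size)
    return np.random.randn(t.size)  # broadband, e.g. rain or traffic

# Build one centroid per class from a handful of examples.
classes = ["tonal", "broadband"]
centroids = {
    c: np.mean([mel_embedding(make_example(c)) for _ in range(5)], axis=0)
    for c in classes
}

def classify(y):
    """Assign the input to the nearest class centroid in mel space."""
    emb = mel_embedding(y)
    return min(classes, key=lambda c: np.linalg.norm(emb - centroids[c]))

print(classify(make_example("tonal")))      # -> "tonal"
print(classify(make_example("broadband")))  # -> "broadband"
```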
