Type: Scientific and/or technical conference
Location: Ircam, Salle Igor-Stravinsky (Paris)
Duration: 59 min
Date: June 2, 2012

Abstract: Interactive improvisation systems involve at least three cooperating, concurrent expert agents: machine listening, machine learning, and model-based generation. Machine listening may occur during the initial learning stage (off-line, or in real time in live situations) and during the generation stage as well, in order to align the computer's production with the current live input. Machine learning can be based on any statistical model that captures a significant signal or symbolic stream of features which can then be exploited in the generation stage. In particular, the OMax interactive computational improvisation environment will be presented.
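The learn-then-navigate architecture described in the abstract is, in the literature around OMax, typically built on the Factor Oracle automaton. The following is a minimal sketch of that idea only, not OMax's actual implementation; the function names (`build_oracle`, `improvise`) and the continuation probability `p_continue` are illustrative choices, not part of the system presented here.

```python
import random

def build_oracle(seq):
    """Incremental Factor Oracle construction over a symbol sequence.

    Returns (trans, sfx): trans[i] maps a symbol to a target state;
    sfx[i] is the suffix link of state i (sfx[0] == -1)."""
    trans = [dict() for _ in range(len(seq) + 1)]
    sfx = [-1] + [0] * len(seq)
    for i, sym in enumerate(seq):
        trans[i][sym] = i + 1          # internal transition to the new state
        k = sfx[i]
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i + 1      # external (forward) transition
            k = sfx[k]
        sfx[i + 1] = 0 if k == -1 else trans[k][sym]
    return trans, sfx

def improvise(seq, trans, sfx, length, p_continue=0.7, rng=random):
    """Generate by random walk: replay the original continuation with
    probability p_continue, otherwise jump back along a suffix link,
    recombining the learned material into a new variation."""
    out, i = [], 0
    while len(out) < length:
        at_end = i >= len(seq)
        if not at_end and (rng.random() < p_continue or sfx[i] < 0):
            out.append(seq[i])   # continue the original sequence
            i += 1
        else:
            i = max(sfx[i], 0)   # recombination jump via suffix link
    return out

notes = ["C", "D", "C", "E", "C", "D", "E"]
print("".join(improvise(notes, *build_oracle(notes), length=12,
                        rng=random.Random(0))))
```

The suffix links connect each state to the longest repeated suffix seen earlier, so a jump lands at a point that shares recent context with the current one; this is what makes the recombined output stylistically coherent with the learned material.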

Bio: Gerard Assayag is head of the Music Representation Research Group at IRCAM (Institut de Recherche et de Coordination Acoustique/Musique) in Paris, and head of the STMS (Sciences and Technologies of Music and Sound) Ircam/CNRS lab. Born in 1960, he studied computer science, music, and linguistics. In 1980, while still a student, he won research awards in "Art and the Computer", a national software contest launched by the French Ministry of Research, and in the "Concours Micro", a contest in computing in the arts using early micro-computers. In the mid-eighties he wrote the first IRCAM environment for score-oriented computer-assisted composition. In the mid-nineties he created, with Carlos Agon, the OpenMusic environment, which is currently the standard for computational composition and musicology. The concept behind OpenMusic is to provide a visual counterpart to major programming paradigms (such as functional, object-oriented, and logic programming), along with an extensive set of musical classes and methods, plus an original metaphor for representing musical time in its logical as well as chronological aspects. More recently, Gerard Assayag created, with other colleagues, the OMax computational improvisation system based on machine listening and machine learning, which has become a widely recognized reference in the field. His research interests center on music representation issues and include computer language paradigms, machine learning, constraint and visual programming, computational musicology, music modeling, and computer-assisted composition. His research results are regularly published in proceedings, books, and journals.


From the same archive

Introduction - Geoffroy Peeters

June 2, 2012 16 min


Audio descriptors: a major challenge for real-time composition

Abstract: One of the most forward-looking aspects of the instrument/machine relationship lies in developing the means for real-time acoustic analysis of instrumental and vocal sounds, known as audio descriptors. The number of…

June 2, 2012 56 min


VirtualBand, a MIR-approach to interactive improvisation - François Pachet

Abstract: This talk introduces a new music interaction system based on style modeling, in a MIR-oriented perspective called VirtualBand. VirtualBand aims at combining musical realism and quality with real-time interaction, by capturing esse…

June 2, 2012 54 min


Round table

June 2, 2012 44 min


Gestural Re-Embodiment of Digitized Sound and Music - Norbert Schnell

Abstract: Over the past years, music information research has elaborated powerful tools for creating a new generation of applications that redefine the boundaries of music listening and music making. The recent availability of affordable mo…

June 2, 2012 28 min


Playing with Music - Tristan Jehan

Abstract: For the past 60 years, machines have been involved in all aspects of music: playing, recording, processing, editing, mixing, composing, analyzing, and synthesizing. However, in software terms, music is nothing but a sequence of nu…

June 2, 2012 59 min


Interactive Exploration of Sound Corpora for Music Performance and Composition - Diemo Schwarz

Abstract: The wealth of tools developed in music information retrieval (MIR) for the description, indexation, and retrieval of music and sound can be easily (ab)used for the creation of new musical material and sound design. Based on automa…

June 2, 2012 44 min


MIR beyond retrieval: Music Performance, Multimodality and Education - Sergi Jorda

Abstract: Although MIR arguably did not start as a research discipline for promoting creativity and music performance, this trend has begun to gain importance in recent years. The possibilities of MIR for supporting musical creation and mus…

June 2, 2012 57 min



IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43

Opening times

Monday through Friday 9:30am-7pm
Closed Saturday and Sunday

Subway access

Hôtel de Ville, Rambuteau, Châtelet, Les Halles

Institut de Recherche et de Coordination Acoustique/Musique

Copyright © 2022 Ircam. All rights reserved.