Media related to this event

Mettre en temps une structure musicale : l'activité de composition de Voi(rex) par Philippe Leroux - Nicolas Donin, Jacques Theureau

14 April 2005 01 h 01 min

Mettre en temps une structure musicale : l'activité de composition de Voi(rex) par Philippe Leroux - Nicolas Donin, Jacques Theureau

14 April 2005 24 min

L'estimation de fréquences fondamentales multiples

12 May 2005 52 min

La harpe électroacoustique

4 February 2005 01 h 18 min

Utilisation de Modalys pour le projet VoxStruments, lutherie numérique intuitive et expressive - Nicholas Ellis, Joël Bensoam

17 October 2007 49 min

Présentation des travaux de l'équipe PdS dans le cadre du projet européen CLOSED : "Closing the Loop of Sound Evaluation and Design" - Olivier Houix

27 June 2007 01 h 12 min

Sparse overcomplete methods, matching pursuit and basis pursuit - Bob L. Sturm

11 July 2007 48 min

Transformations de type et de nature de la voix - Snorre Farner, Axel Roebel, Xavier Rodet

12 September 2007 01 h 07 min

Segmentations et reconnaissances automatiques de phonèmes de la voix, temps différé, temps réel - Pierre Lanchantin, Julien Bloit, Xavier Rodet

19 September 2007 01 h 13 min

Synthèse de la parole à partir du texte et construction d'une base de données d'unités de la voix - Christophe Veaux, Grégory Beller, Xavier Rodet

26 September 2007 01 h 00 min

Projet ECOUTE - Jerome Barthelemy, Nicolas Donin, Geoffroy Peeters, Samuel Goldszmidt

3 October 2007 01 h 12 min

Projet MusicDiscover - David Fenech Saint Genieys

10 October 2007 01 h 10 min

Projet CASPAR - Jerome Barthelemy, Alain Bonardi

24 October 2007 50 min

Projet CONSONNES 1ère partie - René Caussé, Vincent Freour, David Roze

21 November 2007 57 min

SuperCollider and Time


SuperCollider is an audio synthesis environment with a client-server architecture, which presents some problems in dealing with timing. This talk covers the various ways time is handled in SuperCollider on both the language (client) side and the synthesis engine (server) side. Issues discussed include Open Sound Control timestamps and NTP synchronization, coordination between real-time and non-real-time threads, synchronizing multiple SC servers, drift between network time and sample time, accounting for latency when sending commands to the server, and trade-offs in timing between sample-by-sample and block processing.
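The OSC timestamp mechanism the abstract mentions can be sketched in plain Python: OSC bundles carry a 64-bit NTP-style timetag (whole seconds since 1900 plus a 32-bit fraction), and a client compensates for transmission latency by stamping bundles a fixed margin in the future. The helper names and the 0.2 s margin below are illustrative assumptions, not details from the talk (the margin happens to match sclang's default `Server.latency`).

```python
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800

def ntp_timetag(unix_time):
    """Encode a Unix time as the 64-bit NTP-style timetag OSC bundles carry:
    32 bits of whole seconds since 1900, 32 bits of fractional seconds."""
    seconds = int(unix_time) + NTP_EPOCH_OFFSET
    fraction = int((unix_time % 1.0) * (1 << 32))
    return struct.pack(">II", seconds, fraction)

def osc_string(s):
    """OSC strings are NUL-terminated and zero-padded to a multiple of 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address):
    """A minimal OSC message: address pattern plus an empty type-tag string."""
    return osc_string(address) + osc_string(",")

def osc_bundle(unix_time, messages):
    """Wrap OSC messages in a '#bundle' stamped for execution at unix_time.
    Each bundle element is preceded by its byte length as a big-endian int32."""
    body = b"".join(struct.pack(">i", len(m)) + m for m in messages)
    return osc_string("#bundle") + ntp_timetag(unix_time) + body

# Client-side latency compensation: stamp the bundle a fixed margin ahead of
# "now" so the server can receive it before its scheduled execution time.
# 0.2 s is an illustrative margin, not a value taken from the talk.
LATENCY = 0.2
bundle = osc_bundle(time.time() + LATENCY, [osc_message("/status")])
```

Sending such a bundle over UDP to scsynth would have the server execute the command at the stamped time. Because the timetag is an absolute NTP time, client and server clocks must agree (the NTP synchronization issue the talk raises), and drift between that network time and the audio hardware's sample clock still has to be handled separately.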


James McCartney is the author of the audio synthesis and algorithmic composition programming environment "SuperCollider". He studied computer science and electronic music at the University of Texas at Austin, composed music for local theater, modern dance, and music performances, and performed with the group "Liquid Mice", which explored the boundaries of what one could get away with performing in Austin bars in the 1980s and '90s. He was a member of the Austin Robot Group, which explored robotics, cybernetics, and the arts. He worked for the NASA Astrometry Science Team on the Hubble Space Telescope project. He now lives in San Jose, California, and continues exploring sound.

Speakers

Information

Type
Scientific and/or technical conference
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
01 h 02 min
Date
21 November 2012

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43

Opening hours

Monday to Friday, 9:30 am to 7 pm
Closed Saturday and Sunday

Public transport access

Hôtel de Ville, Rambuteau, Châtelet, Les Halles

Institut de Recherche et de Coordination Acoustique/Musique

Copyright © 2022 Ircam. All rights reserved.