ABSTRACT
My exit talk marks the end of my musical research residency at IRCAM.
In it I will present an overview of the aesthetic ideas behind AudioGuide, software for concatenative synthesis written in Python and realized in collaboration with Norbert Schnell and Diemo Schwarz over the past nine months.
In order to preface my interest in composing within a concatenative framework, I will begin by proposing a way of listening to acoustic music that centers on thinking about instruments as concepts. That is, instruments are not simply the articulators of musical content, but internalized cognitive sound-systems whose limitations and particularities have become critical to shaping our perception, anticipation and contextualization of musical gesture, form and meaning. From within this instrument-as-concept model, I argue that concatenative algorithms can be seen as an extension of “instrumental” composition in which generalized gestural and timbral information is composed (the target) and then performed by an acoustically limited repertoire of sound-making (the database).
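To make this target/database relationship concrete, the following is a minimal sketch of the basic selection step behind concatenative synthesis, written in Python since AudioGuide itself is a Python program. It assumes each target segment and database unit is reduced to a shared feature vector (e.g. loudness and spectral centroid); the function name and data are illustrative, not AudioGuide's actual code.

```python
import numpy as np

def select_units(target_features, database_features):
    """For each target segment, choose the database unit whose
    feature vector is nearest in Euclidean distance."""
    selections = []
    for segment in target_features:
        distances = np.linalg.norm(database_features - segment, axis=1)
        selections.append(int(np.argmin(distances)))
    return selections

# Hypothetical data: 3 target segments and 5 database units,
# each described by (normalized loudness, spectral centroid in Hz).
target = np.array([[0.8, 2000.0], [0.2, 500.0], [0.5, 1200.0]])
database = np.array([[0.7, 1900.0], [0.1, 450.0], [0.9, 3000.0],
                     [0.5, 1100.0], [0.3, 700.0]])
print(select_units(target, database))  # -> [0, 1, 3]
```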
I will then provide a brief overview of the functionality of AudioGuide. One of my initial goals was to create a time-varying concatenative architecture capable of simultaneous selection, and I will demonstrate the strategic and computational methods that have been deployed to achieve this goal. In particular, I will present a subtractive spectral model that permits simultaneous unit selection, and give several audio examples showing the effect of this model using various databases and targets.
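As a rough illustration of how a subtractive model can yield simultaneous selections, here is a sketch of a greedy loop that layers database units, subtracting each chosen unit's magnitude spectrum from the target before the next choice is made. This is an assumption-laden simplification of the idea, not AudioGuide's implementation.

```python
import numpy as np

def select_simultaneous(target_spectrum, unit_spectra, max_units=4):
    """Greedily pick units that sound together: at each step choose
    the unit that most reduces the residual target energy, then
    subtract its spectrum (clamped at zero) and repeat."""
    residual = target_spectrum.astype(float).copy()
    chosen = []
    for _ in range(max_units):
        errors = [np.sum(np.maximum(residual - u, 0.0) ** 2)
                  for u in unit_spectra]
        best = int(np.argmin(errors))
        if np.sum(residual ** 2) - errors[best] <= 1e-9:
            break  # no unit reduces the residual any further
        chosen.append(best)
        residual = np.maximum(residual - unit_spectra[best], 0.0)
    return chosen
```

Because each selection is made against the residual rather than the full target, units chosen later fill in only the spectral energy that earlier units left uncovered, which is what allows several units to be selected for the same moment in time.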
In closing I will address the challenges of composing with concatenative models and show three strategies that we have developed to give the user better control over concatenated results. First, I will demonstrate a flexible method for data normalization which allows the user to dynamically expand, contract and/or invert the target’s features (sketched below). Second, I will show a method for restricting database selection in a time-varying manner in order to emulate “human” aspects of performance. Third, I will demonstrate a 2D browsable interface, visualized in CataRT, which permits the user to explore many different concatenated variations made with AudioGuide using the same target and database. This framework affords a better understanding of the influence of different audio features and program options.
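As an example of the first strategy, here is a minimal sketch of how a target feature might be dynamically expanded, contracted or inverted before matching; the parameter names and scaling scheme are hypothetical, chosen only to illustrate the kind of control described above.

```python
import numpy as np

def remap_feature(values, scale=1.0, invert=False):
    """Normalize a target feature to 0..1, optionally invert it,
    then expand (scale > 1) or contract (scale < 1) it around 0.5."""
    span = values.max() - values.min()
    v = (values - values.min()) / (span if span else 1.0)
    if invert:
        v = 1.0 - v
    return np.clip(0.5 + (v - 0.5) * scale, 0.0, 1.0)

# A loudness curve, inverted and exaggerated: quiet target moments
# now select loud database units, and vice versa.
loudness = np.array([0.1, 0.4, 0.9, 0.6])
print(remap_feature(loudness, scale=2.0, invert=True))
```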
BIOGRAPHY
The music of Ben Hackbarth is dedicated to the combination of instruments and electronic sound. His electro-acoustic compositions revolve around the timbres, gestures and acoustical properties of western instruments. By mapping notions of instrumental tradition, technique, virtuosity and semantic identity onto electronic sound, he seeks to create a hyper-extended instrumental space that emphasizes the perception of boundaries to generate friction and form.
Ben is currently a Ph.D. candidate at the University of California, San Diego, where he studies composition with Roger Reynolds. At UCSD he has also worked with Philippe Manoury and Miller Puckette, and obtained a master’s degree while studying with Chaya Czernowin. He holds a bachelor’s degree in composition from the Eastman School of Music, where he studied with Allan Schindler, Bob Morris, Martin Bresnick, Steven Stucky and Christopher Rouse. Ben is also a composer and researcher at the Center for Research in Computing and the Arts (CRCA), where he has collaborated with other artists to create multimedia installations with real-time graphics, sound, computer vision and motion tracking.
He has had performances by the Arditti String Quartet, Ensemble SurPlus, the Collage New Music Ensemble, the Kenners and the Wet Ink Ensemble. Ben’s music can be heard on CD releases by SEAMUS and Carrier Records. In February, the Ensemble InterContemporain will perform Crumbling Walls and Wandering Rocks as part of their 2011 season.