Went to an interesting talk by George Tzanetakis earlier in the week at the DMRN meeting at Queen Mary. He was discussing how systems such as the Kinect could be used to extend the performance of an acoustic instrument by adding gesture recognition to control the electronic post-processing of a sound. Also saw a performance along similar lines by Imogen Heap a few weeks ago.
This got me thinking about how we explore sound in improvisation. I do a lot of “free improvisation” on the bassoon, and an interesting aspect of this is how I explore transformations of the current sound whilst playing, without actually sounding those transformations. One aspect of this is what we might term immanence, that is, the feeling of a new sound “on the lips” before it is actually made. My approach to free improv is primarily textural: finding musical textures that fit alongside the other improvisers in the group, or that give the developing improvisation a new, sometimes radically new, direction. By moving a key on the instrument, or adjusting the pressure on the reed, I can start to feel when a sound is about to “break” into another sound, and get some sense of what that sound is likely to be: whether it is going to be a rougher sound, or whether it is about to break out into a pure, high harmonic, or whatever.
This sense of immanence is largely absent from interfaces for electronic instruments. Whilst many kinds of playing surfaces and unusual interfaces exist, they offer the player little pre-aural feedback about the sound-quality they would move into by moving in a particular direction in the sound-space of the system generating the sound. Creating such interfaces, and thinking about how to provide such immanence, would make an interesting research project.
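One way to picture what such an interface might compute: silently render the sounds at neighbouring points in the synthesis parameter space, measure a timbral descriptor of each, and report how the sound would change if the player moved in each direction, before anything is made audible. A minimal sketch in Python, where the two-parameter FM sound-space, the spectral-centroid “brightness” measure, and all function names are my own illustrative assumptions rather than any existing system:

```python
import numpy as np

SR = 22050  # sample rate in Hz for the silent, offline renders


def render(carrier_hz, mod_index, dur=0.25):
    """Silently render a short tone at one point in a hypothetical
    two-parameter FM sound-space (carrier frequency, modulation index)."""
    t = np.arange(int(SR * dur)) / SR
    modulator = np.sin(2 * np.pi * (2.0 * carrier_hz) * t)  # 2:1 ratio
    return np.sin(2 * np.pi * carrier_hz * t + mod_index * modulator)


def spectral_centroid(x):
    """A crude 'brightness' descriptor: magnitude-weighted mean frequency
    of the signal's spectrum."""
    mags = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / SR)
    return float(np.sum(freqs * mags) / np.sum(mags))


def preview_directions(carrier_hz, mod_index, step=0.5):
    """For each direction of movement in the parameter space, report the
    change in brightness the player would hear if they moved that way.
    All of this is computed without any sound being made audible."""
    here = spectral_centroid(render(carrier_hz, mod_index))
    directions = {
        "+carrier": (20.0, 0.0),
        "-carrier": (-20.0, 0.0),
        "+index": (0.0, step),
        "-index": (0.0, -step),
    }
    return {
        name: spectral_centroid(render(carrier_hz + dc, mod_index + dm)) - here
        for name, (dc, dm) in directions.items()
    }
```

The numbers returned by `preview_directions` could then drive a visual or haptic display, so that the player feels which way the sound is about to “break” before committing to it; a real system would replace the toy FM renderer with whatever synthesis engine is actually in use.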