Unrealtime concept and interface design

Unrealtime's distinct approach lies in its 'out-of-time' process of music-making, creating and exploring results that lie beyond the scope of an acoustic improviser's real-time recall of physically stored mechanical gestures, or of a composer's invention in suspended time. As the process embodies both the learning of audio gestures and their reformation into new syntactical relationships, is it possible to spur a learning process of embodiment, using notational forms of spatiotemporal indeterminacy, in which the performer can model the behaviour of the Unrealtime interface? From the reverse perspective, can a fixed-notation transcription of an Unrealtime audio composition reveal compositional forms that are mutable and externally determined?


Expanding on current and past practice in gesture-based digital audio improvisation, Unrealtime's interface design provides fast, highly reactive access to multiple audio timelines, yielding a high diversity of output audio gestures from a minimum of input gestures by the performer. The resulting improvised audio collages spawn new compositional practices that seek to model and challenge the performer's gestural behaviour through a hybrid notational system combining fixed parts with elements of directed improvisation. Further collaborative possibilities have been explored through networked improvisation.
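To make the few-inputs-to-many-outputs principle concrete, the following is a minimal sketch, assuming a toy representation in which several pre-recorded timelines are stored as sample arrays and a single scalar gesture fans out into a different playhead position on each of them. All names and parameters here (make_timelines, collage_step, the grain length, the per-timeline offset scheme) are illustrative assumptions for this sketch, not Unrealtime's actual implementation:

```python
import numpy as np

RATE = 44100          # sample rate in Hz (assumed for the sketch)
GRAIN = RATE // 10    # 100 ms segment taken per timeline per gesture

def make_timelines(n=4, seconds=5, seed=0):
    """Stand-ins for pre-recorded audio timelines (noise bursts here)."""
    rng = np.random.default_rng(seed)
    return [rng.uniform(-1, 1, seconds * RATE).astype(np.float32)
            for _ in range(n)]

def collage_step(timelines, gesture):
    """One input gesture (a scalar in 0..1) indexes a different point in
    every timeline, so a single performer action yields several output
    segments at once."""
    out = []
    for i, tl in enumerate(timelines):
        # Offset each timeline's playhead so one gesture fans out
        # into distinct positions per timeline.
        pos = int((gesture + i / len(timelines)) % 1.0 * (len(tl) - GRAIN))
        out.append(tl[pos:pos + GRAIN])
    return np.concatenate(out)  # naive splice: segments end to end

if __name__ == "__main__":
    tls = make_timelines()
    collage = np.concatenate([collage_step(tls, g)
                              for g in (0.1, 0.7, 0.3)])  # three gestures
    print(f"{len(collage) / RATE:.1f} s of collage from 3 input gestures")
```

Even this naive splicing illustrates the asymmetry the design aims for: each single input gesture produces several output segments drawn from distinct points across the timelines.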