
A principled approach to developing new languages for live coding

Samuel Aaron
University of Cambridge, Computer Laboratory
[email protected]

Alan F. Blackwell
University of Cambridge, Computer Laboratory
[email protected]

Richard Hoadley
Anglia Ruskin University, Digital Performance Labs
[email protected]

Tim Regan
Microsoft Research
[email protected]

    ABSTRACT

This paper introduces Improcess, a novel cross-disciplinary collaborative project focussed on the design and development of tools to structure the communication between performer and musical process. We describe a 3-tiered architecture centering around the notion of a Common Music Runtime, a shared platform on top of which inter-operating client interfaces may be combined to form new musical instruments. This approach allows hardware devices such as the monome to act as an extended hardware interface with the same power to initiate and control musical processes as a bespoke programming language. Finally, we reflect on the structure of the collaborative project itself, which offers an opportunity to discuss general research strategy for conducting highly sophisticated technical research within a performing arts environment, such as the development of a personal regime of preparation for performance.

    Keywords

Improvisation, live coding, controllers, monome, collaboration, concurrency, abstractions

1. INTRODUCTION

The Improcess project aims to create new tools for performing improvised electronic music in a live setting. The key goal of these tools is to structure the communication between the performer and a suite of concurrently executing musical processes. This fits within the broad genre known as live coding, where the performer writes and manipulates a computer program to generate sound. The Improcess project has started with a specific focus on a particular technical architecture and research strategy. The technical starting point has been to explore the potential for live coding performance combining domain-specific music programming languages together with general purpose musical interface devices, such as the monome. Figure 1 shows a recent performance by the first author (shown on the left), in which a monome is used on stage together with a laptop running live coded software which is made visible to the audience via a large projection of the laptop's screen.

Figure 1: The (tones) performing live at the European Ruby on Rails conference in Amsterdam, 2010

The research strategy of Improcess has emphasised the creation of a cross-disciplinary team of collaborators, rather than the more typical historical development of live coding systems, in which an individual developer works within an interdisciplinary context. We have also attempted to structure the project with a specific emphasis on the development of a reflective performance practice. The remainder of the paper discusses each of these aspects in turn: first our approach to experiments with the monome and with domain-specific languages (DSLs). Next, we address the architecture, the implementation, and the integration of these components into a personal regime of preparation for performance. We also reflect on the structure of the collaborative project itself, which offers an opportunity to discuss general research strategy for conducting highly sophisticated technical research within a performing arts environment.

2. THE END-USER APPROACH TO DOMAIN SPECIFIC LANGUAGES

In contrast to previous research in live coding, we start from a technical motivation related to the design and implementation of DSLs, languages specialised for creating particular kinds of application [15], and the design of languages for end-user programmers, people having no formal training in programming [18].


The languages that have been developed for live coding in the past could all be classed as DSLs, although this terminology is not necessarily used. Some have also been designed according to principles that are well-known in end-user programming (EUP) - for example, the visual syntax of Max/MSP. However, many live-coders to date have been experienced programmers, rather than musicians who acquired coding skills as a way to advance their musical practice.

In the Improcess project, we believe that there is an advantage to be obtained by drawing on broader research addressing DSLs and EUP from contexts outside of music. We have noted in the past that live coding can be an interesting object of study for researchers in EUP [10]. However, in this project we draw lessons in the other direction. In particular, we consider the cognitive ergonomics of language design, and of programming environments. This leads us to consider the tools in the programming environment, alternative types of syntax (including options for visual diagrammatic syntax as well as textual syntax), and also the underlying computational model. As an example of the diversity of tools, syntax and computational models that are considered in EUP research, the spreadsheet is often analysed as a DSL for the accounting domain. The tools are those for inspecting, navigating and modifying the visual grid. The syntax is the combination of the diagrammatic grid with formulae that refer to that grid, and the computation model is a form of declarative constraint propagation.

The Cognitive Dimensions of Notations (CDs) is a useful framework in which to consider the interaction of these different factors [12]. When designing new DSLs and programming environments, it is possible to optimise for different criteria - for example, making languages that are very easy to change (low viscosity) or that occupy small amounts of screen space (low diffuseness). All of these decisions involve tradeoffs, however, that can reduce the usability of the system in other respects. Duignan has in the past used CDs productively to analyse studio technology such as Digital Audio Workstations [14]. However, in addition to his concerns, we are also interested in abstraction gradient - is it possible for newcomers to start producing music without a large prior investment of attention? Some elegant computational models preferred by computer scientists are highly abstract, to an extent that discourages casual use. This is another trade-off that we consider explicitly.

Finally, our group has in the past explored the complex relationship between direct manipulation and programming abstractions [8]. In a performance context, direct manipulation allows an audience to perceive a direct relationship between action and effect. Programming languages, in our analysis, are essentially indirect - indeed, this is a fundamental tension of live coding. We hope that the monome can be used, not only as a device to trigger events (direct manipulation), but to modify the structure of musical processes in a way that maintains, enhances or interrogates this fundamental tension.

3. INTEGRATING CONCRETE INTERACTION

Many live-coders use peripheral devices for data capture and control. However, the software architectures tend to be influenced by conventional synthesis concerns, rather than being driven by the interaction devices, since live coding interaction is largely carried out via the laptop keyboard. We were interested in the opportunities that would arise by taking a live coding approach to a specific music interaction device, and using this as a fundamental element of the performance system being developed. We therefore made the decision to incorporate a concrete performance interface into the technical architecture that we are developing.

We are currently focussing on the monome, an interaction device consisting of 64 buttons arranged in an 8x8 grid on top of a box (a monome is visible between the two laptops in figure 1). Each button contains a concealed LED which allows them to be individually illuminated. Button presses are communicated via a serial link to the host computer, where they may trigger events within executing processes. The computer can illuminate the LEDs either individually, by row or column, or all 64 buttons. The monome only produces and responds to these simple data; there is no hard-wired or audio functionality. The monome's design is open-source, making it an ideal platform for free modification, rapid prototyping and experimentation [24]. Although the monome is described by its makers simply as a general purpose adaptable, minimalist interface, the list of published applications demonstrates that it is mainly popular as a music controller. The videos provided by the makers (for instance, [5]) most often adopt a control layout in which horizontal button rows control ordering in time and vertical columns either control pitch or trigger some selection from a variety of samples.

As a potential tool for the live coding performance context, we believe that the monome is complementary to the qwerty keyboard, offering a number of specific advantages. Its visible physical presence makes the actions and effects of interaction evident to the observer. Unlike conventional text programming languages, it offers an extremely simple interface onto which musical patterns may be mapped as shapes. The set of button triggers enables responsive and direct communication with the musical environment. Finally, the embedded LEDs provide feedback, allowing the software environment to communicate aspects of its current state back to the performer and audience. This allows the monome to support a range of interactive styles, from a simple set of button triggers to a sophisticated low-resolution GUI.

The majority of monome applications to date have been constructed using the visual dataflow language Max/MSP (56 out of the 68 applications listed on the monome community site). Max/MSP provides an excellent entry point to programming for musical end-users, but it lacks a number of important software engineering capabilities, which limit the ability of users to develop and maintain complex and sophisticated applications. For example, the visual format is not compatible with the version control tools that are essential for collaborative development. There is no framework to support the specification of unit tests needed for maintenance of robust systems. Finally, the only abstraction mechanism is a form of modularity through the creation of sub-patches. It is not possible to create abstractions capable of code-generation (i.e. higher-order patches), which we believe to be a key capability for exploring new live coding practices. Our proposed architecture and implementation strategy provides an alternative framework that directly addresses these issues.

4. ARCHITECTURE

The Improcess architecture extends the traditional separation of audio synthesis technology from programming language implementation to explicitly consider a suite of bespoke interface components. This approach can support multiple alternative DSLs and external interfaces by providing a common run-time on top of which these clients may be designed and built. By sharing a common run-time environment, these clients can therefore inter-operate, allowing a performer to create and combine a set of specifically designed interfaces for a given musical task. This separation of the traditional language implementation into system and client aspects allows for explicit design decisions separating those system concerns that don't change from one work or performance to the next (such as abstractions representing process and sound) from user concerns that must be changeable at any time (such as the definition of new virtual instruments and their particular control interfaces).

Figure 2: The Improcess architecture - three tiers, with the audio synthesis engine at the bottom, the Common Music Runtime (with internal clients) above it, and external clients (DSL interfaces, virtual instruments, and devices such as the monome and a 3D mouse) connected via OSC, direct connections and various protocols.

The Improcess architecture consists of three tiers, as illustrated in figure 2. At the bottom tier we have chosen to use the SuperCollider server as the audio synthesis engine. This provides an efficient, flexible and real-time-capable foundation, and allows us to focus on interaction and language design issues rather than audio processing. Controlling and manipulating the audio synthesis engine is the Common Music Runtime environment (CMR), which provides a suite of abstractions specifically designed for the creation and manipulation of musical processes and sound synthesis. Direct communication between the CMR and the SuperCollider server is defined in terms of Overtone [3], an open source project initiated by Jeff Rose with significant contributions by the first author.

Overtone, and the CMR, are implemented in Clojure, which was chosen because it provides a firm foundation for language experimentation and design. Clojure is an efficient functional language with an emphasis on immutability and concurrency, as well as full wrapper-free access to the comprehensive Java ecosystem, highly flexible lisp syntax, and meta-programming facilities that are currently best in class.
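
As a brief illustration of why these properties matter for a music runtime (a generic Clojure sketch, not CMR code): values are immutable by default, and shared mutable state is confined to explicit references such as atoms.

    ;; Plain data is immutable: "changing" a map yields a new map.
    (def note {:pitch 60 :amp 0.5})
    (def louder (assoc note :amp 0.9))   ; `note` itself is unchanged

    ;; Shared state lives behind an explicit reference; updates are
    ;; atomic, so concurrent musical processes cannot corrupt it.
    (def tempo (atom 120))
    (swap! tempo + 10)                   ; other threads now observe 130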

The top layer of the architecture is primarily meant for end-user interaction. This is via a suite of composable modular interfaces, which we call virtual instruments, at a level of complexity that can include complete DSLs. Each instrument has two aspects: an interface and a client implementing the required logic. Clients may either be implemented as a separate external process or as a concurrent extension to the CMR. The interaction with these instruments may either be via a conventional programming environment such as a GNU/Emacs buffer, or via a physical device such as the monome. These instruments then communicate with the CMR via the appropriate protocol in order to create and manipulate musical processes.
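
The interface/client split described above could be captured in Clojure along the following lines; this is a hypothetical sketch, and all names are illustrative rather than the actual CMR API:

    ;; A virtual instrument couples an interface (receiving performer
    ;; actions) with a client (forwarding messages to the CMR).
    (defprotocol VirtualInstrument
      (handle-input [this event])   ; e.g. a button press or typed form
      (send-to-cmr  [this msg]))    ; e.g. trigger or modify a process

    ;; A monome-backed instrument would implement the protocol by
    ;; translating grid presses into CMR messages.
    (defrecord MonomeInstrument [connection]
      VirtualInstrument
      (handle-input [this event]
        (send-to-cmr this {:action :trigger :source event}))
      (send-to-cmr [this msg]
        (println "to CMR via" connection ":" msg)))  ; stand-in transport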

A frequent risk in the design of abstract architectures is the potential to disconnect from practical concerns within the application domain [9]. This can result in a design that may appear computationally powerful, yet not offer significant practical benefit for its intended purpose. Another potential risk is the efficacy of the chosen technologies. For example, a given programming language might be able to express the desired operations yet not offer the performance semantics necessary to execute the operations within the required constraints.

We therefore took a strategic decision at the outset of the project to explore the interactive potential of our proposed software architecture by taking a number of established music applications for the monome, and re-implementing them within the proposed architecture using Clojure. Initially this has consisted of the re-implementation of monome prior-art, including applications such as a sample looper, boiingg, Blinkin Park and Press Cafe. This also allowed us to explore some key technical parameters in the context of a mature interactive music application, rather than encountering them only while interacting with exploratory prototypes.

5. ABSTRACTION DESIGN

A useful and sufficient set of musical abstractions is a challenge for musical systems [7] and key to the success of the CMR. The programming abstractions found in SuperCollider are also widely available in other programming languages [19], and so it is clearly possible to reproduce equivalent semantics via a number of alternative methods. Hence we were left with the question of which musical abstractions to present to the user in the coding layer. Typical SuperCollider programs contain abstractions such as synthesisers, milliseconds, random numbers and routines. Would it be useful to also offer notions such as tunings, scales, melodies, counterpoint, rhythm and groove?

Such abstractions allow expressions that would have occurred in terms of the original complex sub-parts to be articulated more succinctly and accurately. Through re-developing prior-art monome applications we were able to develop a vocabulary of abstractions pertaining to the monome, shared across the group. We feel that this has provided a marked increase in the musical relevance of our discussions, allowing much more subtle and precise notions of novel monome applications to be considered.

The main goal of the CMR is to present a suite of abstractions useful to the general task of building music performance processes. These can then be used and shared across a given set of clients to create new forms of instrument. The CMR currently supports many of the standard SuperCollider abstractions, in addition to others found in alternative environments, such as Impromptu's notion of recursion through time [23], whereby recursive function calls are guarded by timed offsets.
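
In Clojure, recursion through time takes roughly the following shape. This is a minimal sketch: `apply-at` stands for a timestamped scheduler (Overtone provides one of a similar shape), and `play-beat` is a hypothetical sound-triggering function.

    ;; The function does some work at logical time t, then re-schedules
    ;; itself at t + interval rather than looping. The timed offset
    ;; guards the recursion, so the process can run indefinitely
    ;; without blocking a thread or blowing the stack.
    (defn metro-loop [t interval-ms]
      (play-beat t)                               ; hypothetical trigger
      (apply-at (+ t interval-ms)                 ; scheduler hand-off
                metro-loop (+ t interval-ms) interval-ms []))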

An important aspect to consider with regard to abstractions is their symbolic representation. Within a procedural environment it is possible to automatically create new abstractions as part of the normal execution of the system. The ease with which this is possible is very much dependent on the syntactic structure of the representation. One of the main advantages of using a lisp as the syntactic framework is that it is very amenable to code generation. This is made possible because the syntax is represented by the basic data structures of the language (lists, maps and vectors in Clojure's case), and so generating and manipulating new code is as trivial as generating and manipulating these basic data structures.
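
For instance (a toy sketch, not CMR code), a program can assemble a new definition as an ordinary list and then evaluate it; the `definst` shape is borrowed from Overtone, and `sin-osc` is assumed to be in scope:

    ;; Build the code for a new instrument as plain data.
    (defn make-inst-form [inst-name freq]
      (list 'definst inst-name []
            (list '* 0.2 (list 'sin-osc freq))))

    (make-inst-form 'beep 440)
    ;; => (definst beep [] (* 0.2 (sin-osc 440)))

    ;; Evaluating the generated form would then define the instrument:
    ;; (eval (make-inst-form 'beep 440))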

As an example of this, consider the CMR's synthesiser abstraction, which comes directly from the notion of a synth in SuperCollider's server implementation. This is essentially a directed graph of unit generators such as oscillators and filters. Consider the bubbles synth described in SuperCollider's documentation. In listing 1 we have the SuperCollider syntax for this synth, which should be compared with Overtone's version in listing 2. Notice that conceptually they are very similar, in addition to being represented with a similar number of characters. The Overtone syntax does not focus specifically on typing efficiency [22], but it is in a form that is relatively straightforward to generate automatically. This opens up a number of exciting possibilities where synthesiser definitions may be automatically generated by bespoke procedures driven by client interfaces. For example, we expect to create monome applications that allow the performer to both design and instantiate new synthesisers live as part of the performance.

Listing 1: SuperCollider bubbles representation

    SynthDef("bubbles", {
        var f, zout;
        f = LFSaw.kr(0.4, 0, 24, LFSaw.kr([8, 7.23], 0, 3, 80)).midicps;
        zout = CombN.ar(SinOsc.ar(f, 0, 0.04), 0.2, 0.2, 4);
        Out.ar(0, zout);
    });

Listing 2: Overtone bubbles representation

    (definst bubbles []
      (let [root (+ 80 (* 3 (lf-saw :kr 8 0)))
            glis (+ root (* 24 (lf-saw :kr 0.4 0)))
            freq (midicps glis)
            src  (* 0.04 (sin-osc freq))]
        (comb-n src :decay-time 4)))
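
For readers unfamiliar with Overtone, a hedged usage note: with Overtone loaded, evaluating listing 2 defines bubbles as a playable instrument, so that something like the following would start and stop it:

    (bubbles)   ; start an instance of the synth
    (stop)      ; silence all running synths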

6. IMPLEMENTATION ISSUES

Music processing architectures often introduce subtle and demanding engineering issues relating to the management of processor load, timing mechanics, and latency. A detailed discussion of these engineering issues is not appropriate to this overview paper, but in this section we record some of the general implementation challenges that we have encountered while developing the CMR, both as a warning to newcomers, and to confirm findings reported by other developers in the past.

6.1 Event stream architecture

One of the main roles of the CMR is to convert performer actions such as button presses into generated audio that is specified in terms of musical processes. From an implementation perspective a key consideration is the internal representation of these actions and processes, particularly with respect to time [16]. Existing music synthesis architectures are often event based, but as already noted, they do not support higher-order descriptions to the extent offered by functional programming languages, i.e. function composition. But in a functional language, it might also be considered more natural to represent action in terms of a function. We therefore had to decide whether to use functions or events as the first-class internal representation of musical actions.

A key concern within our process-based model was support for thread concurrency. Function calls are typically a synchronous mechanism, with function calls executing within the current thread. Events are typically asynchronous, resulting in concurrent execution by a separate thread. For this reason, we chose an event model of musical actions, to better support our overall concern with concurrent processes in music. This also plays to Clojure's strengths, given its novel concurrency semantics, which are currently state of the art relative to other functional languages. In particular, events can then be represented by immutable data structures. This means that they may not be modified once created, allowing them to be freely passed around from one thread to another without risk of being modified by one thread whilst being simultaneously used by a second - a common cause of error in concurrent systems, and one that would be undesirable in a live coding context.
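
A minimal sketch of this idea in plain Clojure (the event fields and the agent-based consumer are illustrative, not the CMR's actual representation):

    ;; An event is just an immutable map.
    (def press {:source :monome :x 3 :y 5 :state :down :time 1024})

    ;; "Modifying" an event produces a new value; `press` is untouched,
    ;; so any thread already holding it is unaffected.
    (def shifted (assoc press :y 6))

    ;; Hand-off to a concurrently executing consumer is therefore safe
    ;; by construction.
    (def consumer (agent []))
    (send consumer conj press)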

Events also provide an intuitive computational approach to combining musical actions. For example, we may combine a series of events into a stream which is ordered in time. We may then consider flowing the stream of events through a series of functions, in a similar manner to that in which we might flow an audio stream from an electric guitar through a series of effects pedals. As in other audio processing systems such as Max/MSP, this patching together of streams of events through functions also opens up the possibility of forking and merging given streams. We can therefore use a source stream to create a number of new streams which contain an identical series of events but continue on their own path. One application of this would be to generate a chord with each note played by a different synthesiser yet triggered by one root event.
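
The chord example might look as follows in a simple functional sketch, where a stream is modelled as a handler function and the synths are stand-in printers; none of these names are the CMR API:

    (defn synth-a [e] (println "Synth A plays" e))   ; stand-in triggers
    (defn synth-b [e] (println "Synth B plays" e))

    ;; Forking: one incoming event is delivered to several handlers.
    (defn fork [& handlers]
      (fn [event] (doseq [h handlers] (h event))))

    ;; Mapping: transform each event before passing it downstream.
    (defn mapped [f handler]
      (fn [event] (handler (f event))))

    ;; One root event becomes a two-note chord, the second note a
    ;; major third (4 semitones) above the first.
    (def chord
      (fork synth-a
            (mapped #(update % :note + 4) synth-b)))

    (chord {:note 60 :velocity 90})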

For example, consider Figure 3, which represents a simple event stream setup for sending basic monome keypress events to two separate synthesisers A and B. In this example we are taking the basic events from the monome and adding them to a buffer, which is then forked to two separate streams - one which is connected directly to synthesiser B, and another which is mapped through a function to modify the parameters of each event and then sent to synthesiser A. This approach is extremely flexible, allowing the performer to define arbitrary modifications to a given input stream. Provided the performance semantics of the mapping functions are within acceptable boundaries, the end-to-end latency of the event stream should be acceptable for live performance.

Figure 3: A typical monome event stream configuration - monome events are buffered and forked, flowing directly to Synth B and through a mapping function to Synth A.

6.2 Performance and timing

Prior to the Improcess project the first author had implemented a monome application framework, monomer, using a generic programming language [4]. Monomer was intended as a general purpose toolkit for building monome applications which interfaced with external audio synthesis software via a basic MIDI interface. The result was sufficiently capable to build tools such as 8track (a Roland X0X-style MIDI step sequencer with 8 instruments/tracks [2]), but monomer suffered from two technical problems that posed considerable obstacles to building more sophisticated applications. First was the issue of performance. Even applications with minimal logic used an excessive amount of CPU. Whilst this might be acceptable for basic applications, it meant that adding any more complexity soon saturated the machine. Secondly, monomer's timing mechanics were built on top of Ruby's kernel method sleep. However, variable delay in the process scheduler meant that the actual sleep time was greater than specified, and resulted in poor synchronisation with external rhythmic sources.

These experiences informed the Improcess architecture and implementation. Precise timing of event triggering is handled by SuperCollider, which uses a prioritised real-time thread rather than Ruby's non-realtime sleep operation. Clojure's execution performance far exceeds that of Ruby, and is able to further optimise particular code paths by adding type hints. Within our new architecture the main performance issue currently faced is the variable latency of message streams within the CMR. This is not always acceptable within a performing context (it may result in stuttering or jitter of audio output, or delayed response to button presses). This is a general issue of modern computing architectures, due to the fact that time is often abstracted away [17]. In our specific case it may be due to low-level mechanisms such as the JVM garbage collector (GC) halting all executing threads during the compaction phase, or intermediate IO buffers interrupting the flow of events. In most cases these issues are resolved by fine-tuning the GC, scheduling events ahead of time where possible, and the removal of blocking event calls. However, this is not always an option - particularly in a situation whereby the performer wishes to trigger an immediate musical event.
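
The difference between sleep-based and timestamp-based timing can be illustrated with a small Clojure sketch (our own construction, not monomer or CMR code): sleeping for a relative interval accumulates scheduler jitter on every tick, whereas waiting until an absolute target time does not.

    ;; Relative timing: each tick inherits the error of the last, so
    ;; drift accumulates - the problem monomer hit with Ruby's sleep.
    (defn sleep-loop [interval-ms n]
      (dotimes [_ n]
        (Thread/sleep interval-ms)))

    ;; Absolute timing: each tick waits until start + i * interval, so
    ;; jitter on one tick never carries over to the next.
    (defn absolute-loop [interval-ms n]
      (let [start (System/currentTimeMillis)]
        (doseq [i (range 1 (inc n))]
          (let [target (+ start (* i interval-ms))
                wait   (- target (System/currentTimeMillis))]
            (when (pos? wait) (Thread/sleep wait))))))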

    7. DEVELOPING A PRACTICE REGIME

A key concern of this project has been to maintain a research focus on live coding as a performance practice, rather than simply a technical practice. An explicit goal of the research was for the first author, who was already a highly competent software developer and computer science researcher, to acquire further expertise as a performer. We build on research previously reported at NIME, including Nick Collins' (alias Click Nilson) discussion of live coding practice [21] and Jennifer Butler's discussion of pedagogical etudes for interactive instruments [13]. Butler's advice could certainly be applied to monome performers, as a method to develop virtuosity on that particular controller. However, the notion of virtuosity for a programmer (or composer, as in Nash's work [20]) is more complex.

As Click Nilson reports of experiments in reflective practice by his collaborator Fredrik Olson:

    I [Olson] feel I'd have to rehearse a lot more to be able to do abrupt form changes or to have multiple elements to build up bigger structures over time with. I sort of got stuck in the A of the ABA form. Yet we also both felt we got better at it, by introducing various shortcuts, by having certain synthesis and algorithmic composition tricks in the fingers ready for episodes, and just by sheer repetition on a daily basis. [21], p. 114.

Nilson suggests a series of live coding practice exercises, which he warns must be approached in a reflective manner. As he notes,

    it would be a Cagean gesture to compose an intently serious series of etudes providing frameworks for improvisation founded on certain technical abilities (ibid, p. 116).

Despite Click Nilson's love of Cagean gestures, we agree that there is potential for a reflective approach to live coding practice to inform the development of etudes for the live coder. Others have noted this; for example, Sorensen and Brown [22] describe the problem of being able to physically type fast enough, and suggest that technical enhancements such as increased abstraction and editor support are the most important preparations for effective performance.

Our collaborative project between senior computer scientists and music academics has encouraged a perspective that extends beyond virtuosity as a purely technical accomplishment, to the artistic engagement of a practised performer with a live audience. This resulting change with respect to the intellectual scope of the first author's prior experience of DSL research can be compared to recent attention in computer science, and especially within HCI, to the value of a critical technical practice [6] that integrates critical and philosophical perspectives into computing. In the case of our own work, we consider performance as a category of human experience that does not arise naturally from technical work, and hence must be an explicit focus of preparation and reflection. We might call this a Performative Technical Practice, by analogy to Agre's work.

As noted by Nilson, Butler and Sorensen, while all musicians prepare themselves through regular practice, the analogies between conventional musical instruments and programming languages are far from straightforward. Live coding is, in many ways, more similar to musical composition than to instrumental performance. Yet it seems Quixotic to distinguish between practicing composition and doing composition. A composer practices composition by doing more composition. If regarded in that light, any coding might be regarded as practice for live coding. However, we did not believe that this offers adequate insight into the special status of live coding. We preferred to distinguish between coding that is presented as a performance in front of an audience - live coding - and preparation for that performance, with no audience present - practice.

From this perspective, not all coding is necessarily effective practice. At one extreme, the first author's day-to-day programming requires detailed engineering work that is essential to successful performance, but might not be effective if presented as coding to an audience (e.g. the optimisation of device drivers). At the other extreme, as noted in previous research on this topic, the live coding context expects a degree of improvisation, so preparation by simply writing the same program over and over (as when playing scales on a musical instrument) seems pointless - if the program is always the same, could it not simply be retrieved from a code library? Preparation for performance should therefore involve activities that are neither original engineering, nor simple repetition. This suggests an analogy to jazz improvisation, rather than composition or classical instrumental competence.

We explored this analogy with our advisory board member Nick Cook, a professor of mainstream music research, but with specialist knowledge in both performance research and digital media. Although aware of live coding as a performance genre, he had not engaged with it in the context of preparation for performance, so provided us with a fresh perspective from which to review the previous literature.

This helped us to distinguish between those aspects of instrumental practice that are predetermined and intentionally over-learned (e.g. scales), those concerned with difficult corners in a specific genre that might be practiced (e.g. a tricky bridge when a verse has a key change), those that develop a vocabulary of reference to other works (perhaps through riffs or formulae), and those that prepare a specific piece for performance (although the notes played within any given improvised performance will naturally vary).

Our current strategy has therefore been to integrate these elements into a series of daily exercises that develop fluency of low-level actions relevant to live coding. We assume that in actual performance, the program being created will be novel. But a fluent repertoire of low-level coding activities will allow the performer to approach performance at a higher level of structural abstraction - based on questions such as "where am I going in this performance?" and "what alternative ways are there for getting there?". In accordance with proposals by Collins and Butler, we are planning to publish a set of etudes that can develop fluency in aspects of a program related to rhythm, to harmonic/melodic structure, to texture, and to software structure. We also recognise that live coding performance is a multimedia genre, in which the contents of the screen display and the stage presence of the performer are also essential components. A daily practice regime of preparation for performance should also incorporate etudes that develop fluency of action in these respects. These aspects of our work might be considered as constructing an architecture of performance skill that complements the technical architecture of the software infrastructure.

8. CONCLUSIONS & FUTURE WORK

We have described the initial results of the Improcess project, which has developed an architecture for live coding performance motivated by the considerations of domain-specific programming language design, and by research into end-user programming. We also have an explicit concern with music performance practice, which has led us to treat the integration of interface devices such as the monome as a primary architectural consideration. We have created an effective architecture with a functioning implementation, and have demonstrated that it can be used to reimplement some popular monome applications. We are now using this platform to further develop a regime of preparation for performance within a reflective research context. The Improcess environment provides a technical foundation that mirrors this musical and collaborative goal. We expect that it will enable rapid exploration of a diverse range of language options, including the potential implementation of new language features in live contexts, and even the construction of tangible performance instruments that can themselves generate code processes.

We have also found that explicit reflection on interdisciplinary collaboration has been a valuable element of our research. Improcess is hosted by the Crucible network for research in interdisciplinary design, which aims to make informed contributions to public policy on the basis of projects like this [11]. In Improcess we found the intellectual contrasts between computational and musical perspectives sufficiently extreme (for example, in different team members' various interpretations of the monome as either ideally flexible or undesirably grid-like) that we represented not only multiple organisations, but multiple research cultures. We have used the Cross-Cultural Partnership template [1] to help us bring together people from these different cultural norms and legal frameworks for sharing culture.

As a project demonstrating a performative technical practice, we believe that this juxtaposition and improvisation of cultural, technical and musical processes represents the essence of the live coding enterprise.

9. ACKNOWLEDGEMENTS

Thanks to the other members of the Improcess partnership - Tom Hall, Stuart Taylor, Chris Nash and Ian Cross - for their continued support and encouragement, and to the members of our advisory board: Simon Peyton Jones, Nick Cook, Simon Godsill, Nick Collins and Julio d'Escrivan. Thanks also to Jeff Rose and Fabian Aussems for their work on the Overtone project.

10. REFERENCES

[1] http://connected-knowledge.net/.

[2] http://docs.monome.org/doku.php?id=app:8track.

[3] http://github.com/overtone/overtone.

[4] http://github.com/samaaron/monomer.

[5] http://www.vimeo.com/290729.

[6] P. E. Agre. Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI. Lawrence Erlbaum Associates, 1997.

[7] P. Berg. Abstracting the Future: The Search for Musical Constructs. Computer Music Journal, 20(3):24-27, 1996.

[8] A. F. Blackwell. First steps in programming: a rationale for attention investment models. Proceedings of IEEE Symposia on Human Centric Computing Languages and Environments, pages 2-10, 2002.

[9] A. F. Blackwell, L. Church, and T. Green. The Abstract is an Enemy: Alternative Perspectives to Computational Thinking. Proceedings of the 20th annual workshop of the Psychology of Programming Interest Group, pages 34-43, 2008.

[10] A. F. Blackwell and N. Collins. The Programming Language as a Musical Instrument. Psychology of Programming Interest Group, pages 120-130, 2005.

[11] A. F. Blackwell and D. A. Good. Languages of Innovation, pages 127-138. University Press of America, 2008.

[12] A. F. Blackwell and T. Green. Notational Systems - the Cognitive Dimensions of Notations framework, pages 103-134. Morgan Kaufmann, 2003.

[13] J. Butler. Creating Pedagogical Etudes for Interactive Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pages 77-80, 2008.

[14] M. Duignan, J. Noble, and R. Biddle. Abstraction and Activity in Computer Mediated Music Production. Computer Music Journal, 34:22-33, 2010.

[15] M. Fowler. Domain-Specific Languages. Addison-Wesley, 2011.

[16] H. Honing. Issues on the representation of time and structure in music. Contemporary Music Review, 9(1):221-238, 1993.

[17] E. A. Lee. Computing needs time. Communications of the ACM, 52(5):70-79, May 2009.

[18] H. Lieberman, F. Paterno, and V. Wulf. End User Development. Springer, 2006.

[19] J. McCartney. Rethinking the Computer Music Language: SuperCollider. Computer Music Journal, 26(4):61-68, Dec. 2002.

[20] C. Nash and A. Blackwell. Beyond Realtime Performance: Designing and Modelling the Creative User Experience. Submission to NIME, 2011.

[21] C. Nilson. Live coding practice. Proceedings of the 7th international conference on New interfaces for musical expression, pages 112-117, 2007.

[22] A. Sorensen and A. R. Brown. aa-cell In Practice: An Approach to Musical Live Coding. Proceedings of the International Computer Music Conference, 2007.

[23] A. Sorensen and H. Gardner. Programming With Time: Cyber-physical programming with Impromptu. Proceedings of the ACM international conference on Object Oriented Programming Systems Languages and Applications, pages 822-834, 2010.

[24] O. Vallis, J. Hochenbaum, and A. Kapur. A Shift Towards Iterative and Open-Source Design for Musical Interfaces. Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), 2010.
