Wireless technology in gesture controlled computer generated music

    Leonello Tarabella

    Graziano Bertini

computer ART lab of CNUCE/IEI, Area della Ricerca di Pisa, via Moruzzi 1 - 56126 Pisa, Italy

    [email protected] [email protected]

    http://www.cnuce.pi.cnr.it/tarabella/cART.html

    Abstract

We describe here the experience of researchers and artists involved in the activities of the computer ART lab (cART lab) of the Italian National Research Council (C.N.R.) in Pisa, regarding the wireless technology developed for controlling real-time interactive multimedia performances. The most relevant devices and systems developed at the cART lab for gesture recognition, to be used for giving expression to interactive multimedia performances, are reported here. Also, a powerful real-time music language based on the C language, able to perform audio synthesis and processing, is introduced.

    1 Introduction

We report here some general considerations about the relationship between science and music that can help young researchers of the MOSART network understand the philosophy of this fascinating topic of computer music. This is an excerpt of the introduction written as Guest Editor for the special issue on Man-Machine Interaction of Interface, Journal of New Music Research [1]. Besides, some technical parts of this paper have already been presented at other conferences and are here properly updated and extended [2].

Mathematics, Physics and Technology have played a crucial role in the history of Music, as regards both the evolution of the reference syntax of the language (notes and modern harmony) and the shape and mechanical functionalities of musical instruments. We know that for centuries musical scales were constructed from notes whose pitches are related by simple ratios, and that only at the end of the 17th century was the well-tempered schema proposed, based on the logarithms introduced into the realm of mathematics some decades before (1612) [3].

It may be that the evolution from the plucking system used in the spinetta and in the clavicembalo to the striking-hammer system used in the piano should not be considered a true result of progress in Science but, rather, a consequence of the smartness and craftsmanship of the Italian musical instrument maker Bartolomeo Cristofori. But it is highly probable that the terminal part of wind instruments, also called the bell, evolved from the conical shape derived from the clarines and war-horns of the ancient Romans to the modern exponential-curve-based shape as a consequence of a precise awareness and knowledge of the physics of sound. This kind of curve allows a loud and brilliant acoustic response and gives the trumpet, and the other horns, an elegant profile.

As a counter-proof, what follows is something that did not happen. The number of combinations of the three on/off valves of the trumpet is 7 rather than the 8 expected from binary arithmetic. This derives from the lengths of the by-passes activated when the valves are depressed, which are related by the ratios 1:2:3: the possible subset sums are 0, 1, 2, 3, 4 (1+3), 5 (2+3) and 6 (1+2+3), only seven distinct values, since depressing the first two valves (1+2) duplicates the third. It may be that, had binary arithmetic been as popular as it is nowadays at the time the trumpet reached its final shape and mechanical evolution, the lengths of the 3 by-passes would have been chosen in the ratios 1:2:4, gaining a further combination and thus producing a different fingering.

    2 Technology and expressiveness

    Whatever the history, the shape and the mechanical

    interface between the human body and the musical

    instrument, playing a piece of traditional music

    means generating sounds with suitable tools

    controlled by one or more parts of the body: mouth,

    arms, hands, feet. From a physiological point of view

    playing an instrument entails activating and

    coordinating a specific set of muscles which in turn

    activates those parts of the body in contact with that

    instrument. To be sure, making music is something


    more than merely activating sound generators;

    musicians usually need a number of degrees of

freedom to put their capabilities to work on their

    instrument in order to communicate their emotions,

    and the huge quantity of information downloaded

    onto the instrument assures complete control and

a high level of artistic expressiveness.

When analog electronics began to be used to create music, it was natural to design a simple architecture derived from that of the piano, i.e. an array of switches assembled with the same layout as a piano keyboard, and a sound generator based on oscillators linked to the switches. When analog electronics was replaced by digital electronics, the functionalities were greatly improved both from the point of view of control and of sound generation, but the same architecture was retained. Additional expressive controllers, such as pitch-bend or pitch-wheel, were added to the basic piano-like keyboard. Then the MIDI standard was introduced for interfacing the variety of different products released by different manufacturers.

All this was biased toward, and in some way inherited from, traditional music based on the concept of the note as the building block for composing and performing; moreover, all this was conceived with consumer music (rock music, dance music and advertising music) as its main target. However, in contemporary music, and more generally in contemporary art, composers consider sound in a wide sense, outside of any reference language, as the building block for composition. In this context the set of MIDI controllers is no longer suitable.

    2.1 Electronic sound under gesture control

Modern technology permits high quality sound/music compositions; however, musicians complain that computers do not allow the direct and complete control of sound and colors that traditional media do: traditional music is played by manipulating real tools (manipulation derives from mani, Latin for hands), and traditional painting involves manipulating colors on the canvas. The vestibular correspondence (feedback) between action and art plays an important role in the artist's cognitive awareness and, consequently, in his/her creativeness.

Gesture, i.e. the successive postures of the hands, head, mouth and eyes, has a very important role in human communication, especially when seen as a parallel language for enriching the semantic content of speech, or as an alternative way to communicate basic concepts and information between people of different cultures and mother tongues. During the last decade many researchers and musicians have focused their attention on the possibilities offered by this new approach to making music with computers, designing and implementing original man-machine interfaces controlled by gesture [4,5,6].

Due to the daily increase in computer power and in electronic systems able to sense the presence, shape, distance and position of objects, a new field of investigation and implementation has been opened in the last few years: computer recognition of human gesture [7,8]. As a result, the human body itself can now be considered a natural and powerful expressive interface able to give feeling to performances based on computer generated electro-acoustic music and computer generated visual art. Modern human-computer interfaces are extremely rich, incorporating traditional interface devices such as keyboard and mouse and a wealth of advanced media types: sound, video, animated graphics. The term multi-modal is often associated with such interfaces to emphasize that the combined use of multiple modes of perception is relevant to the user's interface [9].

    2.2 Research on gesture

A significant panorama of the state of the art of international research can be found in [10], Trends in Gestural Control of Music, edited by Marcelo M. Wanderley and Marc Battier at Ircam - Centre Pompidou, Paris, which also includes the research results of our lab at CNR, Pisa. This web resource contains information about research in the field of gesture capture, interfaces, and applications to sound synthesis and performance.

    Six different main topics are indicated:

1) Virtual musical instruments
2) Gesture: instrumental, empty-handed, etc.
3) Acquisition: gestural capture techniques
4) Different mapping strategies
5) Sensory feedback, haptic systems, etc.
6) Interface examples

Some significant excerpts from the linked literature are reported here.

    1) In "Virtual Musical Instruments: Accessing

    the Sound Synthesis Universe as a Performer", 1994,

    Axel Mulder's, School of Kinesiology, Simon Fraser

    University, Burnaby, B.C., V5A 1S6 Canada,considers a Virtual Musical Instruments (VMI) as an

    instrument capable of mapping any type of physical

    gesture or mouvement to any class of sounds and

    writes: with current state-of-the-art human

    movement tracking techology it is possible to

    represent in real-time most of the degrees of freedom

    of a (part of the) human body. This allows for the

    design of a virtual musical instrument (VMI),

    analogous to a physical musical instrument, as a

    gestural interface, that will however provide for

    much greater freedom in the mapping of movement

    to sound. A musical performer may control therefore

    parameters of sound synthesis systems that in real-

  • 8/10/2019 Wireless Technology in Gesture Controlled Computer Generated Music Leonello Tarabella 10.1.1.200.317

    3/8

    time performance situations are currently not

    controlled to their full potential or simply not

    controlled at all. In order to decrease the learning

    and adaptation needed and avoid injuries, the design

    must address the musculo-skeletal, neuro-motor and

    symbolic levels that are involved in the

    programming and control of human movement. The

    use of virtual musical instruments will likely result innew ways of making music and new musical styles

    [11].

2) In "Toward an Understanding of Musical Gesture" (1996), Teresa Marrin, Media Arts and Sciences, Massachusetts Institute of Technology, writes: "recent work done in developing new digital instruments and gestural interfaces for music has revealed a need for new theoretical models and analytical techniques. Interpreting and responding to gestural events -- particularly expressive gestures for music, whose meaning is not always clearly defined -- requires a completely new theoretical framework. The tradition of musical conducting, an existent gestural language for music, provides a good initial framework, because it is a system of mappings between specific gestural cues and their intended musical results" [12].

3) As regards gestural capture techniques, Depalle et al., Serveur IRCAM - Centre Pompidou, Paris, 1997, say: "if we focus on the case of instrumental gestures, one can divide gestural capture techniques according to the following capture strategies: one can use different sensors in order to capture different gestures or movements. These sensors may or may not need physical contact to transduce the gesture(s) into electrical signals. If physical contact is involved, they can be called Haptic; otherwise, Non-Haptic. The sensors can, in turn, be classified as to whether the sensor outputs are continuous or discrete values of the variable sensed. We consider these techniques as direct gestural acquisition, since one is capturing the actual physical gesture or movement through the sensors" [14].

4) Mapping: J. Rovan, M. Wanderley, S. Dubnov, and P. Depalle, from Serveur IRCAM - Centre Pompidou, Paris, in "Instrumental gestural mapping strategies as expressivity determinants in computer music performance", write: "a common complaint about electronic music is that it lacks expressivity. In response to this, much work has been done in developing new and varied synthesis algorithms. However, because traditional acoustic musical sound is a direct result of the interaction between an instrument and the performance gesture applied to it, if one wishes to model this expressivity, in addition to modeling the instrument itself - whatever the technique/algorithm - one must also model the physical gesture, in all its complexity. Indeed, in spite of the various methods available to synthesize sound, the ultimate musical expression of those sounds still falls upon the capture of gesture(s) used for control and performance. We propose a classification of mapping strategies into three groups:

- One-to-One Mapping: each independent gestural output is assigned to one musical parameter, usually via a MIDI control message. This is the simplest mapping scheme, but usually the least expressive. It takes direct advantage of the MIDI controller architecture.

- Divergent Mapping: one gestural output is used to control more than one simultaneous musical parameter. Although it may initially provide a macro-level expressivity control, this approach nevertheless may prove limited when applied alone, as it does not allow access to internal (micro) features of the sound object.

- Convergent Mapping: in this case many gestures are coupled to produce one musical parameter. This scheme requires previous experience with the system in order to achieve effective control. Although harder to master, it proves far more expressive than the simpler unity mapping" [15].

5) Sensory feedback: Claude Cadoz and his group at ACROE, Grenoble, France, have been studying and developing, since the late 1970's, interaction systems using physical modelling synthesis (the CORDIS/ANIMA system) and force feedback devices (the Modular Feedback Keyboard). Cadoz writes in different places: "many studies have shown the importance of tactile-kinesthetic feedback for an expert player, in comparison to the relative importance of visual feedback for beginners. In an instrumental context, it is therefore interesting to try to simulate these effects in order to profit from the expert technique developed for acoustic instruments" [13].

6) Interfaces: this entry points to many different academic groups (including the cART lab), private organizations and individuals who have realized a great variety of interfacing systems and devices.

    3 The cART lab project

The activity of the cART lab (computer ART lab of CNUCE/CNR) is characterized by the design and realization of systems and devices for the gesture control of real-time computer generated music, in order to give expression to interactive electroacoustic performances, as happens in traditional music. The wireless technology paradigm has been adopted throughout.

Main targets of the research consist of the implementation of models and systems for detecting gestures of the human body, which becomes the natural interface able to give feeling and expressiveness to computer-based multimedia performances. The term multi-modality is often related to these typologies of interface precisely to emphasize that combining different modes of perception becomes relevant for the performer and for the audience which, in the end, is the final user of the performance [16].

At the cART lab, attention has been focused on designing and developing new, original, general-purpose man-machine interfaces based on two wireless technologies: infrared beams and the real-time analysis of video-captured images. Specific targets of the research consist of studying models for mapping gesture to sound, because in electro-acoustic music nothing is pre-established as it is in traditional music and instruments, where precise and well-consolidated timbric, syntactic (harmony) and fingering systems of reference exist.

The basic idea consists of remotely sensing the gestures of the human body, considered as a natural and powerful expressive interface, so as to extract as much information as possible from the movements of the naked hands. Other related targets concern: the analysis and representation of audio signals by means of timbre models based on the additive (Fourier) technique and more recent models [17]; the implementation of languages for music composition and interactive performance; the realization of algorithms for the synthesis and processing (e.g. coloring, spatialization) of sound. The research has been motivated by real artistic and expressive needs coming from artists outside and inside the lab.

4 Infrared beams controller

This device consists of two sets of sensing elements that create two zones of space, i.e. the vertical edges of two square-based parallelepipeds, or virtual towers. When the first prototype was realized (1995), the shape of the beams suggested to us the profile of the Twin Towers in New York, and from then on the device was presented in conferences and concerts under this name; after September 11, however, we decided to use this name no more. In any case, since we recently carried out new versions of this device, consisting of up to 8 groups of 4 elements (towers) with special mechanical supports which allow different arrangements such as a drum set, we had already decided to look for a new name.

The distances of the different zones of the hands are measured through the amount of reflected light captured by the receivers (Rxs), and the measurements are quite accurate despite the irregularity, in shape and color, of the palms of the hands. The analog voltage values coming from the Rxs are converted into digital format and sent to the computer about 30 times/sec. The computer then processes the data in order to reconstruct the original gesture [18].

    Fig. 1 Different positions of the hands

It is thus possible to detect positions and movements of the hands such as height and side and/or front rotations; besides, depending on the specific piece of music composed to be played, it is possible and necessary to properly interpret the data in order to give meaning to other kinds of hand movements, such as "flying" over the device in different directions at different heights, as shown in Fig. 1.
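As a rough sketch of this interpretation step (a minimal example in C, not the device's actual processing: the frame layout, names and formulas below are our illustrative assumptions), a frame of reflected-light readings can be turned into height and rotation estimates as follows:

/* Illustrative reconstruction of hand height and rotations from one frame
   of infrared reflected-light readings (about 30 frames/sec). */
#define NUM_RX 4                    /* receivers per tower (assumed) */

typedef struct { float height, sideTilt, frontTilt; } HandState;

void reconstructGesture(const float rx[NUM_RX], HandState *h)
{
    float sum = 0.f;
    for (int i = 0; i < NUM_RX; i++) sum += rx[i];

    /* more reflected light means the palm is closer to the sensors */
    h->height = sum / NUM_RX;

    /* asymmetries between receiver pairs suggest side and front rotations */
    h->sideTilt  = (rx[0] + rx[1] - rx[2] - rx[3]) / (sum + 1e-6f);
    h->frontTilt = (rx[0] + rx[2] - rx[1] - rx[3]) / (sum + 1e-6f);
}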

The new version of this interface has a microprocessor on board for pre-processing the sensor data with calibration, linearization and other ad hoc routines. It meets standalone specifications and is equipped with a MIDI OUT port.

This device is stable and responsive; as a consequence, the generated sound provokes in the performer the sensation of "touching the sound". This is a sort of psychological feedback which greatly contributes to giving expression to computer generated electro-acoustic music.

    5 Handle

Based on the real-time analysis of video-captured images, a system for recognizing the shape, position and rotation of the hands has been developed: the performer moves his/her hands in the capture area of a video-camera, the camera sends the signal to a video digitizer card, and the computer processes the mapped figures of the performer's hands, producing data concerning the x-y positions, shape (that is, posture) and angle of rotation of both hands. The data extracted from the image analysis of every frame are used for controlling real-time interactive computer music and computer graphics performances. Both hands are taken into consideration.

For each hand the following steps are executed: the barycenter is computed and then used for constructing a one-period signal from the distances between the barycenter and the points along the contour of the hand shape, taken on radii at predefined angular steps; the resulting one-period signal is then processed with the FFT algorithm and the resulting spectrum is compared with previously stored spectra referring to specific postures of the hands; the "closest" spectrum in terms of vector distance corresponds to the recognized shape. The program finally produces data concerning the x-y positions, shape (posture) and angle of rotation of both hands [19].

Fig. 2 One-period-signal of the open-fingers hands
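In outline, the matching step looks like the following (a simplified C sketch with assumed names and sizes, and with a direct DFT standing in for the FFT used by the real system):

#include <math.h>

#define NPOINTS 64   /* contour samples, one per angular step (assumed) */
#define NBINS   16   /* spectrum bins kept for matching (assumed) */
#define NSHAPES  4   /* stored posture templates (assumed) */
#define PI 3.14159265f

/* magnitude spectrum of the one-period contour signal */
static void spectrum(const float sig[NPOINTS], float mag[NBINS])
{
    for (int k = 0; k < NBINS; k++) {
        float re = 0.f, im = 0.f;
        for (int n = 0; n < NPOINTS; n++) {
            float ph = 2.f*PI*k*n/NPOINTS;
            re += sig[n]*cosf(ph);
            im -= sig[n]*sinf(ph);
        }
        mag[k] = sqrtf(re*re + im*im);
    }
}

/* return the index of the stored spectrum closest in vector distance */
int recognizeShape(const float sig[NPOINTS],
                   const float templates[NSHAPES][NBINS])
{
    float mag[NBINS], best = 1e30f;
    int shape = 0;
    spectrum(sig, mag);
    for (int s = 0; s < NSHAPES; s++) {
        float d = 0.f;
        for (int k = 0; k < NBINS; k++) {
            float diff = mag[k] - templates[s][k];
            d += diff*diff;
        }
        if (d < best) { best = d; shape = s; }
    }
    return shape;
}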

On the basis of the functionality of this application, two relevant systems have been implemented: the Imaginary Piano and the PAGe (Painting by Aerial Gesture) system.

    5.1 Imaginary Piano

In the Imaginary Piano, a pianist sits as usual on a piano chair and has in front of him nothing but a CCD camera, a few meters away, pointed at his hands. There exists an imaginary line at the height where the keyboard usually lies: when a finger, or a hand, crosses that line downward, the Handle system reports the proper information regarding the key number, and a specific message is issued in accordance with "where" and "how fast" the line has been crossed. Messages are used for controlling algorithmic compositions rather than for playing scored music.
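A minimal sketch of the crossing test follows (the normalized coordinates, the 88-key split and all names are our assumptions for illustration, not the system's actual code):

typedef struct { float x, y; } Point;   /* normalized fingertip position */

#define LINE_Y 0.5f   /* height of the imaginary keyboard line (assumed) */

/* Returns a key number when the fingertip crosses the line downward,
   -1 otherwise; *speed reports "how fast" the line was crossed. */
int detectKeystroke(Point prev, Point curr, float dt, float *speed)
{
    if (prev.y < LINE_Y && curr.y >= LINE_Y) {   /* y grows downward */
        *speed = (curr.y - prev.y) / dt;         /* crossing velocity */
        int key = (int)(curr.x * 88.f);          /* "where": map x to a key */
        return key > 87 ? 87 : key;
    }
    return -1;
}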

    5.2 PAGe system

Another application based on the video-captured image analysis system is PAGe, Painting by Aerial Gesture, which allows real-time video graphics performances. It was inspired and proposed by the visual artist Marco Cardini (www.marcocardini.com): the experience of the Italian artist Lucio Fontana who, by cutting the canvas, indicated the way for trespassing the "limits" of the canvas itself, suggested to Cardini, during the years '93 and '94, the idea of a system able to give the possibility of "painting in the air" and to introduce a new dimension to painting: time.

Figure 3: PAGe system configuration

PAGe comes after two previous trial experiences: Shine Hands and Aerial Painting Hands. PAGe was designed by L. Tarabella and developed by Dr. Davide Filidei as the subject of his thesis, discussed at the Computer Science faculty of the University of Pisa. D. Filidei is still maintaining and updating PAGe, following the expressive needs of the artist Cardini. The system permits Cardini, who also performs on stage, to paint images projected onto a large video screen by moving his hands in the air.

    6 Mapping

The different kinds of gestures, such as continuous or sharp movements, threshold trespassing, rotations and shifting, are used for generating sound events and/or for modifying sound/music parameters.

For classic acoustic instruments the relationship between gesture and sound is the result of the physics and mechanical arrangement of the instrument itself, and there exists one and only one such relationship. Using computer-based music equipment, it is not so clear what and where the "instrument" is; from the gesture interfaces, such as the infrared beam controller or Handle, to the loudspeakers which actually produce sound, there exists a quite long chain of elements working under the control of the computer, which performs many tasks simultaneously: management of the data streaming from the gesture interfaces, generation and processing of sound, linkage between data and synthesis algorithms, distribution of sound on different audio channels, etc. This means that a music composition must be written in terms of a programming language able to describe all the components, including the modalities for associating gesture to sound, that is, how to map gesture to sound. The mapping is therefore part of the composition.
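As a concrete, if simplified, illustration of the one-to-one, divergent and convergent strategies recalled in section 2.2 (a C sketch with purely illustrative names and scalings, not code from an actual composition):

typedef struct { float amp, freq, bright; } SynthParams;

/* one-to-one: one gestural output drives exactly one musical parameter */
void mapOneToOne(float handHeight, SynthParams *p)
{
    p->amp = handHeight;
}

/* divergent: one gestural output drives several parameters at once */
void mapDivergent(float handHeight, SynthParams *p)
{
    p->amp    = handHeight;
    p->bright = 0.2f + 0.8f*handHeight;
}

/* convergent: several gestural outputs combine into one parameter */
void mapConvergent(float handHeight, float handTilt, SynthParams *p)
{
    p->freq = 220.f + 660.f*handHeight*handTilt;
}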

7 A new realtime music language

In order to put to work the facilities offered by the gesture interfaces we realized, we first took into consideration the most popular music languages: Max/MSP and Csound.


Unfortunately, neither language proved precisely suited to our purposes, mainly because Csound was not as real-time as declared and Max was not flexible enough to include the video-captured image analysis code. Using two computers (the first for managing the interfaces, the second for audio synthesis) connected via MIDI proved awkward and inefficient.

We started writing basic libraries for processing sound and for driving the gesture interfaces, bearing in mind the goal of a programming framework in which to write a piece of music in terms of synthesis algorithms, score and management of the data streaming from the gestural interfaces [20]. In the long run the framework became a very efficient, stable and powerful music language based on pure C programming, that is, pure-C-Music, or pCM.

pCM falls into the category of embedded languages and needs a C compiler in order to be operative; at first this may appear odd, but consider the good news of getting the compiled code of a synthesis algorithm which generates and processes sound running many times faster than Csound which, on the other hand, is an interpreted language.

    7.1 pCM, pure-C-Music language

pCM has been implemented using one of the most popular C compilers, or better, multiplatform development systems: Metrowerks CodeWarrior. As a result, a pCM composition consists of a CW project which includes all the necessary libraries, among them a DSPlib consisting of a number of functions (at the moment about 50) able to implement in real time the typical synthesis and processing elements such as oscillators, envelope shapers, filters, delays, reverbs, etc.

The composition itself is a C program consisting of four void functions: Init(), Orchestra(), Score() and Finish(), properly invoked by the main program which controls the whole machinery. For practical reasons it is a good idea that the four functions be part of the same file, which also includes the declaration of the variables visible to all the functions, especially to Orchestra() and Score(). Everything here is written following C syntax: synthesis algorithms, score, and declarations of variables and data structures; everything is compiled into machine code and runs at CPU speed.

The Init() function includes everything regarding initialization and/or loading, such as envelopes, tables, samples, delays, reverbs, variables and data structures. Usually it includes calls for opening MIDI, Audio and/or Video input channels, and it is called once at the beginning of the composition. As a counterpart, the Finish() function is called at the end and is used for closing the channels and disposing of previously allocated memory.
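The overall shape of a composition file can therefore be sketched as follows (a skeleton reconstructed from the description above, not a verbatim pCM example; outL and outR are provided by the pCM runtime and are declared here only to keep the sketch self-contained):

float outL, outR;        /* predefined system variables provided by pCM */
float ampli, freq, pan;  /* globals shared by Orchestra() and Score() */

void Init(void)
{
    /* load envelopes, tables, samples;
       open MIDI, Audio and/or Video input channels */
}

void Orchestra(void)
{
    /* synthesis code, called 44100 times per second,
       assigning its result to outL and outR */
}

void Score(void)
{
    /* called repeatedly by the main mechanism:
       read gesture data and update ampli, freq, pan */
}

void Finish(void)
{
    /* close channels and dispose of previously allocated memory */
}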

    7.2 Instruments in pCM

An instrument is defined in the Orchestra() function and consists of code for sound synthesis and processing; it is continuously called at the audio sampling rate, that is, 44100 times per second. An instrument is defined in terms of an ordinary C program (with all the usual programming facilities such as for, do-while and if-then-else control structures) which calls functions belonging to the DSPlib; by assigning the result of the whole computation to the two predefined system variables outL and outR, sound is generated. Look at the following simple example:

sig  = ampli*Env(1)*OscSin(1,freq);
outL = sig*pan;
outR = sig*(1.-pan);

where:
- sig is a local variable;
- ampli, freq and pan are global variables filled in by Score();
- Env(1) and OscSin(1,freq) belong to the DSPlib. OscSin is a wavetable oscillator along the following lines (the wrap-around and table lookup at the end are completed from context, with sinTab an assumed name for the precomputed sine table):

float OscSin (int nOsc, float freq)
{
    float pos;
    pos = oscFase[iN][nOsc] + freq;                      /* advance the phase */
    if (pos >= tabLenfloat) { pos = pos - tabLenfloat; } /* wrap forward */
    if (pos < 0.)           { pos = pos + tabLenfloat; } /* wrap backward */
    oscFase[iN][nOsc] = pos;                             /* store the phase */
    return sinTab[(int)pos];   /* sine table lookup (table name assumed) */
}


A simple example of control maps the mouse position onto frequency and left-right panning. In the MacOS environment the mouse position is returned by invoking the GetNextEvent(..) toolbox function; the x,y position values can then be found in the Event.where variable and used as follows:

pan = Event.where.h/1023.;
frq = 800 - Event.where.v;

These two lines are part of the Score() function, which is automatically and repeatedly called by the main mechanism. Since the mouse spans from 0 to 1023 horizontally and from 0 to 767 vertically, the variables pan and frq communicate proper values to the instrument for changing frequency and panning position.

It is also possible to generate (offline, or in real time under the control of data streaming from the gesture interfaces) sequences of events automatically activated by the Scheduler(), a special mechanism which triggers sounds at the right times and changes parametric values in the instrument. Since it would take too long to explain here the potentialities of the Scheduler() and the other facilities offered by pCM, we stop here on paper and promise we will give a satisfying demo during the meeting.

    8 Conclusions

In the introduction we reported a small but significant panorama of the state of the art on the different problems, solutions and results of the people and research centers around the world active on the gesture interface topic. In particular, we described our approach and the results reached.

The systems and devices we realized have been used several times in recent years for demos, conferences, lectures and concerts, and have always been received with interest and enthusiasm by the audience, from both the scientific and the artistic points of view.

From the experience gained in the deep use of the gesture devices and systems during our academic and artistic activity, we realized that it is now time for better formalizing mapping strategies and for classifying sets of gestures [21].

After all, this research activity now finds its right place in the MOSART project, particularly as concerns the involvement of the young researchers of the network.

    9 Acknowledgments

Special thanks are due to Massimo Magrini and Gabriele Boschi who, as professional programmers and electronic designers, contributed greatly to developing the software for audio synthesis and processing and the hardware related to the gesture devices and systems; special thanks are also due to the artist Marco Cardini, who has contributed greatly and still contributes to the artistic life of the cART lab of CNR in Pisa, and to Dr. Davide Filidei who, as a student first and as a professional programmer now, put and still puts to work his skillfulness and love for Art.

    References

[1] Tarabella L. (Guest Editor), Special Issue on Man-Machine Interaction in Live Performance, Interface, Journal of New Music Research, Swets & Zeitlinger B.V., Vol. 22, n. 3 (1993).

[2] Tarabella L., Bertini G., "Giving expression to multimedia performances", ACM Multimedia 2000, Workshop "Bridging the Gap: Bringing Together New Media Artists and Multimedia Technologists", Los Angeles, 2000.

[3] Desmond K., A Timetable of Inventions and Discoveries, M. Evans & Company, Inc., New York (1985).

[4] Paradiso J., "Electronic music: new ways to play", IEEE Spectrum, Vol. 34(12), pp. 18-30, IEEE Computer Society Press (1997).

[5] Tarabella L., Magrini M., Scapellato G., "Devices for interactive computer music and computer graphics performances", IEEE Computer Society Press (1997).

[6] Rowe R., Interactive Music Systems - Machine Listening and Composing, MIT Press, 1993.

[7] Rowe R., Machine Musicianship, Cambridge: MIT Press, March 2001, ISBN 0-262-18296-8.

[8] Wexelblat A., "An approach to natural gesture in virtual environments", ACM ToCHI, pp. 179-200, ACM Press (1996).

[9] Nigay L., Coutaz J., "A generic platform for addressing the multimodal challenge", in Procs of ACM CHI'95, pp. 98-105, ACM Press.

[10] Wanderley M., Battier M. (eds.), Trends in Gestural Control of Music, Ircam - Centre Pompidou, Paris. http://www.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/

[11] Mulder A., "Virtual musical instruments: Accessing the sound synthesis universe as a performer", in Proceedings of the First Brazilian Symposium on Computer Music, 1994.

[12] Marrin T. A., Picard R., "A Methodology for Mapping Gestures to Music Using Physiological Signals", presented at the International Computer Music Conference (ICMC'98), Ann Arbor, Michigan, 1998.

[13] Cadoz C., Ramstein C., "Capture, representation and composition of the instrumental gesture", in Proc. Int. Computer Music Conf. (ICMC'90), pp. 53-56, 1990.

[14] Depalle P., Tassart S., Wanderley M., "Instruments Virtuels", Resonance, pp. 5-8, Sept. 1997.

[15] Rovan J., Wanderley M., Dubnov S., Depalle P., "Instrumental gestural mapping strategies as expressivity determinants in computer music performance", in Proceedings of the Kansei - The Technology of Emotion Workshop, Genova, Italy, Oct. 1997.

[16] Bargar R., "Multi-modal synchronization and the automation of an observer's point of view", in Proceedings of the 1998 IEEE International Conference on Systems, Man, and Cybernetics (SMC'98), 1998.

[17] Bertini G., Tarabella L., Magrini M., "Spectral data management tools for additive synthesis", Proceedings of DAFX (Digital Audio Effects), Università di Verona, December 2000.

[18] Tarabella L., Bertini G., Sabbatini T., "The Twin Towers: a remote sensing device for controlling live-interactive computer music", in Procs of the 2nd International Workshop on Mechatronical Computer Systems for Perception and Action, SSSUP, Pisa (1997).

[19] Tarabella L., Magrini M., Scapellato G., "A system for recognizing shape, position and rotation of the hands", in Proceedings of the International Computer Music Conference '97, pp. 288-291, ICMA, San Francisco (1997).

[20] Tarabella L., Bertini G., Boschi G., "A data streaming based controller for realtime computer generated music", Proceedings of ISMA 2001, International Symposium on Musical Acoustics, Fondazione Cini, Venezia.

[21] Bargar R., "Multi-modal synchronization and the automation of an observer's point of view", in Proceedings of the 1998 IEEE International Conference on Systems, Man, and Cybernetics (SMC'98), 1998.