What’s New with MPEG?

Symbolic Music Representation in MPEG

Pierfrancesco Bellini and Paolo Nesi, University of Florence
Giorgio Zoia, École Polytechnique Fédérale de Lausanne
With the spread of computer technology into the artistic fields, new application scenarios for computer-based applications of symbolic music representation (SMR) have been identified. The integration of SMR in a versatile multimedia framework such as MPEG will enable the development of a huge number of new applications in the entertainment, education, and information delivery domains.
42 1070-986X/05/$20.00 © 2005 IEEE
Music is a complex piece of information. Although it’s usually considered in its audible representation, many notations have attempted to represent visually or by other means the information needed to produce music as the author composed it. Music notations aren’t intended exclusively for performers, however; much like the music they convey (and especially 20th-century music), musical scores are often considered art masterpieces in their own right.
A symbolic music representation (SMR) is a logical structure based on symbolic elements representing audiovisual events, the relationship between those events, and aspects related to how those events can be rendered. Many symbolic representations of music exist, including different styles of chant, classic, jazz, and 20th-century styles; percussion notation; and simplified notations for children and vision-impaired readers.
Information technology’s evolution has more recently changed the practical use of music representation and notation, transforming them from a simple visual coding model for sheet music into a tool for modeling music in computer programs and electronic devices in general.1
Consequently, we currently use SMR or music notation for purposes other than producing sheet music and teaching; for example, for audio rendering, entertainment, music analysis, database querying, and performance coding.
The integration of SMR in versatile multimedia frameworks such as the Moving Picture Experts Group’s MPEG-4 standard, with technologies ranging from video and audio to 3D scenes, interactivity, and digital rights management, will enable the development of many new applications. This will let users load and receive music notation integrated with multimedia in consumer electronic devices, so as to manipulate music representation without infringing on copyright laws.
Multimedia music notation

Not only is music representation changing with respect to its purely music-related use, but associated applications that integrate multimedia and interactivity are rapidly evolving as computer technology spreads into the arts. New scenarios for computer-based applications of SMR (see also the “Current Multimedia Interactive Music Applications” sidebar) include these:
❚ multimedia music for music tutoring systems such as Imutus (http://www.exodus.gr/imutus) and Musicalis (http://www.musicalis.fr);
❚ multimedia music for “edutainment” and “infotainment” (information and entertainment, such as news on new music events that includes some entertainment aspects); for example, for archives such as Wedelmusic’s (http://www.wedelmusic.org) and for theaters such as OpenDrama (http://www.iua.upf.es/mtg/opendrama); and
❚ cooperative music editing in orchestras and music schools, such as the Moods project.2,3
Users have discovered the multimedia experience, and more suitable multimedia representations of music have in many cases replaced the traditional music notation model. The music community and industry need a comprehensive representation of music information integrated with other media. To realize multimedia and interactive applications, we must account for aspects ranging from information modeling to integration of the music representation with other data types. The main problems center around acceptably organizing music semantics and symbols to cope with the more general concepts of music elements and their relationship with audiovisual content.
The capability to represent nonwestern music notations, such as those from Asian countries, the Middle East, and Northern Africa, is also becoming of paramount importance. To support the inclusion of these music representations, the model must be sufficiently open and general. Furthermore, educational aspects strongly influence requirements for effective music representation models, because they must integrate semantics information for rendering or playing and performance evaluation at the music element level. Many researchers have addressed the notation-modeling problem with its related rendering through computer systems.4-9
Symbolic music representation

Current tools for applications integrating multimedia with music notation are based on proprietary formats. The introduction of an MPEG SMR will let users move some relevant complexity from software tool development to the content model, so that standard tools can reuse the same content, including SMR, on more tools and devices. This will increase the market potential for new music applications. It was on these grounds that the MusicNetwork proposed a new work item for integrating SMR into MPEG standard formats (primarily MPEG-4).10
MPEG SMR will allow the synchronization of symbolic music elements with audiovisual events that existing standardized MPEG technology can represent and render. The breadth of MPEG standards for multimedia representation, coding, and playback, when integrated with SMR through an efficient collaborative development, will provide increased content interoperability in all MPEG SMR-compatible products. It will also help with exchanging and rendering SMR codes in a secure, efficient, and scalable manner.
Current Multimedia Interactive Music Applications

New devices, products, and several innovative research and design projects supported by the European Commission, such as the MusicNetwork Center of Excellence (http://www.interactivemusicnetwork.org), are helping to popularize multimedia interactive music. These new applications exploit media integration and distribution via the Internet, satellite data broadcast, and other means to reach a large number of users. Innovative features are beginning to inundate the market, especially in the area of music education. Products include Freehand (see http://www.freehandsystems.com), Yamaha tools (http://www.digitalmusicnotebook.com/home), Sibelius Music Educational Tools (http://www.sibelius.com), and Finale CODA solutions (http://www.finalemusic.com/).

One of the best-known computer-based music applications (notation editing for professional publishing and visualization) is often too focused on the visual rendering of music symbols.1-3 Similar applications include Sibelius (http://www.sibelius.com), Finale CODA (http://www.finalemusic.com), Capella (http://www.whc.de/capella.cfm), and MidiNotate (http://www.notation.com).

Music publishers must produce music scores that are high quality in terms of the number of symbols and their placement on the staff. Several Extensible Markup Language (XML)-compliant markup languages for music modeling are available. They include the Musical Notation Markup Language (MNML), MusicML, Music Markup Language (MML), MusicXML,4 Wedelmusic,5 and CapXML.6

Most focus on modeling the music elements to represent and exchange them among applications. Only a few of the formats can cope with the emerging needs of innovative multimedia music interactive applications, which mostly use proprietary and incompatible technologies that reshape the music content for each tool and have difficulty exchanging (mostly notational) information among tools.

With no standardized formats for the symbolic representation of music (SMR), developers must implement their own solutions, which vary widely in efficiency, scope, quality, and complexity. Previous attempts to standardize music notation include the Standard Music Description Language (SMDL)7 and the Notation Interchange File Format (NIFF).8

References

1. D. Blostein and L. Haken, “Justification of Printed Music,” Comm. ACM, vol. 34, no. 3, Mar. 1991, pp. 88-99.
2. J.S. Gourlay, “A Language for Music Printing,” Comm. ACM, vol. 29, no. 5, May 1986, pp. 388-401.
3. B.W. Pennycook, “Computer-Music Interfaces: A Survey,” ACM Computing Surveys, vol. 17, no. 2, June 1985, pp. 267-289.
4. M. Good, “MusicXML for Notation and Analysis,” The Virtual Score: Representation, Retrieval, Restoration, W.B. Hewlett and E. Selfridge-Field, eds., MIT Press, 2001, pp. 113-124.
5. P. Bellini and P. Nesi, “Wedelmusic Format: An XML Music Notation Format for Emerging Applications,” Proc. 1st Int’l Conf. Web Delivering of Music (Wedelmusic), IEEE Press, 2001, pp. 79-86.
6. P. Bellini, R. Della Santa, and P. Nesi, “Automatic Formatting of Music Sheets,” Proc. 1st Int’l Conf. Web Delivering of Music (Wedelmusic), IEEE Press, 2001, pp. 170-177.
7. Standard Music Description Language, ISO/IEC DIS 10743, Int’l Organization for Standardization/Int’l Electrotechnical Commission, 1995.
8. NIFF 6a.3: Notation Interchange File Format, NIFF Consortium, 1998; http://neume.sourceforge.net/niff/.
Figure 1 summarizes the new applications that MPEG SMR makes possible. The many music-related software and hardware products currently available can greatly benefit from MPEG SMR, because it will foster new tool development by allowing increased functionality at a reduced cost. This increased functionality will quickly affect such applications as interactive music tutorials, composition and theory training, and multimedia music publication.
Information about the ongoing MPEG SMR activity, including a large collection of documents containing MPEG SMR requirements, scenarios, examples, and links, is available from the MPEG ad hoc group on SMR Web pages at http://www.interactivemusicnetwork.org/mpeg-ahg.
Application scenarios

Currently available products presenting some form of integration of SMR and multimedia include tools in the following areas:
❚ music education (for example, notation tools integrating multimedia, multimedia electronic lecterns for classrooms and lecture halls, and music education via interactive TV),
❚ music management in libraries (such as music tools integrating multimedia for navigation and synchronization),

❚ interactive entertainment (for example, karaoke-like synchronization between sound, text, and symbolic information),

❚ piano keyboards with symbolic music representation and audiovisual capabilities, and

❚ multimedia content integrated with music notation on musical instruments, PDAs, and so on.
Some innovative applications integrating synchronization of music notation and other media and 3D virtual reality are already in use in commercial tools such as Voyetra, SmartScore, PlayPro, and PianoTutor, and in prototypes from research and design projects (such as OpenDrama, Wedelmusic,1 Moods,3 and Imutus). All of these applications exploit MPEG technology for standard multimedia integration and the possibility of distributing various forms of content in a completely integrated manner. Organizations can use the distribution of multimedia music content to reach the mass market on interactive TV, mobile devices, media centers, and PCs for education, entertainment, and other applications.
Entertainment

One of the most interesting applications is the publication of multimedia music for entertainment, interactive music tutorials, and multimedia music management in libraries, archives, and theaters. These applications integrate SMR with audio, lyrics, annotations, and visual and audible renderings (such as notations, parts and main scores, ancient and common western notation), and synchronize with audiovisual content.
For example, a multimedia music application for lyric operas, such as OpenDrama, synchronizes the music score with a video of the opera or with a virtual reality version (see Figure 2), letting the user

❚ follow the music by reading the instruments’ or singers’ score and selecting the lyrics in a different language if desired;

❚ read the libretto and its translation;

❚ read and understand the plot (which can be intricate); and

❚ personally annotate the music to mark a particular passage.
With such an application, a user inside a theater could follow the actions on stage using a palmtop installed in the stalls.
Figure 1. General scenarios of MPEG symbolic music representation (SMR) in entertainment and education. MPEG-4 with SMR supports distribution through satellite data broadcast, the Internet, and wireless or traditional communication to theaters, homes, archives, and devices such as interactive TVs and PDAs.

Education

Another application is the exploitation of MPEG SMR to support education (Musicalis, Imutus, and Wedelmusic are examples). As Figure 3 illustrates, these applications integrate SMR with audio, lyrics, annotations, and semantics for visual and audible renderings, and synchronize with audiovisual content. They also provide 3D virtual renderings of hand and body positions (posture and gesture) or of the scene.
Music education and edutainment are music representation’s largest markets. Just as we can synchronize a text to images and sounds in commercial DVDs, we can also synchronize audio, video, and music representation for various instruments and voices, and with various models or rendering aspects (tablature, Neumes, Braille, and so on). The capability to distribute content in different forms but related to the same piece of music is therefore desirable.
For music education and courseware construction, music notation must be integrated with multimedia capabilities. In a large part of Europe, about 80 percent of young students study music in schools; thus, many of them can read music representation and play an instrument to some extent. Lower but still relevant percentages can be observed in other countries.
Music courseware must integrate music representation with other elements, such as video and audio files; historic, biographic, and theory documents; and animation. A music course can also offer exercises requiring special music notation symbols (assigned by an instructor or user-defined, for example) or audio processing (such as play or theory training assessment). Thus, music education and courseware production requires that both the client (music content usage) and server (content generation) have a visual representation of the musical model with full access to logical aspects and additional semantics information for visual and audio rendering, as well as the capability to manipulate music, support synchronization, and establish relationships with other media. These systems and models must therefore let users do the following:
❚ navigate through music representation features and multimedia content by following links associated with music symbols or entities (such as a video explaining a difficult passage’s semantics or the composer’s goal);

❚ edit music, which implies dynamically altering the symbolic music’s logical, audio, and visual domains;

❚ transpose music notation, that is, dynamically altering details in the symbolic music’s logical, audio, and visual domains;

❚ play music by interpreting the symbolic music representation to produce, for example, a visual rendering with some scrolling and synchronized audio;

❚ format music notation, that is, dynamically interpreting the symbolic music representation to produce a visual rendering that fits the screen or window size;

❚ reduce a music score with one or several parts to a single piano part, which involves dynamically altering aspects in the symbolic music’s logical, audio, and visual domains;
❚ select one lyric from a collection of multilingual lyrics for the same music representation, or use lyrics in which music notation is synchronized with video and/or other audio content;

Figure 2. Mock up of a tool exploiting MPEG SMR for entertainment for theaters, interactive TV, archives, and PCs/Tablet PC.

Figure 3. Mock up of a tool for exploiting MPEG SMR for education that’s suitable for assisted or self-learning on interactive TV, Tablet PC, and mobile devices.
❚ synchronize audiovisual events with music representation elements to show visually how a piece of music should be played or a score followed (this can also involve changing performance velocity, adjusting playing parameters, and so on);

❚ show video of the instructor playing an instrument synchronously with the music notation or show the 3D rendering of correct hand gestures;

❚ play along with the computer, which could play selected parts using real audio, musical instrument digital interface (MIDI, http://www.midi.org) files, or a more sophisticated music notation rendering, or could comment on and correct the user’s work; and

❚ format music for rendering on different devices, in different formats, or with different resolutions (for example, spoken, or talked, music is a verbal description of a music context useful for vision-impaired, dyslexic, or young students, or for nonexperts editing music notation; see http://projects.fnb.nl/Talking%20Music/default.htm).
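The “format music notation” capability above can be viewed as a line-breaking problem. The following is an illustrative sketch only, assuming measures of known pixel widths; real notation engines use far richer justification and spacing rules than this greedy grouping.

```python
# Illustrative sketch (not part of any MPEG SMR specification): greedy
# line breaking of measures so a score fits a given screen or window
# width. Measure widths and the screen width are hypothetical values.

def break_into_systems(measure_widths, screen_width):
    """Group consecutive measures into systems (lines) that fit."""
    systems, current, used = [], [], 0
    for w in measure_widths:
        if current and used + w > screen_width:
            systems.append(current)   # current line is full; start a new one
            current, used = [], 0
        current.append(w)
        used += w
    if current:
        systems.append(current)
    return systems

# A 400-pixel-wide window: measures regroup into lines that fit.
print(break_into_systems([120, 150, 90, 200, 80, 130], 400))
# → [[120, 150, 90], [200, 80], [130]]
```

Reformatting the same symbolic content for a different window size is then just a call with a different width, which is the point of keeping the logical and visual domains separate.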
More interactive and complex players integrate additional capabilities to support the following:
❚ textual, notational, or audiovisual annotation, including capabilities for versioning and attaching complex information and rendering hints to SMR elements;

❚ recognition of notes sung or played by an instrument using pitch recognition or other technologies; that is, the audio events must be in line with the notation conventions at the logical or symbolic level, creating an appropriate reconstruction and maybe rendering;

❚ assessment with respect to semantic annotations; for example, how to execute a certain symbol or assess a student’s execution of a music notation piece (as noted earlier, a good level of assessment requires the precise modeling of pitch, rhythm, and sound quality, as well as detailed descriptions of physical gestures);

❚ cooperative music notation for classes, for orchestral work on music lecterns (for example, using Tablet PCs), for rehearsal management, and so on; and

❚ customization of SMR rendering according to the individual’s needs (for example, selecting symbols to be verbally described, printed, or played, or selecting a simpler model for the visualization or playing).
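The note-recognition capability above ultimately requires mapping detected audio events onto the logical pitch level of the notation. As a minimal sketch of that mapping step (assuming equal temperament with A4 = 440 Hz; real pitch trackers and assessment logic are far more involved):

```python
# Illustrative sketch: mapping a detected fundamental frequency to the
# nearest equal-tempered pitch, so audio events can be aligned with the
# notation at the logical/symbolic level. Assumes A4 = 440 Hz.
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def frequency_to_pitch(freq_hz):
    """Return (note name, octave, cents deviation) for a frequency."""
    midi = 69 + 12 * math.log2(freq_hz / 440.0)  # fractional MIDI number
    nearest = round(midi)
    cents = round((midi - nearest) * 100)        # deviation in cents
    name = NAMES[nearest % 12]
    octave = nearest // 12 - 1                   # MIDI 60 -> C4
    return name, octave, cents

print(frequency_to_pitch(440.0))   # → ('A', 4, 0)
print(frequency_to_pitch(261.63))  # middle C, within a fraction of a cent
```

The cents deviation is what an assessment module could use to judge intonation against the expected symbol.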
Integration

Current standardized MPEG tools don’t systematically or satisfactorily support symbolic forms of music representation. Indeed, MPEG-4 covers a huge media domain. In particular, the standard’s audio part lets an author synchronize standard MIDI content with other forms of coding.
Furthermore, structured audio (SA) allows structured descriptions of audio content through a normative algorithmic description language associated with a score language that’s more flexible than the MIDI protocol. Although it’s possible to extract symbolic representation from the information carried by SA tools (including MIDI), this information can’t sufficiently guarantee correct notation coding, because SA tools lack, for example, information about visual and graphic aspects, many symbolic details, thorough music notation modeling, and many details necessary for correct human–machine interaction through the SMR decoder (for example, concise visual sequences of events are represented by ornaments rather than by the detailed notes).
MPEG-7 also provides some symbolic music-related descriptors, but they aren’t intended for coding SMR as a form of content. On the other hand, SMR content can be a complete representation in itself and can be synchronized with other audiovisual elements.
The integration of SMR in MPEG satisfies the application requirements described earlier, as well as many more, letting such tools integrate with the powerful MPEG model for multimedia representation, coding, and playback. At the same time, compliance with the MPEG SMR standard won’t require compliance with all MPEG-4 parts and tools. MPEG mechanisms can cope with the standard’s complexity through profiles, which could let users shape specific SMR-oriented applications using only a few selected MPEG-4 features, as is the case for many other parts of MPEG-4. In fact, SMR’s added value for multimedia music integration goes beyond the development inside MPEG-4, because its core part can also act as a standalone tool outside the MPEG framework.
SMR will be integrated into MPEG-4 by
❚ defining an XML format for text-based symbolic music representation that will allow interoperability with other symbolic music representation and notation formats and act as a source for the production of equivalent binary information stored in files or streamed by a suitable transport layer;

❚ adding a new object type for delivering a binary stream containing SMR and synchronization information and providing an associated decoder that will let users manage the music notation model and add the necessary musical intelligence to allow human interaction; and

❚ specifying the interface and behavior for the SMR decoder and its relationship with the MPEG-4 synchronization and interaction layer (systems part; binary format for scenes, or BIFS, nodes; and advanced graphics parts).
Figure 4 illustrates SMR’s use during the authoring phase, highlighting the central role of the MPEG-SMR Extensible Markup Language (XML) format. Users can produce SMR XML content using appropriate converters or a native SMR music editor. An MPEG-4 SMR-enabled encoder can then multiplex the SMR XML file into an MPEG-4 binary file (standard XML binarization is also available in MPEG). A server can store or distribute MPEG-4 content as a binary stream to clients. The SMR binary stream contains information about music symbols and their synchronization in time and space. A decoder in the user’s terminal (player) converts this stream into a visual representation, which can be rendered, for example, in the BIFS scene.
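The authoring chain just described (text-based XML produced first, then turned into a binary form for multiplexing) can be sketched as follows. All element and attribute names here are hypothetical, since the article does not define the MPEG-SMR schema, and zlib compression merely stands in for MPEG’s standard XML binarization:

```python
# Illustrative sketch of the authoring chain: build a text-based
# symbolic score as XML, then serialize it to bytes as the payload for
# an MPEG-4 multiplexer. Element/attribute names are hypothetical, and
# zlib is only a stand-in for MPEG's standard XML binarization.
import xml.etree.ElementTree as ET
import zlib

def build_smr_xml(notes):
    """notes: list of (pitch, duration) pairs, e.g. ('C4', 'quarter')."""
    score = ET.Element("score")
    measure = ET.SubElement(score, "measure", number="1")
    for pitch, duration in notes:
        ET.SubElement(measure, "note", pitch=pitch, duration=duration)
    return ET.tostring(score, encoding="utf-8")

xml_bytes = build_smr_xml([("C4", "quarter"), ("E4", "quarter"),
                           ("G4", "half")])
binary_payload = zlib.compress(xml_bytes)  # stand-in for binarization
print(xml_bytes.decode())
print(len(xml_bytes), "bytes of XML,", len(binary_payload), "bytes binary")
```

The design point is the same as in Figure 4: converters from other formats and native editors both target the textual XML form, and only the encoder deals with the binary representation.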
Figure 5 shows a simple example of an MPEG-4 player supporting MPEG-4 SMR. The player uses the SMR node in the BIFS scene to render the symbolic music information in the scene (for example, by exploiting functionality of other BIFS nodes) as the SMR decoder decodes it. The user can interact with the SMR (to change a page, view, transpose, and so on) through the SMR interface node, using sensors in association with other nodes defining the audiovisual, interactive content. The user sends commands from the SMR node fields to the SMR decoder (dashed lines in Figure 5), which generates a new view for displaying the scene.
The SMR decoder consists of several components, as Figure 6 illustrates:
❚ The binary decoder decodes the binary stream coming from the MPEG-4 delivery multimedia integration framework (DMIF) interface or from an MPEG-4 file. The decoder extracts the optional SMR rendering rules and synchronization information from the SMR access units (AUs), loading the SMR rendering rules data structure to any SMR rendering rules engine and sending the synchronization information to the SMR manager.
Figure 4. SMR in the authoring phase.

Figure 5. SMR inside an MPEG-4 player. (AAC: advanced audio coding; BIFS: binary format for scenes; DMIF: delivery multimedia integration framework; SA: structured audio; SMR: symbolic music representation.)
❚ The SMR model contains SMR elements and information only, and refers other object types to other MPEG objects.
❚ The SMR renderer, controlled by the SMR manager, uses the SMR model with its parameter values and the SMR rendering rules to produce a view of the symbolic music information in the SMR decoder buffer. The rendering part of the SMR in the visual domain is based primarily on the Wedelmusic solution.11 Wedelmusic uses modules for music justification and symbol positioning that work together to arrange SMR symbol placement on the page or screen according to parameters such as page size, justification values, formatting rules, and style.
❚ The SMR decoder buffer can contain pixels and vector graphics information, depending on the specific SMR coded content. The graphic functionality of BIFS (or its textual format, the Extensible MPEG-4 Textual format, XMT-A) partially overlaps that of scalable vector graphics (SVG), which is a current tool used for implementing the graphic layer of symbolic music.

❚ The SMR manager coordinates the SMR decoder’s behavior. It receives and interprets events from the SMR node interface. Depending on the command type, it modifies parameters in the SMR model (such as transposition) and controls the SMR renderer. It also controls the synchronized rendering.

❚ The SMR node and other BIFS nodes produce composition buffers. The SMR node specifies the interface events to the rest of the BIFS scene and the user.
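The manager/model/renderer interplay above can be sketched in a few lines: an event arriving from the SMR node changes a model parameter (here, transposition), and the renderer produces a refreshed view for the decoder buffer. Class and field names are illustrative, not taken from the standard:

```python
# Minimal sketch of the SMR manager/model/renderer split described in
# the component list. All names are illustrative, not normative.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

class SMRModel:
    def __init__(self, midi_notes):
        self.midi_notes = list(midi_notes)  # logical domain only
        self.transposition = 0              # rendering parameter

class SMRRenderer:
    def render(self, model):
        """Produce a (textual) view of the model for the decoder buffer."""
        return [NAMES[(n + model.transposition) % 12]
                for n in model.midi_notes]

class SMRManager:
    """Interprets events from the SMR node and drives the renderer."""
    def __init__(self, model, renderer):
        self.model, self.renderer = model, renderer

    def handle_event(self, command, value):
        if command == "transpose":               # event from the SMR node
            self.model.transposition += value    # modify model parameter
        return self.renderer.render(self.model)  # refreshed view

mgr = SMRManager(SMRModel([60, 64, 67]), SMRRenderer())  # C, E, G
print(mgr.handle_event("transpose", 2))  # → ['D', 'F#', 'A']
```

The essential design choice this mirrors is that interaction commands never touch the rendered pixels directly; they alter model parameters, and the renderer regenerates the view.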
In MPEG-4, integration and synchronization rely on AUs, which are individually accessible portions of data in elementary streams. An AU is the smallest data entity to which the system (or media synchronization) layer can attribute timing information.
We can encode SMR into the bit stream AUs in three ways:

❚ We can include part (or all) of the SMR synchronization information (that is, time labels associated with events) in the header (ObjectTypeSpecificInfo).

❚ We can insert SMR synchronization information in AUs with embedded time stamps and thus can deliver AUs with content intended for a later use.

❚ We can insert SMR synchronization information in AUs without embedded time stamps. In this case, the AU’s decoding time stamp is the reference time; events are to be scheduled upon receipt.
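The difference between the last two cases can be sketched directly: events either carry their own embedded time stamps, or inherit the AU’s decoding time stamp and are scheduled on receipt. The data layout below is illustrative, not the actual bitstream syntax:

```python
# Sketch of the two AU timing cases: self-timed events delivered ahead
# of time versus events scheduled at the AU's decoding time stamp.
# The dictionary layout is illustrative, not the MPEG-4 bitstream.

def schedule_events(access_unit, receipt_time):
    """Return (time, event) pairs ready for the synchronization layer."""
    if access_unit.get("embedded_timestamps"):
        # Content delivered ahead of playback; each event is self-timed.
        return [(ts, ev) for ts, ev in access_unit["events"]]
    # Otherwise the AU's decoding time stamp is the reference time.
    return [(receipt_time, ev) for ev in access_unit["events"]]

prerecorded = {"embedded_timestamps": True,
               "events": [(0.0, "note C4 on"), (0.5, "note C4 off")]}
live = {"embedded_timestamps": False, "events": ["note G4 on"]}

print(schedule_events(prerecorded, receipt_time=10.0))
# → [(0.0, 'note C4 on'), (0.5, 'note C4 off')]
print(schedule_events(live, receipt_time=10.0))
# → [(10.0, 'note G4 on')]
```

The second case matches the live-performance use case described next, where events must be scheduled as they arrive.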
We can associate each possibility with different use cases. In some situations, content is more or less available before its playback, and everything is codable. In other situations, we must use AUs (for example, if the performer is playing a MIDI device in real time and transmitting it in a stream).
SMR dedicates particular care to the relationship with MIDI information. If a publisher wants to use only some MIDI files in MPEG-4-compliant devices (using the simplest object type defined in the SA subpart), and if these devices support SMR visualization, the specification will let a user client tool automatically convert MIDI files (through a specific algorithm) into SMR on the client side and render them. Similarly, a publisher might deliver only the SMR. In these cases, the client can generate the MIDI information from SMR for use with MIDI-compliant devices. This is particularly important to guarantee straightforward adaptation of current devices.
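The client-side MIDI-to-SMR direction can be sketched as follows. This is only a toy conversion under stated assumptions (a fixed tick resolution, monophonic input); the article does not give the specific algorithm, and a real converter must also infer voices, rests, beams, and key spelling:

```python
# Hedged sketch of a client-side MIDI-to-symbolic conversion: MIDI note
# numbers become note names, and tick durations are quantized to the
# nearest notation value. Assumes 480 pulses per quarter note and
# monophonic input; real converters are considerably more involved.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
PPQ = 480  # MIDI pulses per quarter note (a common resolution)
DURATIONS = {PPQ * 4: "whole", PPQ * 2: "half", PPQ: "quarter",
             PPQ // 2: "eighth", PPQ // 4: "sixteenth"}

def midi_to_smr(events):
    """events: list of (midi_note, duration_in_ticks) pairs."""
    symbols = []
    for note, ticks in events:
        name = NAMES[note % 12] + str(note // 12 - 1)  # MIDI 60 -> C4
        # Quantize to the nearest known notation duration.
        nearest = min(DURATIONS, key=lambda d: abs(d - ticks))
        symbols.append((name, DURATIONS[nearest]))
    return symbols

print(midi_to_smr([(60, 470), (62, 240), (64, 960)]))
# → [('C4', 'quarter'), ('D4', 'eighth'), ('E4', 'half')]
```

Note how the slightly short first note (470 ticks) is still recognized as a quarter note; quantization of imprecise performance timing is exactly why MIDI alone cannot guarantee correct notation, as discussed above.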
Figure 6. Structure of an MPEG-SMR decoder.

Conclusion

Integration of SMR in multimedia frameworks, and particularly MPEG, is opening the way to the implementation of a large set of new applications in education, entertainment, and cultural valorization. The overall architecture and functionality presented in this article are consolidated, and the standardization process has already produced the reference model and software. Some side issues will be solved as the standardization progresses. These issues include the relationship with parts of SA other than MIDI, the exact scope of formatting rules for music symbols, and the precise specification of standard interaction with a compliant decoder.
Looking ahead, MPEG SMR might have value as an independent standard for music representation and as a data type to be integrated into applications based on MPEG-21, which addresses other fundamental issues for multimedia frameworks, such as digital rights management and media adaptation.
Acknowledgments

We express our deep thanks to everyone who took part in the MPEG ad hoc group on SMR, all those belonging to the MusicNetwork working group on music notation, and those who attended joint meetings of MPEG and MusicNetwork. In particular, we thank Jerome Barthelemy, David Crombie, Matthew Dovey, Eleanor Selfridge-Field, James Ingram, Neil McKenzie, Steve Newcomb, Kia Ng, Hartmut Ring, Perry Roland, Martin Ross, Jacques Steyn, Tillmann Weyde, and Tom White. We apologize in advance if we forgot to mention other names.
References

1. P. Bellini et al., “Multimedia Music Sharing among Mediateques, Archives, and Distribution to Their Attendees,” J. Applied Artificial Intelligence, Taylor and Francis, Sept.–Oct. 2003, pp. 773-795.
2. P. Bellini, F. Fioravanti, and P. Nesi, “Managing Music in Orchestras,” Computer, Sept. 1999, pp. 26-34.
3. P. Bellini, P. Nesi, and M.B. Spinu, “Cooperative Visual Manipulation of Music Notation,” ACM Trans. Computer-Human Interaction, vol. 9, no. 3, Sept. 2002, pp. 194-237.
4. D. Blostein and H.S. Baird, “A Critical Survey of Music Image Analysis,” Structured Document Image Analysis, H.S. Baird, H. Bunke, and K. Yamamoto, eds., Springer Verlag, 1992, pp. 405-434.
5. D.A. Byrd, Music Notation, PhD dissertation, Dept. of Computer Science, Indiana Univ., 1984.
6. R.B. Dannenberg, “A Structure for Efficient Update, Incremental Redisplay, and Undo in Graphical Editors,” Software Practice and Experience, vol. 20, no. 2, Feb. 1990, pp. 109-132.
7. R.B. Dannenberg, “A Brief Survey of Music Representation Issues, Techniques, and Systems,” Computer Music J., vol. 17, no. 3, 1993, pp. 20-30.
8. G.M. Rader, “Creating Printed Music Automatically,” Computer, June 1996, pp. 61-68.
9. E. Selfridge-Field, ed., Beyond MIDI: The Handbook of Musical Codes, MIT Press, 1997.
10. F. Pereira and T. Ebrahimi, eds., The MPEG-4 Book, USC Integrated Media Systems Center Press, 2002.
11. P. Bellini, I. Bruno, and P. Nesi, “Automatic Formatting of Music Sheets through MILLA Rule-Based Language and Engine,” J. New Music Research, vol. 34, no. 3, 2005, pp. 1-21.
Pierfrancesco Bellini is a contract professor in the Department of Systems and Informatics at the University of Florence. His research interests include object-oriented technology, real-time systems, formal languages, and computer music. Bellini has a PhD in electronic and informatics engineering from the University of Florence, and has worked on Moods, Wedelmusic, Imutus, MusicNetwork, Axmedis, and many other projects funded by the European Commission.
Paolo Nesi is a full professor in the Department of Systems and Informatics at the University of Florence and chief of the Distributed Systems and Internet Technology research group. His research interests include object-oriented technology, real-time systems, computer music, and parallel and distributed architectures. Nesi has a PhD in electronic and informatics engineering from the University of Padova. He has coordinated several multipartner international research and design projects of the European Commission, such as Moods, Wedelmusic, Axmedis, and the Interactive Music Network.
Giorgio Zoia is a scientific advisor at the Signal Processing Institute of the École Polytechnique Fédérale de Lausanne (EPFL). His research interests include 3D spatialization, audio synthesis and coding, representations and description of sound, interaction, and intelligent user interfaces for media control. Other research interests include compilers, virtual architectures, and fast execution engines for digital audio. He has a PhD in technical sciences from EPFL.
Readers may contact Paolo Nesi at [email protected].