MUSIC FOR CLARINET AND ISPW, COMPOSED BY CORT LIPPE (1992)
By: Glenda A. Miller
Besides the usual avenue of opening the liner notes on the CD, my introduction to
Cort Lippe was through a personal viewpoint section on a website featuring the
composer. Lippe gave his thoughts on being labeled a “computer musician.” He
explained that his “…reservations and ‘mistrust’ of technology extends not only to the
computer, as a musical tool, but to all its ramifications as a human tool.” He thinks some
of his mistrust of technology is absurd, but believes that any technological advances
made by mankind through history have had good and bad effects and uses.
Lippe also comments on the idea that technology empowers us and can lead some
musicians to think they know more than they do. Lippe states that, “…most electronic
musicians only understand a portion of this very complicated and multi-disciplinary
domain, and many electronic musicians are not excellent pedagogues.” He sees the
computer being used extensively in music education, and one of his “mistrusts” of the
computer as a musical tool is the tendency of inexperienced music students to rely on it
too heavily. He believes that a student’s skill as a musician can be compromised by
turning to the computer too early in his or her education: the student will not develop
such skills as the inner ear (imagination), basic keyboard facility, and notating readable
music by hand. Lippe’s comment
on all of this: “I wonder what this will lead to eventually? But maybe none of this is
important if we have computers? I just think we should keep in mind the possibility that
someday the electricity might be turned off. How many of us will still be able to make
music if that happens?” That said, as a composer Lippe uses the computer in almost
all areas of his work, composing both purely instrumental works and works for
instruments and computer. His primary interest is in real-time interactive computer
music (http://eamusic.darmouth.edu/~wowem/interviews/lippe.html).
Dr. Mark Ballora, who teaches music technology at Penn State University, states in
an e-article written for Electronic Musician, “Nothing about the idea [interactive
composition] is new; people have been writing and playing interactive works for more
than 25 years. But the pioneers worked for institutions that could spend hundreds of
thousands of dollars on specialized computer systems.” Ballora goes on to say that PCs
are now intertwined with everyday life. In interactive composition and performance, he
states, “. . . control of a piece includes a computer that has been programmed to sense
significant musical features from a human performer and produces its own music in
response” (http://emusician.com/mag/square_one/emusic_interaction/index.html).
Lippe believes the advantages are clear in that real-time interaction gives a
musical flexibility impossible in the tape domain. “The dynamic relationship between
performer and musical material, as expressed in the musical interpretation, can become
an important aspect of the man/machine interface for the composer and performer, as
well as the listener, in an environment where musical expression is used to control an
electronic score,” says Lippe. Real time decision making is what a musician does while
interpreting a piece of music in a performance situation, but using this information and
recognizing what a musician is doing on as many different levels as possible is what the
composer does best (http://www.music.buffalo.edu/faculty/lippe/lippepublications.shtml).
A brief history of real-time interaction and composition will probably help the
reader, as it did the author while researching this paper. In 1969, Georges
Pompidou became President of France. One of his priorities was that “Paris possess a
cultural center that would be both a museum and a center of creation, where visual arts
would coexist with music, cinema, books, and audio-visual research”
(http://www.music.psu.edu/Faculty%20Pages/Ballora/INART55/ircam.html). In 1970,
Pompidou invited Pierre Boulez to create and direct a music research institute to be
associated with the newly forming Centre National d’Art Contemporain. Boulez was
named director of the music and acoustics division of the center, called IRCAM (Institut
de Recherche et Coordination Acoustique/Musique, or Institute for Research and
Coordination of Acoustics and Music). In its initial research approach and equipment, there was
continuity from both Bell Labs and Stanford. In 1974, Max Mathews was appointed
scientific advisor, an arrangement in which he spent three or four months of the year at
IRCAM and the rest of his time at Bell Labs. Jean-Claude Risset headed the computer
department, and Luciano Berio headed the electronic music department. The first
computers at IRCAM were a PDP-10 and a PDP-11, and by 1976, sounds were heard
from Music V, Music 10, and from a variety of other software that had been developed at
Stanford. IRCAM officially opened in 1977 (Chadabe 1997: 118-122).
In 1975, Berio brought Giuseppe Di Giugno to Paris. Di Giugno was introduced
to Max Mathews, and eventually came to IRCAM to work on a digital synthesizer,
beginning the 4 series of digital synthesizers. Before IRCAM, Mathews had developed a
system for Pierre Boulez (when Boulez was director of the New York Philharmonic),
called the Conductor, that did not require the use of tape. The Conductor
system allowed electronic elements to be dynamically controlled by external devices such
as joysticks and percussion instruments. Mathews continued to work on the Conductor
program at IRCAM. In 1981, Di Giugno finished the 4X synthesizer, which was meant
as a universal musical machine for signal processing
(http://emusician.com/mag/square_one/emusic_interaction/index.html;
http://www.music.psu.edu/Faculty%20Pages/Ballora/INART55/ircam.html).
In 1985, Miller Puckette arrived at IRCAM and began to work on 4X-related
software with Philippe Manoury. In 1987, working with a flutist, Manoury wanted the
flute’s sound to be transformed in the 4X; pitches played by the flutist would determine
how the flute’s sound would be transformed, and at the same time, the flutist would
trigger other electronic sounds. There was a timing problem, which Puckette
solved by writing a real-time scheduler. He said, “I used ideas that I had learned from
Max Mathews and so I ended up calling the program MAX.” In the spring of 1987,
Puckette wanted to make this whole process easier. Because the configurations were
getting too complicated, he decided to move the whole program to the Macintosh and
created a graphic interface, which he called the Patcher. In 1988, Puckette rewrote the
program, making it more interactive (Chadabe 1997: 182-184).
Later, MIDI (Musical Instrument Digital Interface) handling was added to the
capabilities of the 4X real-time system. The 4X, however, was a considerable expense to
purchase and maintain. Digital signal processing chips were then introduced and
installed on a variety of add-on boards designed for personal computers such as the IBM
PC, Apple Macintosh, and NeXT machine, making digital signal processing hardware
very affordable. The accelerated use of digital signal processing in live performance
brought about the IRCAM Signal Processing Workstation (ISPW), which consists of a
NeXT computer equipped with a special accelerator board on which reside two Intel
i860 processors. A new version of MAX was written to include signal objects. These
objects can be used to build signal-processing programs, just as MIDI MAX objects are
configured to implement control programs. When the two classes are combined, the
conceptualization and implementation of interactive systems using real-time signal
processing becomes much easier (Rowe 1994: 15-27).
Powerful signal analysis techniques can be realized on the ISPW. The
workstation is fast enough to perform an FFT and inverse FFT in real time,
simultaneously with an extensive network of other signal and control processing. A fast
Fourier transform (FFT) is an efficient algorithm for computing the discrete Fourier
transform (DFT) and its inverse. FFTs are of great importance to a wide variety of
applications, from digital signal processing to solving partial differential equations to
algorithms for quickly multiplying large integers (Rowe 1994: 15-27).
Both pitch- and envelope-tracking objects have been used for compositional
sketches. Envelope tracking is a function (also called keyboard tracking, key follow, and
keyboard rate scaling) that changes the length of one or more envelope segments
depending on which key on the keyboard is being played. It is most often used to give
the higher notes shorter envelopes and the lower notes longer envelopes, mimicking the
response characteristics of percussion-activated acoustic instruments such as guitar and
marimba. In pitch tracking, accurate pitch estimates of speech are necessary for several
applications, including speech coding, speech recognition, and prosody extraction. With
such a wide range of interest, many researchers have worked on constructing
pitch-determination algorithms that are ideal for their applications. If continuous sensing
of pitch, amplitude, and timbral information can be achieved from the audio signal alone,
the entire sensing/processing/response chain could be reduced to a single machine (Rowe
1994: 15-27).
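In the audio-signal sense (following a live instrument's loudness, as distinct from the keyboard-tracking function described above), an envelope follower can be sketched very simply: rectify the signal, then smooth it with separate attack and release time constants. This is a generic textbook design in Python with NumPy, not the ISPW object itself.

```python
import numpy as np

def envelope_follow(signal, sr=44100, attack_ms=5.0, release_ms=50.0):
    """One-pole envelope follower: rectify the input, then smooth it,
    rising quickly (attack) and falling slowly (release)."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(signal))
    level = 0.0
    for i, x in enumerate(np.abs(signal)):   # rectify
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

# A sine burst: silence, then 0.1 s of a 440 Hz tone, then silence again.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr // 10) / sr)
signal = np.concatenate([np.zeros(sr // 10), tone, np.zeros(sr // 10)])
env = envelope_follow(signal, sr)
```

The output `env` rises sharply when the tone enters and decays gradually after it stops; thresholding such a curve is one simple way to detect articulations like staccato versus legato.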
In 1990, Opcode Systems released a commercial version of MAX, a graphic
programming environment for interactive music systems. MAX is an object-oriented
programming language in which all programs are realized by manipulating graphic
objects on a computer screen and making connections between them (Rowe 1994: 15-
27).
In a paper written by Lippe, “A Composition for Clarinet and Real-Time Signal
Processing: Using MAX on the IRCAM Signal Processing Workstation,” he explains the
advantages of using real-time continuous control parameters with acoustic instruments.
Lippe states, “In the frequency domain, pitch tracking can be used to determine the
stability of pitch on a continuous basis for recognition of pitch-bend, portamento,
glissando, trill, tremolo, etc. In the amplitude domain, envelope following of the
continuous dynamic envelope for articulation detection enables one to determine flutter-
tongue, staccato, legato, sforzando, crescendo, etc. In the general domain, FFTs, pitch
tracking, and filtering can be used to track continuous changes in the spectral content of
sounds for detection of multiphonics, inharmonic/harmonic ratios, timbral brightness, etc.
High level event detection combining the analyses of frequency, amplitude, and spectral
domains can provide rich control signals that reflect subtle changes found in the input
signal.” (http://www.music.buffalo.edu/faculty/lippe/lippepublications.shtml).
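One common way to obtain the pitch estimates Lippe describes is autocorrelation: the lag at which a frame of audio best correlates with itself gives the waveform's period. The sketch below is a generic method of this kind, not necessarily the tracker used at IRCAM, and the test tone (a fundamental plus a third harmonic, loosely evoking the clarinet's odd-harmonic spectrum) is invented for illustration.

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=80.0, fmax=1000.0):
    """Autocorrelation pitch estimate: the lag with the strongest
    self-similarity corresponds to one period of the waveform."""
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo = int(sr / fmax)            # shortest period to consider
    hi = int(sr / fmin)            # longest period to consider
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag                # period in samples -> frequency in Hz

sr = 44100
t = np.arange(2048) / sr
# 294 Hz (roughly D4) plus a weaker third harmonic.
clarinet_like = np.sin(2 * np.pi * 294 * t) + 0.3 * np.sin(2 * np.pi * 882 * t)
pitch = estimate_pitch(clarinet_like, sr)
```

Running the estimate on a stream of frames yields the "continuous basis" Lippe mentions: comparing successive estimates distinguishes a stable held pitch from a glissando or trill.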
Cort Lippe’s resume is 13 pages long, written in 10-point type, and covers his
career and a multitude of accomplishments from the very early 1970s to the present day.
His studies span music composition, formalization and analysis, programmed music, and
electronic and computer music. According to his formal biography, Cort Lippe studied
composition in the United States with Larry Austin (composer, University of South
Florida). He spent a year studying Renaissance music in Italy. Three years were spent at
the Instituut voor Sonologie in the Netherlands in the fields of computer and formalized
music with G.M. Koenig and Paul Berg. Lippe lived for eleven years in France. In Paris,
he worked at Centre d’Etudes de Mathematique et Automatique Musicales (CEMAMu),
directed by Iannis Xenakis, for approximately three years and studied at the University of
Paris following Xenakis’ course on formalized music. While still in Paris, he worked for
eight years at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM),
founded by Pierre Boulez. There, he developed real time musical applications and gave
courses on new technology in composition.
Lippe has followed composition and analysis seminars with many composers
including: Boulez, Donatoni, Huber, Messiaen, Penderecki, Stockhausen, and Xenakis,
and has written for most major ensemble formations. He has received numerous
international composition prizes and his music has premiered at major festivals
worldwide. Today, he teaches composition and is director of the Lejaren Hiller Computer
Music Studios at the State University of New York (SUNY), Buffalo
(http://www.music.buffalo.edu/faculty/lippe/index.shtml).
The composition Music for Clarinet and ISPW (1992), by Cort Lippe, was created
using the IRCAM Signal Processing Workstation (ISPW) and the software MAX.
IRCAM (Institut de Recherche et Coordination Acoustique/Musique) developed this real-
time digital processing system, the ISPW, between 1988 and 1991. Miller Puckette
developed the ISPW version of MAX, which includes signal-processing objects in
addition to many of the standard objects found in the Macintosh version of MAX.
Currently, there are over 40 signal-processing objects in MAX. Objects exist for most
standard signal-processing tasks, including filtering, sampling, pitch tracking, threshold
detection, direct-to-disk recording, delay lines, and FFTs. With this ISPW version of
MAX, the ease with which one creates control patches in the original Macintosh version
is carried over into the domain of signal processing
(http://www.music.buffalo.edu/faculty/lippe/lippepublications.shtml).
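Most of the signal objects listed above wrap standard DSP building blocks. As one illustration, a delay line reduces to a circular buffer; the class below is a generic textbook structure in Python, not IRCAM's implementation, with a feedback parameter so that each echo repeats at reduced amplitude.

```python
class DelayLine:
    """Minimal circular-buffer delay line, the kind of primitive a
    signal-processing object in a MAX-like environment would wrap."""
    def __init__(self, delay_samples, feedback=0.0):
        self.buf = [0.0] * delay_samples
        self.pos = 0
        self.feedback = feedback

    def tick(self, x):
        y = self.buf[self.pos]                  # read the delayed sample
        self.buf[self.pos] = x + y * self.feedback  # write input + feedback
        self.pos = (self.pos + 1) % len(self.buf)
        return y

# An impulse through a 4-sample delay with 50% feedback produces
# echoes at samples 4, 8, 12, ... with halving amplitude.
d = DelayLine(4, feedback=0.5)
out = [d.tick(x) for x in [1.0] + [0.0] * 11]
```

In a patch, such a primitive would process one signal block at a time while control objects set `delay_samples` and `feedback`; this split between signal and control classes is exactly the pairing Rowe describes.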
Rowe illustrates and explains the various levels of processing in Lippe’s piece.
“In the signal flow chart (Illustration 1), there are various levels of processing involved in
the realization of the piece. First of all, the clarinet is sampled through an ADC and
routed through the pitch tracker resident on an IRCAM i860 DSP board (ISPW) mounted
on the NeXT computer. The output of the pitch tracker goes on to a score following
stage, accomplished with the “explode” object. Index numbers output from “explode”
then advance through an event list, managed in a “glist” object, which sets signal
processing variables and governs the evolution of a set of compositional algorithms.”
(Rowe 1994: 88-89).
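The score-following stage of that chain can be pictured with a toy matcher like the one below: detected pitches advance an index into a stored score, and certain index values trigger electronic events. The event names and the one-note-at-a-time matching rule are invented for illustration; Puckette's actual "explode" object is considerably more sophisticated, since it must cope robustly with wrong, missing, and extra notes.

```python
# Stored score: expected MIDI note numbers, in order.
score = [62, 67, 70, 74, 79]

# Hypothetical event list: score index -> electronic event to trigger.
events = {2: "start_harmonizer", 4: "start_freq_shift"}

def follow(detected_notes, score, events, tolerance=1):
    """Advance through the score as detected notes match the next
    expected note; collect any events attached to reached indices."""
    triggered = []
    index = 0
    for note in detected_notes:
        if index < len(score) and abs(note - score[index]) <= tolerance:
            index += 1                      # performer reached the next note
            if index - 1 in events:
                triggered.append(events[index - 1])
    return index, triggered

# The performer plays the score with one stray note (65), which is ignored.
index, triggered = follow([62, 67, 65, 70, 74, 79], score, events)
```

This captures the double role Lippe gives the clarinetist: by playing the notated part, the performer also "conducts" the electronic part, since reaching designated score positions fires the corresponding events.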
Illustration 1. Signal Flow Chart Depicting Various Levels of Processing Involved in Cort Lippe’s Composition,
“Music For Clarinet and ISPW” (1992)
The Composer in the Computer Age – VII, CDCM Computer Music Series Vol. 24,
presents computer music compositions by five accomplished composers. Cort Lippe is
one of the five composers and his piece, Music for Clarinet and ISPW (IRCAM Signal
Processing Workstation), was written in 1992 for clarinetist Esther Lamneck and
premiered in New York in March, 1992. Esther Lamneck received her Doctorate from the
Juilliard School of Music. She maintains an active career as clarinet soloist and as
conductor and director of the New York University New Music and Dance Ensemble.
Lamneck has performed throughout the United States and Europe. The technical
description of how this piece was produced is quoted verbatim from the liner notes of
the CD as follows:
‘Technically, the clarinet pitches are tracked by the computer as the performer plays. This pitch information is sent to a “score follower” which allows the computer to follow the player’s performance by comparing it to a copy of the score which is stored in the computer. At specific points designated in the score, electronic events are triggered by the score follower. Thus, the clarinetist has the double role of performer and “conductor” (of the electronic part). The computer also tracks other parameters of the clarinet, such as amplitude and continuous pitch change, and uses this information for continuous control of the digital synthesis algorithms running in the computer on a more local level than the “event” level. The intent is to give the player a level of musical control based on performance expressivity, which hopefully allows for a certain degree of interactivity between the performer and the computer.

‘All the sounds used in the electronic part come from the composed clarinet part and are recorded and transformed by the computer in real time during the piece. Thus, the musical and sound material for the instrumental and electronic parts are one and the same. The instrument/machine relationship is neither a dialogue nor a duo. Musically, the computer part is not separate from the clarinet part, but serves rather to “amplify” the clarinet in a multitude of dimensions and directions.’ – By Cort Lippe (© 1997 Centaur Records, Inc.)
The primary goal of this paper is to discuss the 17:55-minute piece, Music for
Clarinet and ISPW, as a listener, and also to attempt to understand the composer’s
intentions and skills. The piece appears to have four sections, with the fourth section
being the longest at 15:45 minutes in length. The opening of the piece starts with a few single,
monophonic notes from the clarinet that form a motif-like sound. At 5 seconds, the notes
become slightly frenzied and polyphonic (enter electronic sounds) and end with several
trills from the clarinet at 21 seconds. After a long pause from 21 to 32 seconds, the piece
resumes again with the same monophonic clarinet motif that was heard in the first
section. At this point in the second section, the note pattern changes and the
musical/electronic sound quickly rises in pitch with frequent trills interjected by the
clarinet and ends at precisely 1 minute. After a six second pause, the piece resumes again
at 1:06 with the same short clarinet motif as in the two previous sections. This third
section is longer (1:06-2:10) and after the same familiar motif, the section is accented
with rapid staccato crescendos by the clarinet that culminate with long shrill notes
resembling elephant-like noises.
The fourth section follows ten seconds of silence (2:10-2:20) and is introduced
with the clarinet playing a new higher pitched melody of notes, all monophonic. At 2:39,
the clarinet is “mixed” with electronic sounds. This section seems to have more space,
one or two seconds of silence between clarinet, clarinet/electronic, and electronic sound
interfaces. This space causes the listener to be tossed about between silences looking for
links that can tie each sequence together. This is not always possible. An Egyptian-like
melody emerges from 2:36 to 3:33, which is very refreshing. From 5:00 to 5:16, the clarinet
sounds like a violin playing rapid staccato notes that rise and then descend; this also
happens from 6:00 to 6:15. A low, slow, synthesized clarinet starts to play at 5:18 and
gradually rises to a frenzied pitch that ends with two electronic metallic attack sounds and
then returns to a slow, synthesized sound at 5:50. This metallic shrill sound is also heard
at 6:25, 6:41, and 7:00.
During a 28 second interval (7:00 to 7:28), there are two low notes that introduce
another frenzy of high pitched notes, crackling and screeching, that immediately descend
to a single note. The vision that comes to mind is that of a rusty, turning Ferris wheel in
an amusement-park horror flick. The Ferris wheel slowly grinds to a halt with the empty
chairs swinging back and forth. The next 28 seconds give the listener a moment to
breathe. There is a melody of dissonance and low-pitched notes by the clarinet. Exactly at
8:00, the listener is abruptly awakened by two sets of high-pitched crescendos that
screech to a halt at 8:15.
The tempo of the piece slows down considerably between 8:15 and 10:55. The
listener experiences a few notes here and there mixed with a buzz and screech and a
muted, high-pitched frenzy of notes. There is a welcome change at 10:55: single notes
from the clarinet slowly transform into high-pitched, staccato intervals. A tension is felt
by the listener, as each succeeding interval rises in pitch to a fevered, frenzied climax at
12:37. At this moment, the listener experiences an explosion of piercingly loud,
screeching, rhythmic, convulsing notes delivered from the clarinet and electronic
mix. The climactic intervals eventually start to descend very slowly and do not come to
rest until 13:40. Again the listener is given an opportunity to breathe. From 13:40 to 15:29,
the clarinet has a strange electronic sound; the tempo slows down and the music is totally
dissonant, but the tone is smooth and continuous.
From 15:29 to 16:00, the clarinet plays single, loud, unmelodic, computer
synthesized notes that come at the listener at random. The listener can also hear a
brushing or breathing sound at 15:47. From 16:00 to 17:28, all sounds are barely audible,
with a random note sounding on occasion or a few notes played in sequence, along with
some faint white noise. During the last 27 seconds of the piece, the listener is treated to
delightful, rapidly ascending, melodic scales from the computer-synthesized clarinet.
According to Lippe, “The signal processing used in Music for Clarinet and ISPW
include several standard signal processing modules: reverb, delay, harmonizing, flanging,
frequency shifting, spatializing, and frequency/amplitude modulation.” He also wants
the listener to know that several non-standard sampling techniques were also used,
including a time stretching algorithm developed by Miller Puckette (MAX inventor).
“Thus, one can slow down a sample playback while maintaining the original pitch, or
change the pitch of a sample playback without changing its duration,” says Lippe
(http://www.music.buffalo.edu/faculty/lippe/lippepublications.shtml).
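Lippe does not describe Puckette's algorithm in detail, but the idea of stretching duration without changing pitch can be sketched with period-synchronous overlap-add: windowed grains are read from the input more slowly than they are written to the output, and each grain plays back unchanged, so pitch is preserved. For simplicity, the sketch below hops by whole waveform periods so that overlapping grains stay phase-aligned; real time-stretching algorithms must solve this alignment for arbitrary material, and this is emphatically not Puckette's actual method.

```python
import numpy as np

def stretch_periodic(signal, period, factor, cycles_per_grain=4):
    """Period-synchronous overlap-add time stretch (a simplified,
    PSOLA-like sketch). Grains span a whole number of waveform periods
    and hops are whole periods, so overlapping grains stay phase-aligned:
    pitch is unchanged while duration scales by `factor`."""
    grain = period * cycles_per_grain
    hop_out = grain // 2                     # 50% overlap; Hanning sums to ~1
    hop_in = int(round(hop_out / factor / period)) * period  # whole periods
    window = np.hanning(grain)
    n_grains = (len(signal) - grain) // hop_in
    out = np.zeros(n_grains * hop_out + grain)
    for g in range(n_grains):
        seg = signal[g * hop_in : g * hop_in + grain] * window
        out[g * hop_out : g * hop_out + grain] += seg
    return out

sr = 44100
period = 100                                 # 441 Hz at 44.1 kHz
t = np.arange(sr) / sr                       # one second of tone
tone = np.sin(2 * np.pi * (sr / period) * t)
stretched = stretch_periodic(tone, period, factor=2.0)  # ~2x longer, same pitch
```

Changing pitch without changing duration, the other transformation Lippe mentions, is the dual operation: resample the grains while keeping the output hop fixed.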
The clarinet has the capacity to glide smoothly between pitches, and this allows
for a highly expressive style of playing. Its tone is more mellow than that of any of the
other woodwinds (Wright 2004: 47). I feel this real-time interplay of the clarinet and
computer demonstrates a beautiful union between the past and the present. The clarinet
has been around for a few hundred years, and combining it with the recent mastery of
high-tech computer systems creates this specialized genre of music known as computer
music. Cort Lippe has been a leading figure in the international electroacoustic music
community and has been active in the field of interactive computer music for more than
20 years. I believe Lippe has reached his goal of creating an intimacy between machine
and performer with this piece.
WORKS CITED
http://eamusic.darmouth.edu/~wowem/interviews/lippe.html
http://www.music.buffalo.edu/faculty/lippe/lippepublications.shtml
http://www.music.buffalo.edu/lippe/pdfs/LippeCV.pdf
http://www.music.buffalo.edu/faculty/lippe/index.shtml
http://emusician.com/mag/square_one/emusic-interaction/index.html
http://www.music.psu.edu/Faculty%20Pages/Ballora/INART55/ircam.html
Chadabe, J. 1997 Electric Sound – The Past and Promise of Electronic Music. New Jersey:
Prentice Hall.
Rowe, R. 1994 Interactive Music Systems – Machine Listening and Composing.
Massachusetts: The MIT Press.
1997 The Composer in the Computer Age – VII, CDCM Computer Music Series Volume 24, 1997 Centaur Records, Inc., 8867 Highland Road, Suite 206, Baton Rouge, LA 70808, Liner Notes.
Wright, C. 2004 Listening To Music, Fourth Edition, Schirmer, a division of Thomson
Learning, Inc.
Illustration 1. Rowe, Interactive Music Systems – Machine Listening and Composing
Cover Page: The Composer in the Computer Age – VII, CDCM Computer Music Series Volume 24