Non-Musicians' and Musicians' Perception of Bitonality


Non-musicians' and musicians' perception of bitonality

Mayumi Hamamoto, Mauro Botelho and Margaret P. Munger
Davidson College, USA

Abstract

Bitonal music is characterized by a certain, dissonant effect that had been believed to be clearly audible by everyone. However, Wolpert found that non-musicians were unable to identify bitonal versions of originally monotonal musical passages as such in a free response task. The present study replicated Wolpert's findings, but also had participants rate song clips for likeableness, correctness and pleasantness. Bitonal music was rated lower on all dimensions independent of the individual's level of musical training, with no difference in ratings by non-musicians and musicians. In addition, following a brief training session, non-musicians (less than one year of musical training) identified whether clips were monotonal or bitonal at equivalently high rates as the intermediate (mean 2.4 years) and expert (mean 9.2 years) musician groups.

Keywords: bitonality, musicians, non-musicians, perception, tonality

Introduction

An enormous number of differences have been observed between how expert musicians and naïve listeners respond to music using both behavioural and neurophysiological methods (e.g., Koelsch & Mulder, 2002; Peretz & Zatorre, 2005; Regnault, Bigand, & Besson, 2001). A difference we found particularly intriguing was the apparent inability of non-musicians to hear the unique effect of bitonal music (Wolpert, 1990, 2000), music that is characterized by harsh and unexpected dissonances. It had been believed that any person, regardless of previous musical experience, would find the effect of bitonality to be so unpleasant that bitonality would be immediately perceptible (e.g., Gosselin et al., 2006). In fact, consonant and dissonant chords elicit different patterns of neuronal activity in the primary auditory cortex for many animals, and the magnitude of the oscillatory pattern elicited by dissonant chords correlates with perceived dissonance in humans (Fishman et al., 2001).

Corresponding author: Margaret P. Munger, Davidson College, Box 7001, Davidson, NC, 28035-7001, USA. [email: [email protected]]

Psychology of Music, 38(4), 423-445

© The Author(s) 2010. Reprints and permission: http://www.sagepub.co.uk/journalsPermission.nav. DOI: 10.1177/0305735609351917

    http://pom.sagepub.com

    Article


Bitonality can be observed in the works of many composers, most notably in those of Milhaud, perhaps the most famous polytonalist, but also in the music of Bartók, Britten, Casella, Honegger, Ives, Koechlin, Prokofiev and Strauss, to mention only the more prominent names (Morgan, 1991; Symms, 1996; Watkins, 1995; Whittall, 2001b). Bitonality is 'the simultaneous use of two ... keys. This may occur briefly or over an extended span' (Randel, 1986; see also Whittall, 2001a). (Polytonality refers to the use of three or more keys simultaneously.) The group of composers associated with Milhaud known as Les Six was especially fond of bi- and polytonality (DeVoto, 1993; Médicis, 2005; Messing, 1988).

Composers themselves cite bi- and polytonality as integral to their work (Bartók, 1931/1976, 1943/1976; Casella, 1924; Koechlin, 1925; Milhaud, 1923/1982). Compositional treatises routinely present bi- and polytonality as a viable compositional procedure (Cope, 1977; Dallin, 1974; Smith Brindle, 1986). Bitonality may also be associated with a piece's text or dramatic structure, as, for example, in Britten's operas Peter Grimes and The Turn of the Screw (Whittall, 2001a). It should be noted that such interpretations often fall under the category of 'eye music': the notational peculiarities of bitonality, such as differing key signatures, that convey symbolic meaning that is apparent to the eye but not to the ear (Dart, 2001).

Disagreement among musicians and especially music theorists arises not in relation to the term's definition, but whether or not bitonality can be perceived. In spite of the evidence outlined above, many theorists have been less than accepting of the concept of bitonality on the grounds that it cannot be perceived. Van den Toorn famously declared:

The bitonality or polytonality of certain passages in [Stravinsky's music] can no longer be taken seriously ... Presumably implying the simultaneous (C-scale tonally functional) unfolding of separate tonalities or keys, these notions, real horrors of the musical imagination, have widely (and mercifully) been dismissed as too fantastic or illogical to be of assistance. (van den Toorn, 1983, pp. 63-64)

Two widely-used 20th-century theory and analysis texts reflect this scepticism: one does not mention the topic at all even though it discusses music that is treated as bitonal in other quarters (Straus, 1990), the other mentions bitonality briefly but questions its perceptibility (Williams, 1997). Studies sympathetic to bi- and polytonality exist, but are infrequent (e.g., Harrison, 1997; Stein, 2005).

The source of music theory's claims that bi- and polytonality are not perceptible is not entirely clear, but seems to be rooted in semantics, namely, in the logical contradiction inherent in the term bitonality rather than in any experimental evidence (Boretz, 1973; Forte, 1955; see also Tymoczko, 2002). Tonal pitch structure is typically conceived to be strictly hierarchical (e.g., Lerdahl, 2001 or Lerdahl & Jackendoff, 1983). Thus, tonality is understood as 'the projection in time of a single [consonant] triad by means of ... linear and harmonic prolongations' of this triad (Babbitt, 1952, p. 261). Note the use of the word single in the definition: in other words, if music is to be heard in terms of a key or a pitch-class centre, all of its pitches will be heard in terms of this single governing and superordinating pitch class. Conversely, bitonality, as mentioned above, is the simultaneous use of two keys. Consequently, because tonality by definition admits only one pitch centre, many theorists consider bi- or polytonalism a figment of the imagination, and explain so-called bi- or polytonal passages as a partitioning of the chromatic scale, or of certain pitch-class sets such as the octatonic scale or octad (Berger, 1978; Straus, 1990; van den Toorn, 1983). The octatonic collection is an eight-note scale in which whole steps and half steps, or half steps and whole steps, alternate. The octad collection is also an eight-note scale, consisting of two whole steps, three half steps, and two whole steps (Figure 1).

    by guest on January 21, 2013pom.sagepub.comDownloaded from

  • Hamamoto et al. 425

An easy way to conceive of the octad is to arrange the pitches of a major scale in fifths rather than in steps, say, F-C-G-D-A-E-B, and then add one additional fifth to obtain the octad, F-C-G-D-A-E-B-F sharp.
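For readers who prefer a concrete rendering, the following short Python sketch (ours, not part of the original article) builds the two eight-note collections just described as pitch-class sets, numbering C = 0 up to B = 11.

    def octatonic(start=0, first_step=2):
        """Alternate whole and half steps (2,1,2,1,...) or half and whole steps (1,2,1,2,...)."""
        steps = [first_step, 3 - first_step] * 4
        pcs = [start % 12]
        for step in steps[:7]:
            pcs.append((pcs[-1] + step) % 12)
        return sorted(set(pcs))

    def octad(start=5):
        """Stack eight perfect fifths (7 semitones each): F-C-G-D-A-E-B-F sharp when start = 5 (F)."""
        return sorted({(start + 7 * i) % 12 for i in range(8)})

    print(octatonic(0, 2))  # [0, 2, 3, 5, 6, 8, 9, 11]  (C, D, E flat, F, F sharp, A flat, A, B)
    print(octad(5))         # [0, 2, 4, 5, 6, 7, 9, 11]  (C, D, E, F, F sharp, G, A, B)

Reading the octad upward from C confirms the step pattern given above: two whole steps, three half steps, two whole steps.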

Finally, it should also be noted that antagonism towards bi- and polytonality is as old as the concept itself. Early criticism of bitonality was often virulent, and frequently mixed in with nationalistic and anti-Semitic rhetoric (Médicis, 2005; Messing, 1988). Schoenberg's opposition to bi- and polytonality, although especially vehement, emphasized musical and structural reasons (Schoenberg, 1923/1975, 1925/1975, 1926/1975; see especially Schoenberg's second untitled essay in Stein, 1975; Messing, 1988 mentions a number of still unpublished essays by Schoenberg that also describe polytonality negatively). But it is important to remember that most of Schoenberg's criticism of polytonality dates from the mid-1920s, that is, during the peak of bi- and polytonality's popularity but also precisely the time he was composing his first 12-tone pieces (and enduring his own share of negative criticism).

Bitonality: perceptual issues

It is puzzling that the logical contradiction (inasmuch as music theorists are concerned) inherent in the concept of bitonality is capable of quashing discussion. Although we recognize that experimental studies in the perception of bitonality are at a beginning state, it is surprising that one can dismiss the concept as fantastic or illogical without any experimental evidence. Bitonality, and the perceptual issues it raises, came to a head in an exchange between two music theorists and Stravinsky scholars: Dmitri Tymoczko (2002, 2003a, 2003b) and Pieter van den Toorn (2003). Briefly, Tymoczko questions if the octatonic scale should be considered the source of Stravinsky's pitch materials as van den Toorn has frequently claimed (1975, 1983, 1987). Tymoczko shows that many passages in Stravinsky's music can be explained as resulting from the superposition of musical elements. Echoing the views of many composers, Tymoczko points out that bitonality is a useful explanation for the construction of a piece (2003b). Side-stepping for a moment the contentious issue of perception of bitonality, Tymoczko calls such layering of elements polyscalarity: 'the simultaneous use of musical objects which clearly suggest different source-collections. Polyscalarity is a kind of local heterogeneity, a willful combination of disparate and clashing musical elements' (Tymoczko, 2002, p. 84).

Figure 1. Eight-note scales: (a) the octatonic collection; (b) the octad collection.


Tymoczko goes on to address the issue of perception. While he admits that a piece of music 'cannot, in the fullest and most robust sense of the term, be in two keys at once' (2002, p. 84), bi- or polytonality as a concept is relevant for music in which distinct pitch configurations 'naturally segregate themselves into independent auditory streams, each of which, if heard in isolation, would suggest a different tonal region' (Tymoczko, 2002, p. 84; our emphasis). Auditory scene analysis is the ability to identify different sound sources within the auditory stream, and listeners use a variety of acoustic cues to distinguish auditory objects including frequency, intensity and spatial location (for review, see Bregman, 1990, 1993). In addition, a complex sound with multiple harmonics will be identified as two auditory objects when one of the harmonics is mistuned relative to the others (Alain, Arnott, & Picton, 2001). Using fundamental frequencies of 200 and 400 Hz, participants identified the complex sound as two distinct auditory objects when the degree of mistuning of a harmonic was 4 percent, which corresponds roughly to a semitone (Alain et al., 2001). Being able to distinguish multiple acoustic objects is not the same as being able to distinguish multiple tonalities, but it is clearly an initial requirement.

We suggest that the unique effect created by bitonality results from the combination of two independent yet interacting factors: first, there is the sensory roughness that results when tones are too close in pitch, with the most dissonant intervals involving a difference of about 4 percent (a bit less than a semitone; Plomp & Levelt, 1965; Rasch & Plomp, 1999). Second, there is the abstract and high-level discordance created when distinct pitch configurations that project independent auditory streams, each in a different key, point to two different tonics, in violation of tonality's basic premise. Note that it is not necessarily the case that all bitonal music will be dissonant (Dallin, 1974; Smith Brindle, 1986). Conversely, tonal music highly saturated with chromaticism and, for that matter, atonal music, can be very dissonant without giving the impression of bitonality. In his description of the perceptual experience of bi- and polytonality, Tymoczko avails himself of a rather colourful metaphor: polytonal music tends to involve 'a very distinctive sort of crunch' (Tymoczko, 2003b, p. 2). We understand Tymoczko's crunch to be synonymous with the more well-defined term sensory roughness. In spite of its colloquial ring, we adopted Tymoczko's crunch for its ready comprehensibility by non-musicians, part of our pool of participants.
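As a quick worked check of these percentages (our addition, not the authors'), the standard conversion from a frequency ratio to cents gives

\[
\text{cents} = 1200\,\log_2\!\left(\frac{f_2}{f_1}\right), \qquad 1200\,\log_2(1.04) \approx 68 \text{ cents},
\]

so a difference of about 4 percent is roughly two-thirds of an equal-tempered semitone (100 cents; a full semitone corresponds to a ratio of \(2^{1/12} \approx 1.0595\), i.e., about 5.9 percent), consistent with 'a bit less than a semitone'.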

While we agree that independent auditory streams are essential for the perception of bitonality, one needs to take into account the varying degrees of independence between the musical structures that project the independent auditory streams. At one end of the spectrum lies Tymoczko's extreme example of an oboist playing My Country 'Tis of Thee in F major in one corner of a room, while in another corner a pianist plays The Star-Spangled Banner in D-flat major (Tymoczko, 2003b). Two auditory streams thus project two different keys, but they are so independent, unrelated and uncoordinated that they would not be perceived as a bitonal piece, or, for that matter, any sort of piece, but rather as two pieces, each in one key, played simultaneously.

At the other end of the spectrum lies the locus classicus of bitonality, the so-called Petroushka chord (Figure 2; the clarinets are notated in C). Traditionally, this passage has been explained as the bitonal superposition of the keys of C and F sharp major, even by the composer himself (Stravinsky & Craft, 1962). The passage is quite dissonant, even by early 20th-century standards: all but one simultaneity, the C sharp-E in bar 9, are non-triadic. The sustained diminished third A sharp-C, alternating with the minor second G-F sharp, is especially biting. As is readily apparent, this passage consists of two highly coordinated parts, played by clarinets 1 and 2: they are timbrally homogenous, and exhibit identical rhythm and contour. In fact, the two clarinet parts are so well coordinated that they cannot be considered as independent


auditory streams. In fact, this coordination helps to smooth out the otherwise dissonant effect of this passage. It should come as no surprise, then, that experimentation has corroborated van den Toorn's hypothesis that this passage is heard in terms of the octatonic collection (Krumhansl, 1990; Krumhansl & Schmuckler, 1986). Figure 3 illustrates how the octatonic scale is partitioned between the two clarinets and bassoon: C, E and G, suggesting C major, are given to clarinet 1, and C sharp, F sharp and A sharp, suggesting F sharp major, are assigned to clarinet 2 and bassoon. Two pitch-classes of the octatonic collection, D sharp and A, are not used, and the bassoon's G sharp (circled in Figure 2) is a 'rogue' element, that is, it does not belong to the octatonic scale (but fits in comfortably in F sharp major).
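The partition just described can be checked mechanically. The following Python snippet (our illustration, again with pitch classes numbered C = 0 to B = 11) verifies that the two instrumental groups plus the two unused notes exhaust one octatonic collection, and that the bassoon's G sharp falls outside it but inside the F sharp major scale.

    # Pitch-class check of the partition described in the text.
    clarinet_1 = {0, 4, 7}           # C, E, G (suggesting C major)
    clarinet_2_bassoon = {1, 6, 10}  # C sharp, F sharp, A sharp (suggesting F sharp major)
    unused = {3, 9}                  # D sharp, A
    rogue = 8                        # the bassoon's G sharp

    octatonic_collection = {0, 1, 3, 4, 6, 7, 9, 10}   # the octatonic set containing C and C sharp
    f_sharp_major_scale = {6, 8, 10, 11, 1, 3, 5}      # F#, G#, A#, B, C#, D#, E#

    assert clarinet_1 | clarinet_2_bassoon | unused == octatonic_collection
    assert rogue not in octatonic_collection   # G sharp lies outside the octatonic scale...
    assert rogue in f_sharp_major_scale        # ...but fits comfortably in F sharp major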

Somewhere in between the two examples above lie the two types of stimuli used in this study: the contrived bitonal examples created by Wolpert (1990, 2000) and also used in blocks 1 and 2 of this study, and the originally composed bitonal phrases taken from Milhaud's piano suite Saudades do Brasil. Wolpert took a pre-existing piece, often well known and familiar to participants, and transposed the melody up a perfect fifth (1990), or up or down a whole step (2000). Like Wolpert (2000), we also took well-known pieces and transposed the melody up and down by a whole step. These contrived examples create a bitonal crunch that is less jarring than the thought experiment of playing two different pieces in two distinct keys in opposite corners of a room, but a bit more pronounced than in the Petroushka passage. The passages taken from Saudades were composed intentionally as bitonal, and present a bitonal effect that is more subtle and nuanced than the contrived examples. Not only does Milhaud vary the interval separating the two keys, dissonances are better controlled, avoiding the arbitrary dissonances of the contrived examples. Moreover, a phrase often becomes monotonal at its conclusion.

Figure 2. Stravinsky, Petroushka, Second Tableau, bars 9-15: score (the clarinets are notated in C; parts shown are Cl. 1 in B flat, Cl. 2 in A, and Bsn. 1, 2).

Figure 3. Stravinsky, Petroushka, Second Tableau, bars 9-15: partitioning of the octatonic collection by instruments (Clarinet 1; Clarinet 2 and Bassoon 1).


Musicians vs. non-musicians

Remarkably, bitonal sensory roughness or crunch is perceived easily and immediately by musicians but often goes by unnoticed by non-musicians (Wolpert, 1990, 2000). In her study on melody recognition with different instrumental and harmonic accompaniments, Wolpert (1990) made an unexpected observation about non-musicians. Participants were presented with three versions of the same tune: the model (melody with accompaniment in same key), the same tune played by a different instrument (melody and accompaniment in same key), and the tune played by the same instrument as the model, but with the accompaniment in a different key from the melody. In choosing the melody that is more similar to the model, musicians always selected the choice based on key equivalency, whereas 95 percent of non-musicians chose in terms of instrumentation, and so claimed the bitonal version was the most similar to the model. To non-musicians, timbre was more important than harmony, even harmony that made the musicians cringe. But even more intriguingly, while all of the musicians noticed the dissonant effect of a melody accompanied in a different key, over half of the non-musicians detected nothing. All musicians, within seconds of hearing the effect of bitonality, smiled or grimaced, but none of the non-musicians did so. This result was true for both familiar tunes, such as the nursery rhymes 'Mary Had a Little Lamb' and 'Twinkle Twinkle Little Star', and original melodies composed for the study.

Wolpert (2000) focused more directly on the perception of bitonality: Myrow and Gordon's 'You Make Me Feel So Young' was sung by a professional singer with an orchestral accompaniment, and then manipulated so that the accompaniment was transposed either up or down two semitones. Participants listened to the three versions of the song (original, accompaniment transposed up, and accompaniment transposed down), and answered one open question: 'What differences, if any, do you hear among the excerpts?' Again, Wolpert found considerable differences in perception between musicians and non-musicians. Musicians unanimously identified the melody and accompaniment's keys as the primary difference. In fact, musicians often did not hide the unpleasantness of their experience with bitonality; several asked to be excused from listening to the bitonal pieces in their entirety. On the other hand, only 40 percent of the non-musicians mentioned the difference in key.

Wolpert's (1990, 2000) results contradict not only common assumptions regarding listeners familiar with western tonal music, but also behavioural and psychophysiological work on the perception of other aspects of pitch structure, namely harmony and mode (e.g., Costa, Fine, & Ricci Bitti, 2004; Gagnon & Peretz, 2003; see Peretz & Zatorre, 2005 for a review of psychophysiological work). For example, when non-musicians rate the happiness and sadness of a melody, they are influenced by mode (major or minor), either with relatively simple experimenter-created melodies (Gagnon & Peretz, 2003; Halpern, Martin, & Reed, 2008) or art music excerpts (Costa et al., 2004). While mode is certainly musically distinct from bitonality, it is surprising that mode would matter when bitonal crunch (dissonance and/or incompatible independent auditory streams) does not. Significantly, these listeners were not asked to articulate what was different between the clips, as Wolpert (1990, 2000) asked, but simply to assign a rating. We speculate that perhaps non-musicians do not have the vocabulary to describe the bitonal effects perceived when hearing Wolpert's stimuli, and a rating task would reveal more clearly the musical sensitivity of non-musicians.

For example, Regnault et al. (2001) presented sequences of chords to musicians and non-musicians with instructions to rate the final chord for consonance/dissonance, defining consonance as 'pleasant, everything seems OK' and dissonance as 'unpleasant, something seems wrong'


for the non-musicians. Musicians were 97 percent accurate overall, and with these instructions non-musicians had 81 percent accuracy overall, with 84 percent accuracy for dissonant chords. In addition, event-related brain potentials (ERPs) were recorded and revealed that musicians and non-musicians both show a larger late positive component for dissonant over consonant chords (late epoch: 300-800 ms following the chord). Musical expertise does interact with ERPs in an earlier epoch (100-200 ms), with musicians having larger positive peaks to dissonant chords while non-musicians have larger positive peaks to consonant chords. The non-musicians are processing consonant and dissonant chords differently, as revealed by the ERPs, and can identify the chords fairly accurately (Regnault et al., 2001). However, different ERP patterns for stimuli do not always correspond with listeners being able to explicitly identify the stimulus differences (Koelsch & Mulder, 2002).

While Regnault et al. (2001) found that non-musicians can identify dissonant chords through pleasantness ratings, non-musicians struggle when asked to explicitly identify the mode of an excerpt (Halpern et al., 2008; Leaver & Halpern, 2004). Halpern and her colleagues found that non-musicians can label major and minor songs as happy and sad respectively, but asking them to label the same songs using major and minor results in chance performance. Even musicians have some difficulty with the major/minor labels, though performance is above chance (Halpern et al., 2008; Leaver & Halpern, 2004). Leaver and Halpern (2004) explored different types of training for non-musicians, finding that a short lesson that included music theory and explicit links to the affective labels improved non-musicians' performance, but not to the same levels as musicians. In addition, an ERP study revealed that musicians have a large late positive component when processing the note critical for mode classification that is missing in non-musicians (Halpern et al., 2008), highlighting the fact that non-musicians are processing music differently.

Wolpert (1990, 2000) was interested in what non-musicians would spontaneously identify regarding music, emphasizing that her question was what do people hear rather than what can people hear. Given the influence of other musical characteristics on ratings of emotion (e.g., Costa et al., 2004; Gagnon & Peretz, 2003) and the evidence that non-musicians are at least implicitly aware of harmony, as revealed by ERPs (e.g., Koelsch & Mulder, 2002; Regnault et al., 2001), we wondered how non-musicians would spontaneously use scales of likeableness, pleasantness and correctness to rate monotonal and bitonal music. As mentioned above, it is possible that non-musicians do hear bitonality as readily as musicians, but simply lack the vocabulary to describe it. In addition, we examined whether participants, especially non-musicians, could be taught to identify bitonality in concert-hall music where the composer had chosen to write in bitonality. As will be discussed more fully below, the selections from Milhaud's Saudades offer a subtle bitonal crunch, and, in our estimation, present a more robust test than the one offered by contrived musical examples.

For the purposes of this study, we assume that accompaniment and melody project independent auditory streams. This assumption makes common sense; it also seems implicit in Wolpert's studies (1990, 2000). Importantly, it is also how the composer Milhaud chooses to separate the keys in Saudades. Melody and accompaniment are differentiated by register, rhythm and pacing, timbre and dynamic level, yet are sufficiently coordinated in terms of meter and especially pitch structure to be perceived as integral parts of an organic whole. All the bitonal excerpts used in this study, whether contrived from a pre-existing piece or originally composed by Milhaud, present the accompaniment in one key and the melody in another.

    Finally, we wish to draw a distinction between the terms tonality and key as used in this study. We understand tonality to be a musical system where any and all pitch configurations are perceived as being organized in terms of one and only one referential pitch class: the tonic.


Key identifies the one major or minor triad that is projected by linear and harmonic prolongations. This being said, we also recognize that the term tonality has a wide and often contradictory range of meanings, and is often used synonymously or interchangeably with key (see Hyer, 2001). This is why bitonality, in spite of its name, means the simultaneous presence of two distinct keys or scales, not musical systems. At times, we will use the invented and tautological term monotonal rather than tonal solely to distinguish clearly from bitonal or polytonal. Thus, when participants are asked to identify the tonality of an excerpt, they are in fact asked to determine whether the passage is (mono)tonal or bitonal.

The current experiment has three components. First, we would like to replicate Wolpert's (2000) finding that non-musicians do not spontaneously identify change in tonality within a free response task. We will use musical excerpts from both the classical and mid- to late-20th-century popular songbook, and create manipulated (bitonal) versions of each clip by shifting the melody two semitones. Koelsch, Fritz, Cramon, Müller, & Friederici (2006) labeled a similar manipulation as '(permanently) dissonant'. Second, we would like to see how non-musicians, along with musicians, use rating scales for likeability, correctness and pleasantness for a similar set of original and manipulated (bitonal) clips. We are not going to instruct them on how to map these adjective scales onto the tonality of the clip, though previous work suggests that the (permanently) dissonant clips will be mapped onto the more negative ends of these scales (e.g., Gosselin et al., 2006; Koelsch et al., 2006). Finally, we would like to explore how readily non-musicians can learn to label bitonality. When asked to identify when the pianist 'gets lost', dissonance was correctly identified (Gosselin et al., 2006), suggesting non-musicians are sensitive to dissonance. However, when explicitly asking non-musicians to identify major and minor mode, Halpern et al. (2008) found that non-musicians struggled, even though they were able to accurately label major as happy and minor as sad. Ours is perhaps an even more challenging task because we are using Milhaud's Saudades, which were originally bitonal compositions and are subtler than the (permanently) dissonant clips used in previous literature (Koelsch et al., 2006; Wolpert, 2000; our blocks 1 and 2).

Method

Participants

Forty-two participants (aged 18-22 years, gender was not recorded) were recruited from introductory psychology classes and student music ensembles. Musical experience was assessed by a brief survey at the end of the experiment, and revealed three distinct groups. Training in music theory was not considered, adopting Krumhansl's (1983) idea of focusing on participants' musical experience. If the participant's only musical experience came from ensemble participation, such as school band or chorus, we divided the number of years by two for the purposes of assigning groups, because they had not received the attention of private instruction. Musicians (n = 14) had at least five years of private music lessons on an instrument (mean years of training = 9.2 years). Intermediate musicians (n = 14) had between one and five years of private lessons (mean years of training = 2.4 years), and non-musicians (n = 14) were those with less than one year of private lessons.
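A minimal sketch of this grouping rule follows (ours, not the authors' procedure; exactly how ensemble-only years were combined with lesson years is our assumption).

    def classify(private_years, ensemble_only_years=0):
        """Assign a participant to a group; ensemble-only experience counts for half."""
        effective = private_years + ensemble_only_years / 2
        if effective >= 5:
            return 'musician'
        if effective >= 1:
            return 'intermediate musician'
        return 'non-musician'

    # e.g., four years of chorus but no private lessons counts as two effective years:
    print(classify(0, ensemble_only_years=4))  # 'intermediate musician'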

Stimuli

Music clips were created in MIDI (Musical Instrument Digital Interface) files using Steinberg's sequencer program Cubase SX (version 3.0). Sample sounds of orchestral instruments were


    taken from Symphonic Orchestra Silver Edition (Eastwest), drum sounds from BFD (fxpansion), and guitar sounds from Virtual Guitarist (Steinberg).

Blocks 1 and 2 presented two versions of classical and popular music excerpts: the original tonal version, and an altered bitonal version where the accompaniment was moved up or down two semitones, matching the manipulation used by Wolpert (2000). Classical excerpts included the beginning phrases of Schubert's 'Ave Maria' (Ellens Gesang III, Op. 52, No. 6, D. 839), the first movement of Mozart's Bassoon Concerto in B flat Major, K. 186e, and the arias 'Ev'ry Valley' and 'Rejoice, Rejoice' from Handel's Messiah. Popular music excerpts were taken from jazz and pop songs from the mid to late 20th century: Gordon's 'Unforgettable', Lennon and McCartney's 'A Day in the Life', McCartney's 'Yesterday', and Menken and Ashman's 'Beauty and the Beast'.
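The stimuli themselves were built in Cubase. Purely as an illustration of this kind of manipulation (not the authors' actual procedure), a two-semitone shift of one MIDI part can be sketched with the mido library, using hypothetical file names and track index:

    import mido

    def transpose_track(path_in, path_out, track_index, semitones=2):
        """Shift every note of one MIDI track by a fixed number of semitones."""
        mid = mido.MidiFile(path_in)
        for msg in mid.tracks[track_index]:
            if msg.type in ('note_on', 'note_off'):
                msg.note = min(127, max(0, msg.note + semitones))
        mid.save(path_out)

    # e.g., move a hypothetical accompaniment track down two semitones:
    # transpose_track('ave_maria_original.mid', 'ave_maria_bitonal.mid',
    #                 track_index=1, semitones=-2)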

For block 3, we constructed stimuli in a manner completely opposite to those in Wolpert's study (1990, 2000). Rather than start with a monotonal piece and then distort it to render it bitonal, we took pieces that had been written bitonally by a renowned composer, and then made small adjustments to make them monotonal. Thus, for block 3 we selected five phrases from Milhaud's Saudades: 'Corcovado', 'Copacabana', 'Ipanema', 'Paineras' and 'Botafogo'. All selections were taken from the initial phrase of the piece with the exception of 'Ipanema', which was taken from an internal phrase. Again, save for 'Ipanema', the stimuli are similar in their construction. 'Botafogo' is typical and will serve to illustrate the stimuli in block 3 (Figure 4).1 The phrases consist of an accompaniment built on a characteristic Brazilian syncopated rhythm that oscillates between tonic and dominant at the rate of one chord per measure. The left-hand accompaniment does not use a complete scale; rather, it projects its key using a pentachord formed from the first five pitches of the scale. The right-hand melody is built on a different scale; unlike the accompaniment, it utilizes all the pitches of the scale. Figure 5 shows how the pitches of each key are distributed among melody and accompaniment. Tonal versions of each excerpt were created by slightly altering the right-hand melody to match the key of the left-hand accompaniment, thus eliminating some of the bitonal crunch (Figure 6). While the rewriting eliminated the bitonality, some dissonances remained. The alterations were minute in order to preserve the composer's style. In fact, changes were so minimal that listeners would not perceive an alteration unless they were intimately acquainted with Saudades. (The accompaniment was not changed.) Finally, we note that Milhaud also uses a variety of intervals to separate the key of the melody from that of the accompaniment. Table 1 illustrates the key differences in the pieces chosen for this study. ('Ipanema' is missing in Table 1; as will be discussed below, it is not entirely bitonal.)

There are significant differences between Wolpert's and our contrived examples and Milhaud's pieces. When a monotonal piece is altered by transposing the melody only to another key, dissonances are created at salient moments where consonances typically existed, such as at phrase beginnings, downbeats and strong beats, and especially at the cadence that closes the phrase. Milhaud also uses bitonality to create dissonances, but exercises greater control in their deployment, as illustrated in 'Botafogo' (Figure 4). The accompaniment is in F minor. After a two-bar vamp, the melody enters in F sharp minor. Striking dissonances are heard at salient moments: an augmented fifth on the downbeat of bar 3, a major seventh on the downbeat of bar 4 and the clashes created by the right-hand chords in bars 7-11. Dissonances ease toward the end of the phrase (this is mirrored by the diminuendo to mp), and with the C natural in bars 13-14 the melody slips out of F sharp minor into F minor, the key of the accompaniment. Even when a phrase does not begin with a dissonance, as is the case in 'Copacabana', striking dissonances are placed on most beats throughout the phrase (Figure 7). This particular


Table 1. Bitonal organization of select passages from Milhaud, Saudades do Brasil, Op. 67

Movement     Melody                  Interval        Accompaniment
Botafogo     F sharp natural minor   Minor second    F-minor pentachord
Copacabana   B major                 Major third     G-major pentachord
Corcovado    D major                 Perfect fifth   G-major pentachord
Paineras     C major                 Major third     A-flat major pentachord

Figure 4. Milhaud, Saudades do Brasil, Op. 67, No. 2, 'Botafogo', original bitonal version.

Figure 5. Milhaud, Saudades do Brasil, Op. 67, No. 2, 'Botafogo', partitioning of pitch collection by melody and accompaniment (left-hand accompaniment: F minor pentachord; right-hand melody: F sharp natural minor).


    compositional strategy, to begin a phrase with a clear bitonal effect and strong dissonance, and gradually taper to a consonant, monotonal state, is also heard in Paineras.

Procedure

Participants were individually tested in a quiet, well-lit room, using a Macintosh G4-based iMac computer with Bose TriPort headphones set to a standardized, comfortable volume level determined by the experimenter. Instructions were presented on-screen, and the participants responded using the computer keyboard and mouse.

Block 1 asked participants to listen to two versions of a musical excerpt (the original tonal and an altered bitonal) and type into the computer any differences they heard, similar to the free-response task from Wolpert (2000). Figure 8 shows the experimental procedure. The second version started playing immediately after the first, and a prompt appeared on screen asking participants to indicate any differences they heard. The four different pairs of excerpts were presented in a unique random order for each participant, and the tonal-bitonal sequence was counterbalanced between pairs. Song set 1 included original and manipulated (bitonal) versions of classical excerpts 'Rejoice, Rejoice' and the first movement of Mozart's Bassoon Concerto, along with popular excerpts 'A Day in the Life' and 'Unforgettable'. Song set 2 included original and manipulated (bitonal) versions of classical excerpts 'Ave Maria' and 'Ev'ry Valley' along with popular excerpts 'Yesterday' and 'Beauty and the Beast'. Half the participants heard song set 1 in block 1, and the other half heard song set 2.

Figure 6. Milhaud, Saudades do Brasil, Op. 67, No. 2, 'Botafogo', recomposed monotonal version.


Block 2 asked participants to listen to single versions of a musical clip and use a computer mouse to answer three questions using Likert scales: How much do you like this song? (1, not at all likeable, to 5, like a lot.) How correct does this song sound? (1, wrong, to 5, correct.) How pleasant does this song feel? (1, unpleasant, to 5, pleasant.) All three scales appeared on-screen simultaneously, and participants could answer in any order. Over the course of block 2, participants heard two new classical and two new popular excerpts, in both original tonal and altered bitonal versions, for a total of eight clips to rate. The musical clips were presented in a unique random order for each participant. The musical excerpts used in blocks 1 and 2 were counterbalanced between participants, so that excerpts that appeared in block 1 for half the participants appeared in block 2 for the other half.

Block 3 comprised three phases (see Figure 8): defining monotonality and bitonality, training participants with feedback to identify the tonality of a musical clip, and then testing participants' ability to identify the tonality with new musical clips. 'Copacabana' was always used in the training phase (Figure 7), and Saudades set 1 included original and manipulated (monotonal) versions of 'Ipanema' and 'Paineras', while Saudades set 2 included original and manipulated (monotonal) versions of 'Botafogo' and 'Corcovado'. While the original, bitonal version of 'Copacabana' was presented, the following definition appeared on screen: 'BITONAL: Notice sometimes there is a crunch in the sound. This should sound somewhat unpleasant and feel like it shouldn't be that way.' Afterwards, the altered monotonal version was presented, with the following definition on screen: 'MONOTONAL: Now the song sounds smooth and more pleasant.'

Following these definitions and examples, two additional excerpts (either Saudades set 1 or Saudades set 2) were used for training. Both bitonal and monotonal versions of each were presented in random order for each participant (four total trials). Participants were asked to identify whether the passage was monotonal or bitonal, and then were given feedback regarding the accuracy of their response and the actual tonality of the clip. These training clips were repeated until participants correctly identified the tonality of four excerpts in a row. At this point, participants were asked to identify the tonality of two new excerpts from Saudades (Saudades sets were counterbalanced between participants). Both tonal and bitonal versions of each excerpt were presented in random order for each participant, for a total of four trials.
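A schematic sketch of this training-to-criterion loop follows (ours, not the experiment software; get_response and give_feedback stand in for the actual interface):

    import random

    def run_training(training_clips, get_response, give_feedback, criterion=4):
        """training_clips: list of (clip, tonality) pairs, e.g. tonality in {'monotonal', 'bitonal'}.
        Clips repeat, with feedback, until the listener is correct `criterion` times in a row."""
        streak = 0
        while True:
            random.shuffle(training_clips)          # new random order on each pass
            for clip, tonality in training_clips:
                answer = get_response(clip)         # participant's label for the clip
                correct = (answer == tonality)
                give_feedback(correct, tonality)    # accuracy plus the actual tonality
                streak = streak + 1 if correct else 0
                if streak >= criterion:
                    return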

Figure 7. Milhaud, Saudades do Brasil, Op. 67, No. 4, 'Copacabana'.


    After completing all three blocks, participants were briefly interviewed about their musical experience, answering questions about private music lessons, musical genres performed and listened to, and how many hours a day they listened to music.

Results

Free response

Block 1 presented monotonal and bitonal versions of the same musical excerpt, and asked participants to list any differences they heard.

Block 1: Listen to song set A (original and bitonal versions). Task: comment on any differences.

Block 2: Listen to song set B (original and bitonal versions). Task: rate likeability, correctness and pleasantness (1 = not at all likeable/wrong/unpleasant; 5 = likeable/correct/pleasant).

Block 3:
Phase 1 (Definitions): Listen to original 'Copacabana' (bitonal version) and view the on-screen definition: 'Bitonal: Notice sometimes there is a crunch in the sound. This should sound somewhat unpleasant and feel like it shouldn't be that way.' Then listen to manipulated 'Copacabana' (monotonal version) and view the on-screen definition: 'Monotonal: Now the song sounds smooth and more pleasant.'
Phase 2 (Training): Listen to Saudades set 1 (original and monotonal versions). Task: identify tonality, with feedback. Repeated until 4 correct in a row.
Phase 3 (Testing): Listen to Saudades set 2 (original and monotonal versions). Task: identify tonality, no feedback.

Figure 8. Experimental procedure. Song sets included two classical and two popular excerpts (original and manipulated bitonal versions) and were counterbalanced between participants. Saudades sets included two excerpts (original and manipulated monotonal versions) and were also counterbalanced between participants.


Mention of dissonance, differences in key between melody and accompaniment, or something being out of tune earned 2 points. Discussion of a preference, correctness, pleasantness or overall key or pitch change earned 1 point. Two raters scored each of the free responses (interrater reliability = 0.91). The 15 differences between the raters (out of 168 responses) were resolved through discussion, and subsequent interrater reliability was 1.0. The points were totalled for the entire block so that the highest possible score was 8 (2 points maximum per clip for four clips). A one-way ANOVA between the three groups found a significant effect of group (F(2, 39) = 12.73, p < .01), with Scheffé post hoc comparisons revealing that musicians scored higher (M = 74%, SD = 27) than both intermediate musicians (M = 37%, SD = 35) and non-musicians, with no difference between the intermediate and non-musician groups (p > .30). Free response scores and years of music lessons were significantly correlated (r = .64, p < .01).

Musicians' superior performance on the free response task replicates Wolpert (2000), and highlights that musicians certainly have a different vocabulary for describing pitch structure. Of particular interest, 11 participants (four musicians, three intermediate musicians and one non-musician) used the phrase 'out of tune' to describe the bitonal stimuli, which, in our estimation, is a close approximation to a correct evaluation of the aural effect of the stimuli. Non-musicians were more likely to offer nonexistent differences, such as tempo change or added instruments, and to use vague descriptions such as the sound was 'brighter' or 'more intense'. In fact, six of the non-musicians actually scored 0 points, mentioning no differences between the musical excerpts that could possibly be interpreted as relating to key, or even pleasantness.

Rating data

For the participant ratings in block 2 (see Table 2), we computed repeated-measures MANOVAs with musical experience as a between-participant factor, and a 3 (adjective scale: likeability/correctness/pleasantness) by 2 (tonality: monotonal/bitonal) by 2 (genre: classical/popular) within-participants design. All reported effects and interactions have p < .05. No main effect was found for musical experience (F < 1.0). Main effects were found for tonality, F(3, 37) = 42.75, and genre, F(3, 37) = 3.01, with monotonal excerpts rated higher than bitonal excerpts on all scales, and pop excerpts rated higher than classical excerpts on all scales. Significant interactions occurred between musical experience, tonality and two of the adjective scales: correctness, F(2, 39) = 5.54, and pleasantness, F(2, 39) = 3.27. Tukey's HSD analysis revealed that musicians had a larger drop in correctness and pleasantness ratings when responding to bitonal clips compared to monotonal clips (see Figure 9).

One of the popular excerpts was accidentally repeated in blocks 1 and 2 for half the participants, and consequently separate ANOVAs for each adjective scale with song set as a between-participant factor along with a 2 (tonality) by 2 (genre) within-participants design were computed. A main effect of song set was observed only for correctness ratings, F(1, 36) = 7.41, and a t-test revealed that the participants who heard the bitonal popular clip twice (in blocks 1 and 2) rated it as more correct than those who heard it only once (t(40) = 2.31). As expected from the MANOVA reported above, tonality and genre were also main effects.

The interactions between tonality and musical experience reveal that musicians were more sensitive to tonality than the intermediate and non-musician groups when rating correctness and pleasantness (see Figure 9), but there was not the main effect of musical experience we expected based on the results of block 1 and Wolpert's earlier work (1990, 2000). The results


of block 2 make it clear that non-musicians do respond spontaneously to tonality, generally rating bitonal clips as unlikeable, incorrect and unpleasant, just as musicians do.

Identifying bitonality

Percentages correct for identifying the bitonality of the original Milhaud excerpts (and its absence in the adjusted excerpts) were calculated for each group: musicians (M = 86%,

Table 2. Mean ratings for non-musicians, intermediate musicians and musicians on the scales in block 2 (1 = not at all likeable/wrong/unpleasant to 5 = like a lot/correct/pleasant)

                                          Monotonal (original)        Bitonal (manipulated)
Group                     Task            Classical     Popular       Classical     Popular
Non-musicians             Likeability     3.71 (0.58)   3.96 (0.75)   2.43 (0.65)   2.93 (0.70)
                          Correctness     4.39 (0.71)   4.07 (0.90)   2.79 (0.91)   2.89 (0.98)
                          Pleasantness    4.25 (0.73)   4.21 (0.78)   2.82 (1.10)   3.11 (0.88)
Intermediate musicians    Likeability     3.61 (0.81)   3.54 (0.75)   2.21 (0.89)   2.57 (1.14)
                          Correctness     3.96 (0.89)   3.86 (0.93)   2.46 (1.08)   2.57 (1.21)
                          Pleasantness    4.00 (0.88)   4.07 (0.70)   2.54 (1.05)   2.93 (1.25)
Musicians                 Likeability     4.00 (0.65)   4.18 (0.58)   2.04 (0.93)   2.64 (1.20)
                          Correctness     4.50 (0.68)   4.68 (0.37)   1.86 (1.03)   2.36 (0.99)
                          Pleasantness    4.50 (0.55)   4.64 (0.50)   2.25 (1.09)   2.71 (0.97)

Note: Standard deviations are shown in parentheses.

Figure 9. For each level of musical experience: non-musicians (less than one year of private music lessons, open square), intermediate musicians (between one and five years of lessons, open circle) and musicians (more than five years of lessons, closed triangle): (a) mean rating for correctness (1 = wrong, 5 = correct); (b) mean rating for pleasantness (1 = unpleasant, 5 = pleasant).


SD = 16), intermediate musicians (M = 73%, SD = 18), and non-musicians (M = 80%, SD = 24). A one-way ANOVA found no main effect of group (F(2, 39) = 1.4, p > .26), with Scheffé post hoc comparisons revealing no differences between the groups (ps > .2). All groups were equally accurate at distinguishing bitonal excerpts from monotonal excerpts. In fact, after training, there is no effect of musical experience, with non-musicians showing equivalent accuracy to musicians and intermediate musicians. There was also no correlation between accuracy for tonality and years of music lessons (r = .14, p = .37). Within the training session, each group made the same number of errors (musicians, 2.0 errors; intermediate musicians, 1.9 errors; non-musicians, 1.9 errors), with no correlation between number of errors and years of music lessons (r = .06, p = .69). It is particularly noteworthy that the non-musicians, who were unable to identify the tonality differences in the free response task of block 1, performed equivalently to the most advanced musicians with only a small amount of training.

Accuracy for identifying the tonality of each Milhaud excerpt is presented in Figure 10, with 95 percent confidence intervals. All the participants correctly identified the tonality of 'Paineras', as revealed by the perfect score (and no confidence intervals), but performance was at chance for 'Ipanema' and the monotonal version of 'Botafogo'.

Musical preferences

All participants, regardless of level of musical training, reported their listening preferences to be popular, rock or some variation (e.g., classic rock, oldies), with no more specifics. Both intermediate musicians and musicians reported performing almost exclusively classical music (nine of 14 intermediate musicians, 11 of 14 musicians), with only one musician indicating jazz and two others indicating 'anything' as the genre they regularly performed.

Figure 10. Ratings for tonality identification of Milhaud, Saudades do Brasil, Op. 67.


Discussion

Non-musicians in our study are sensitive to tonality, and will spontaneously rate bitonal music as unlikeable, incorrect or unpleasant at rates equivalent to experienced musicians, replicating Koelsch et al. (2006) and Gosselin et al. (2006). Non-musicians did not spontaneously include appropriate descriptions of bitonal music such as 'bitonal', 'crunchy', 'dissonant' or even 'out of tune' in a free response task, replicating Wolpert (2000). However, once their attention was directed to those characteristics, non-musicians were able to identify bitonality as well as musicians (block 3). These results complement research showing non-musicians' sensitivity to other aspects of pitch structure, such as mode (Costa et al., 2004; Gagnon & Peretz, 2003) or how well a chord fits within its harmonic context (Regnault et al., 2001). Our data suggest that bitonality may be even more readily apparent than mode, given that non-musicians performed equivalently to musicians, in contrast with non-musicians' difficulty in identifying mode (Halpern et al., 2008). Since bitonality is such an unusual compositional style, restricted to a very narrow time period, used by a relatively small number of composers and associated with short-lived aesthetic movements, it is not surprising that its characteristic sound (its crunch) is so unfamiliar to most non-musician and intermediate musician participants. Conversely, mode and its association with certain emotional states is well-known, ubiquitous, and used in all sorts of musical styles and over a long period; indeed, it is still used today (for a survey of the literature on mode change see Gabrielsson & Lindström, 2001).

Block 1 (free response) successfully replicated Wolpert's (2000) finding that non-musicians do not spontaneously mention bitonality. There was a significant difference in the way musicians and non-musicians/intermediate musicians responded to bitonal and monotonal versions of the same music. While the musicians did not score perfectly in the free response task, they did identify the bitonality of the clips. Whereas Wolpert (2000) used professional musicians, we used musically involved undergraduate students, which probably accounts for the lower accuracy in our musician group. Non-musicians were more likely to offer non-existent differences, such as changes in tempo, instrumentation, or timbre. The scoring criteria were designed to be relatively generous, ignoring the made-up changes and giving full credit if more ambiguous terms than bitonality, such as 'out of tune', were mentioned. Nonetheless, the performance of musicians and non-musicians is clearly dissimilar.

Intermediate musicians (one to five years of private music lessons) did not perform statistically better than non-musicians in the free response task, which suggests that something may happen after the fifth year of lessons which contributes to one's ability to recognize and correctly identify bitonality. We realize that the students in our study were not part of a single, organized music programme but came from widely diverse musical backgrounds, and included singers and instrumentalists, so we can only speculate on the source of the difference. It seems likely that intermediate musicians who received structured and traditional music instruction through individual lessons, playing from notated music rather than by ear, and emphasizing classical art music, would possess a greater sensitivity towards tonality.

As noted above, a few musicians and intermediate musicians used the phrase 'out of tune' to describe the bitonal stimuli. Although technically incorrect, other than bitonal, 'out of tune' is the most appropriate answer that a participant could possibly offer for a bitonal passage. Again, if participants have had music lessons coupled with extensive ensemble experience, such as orchestra and choir, they might be conscious of intonation and thus more sensitive to bitonal clashes.

The rating task (block 2) revealed that non-musicians and intermediate musicians spontaneously rate bitonal music as unlikeable, incorrect or unpleasant to the same degree as


musicians (Figure 9). It seems likely that the lack of comment from non-musicians and intermediate musicians on the bitonal crunch effect during the free response trials has more to do with a lack of vocabulary than a lack of perception. In addition, all groups rated the pop excerpts as more likeable and pleasant compared to the classical excerpts, which fits with the listening habits of our participants, as revealed in the music experience survey.

In regard to blocks 1 and 2, we should note that monotonal music that is altered or distorted to become bitonal by transposing the melody only up or down one or two semitones creates a strong and unique sort of crunch. (This particular experimental strategy was also integral to Wolpert, 2000.) As mentioned above, bitonal crunch results from two different types of clash. First, there is the tonal incompatibility between two distinct pitch configurations that create independent auditory streams. In the case of this study, these are melody and accompaniment. Second, there are the dissonant intervals that may be created between such distinct configurations. While transposing the melody of a monotonal piece may, in and of itself, not create any more dissonances than a piece that is conceived bitonally by the composer, it will, invariably, create its strongest dissonances at moments that, according to tonal syntax, are, or ought to be, stable. These moments typically include phrase beginnings and endings, where the accompaniment would typically play the tonic chord and the melody would begin on a note of the tonic chord, and also downbeats of measures, and even strong beats.

Block 3 investigated how well listeners could identify bitonality when a piece was conceived by the composer as bitonal. The contrast between an original bitonal piece and a pre-existing monotonal piece that is altered by transposing only the melody may be small but is critical. For instance, each original bitonal piece will present a unique dissonance profile. Consider, for example, the opening phrase of 'Copacabana' (Figure 7). Although it is as bitonal as the excerpts used in this study, it differs from the other phrases in that dissonances are not heard on downbeats and most strong beats. Also, Milhaud favours a strategy of bringing bitonal phrases to a monotonal and consonant close, again apparent in 'Copacabana'. This particular characteristic is entirely absent in a monotonal piece that is modified to bitonal by transposing its melody. Incidentally, Morgan (1991) rejects the notion that the bitonalism in Saudades can actually be heard because of Milhaud's particular compositional strategy. Morgan goes on to claim that the left-hand accompaniment controls the harmony of the passage, and the right-hand melody provides merely 'a dissonant coloring' (1991, p. 165). For these reasons, we assume that the Milhaud passages would be more difficult to perceive, especially by participants with no or only intermediate musical experience. Their success at identifying the tonality of the Milhaud passages is thus particularly compelling, and points to a more sophisticated ear than previously supported.

Our within-participant design allows for a relatively direct comparison between the free responses of block 1 and successful identification of tonality in block 3. Specifically, the same non-musicians who were at such a loss for words to identify what was different between the original and manipulated (bitonal) clips were able to identify the tonality of a clearly more sophisticated manipulation of tonality by block 3. That the clips involved more subtle manipulations of tonality is apparent from the difficulty all groups had with 'Ipanema'. The most obvious pieces of the experimental design to facilitate this are the definition and training phases of block 3, which defined bitonality and monotonality with one excerpt (original and manipulated monotonal), then provided feedback regarding actual tonality for two additional excerpts (again, original and manipulated monotonal). Participants were then tested on two completely new excerpts, and level of musical experience no longer separated participants' ability to identify tonality. Of course, because this is a within-participant design, it is also possible that mere

    by guest on January 21, 2013pom.sagepub.comDownloaded from

  • Hamamoto et al. 441

    exposure to bitonal versions of the classical and popular clips from blocks 1 and 2 also contrib-uted to this improved identification. The current study cannot distinguish between an exposure effect and the training of block 3, but given that non-musicians improved only with training when asked to identify major and minor modes (Halpern et al., 2008; Leaver & Halpern, 2004), training seems likely to be a necessary component for identifying tonality in the current experi-ment. There are two aspects of the training that might be interesting to explore in future research: the effect of explicitly defining monotonality and bitonality using examples, and the practice with feedback. It is possible that the non-musicians improved identification of bitonal-ity in block 3 was supported mainly by the explicit definition, and that the subsequent practice with feedback was unnecessary. The practice clips repeated until participants had successfully identified the tonality of four clips in a row, and there was no difference in the number of errors for any group. In other words, our non-musicians and musicians did not require different amounts of training on the tonality identification, suggesting that the definition itself was more important than the practice.

From the original bitonal excerpts of Saudades we created monotonal versions by changing certain melody notes to conform to the harmony of the accompaniment (Figure 6). In all cases the changes were slight: shifting a few melody notes one or two semitones up or down sufficed to bring the melody into the key of the accompaniment while preserving its original contour. Consequently, our recomposed passages sounded very much like the originals, lacking only the bitonal crunch characteristic of this and many other pieces by Milhaud. Importantly, not every dissonance was eliminated in this way, only those that resulted directly from the bitonal clash. In the monotonal version of 'Botafogo', for example, a minor seventh between accompaniment and melody is still heard on the downbeat of bar 4 (Figure 6), though it has a milder effect than the harsher major seventh created by the bitonal superposition in the original (Figure 4). Because the bitonal stimuli in block 3 were composed as such, and because the monotonal stimuli differ from their bitonal sources in only minimal ways (non-bitonal dissonances preserved), we judge block 3 to be a more robust test than blocks 1 or 2.
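Conceptually, the recomposition resembles snapping each melody note to the accompaniment's scale with the smallest possible shift. The sketch below illustrates that idea under simplifying assumptions (MIDI pitch numbers, a plain major scale); the actual recompositions were made by ear, note by note, and deliberately preserved dissonances that were not bitonal in origin.

```python
# Rough illustration only: pull each melody note into the accompaniment's key
# by the smallest shift (here at most two semitones), preserving the contour.
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of a major scale above its tonic

def snap_to_key(melody, tonic_pc):
    """Move each MIDI pitch to the nearest member of the major scale on tonic_pc."""
    snapped = []
    for pitch in melody:
        for shift in (0, -1, 1, -2, 2):              # prefer the smallest move
            if (pitch + shift - tonic_pc) % 12 in MAJOR_SCALE:
                snapped.append(pitch + shift)
                break
    return snapped

# Hypothetical B major melody over a C major accompaniment (tonic pitch class 0):
bitonal_melody = [63, 66, 68, 70, 71]             # D# F# G# A# B
monotonal_melody = snap_to_key(bitonal_melody, tonic_pc=0)
# -> [62, 65, 67, 69, 71]: D F G A B, same rising contour, now entirely in C major
```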

As mentioned above, 'Ipanema' is somewhat different from all the other Milhaud passages (Figure 11). Participants detected its bitonality at a rate no better than chance (see Figure 10). A few factors may have contributed to this divergent result. Although it opens a major section, the 'Ipanema' excerpt is an internal phrase. It lacks the brief vamp that introduces many of the opening phrases in Saudades. One role the vamp plays is to establish clearly the key of the accompaniment, and such clarity creates a strong opposition with the melody when it enters in a different key. But even setting aside the missing introductory vamp, the bitonalism of 'Ipanema' is less clear than that of any other stimulus (Figure 11). The right-hand melody strongly suggests F major, with the B natural (bar 54) and D sharp (bar 56) functioning as chromatic passing tones. The key of the left-hand accompaniment, however, is less clear: although suggestive of C major when considered in isolation, it conforms quickly and completely to the key of the melody by incorporating the initial B flat to form a dominant seventh in the key of F major.

Although the move to the G flat major chord in bar 57 may seem unexpected and abrupt, and could perhaps even be interpreted as arising from the bitonal ambiance of the piece, it is in fact a fairly common monotonal progression. Although the E natural of the right-hand melody suggests a G flat–F bitonal superposition, a listener is more likely to hear a G flat major-minor seventh chord, for the E natural is enharmonically equivalent to F flat. The succession from C7 to G flat 7 can be heard monotonally; jazz musicians would recognize it as a move from the regular dominant seventh to the tritone-substitute dominant seventh in F major (for a brief overview of jazz harmony see Strunk, 2002; jazz chord symbols and a traditional roman-numeral analysis are provided in Figure 11). Although a tonic resolution, albeit altered to a secondary dominant, is suggested in bar 62, the key of F major never comes to fruition.
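The interchangeability of C7 and G flat 7 as dominants in F major rests on the fact that the two chords share their tritone (their guide tones). The short calculation below, offered only as an illustration and not as part of the study's materials, makes this explicit with pitch-class arithmetic.

```python
# Illustrative sketch (not from the study): why C7 and G flat 7 are
# interchangeable dominants in F major. Pitch classes: C = 0, C#/Db = 1, ..., B = 11.
DOM7 = (0, 4, 7, 10)   # root, major third, perfect fifth, minor seventh

def dominant_seventh(root_pc):
    """Pitch classes of the dominant seventh chord built on root_pc."""
    return {(root_pc + i) % 12 for i in DOM7}

c7 = dominant_seventh(0)    # {0, 4, 7, 10} = C, E, G, B flat
gb7 = dominant_seventh(6)   # {6, 10, 1, 4} = G flat, B flat, D flat, F flat (= E)

shared = c7 & gb7           # {4, 10}: the tritone E/F flat and B flat
print(sorted(shared))       # [4, 10]
# Both chords contain the same tritone, the tendency tones of V7 in F major;
# their roots lie a tritone apart, so G flat 7 can substitute for C7 and the
# whole succession can still be heard within a single key.
```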

Our recomposition of 'Botafogo' also yielded curious results (Figure 10). While participants had no trouble identifying the original version as bitonal, they were unable to identify the recomposed version as monotonal. We are at a loss to explain why this particular recomposition was not as successful as the others (see Figure 6 for a score of the recomposed version of 'Botafogo'). Perhaps the dissonances that remained in the recomposition, in particular the seventh on the downbeat of bar 4, confused some participants.

Conclusion

With very little training, all our listeners were quite successful at differentiating between bitonal and monotonal excerpts, with the non-musicians performing at a rate equivalent to the musicians. The ease with which our less experienced groups learned to identify the unique sensory roughness of bitonality suggests that this is something they quite readily perceive and are able to articulate once they have acquired the appropriate vocabulary. Moreover, the results for the one tonally ambiguous passage, 'Ipanema' (Figures 10 and 11), indicate that listeners are able to distinguish between different kinds of dissonant passages: those that result from bitonality and those that do not. Our study also suggests that the dismissive attitude some music theorists have taken toward bitonality may be misguided. Musical analyses considering bitonality should be encouraged (perhaps along the lines of Harrison, 1997, or Stein, 2005), as should further exploration of auditory streaming in musical works. Finally, music analysis textbooks should perhaps be less timid in their presentation of bitonality. Another interesting question for future research would be to examine the aesthetics of bitonality, and how musical experience affects preference ratings for bitonal versus monotonal music.

Figure 11. Milhaud, Saudades do Brasil, Op. 67, No. 5, 'Ipanema'; chord symbols and roman-numeral analysis provided.

    Note

    1. Scores for all Milhaud original and recomposed excerpts can be accessed at http://www.davidson.edu/academic/psychology/Munger/additional/examples.htm.

    References

Alain, C., Arnott, S. R., & Picton, T. W. (2001). Bottom-up and top-down influences on auditory scene analysis: Evidence from event-related brain potentials. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 72–89.
Babbitt, M. (1952). Review of Structural Hearing, by Felix Salzer. Journal of the American Musicological Society, 5(3), 260–265.
Bartók, B. (1931/1976). The influence of peasant music on modern music. In B. Suchoff (Ed.), Béla Bartók essays (pp. 340–344). Lincoln, NE: University of Nebraska Press.
Bartók, B. (1943/1976). Harvard lectures. In B. Suchoff (Ed.), Béla Bartók essays (pp. 354–392). Lincoln, NE: University of Nebraska Press.
Berger, A. (1978). Problems of pitch organization in Stravinsky. Perspectives of New Music, 2(1), 11–42.
Boretz, B. (1973). Meta-variations, Part IV: Analytical fallout (I). Perspectives of New Music, 11(1), 146–223.
Bregman, A. S. (1990). Auditory scene analysis. Cambridge, MA: MIT Press.
Bregman, A. S. (1993). Auditory scene analysis: Hearing in complex environments. In S. McAdams & E. Bigand (Eds.), Thinking in sound: The cognitive psychology of human audition (pp. 10–36). Oxford: Oxford University Press.
Casella, A. (1924). Tone-problems of to-day. The Musical Quarterly, 10(2), 159–171.
Cope, D. (1977). New music composition. New York: Schirmer Books.
Costa, M., Fine, P., & Ricci Bitti, P. E. (2004). Interval distributions, mode, and tonal strength of melodies as predictors of perceived emotion. Music Perception, 22(1), 1–14.
Dallin, L. (1974). Techniques of twentieth century composition: A guide to the materials of modern music (3rd ed.). Dubuque, IA: William C. Brown.
Dart, T. (2001). Eye music. In S. Sadie & J. Tyrrell (Eds.), The new Grove dictionary of music and musicians (2nd ed.). London: Macmillan.
DeVoto, M. (1993). Paris, 1918–45. In R. P. Morgan (Ed.), Modern times: From World War I to the present (pp. 33–59). Englewood Cliffs, NJ: Prentice-Hall.
Fishman, Y. I., Volkov, I. O., Noh, M. D., Garell, P. C., Bakken, H., Arezzo, J. C., et al. (2001). Consonance and dissonance of musical chords: Neural correlates in auditory cortex of monkeys and humans. Journal of Neurophysiology, 86(6), 2761–2788.
Forte, A. (1955). Contemporary tone structures. New York: Columbia University Press.
Gabrielsson, A., & Lindström, E. (2001). The influence of musical structure on emotional expression. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 221–248). Oxford: Oxford University Press.
Gagnon, L., & Peretz, I. (2003). Mode and tempo relative contributions to happy–sad judgments in equitone melodies. Cognition and Emotion, 17(1), 25–40.
Gosselin, N., Samson, S., Adolphs, R., Noulhiane, M., Roy, M., Hasboun, D., et al. (2006). Emotional responses to unpleasant music correlates with damage to the parahippocampal cortex. Brain, 129, 2585–2592.
Halpern, A. R., Martin, J. S., & Reed, T. D. (2008). An ERP study of major–minor classification in melodies. Music Perception, 25(3), 181–191.
Harrison, D. (1997). Bitonality, pentatonicism, and diatonicism in a work by Milhaud. In J. M. Baker, D. W. Beach, & J. Bernard (Eds.), Music theory in concept and practice (pp. 393–408). Rochester, NY: University of Rochester Press.
Hyer, B. (2001). Tonality. In S. Sadie & J. Tyrrell (Eds.), The new Grove dictionary of music and musicians (2nd ed.). London: Macmillan.
Koechlin, C. (1925). Évolution de l'harmonie: Période contemporaine, depuis Bizet et César Franck jusqu'à nos jours [The evolution of harmony: The contemporary period, from Bizet and César Franck to the present day]. In A. Lavignac & L. de la Laurencie (Eds.), Encyclopédie de la musique et dictionnaire du Conservatoire; 2e partie: technique, esthétique, pédagogie (pp. 591–760). Paris: Delagrave.
Koelsch, S., Fritz, T., Cramon, D. Y. V., Müller, K., & Friederici, A. D. (2006). Investigating emotion with music: An fMRI study. Human Brain Mapping, 27, 239–250.
Koelsch, S., & Mulder, J. (2002). Electric brain responses to inappropriate harmonies during listening to expressive music. Clinical Neurophysiology, 113, 862–869.
Krumhansl, C. L. (1983). Perceptual structures for tonal music. Music Perception, 1(1), 28–62.
Krumhansl, C. L. (1990). Cognitive foundations of musical pitch. Oxford: Oxford University Press.
Krumhansl, C. L., & Schmuckler, M. A. (1986). The Petroushka chord: A perceptual investigation. Music Perception, 4(2), 153–184.
Leaver, A. M., & Halpern, A. R. (2004). Effects of training and melodic features on mode perception. Music Perception, 22(1), 117–143.
Lerdahl, F. (2001). Tonal pitch space. Oxford: Oxford University Press.
Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA: MIT Press.
Médicis, F. D. (2005). Darius Milhaud and the debate on polytonality in the French press of the 1920s. Music and Letters, 86(4), 573–591.
Messing, S. (1988). Neoclassicism in music: From the genesis of the concept through the Schoenberg/Stravinsky polemic. Ann Arbor, MI: UMI Research Press.
Milhaud, D. (1923/1982). Polytonalité et atonalité [Polytonality and atonality]. In J. Drake (Ed.), Notes sur la musique: Essais et chroniques (pp. 173–188). Paris: Flammarion.
Morgan, R. P. (1991). Twentieth-century music: A history of musical style in modern Europe and America. New York: W. W. Norton.
Peretz, I., & Zatorre, R. J. (2005). Brain organization for music processing. Annual Review of Psychology, 56, 89–114.
Plomp, R., & Levelt, W. J. M. (1965). Tonal consonance and critical bandwidth. Journal of the Acoustical Society of America, 37, 1110–1123.
Randel, D. M. (Ed.). (1986). The new Harvard dictionary of music. Cambridge, MA: The Belknap Press of Harvard University Press.
Rasch, R., & Plomp, R. (1999). The perception of musical tones. In D. Deutsch (Ed.), The psychology of music (pp. 89–112). San Diego, CA: Academic Press.
Regnault, P., Bigand, E., & Besson, M. (2001). Different brain mechanisms mediate sensitivity to sensory consonance and harmonic context: Evidence from auditory event-related brain potentials. Journal of Cognitive Neuroscience, 13, 241–255.
Schoenberg, A. (1923/1975). New music. In L. Stein (Ed.), Style and idea: Selected writings of Arnold Schoenberg (pp. 137–139). Berkeley and Los Angeles, CA: University of California Press.
Schoenberg, A. (1925/1975). Tonality and form. In L. Stein (Ed.), Style and idea: Selected writings of Arnold Schoenberg (pp. 255–257). Berkeley and Los Angeles, CA: University of California Press.
Schoenberg, A. (1926/1975). Opinion or insight? In L. Stein (Ed.), Style and idea: Selected writings of Arnold Schoenberg (pp. 258–264). Berkeley and Los Angeles, CA: University of California Press.
Simms, B. R. (1996). Music of the twentieth century: Style and structure (2nd ed.). New York: Schirmer Books.
Smith Brindle, R. (1986). Musical composition. Oxford: Oxford University Press.
Stein, D. (2005). Introduction to musical ambiguity. In D. Stein (Ed.), Engaging music: Essays in music analysis (pp. 77–88). New York: Oxford University Press.
Stein, L. (1975). Schoenberg: Five statements. Perspectives of New Music, 14(1), 161–173.
Straus, J. N. (1990). Introduction to post-tonal theory. Englewood Cliffs, NJ: Prentice-Hall.
Stravinsky, I., & Craft, R. (1962). Expositions and developments. New York: Doubleday.
Strunk, S. (2002). Harmony. In B. Kernfeld (Ed.), The new Grove dictionary of jazz (2nd ed.). London: Macmillan.
Tymoczko, D. (2002). Stravinsky and the octatonic: A reconsideration. Music Theory Spectrum, 24(1), 68–102.
Tymoczko, D. (2003a). Octatonicism reconsidered again. Music Theory Spectrum, 25(1), 185–202.
Tymoczko, D. (2003b). Polytonality and superimpositions. Unpublished paper. Retrieved January 2008, from http://www.music.princeton.edu/~dmitri/polytonality.pdf
van den Toorn, P. C. (1975). Some characteristics of Stravinsky's diatonic music. Perspectives of New Music, 14(1), 104–138.
van den Toorn, P. C. (1983). The music of Igor Stravinsky. New Haven, CT: Yale University Press.
van den Toorn, P. C. (1987). Stravinsky and the Rite of Spring: The beginnings of a musical language. Berkeley and Los Angeles, CA: University of California Press.
van den Toorn, P. C. (2003). The sounds of Stravinsky. Music Theory Spectrum, 25(1), 167–185.
Watkins, G. (1995). Soundings: Music in the twentieth century. New York: Schirmer Books.
Whittall, A. (2001a). Bitonality. In S. Sadie & J. Tyrrell (Eds.), The new Grove dictionary of music and musicians (2nd ed.). London: Macmillan.
Whittall, A. (2001b). Form. In S. Sadie & J. Tyrrell (Eds.), The new Grove dictionary of music and musicians (2nd ed.). London: Macmillan.
Williams, J. K. (1997). Theories and analyses of twentieth-century music. Fort Worth, TX: Harcourt Brace College Publications.
Wolpert, R. S. (1990). Recognition of melody, harmonic accompaniment, and instrumentation: Musicians vs. nonmusicians. Music Perception, 8(1), 95–106.
Wolpert, R. S. (2000). Attention to key in a non-directed music listening task: Musicians vs. nonmusicians. Music Perception, 18(2), 225–230.

    Biographies

Mauro Botelho is an Associate Professor of Music at Davidson College, USA, where he teaches music theory, analysis, music of Brazil and music of Latin America. He has an interest in the rhythm and form of tonal music, and has presented his research in the USA, Europe and Brazil. Address: Davidson College, Box 7131, Davidson, NC, 28035-7131, USA. [email: [email protected]]

Mayumi Hamamoto is a psychology graduate of Davidson College, USA. She can be contacted c/o Dr Margaret Munger.

Margaret P. Munger is a Professor of Psychology at Davidson College, USA, where she teaches cognitive psychology and research methods in attention and perception. She also co-authors a blog on research psychology, Cognitive Daily (http://www.scienceblogs.com/cognitivedaily).
