Psych 121 Lecture 15


Transcript of Psych 121 Lecture 15


Sound Localization

How do you locate a sound?

If an owl was hooting in the woods at night, how would you know where it was?

Similar dilemma to determining how far away an object is

Two ears: critical for determining auditory locations

Auditory Space Perception

The auditory system helps us to perceive spatial locations of events, especially their direction in space from us. We can also perceive the distances of sounds to some degree.

    Auditory Space Perception

    The auditory system helps perceive spatial locations of events

Direction in space from us
Distance

There are multiple ways in which direction is computed by auditory processes, due to:

differences among sounds
differences among situations
combination of processes

Auditory Space Perception

Most auditory space information depends on the fact we have two ears in different positions. As in stereoscopic depth in vision, the two ears work as a system.

    Auditory Space Perception

Differences in the arrival at the two ears of an ONSET of a sound can be used to localize it.

    Interaural Time Difference (ITD)

The difference in time between a sound arriving at one ear versus the other.

    retina vs ears

    Different Inputs to the Two Ears


Azimuth: an imaginary circle that extends around us, in a horizontal plane. Can analyze ITD:

Where would a sound source need to be located to produce maximum possible ITD?
What location would lead to minimum possible ITD?
What would happen at intermediate locations?

Interaural time differences for sound sources varying in azimuth from directly in front of listeners (0 degrees) to directly behind (180 degrees)

    Interaural time differences for different positions around the head
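For concreteness, the way ITD grows with azimuth can be sketched with Woodworth's classic spherical-head approximation, ITD ≈ (r/c)(sin θ + θ). The Python snippet below is only an illustrative model, not the auditory system's actual computation; the head radius (0.0875 m) and the metric speed of sound (343 m/s, roughly the 1100 ft/sec quoted later) are assumed values.

    import math

    SPEED_OF_SOUND = 343.0   # m/s -- roughly the 1100 ft/sec quoted on later slides
    HEAD_RADIUS = 0.0875     # m -- an assumed average head radius

    def itd_seconds(azimuth_deg):
        """Woodworth's spherical-head approximation to the interaural
        time difference for a distant source at the given azimuth (0-90 deg)."""
        theta = math.radians(azimuth_deg)
        return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)

    for az in (0, 30, 60, 90):
        print(f"azimuth {az:3d} deg -> ITD = {itd_seconds(az) * 1000:.2f} ms")

With these assumed numbers the ITD is zero for a source straight ahead and largest (about 0.65 ms) for a source at 90 degrees, directly opposite one ear, which lines up with the ~0.6 msec figure on the following slides.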


Auditory Space Perception

Differences in the arrival at the two ears of an ONSET of a sound can be used to localize it. Differences in the PHASE of sound waves arriving at the two ears can be used for localization.

    Speed of sound: 1100 ft / sec

Extra 2/3 foot (8 in.) around the head takes .0006 sec = .6 msec

DELAY (to far ear) of .6 msec is small relative to the 10 msec time for one wave cycle (a 100 Hz tone).


Speed of sound: 1100 ft / sec

Extra 2/3 foot (8 in.) around the head takes .0006 sec = .6 msec

DELAY (to far ear) of .6 msec is LARGE relative to the 1 msec time for one wave cycle (a 1000 Hz tone).


This is why you have trouble hearing the direction of pure high frequencies (e.g., the chirp of a fire alarm with a low battery).
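Spelling out the arithmetic of the last two slides (plain Python, using only the slide's own numbers): the around-the-head delay is fixed, so what matters is how it compares with one period of the tone.

    # Compare the ~0.6 ms around-the-head delay with the period of a
    # low-frequency versus a high-frequency pure tone.
    SPEED_OF_SOUND_FT = 1100.0   # ft/sec, as given on the slide
    EXTRA_PATH_FT = 2.0 / 3.0    # about 8 inches around the head

    delay_ms = EXTRA_PATH_FT / SPEED_OF_SOUND_FT * 1000
    print(f"interaural delay ~ {delay_ms:.2f} ms")

    for freq_hz in (100, 1000):
        period_ms = 1000.0 / freq_hz
        print(f"{freq_hz} Hz: period = {period_ms:.1f} ms, "
              f"delay is {delay_ms / period_ms:.0%} of one cycle")

    # At 100 Hz the delay is a small fraction of the 10 ms cycle, so the phase
    # difference points unambiguously to one side. At 1000 Hz the delay is more
    # than half a cycle, so the phase cue becomes ambiguous -- which is why pure
    # high frequencies are hard to localize from phase alone.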

    Auditory Space Perception

Differences in the arrival at the two ears of an ONSET of a sound can be used to localize it. Differences in the PHASE of sound waves arriving at the two ears can be used for localization.

Differences in the INTENSITY of sound waves arriving at the two ears can be used for localization.

Sound Localization

Interaural time difference (ITD): the difference in time between a sound arriving at one ear versus the other

Azimuth: the angle of a sound source on the horizon relative to a point in the center of the head between the ears

Measured in degrees, with 0 degrees being straight ahead; the angle increases clockwise, with 180 degrees being directly behind

Sound Localization

Interaural level difference (ILD): the difference in level (intensity) between a sound arriving at one ear versus the other

For frequencies greater than 1000 Hz, the head blocks some of the energy reaching the opposite ear

ILD is largest at 90 degrees and −90 degrees; nonexistent at 0 degrees and 180 degrees

ILD generally correlates with the angle of the sound source, but the correlation is not quite as great as it is with ITDs

    The two ears receive slightly different inputs when the sound source is located to one side or the other


Phase and intensity cues have complementary roles: phase differences are most informative for low frequencies, while intensity differences are most informative for high frequencies.

    Sound Localization

Potential problem with using ITDs and ILDs for sound localization:

Cone of confusion: a region of positions in space where all sounds produce the same ITDs and ILDs

    Elevation adds another dimension to sound localization (Part 1)


    Elevation adds another dimension to sound localization (Part 2)

Sound Localization

Overcoming the cone of confusion: turning the head can disambiguate ILD/ITD similarity
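A hedged sketch of why the cone of confusion arises and how a head turn resolves it, using a deliberately crude far-field model in which the ears are two points and the path difference is simply ear separation × sin(azimuth). The 0.20 m separation and the 20 degree head turn are assumed, illustrative values, not measurements.

    import math

    EAR_SEPARATION = 0.20    # m -- assumed, roughly the slide's 2/3 ft
    SPEED_OF_SOUND = 343.0   # m/s

    def itd_ms(azimuth_deg):
        """Crude two-point, plane-wave model: path difference = separation * sin(azimuth)."""
        return EAR_SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND * 1000

    front, back = 30, 150                 # mirror positions across the interaural axis
    print(itd_ms(front), itd_ms(back))    # identical ITDs -> cone of confusion

    turn = 20                             # turn the head 20 degrees toward the sources
    print(itd_ms(front - turn), itd_ms(back - turn))
    # After the turn, the front source's ITD shrinks while the back source's ITD
    # grows, so the two locations are no longer confusable.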


Sound Localization

Shape and form of the pinnae helps determine localization of sound

Head-related transfer function (HRTF): a function that describes how the pinnae, ear canals, head, and torso change the intensity of sounds with different frequencies that arrive at each ear from different locations in space (azimuth and elevation)

    Each person has their own HRTF (based on their own body) and uses it to help locate sounds

    Directional transfer functions (DTFs)

    The effect of elevation on DTFs

The shape of the PINNAE also helps resolve ambiguities in sound location, primarily for high-frequency sounds.

The pinnae funnel certain sound frequencies better than others, and the intensity of each frequency varies depending on the direction of the sound.

With someone else's pinnae you will have trouble indicating the direction of a sound (and can get weird perceptions, like front-back confusions and inside-the-head localization).
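In signal-processing terms, applying an HRTF amounts to filtering the source with a direction-specific impulse response for each ear. The sketch below shows only that filtering step; the two impulse responses (hrir_left and hrir_right) are made-up placeholders rather than measured HRTF data, and NumPy is assumed to be available.

    import numpy as np

    fs = 44100                                   # sampling rate in Hz
    t = np.arange(0, 0.5, 1 / fs)
    source = np.sin(2 * np.pi * 440 * t)         # a plain 440 Hz tone

    # Hypothetical head-related impulse responses for one direction.
    # Real HRIRs are measured per listener, per azimuth and elevation.
    hrir_left = np.array([0.0, 0.9, 0.2, 0.05])               # near ear: earlier, stronger
    hrir_right = np.array([0.0, 0.0, 0.0, 0.5, 0.15, 0.05])   # far ear: delayed, attenuated

    # Convolving the same source with the two HRIRs gives the two ear signals.
    left_ear = np.convolve(source, hrir_left)
    right_ear = np.convolve(source, hrir_right)

Listening through someone else's HRIRs is, in effect, what the last bullet describes: the filtering no longer matches your own body, so localization suffers.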


    Sound Localization

Auditory distance perception

Simplest cue: relative intensity of sound

Inverse-square law: as distance from a source increases, intensity decreases such that the decrease in intensity is proportional to the square of the distance (see the worked sketch after this list)

Spectral composition of sounds: higher frequencies decrease in energy more than lower frequencies as sound waves travel from the source to the ear

Relative amounts of direct versus reverberant energy
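The inverse-square bullet can be made concrete with a short worked example (plain Python; the distances are arbitrary): each doubling of distance cuts intensity to one quarter, a drop of about 6 dB.

    import math

    def relative_intensity(distance, reference_distance=1.0):
        """Inverse-square law: intensity falls off as 1 / distance**2
        relative to the intensity at the reference distance."""
        return (reference_distance / distance) ** 2

    for d in (1, 2, 4, 8):
        rel = relative_intensity(d)
        print(f"distance {d} m: {rel:.4f} of reference intensity "
              f"({10 * math.log10(rel):+.1f} dB)")

    # Each doubling of distance quarters the intensity (about -6 dB), which is
    # why relative intensity is a usable (if rough) cue to distance.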

    Complex Sounds

Harmonics

Lowest frequency of harmonic spectrum: fundamental frequency

Auditory system is acutely sensitive to natural relationships between harmonics

What happens when first harmonic is missing?

Missing-fundamental effect: the pitch listeners hear corresponds to the fundamental frequency, even if it is missing

    Many environmental sounds, including voices, are harmonic

If the fundamental (lowest frequency) of a harmonic sound is removed, listeners still hear the pitch of this missing fundamental


When only three harmonics of the same fundamental frequency are presented, listeners still hear the pitch of the missing fundamental frequency (Part 1)

When only three harmonics of the same fundamental frequency are presented, listeners still hear the pitch of the missing fundamental frequency (Part 2)
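A minimal synthesis sketch of the three-harmonics demonstration, assuming NumPy and an illustrative 200 Hz fundamental: the complex contains only the 3rd, 4th, and 5th harmonics, yet its waveform still repeats at the 200 Hz rate, and 200 Hz is the pitch listeners report.

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)

    fundamental = 200                      # Hz -- deliberately NOT included
    harmonics = [3, 4, 5]                  # only the 3rd, 4th, and 5th harmonics

    complex_tone = sum(np.sin(2 * np.pi * n * fundamental * t) for n in harmonics)
    complex_tone /= len(harmonics)         # keep the waveform within [-1, 1]

    # The components (600, 800, 1000 Hz) are spaced 200 Hz apart and the waveform
    # repeats every 1/200 s, even though no 200 Hz energy is present: the
    # missing-fundamental effect.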


    Complex Sounds

Timbre: the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar

    Timbre conveyed by harmonics and other frequencies

Perception of timbre depends on the context in which the sound is heard: both physical context and sound context
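To see what "same pitch and loudness, different timbre" can mean physically, the sketch below builds two complex tones on the same fundamental but with different harmonic weights. NumPy is assumed, and the weights are made up purely for illustration.

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)
    f0 = 220                                    # same fundamental -> same pitch

    def complex_tone(weights):
        """Sum of harmonics of f0, one amplitude weight per harmonic."""
        tone = sum(w * np.sin(2 * np.pi * (n + 1) * f0 * t)
                   for n, w in enumerate(weights))
        return tone / np.max(np.abs(tone))      # roughly equate overall level

    bright = complex_tone([1.0, 0.8, 0.7, 0.6, 0.5])    # strong upper harmonics
    mellow = complex_tone([1.0, 0.3, 0.1, 0.05, 0.02])  # energy mostly near f0
    # Same fundamental, comparable level, different harmonic profile: the
    # difference a listener hears between these two sounds is timbre.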


Complex Sounds

Attack and decay of sound

Attack: part of a sound during which amplitude increases (onset)
Decay: part of a sound during which amplitude decreases (offset)

    We use the onsets of sounds (attacks) to identify them
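Attack and decay are properties of a sound's amplitude envelope, so they are easy to illustrate: the sketch below (NumPy assumed; the 150 ms attack and 300 ms decay are arbitrary values) imposes a ramped onset and offset on a tone without changing its frequency content.

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)
    tone = np.sin(2 * np.pi * 440 * t)

    attack_s, decay_s = 0.15, 0.30                        # illustrative durations
    attack_n, decay_n = int(attack_s * fs), int(decay_s * fs)

    envelope = np.ones_like(t)
    envelope[:attack_n] = np.linspace(0, 1, attack_n)     # amplitude rising: attack
    envelope[-decay_n:] = np.linspace(1, 0, decay_n)      # amplitude falling: decay

    shaped = tone * envelope
    # The frequency content is unchanged, but manipulating only the attack is
    # often enough to change what instrument or event the sound seems to be.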


    Auditory Scene Analysis

What happens in natural situations?
Acoustic environment can be a busy place with multiple sound sources
How does the auditory system sort out these sources?

Source segregation (or auditory scene analysis): processing an auditory scene consisting of multiple sound sources into separate sound images

    Separating sounds from one another is like living in a visual world made of glass

Auditory Scene Analysis

A number of strategies to segregate sound sources:

Spatial separation between sounds
Separation based on sounds' spectral or temporal qualities


Auditory stream segregation: the perceptual organization of a complex acoustic signal into separate auditory events for which each stream is heard as a separate event
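The classic laboratory demonstration of stream segregation alternates a low and a high tone: with a small frequency separation the sequence is heard as a single stream, while a large separation makes it split into two. A hedged sketch that merely constructs such sequences (NumPy assumed; the 100 ms tone duration and the frequencies are illustrative):

    import numpy as np

    fs = 44100
    tone_s = 0.1                               # 100 ms per tone

    def tone(freq_hz):
        t = np.arange(0, tone_s, 1 / fs)
        return np.sin(2 * np.pi * freq_hz * t)

    def alternating_sequence(low_hz, high_hz, repeats=10):
        """Low-high-low-high ... tone sequence used to probe streaming."""
        return np.concatenate([tone(f) for _ in range(repeats)
                               for f in (low_hz, high_hz)])

    one_stream = alternating_sequence(400, 450)    # small separation: one stream
    two_streams = alternating_sequence(400, 1000)  # large separation: two streams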

Auditory Scene Analysis

Grouping by timbre

In a sequence of tones with increasing and decreasing frequencies, tones that deviate from the rising/falling pattern pop out of the sequence

    Timbre affects how sounds are grouped

Auditory Scene Analysis

Grouping by onset

When sounds begin at the same time, or nearly the same time, they appear to be coming from the same sound source

This helps group different harmonics into a single complex tone, consistent with the Gestalt law of common fate
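Grouping by onset can be illustrated by starting all harmonics of a complex tone together except one: the synchronous harmonics fuse into a single tone, while a late-starting harmonic tends to pop out as a separate sound. A minimal sketch, assuming NumPy and an arbitrary 50 ms onset delay:

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)
    f0 = 200

    def harmonic(n, onset_s=0.0):
        """The n-th harmonic of f0, silent until onset_s."""
        h = np.sin(2 * np.pi * n * f0 * t)
        h[t < onset_s] = 0.0
        return h

    # Harmonics 1-4 share a common onset and fuse into one complex tone;
    # harmonic 5 starts 50 ms late and tends to be heard as a separate sound.
    fused = sum(harmonic(n) for n in range(1, 5))
    with_late_harmonic = fused + harmonic(5, onset_s=0.05)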

    Spectrograms of a bottle bouncing (a) or breaking (b)


Auditory Scene Analysis

People are surprisingly good at learning how to recognize a completely new sound

McDermott, Wroblewski, & Oxenham (2011) created novel sounds and played them many times in combination with other novel sounds that did not repeat

Listeners needed very few repetitions to learn the new sounds, even though they never heard them in isolation

The listeners' task was to identify a novel complex sound when the target and another complex sound overlapped in one combined sound. (b) A new target was repeatedly played at the same time as different distracters.

    Continuity and Restoration Effects

How do we know that listeners really hear a sound as continuous?

Principle of good continuation: in spite of interruptions, one can still hear a sound

Experiments using a signal detection task (e.g., Kluender and Jenison, 1992) suggest that missing sounds are restored and encoded in the brain as if they were actually present!

When a sound is deleted and replaced with a loud noise, listeners will hear the sound as if it continues through the noise
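The stimulus behind these restoration demonstrations is simple to construct: delete a short segment of a tone and fill the gap either with silence or with loud noise. A hedged sketch (NumPy assumed; the gap position and noise level are illustrative):

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)
    tone = 0.3 * np.sin(2 * np.pi * 500 * t)

    gap = (t > 0.45) & (t < 0.55)                     # delete a 100 ms segment

    with_silence = tone.copy()
    with_silence[gap] = 0.0                           # silent gap: the break is obvious

    rng = np.random.default_rng(0)
    with_noise = tone.copy()
    with_noise[gap] = rng.normal(0.0, 0.5, gap.sum()) # gap filled with loud noise
    # Listeners typically report the tone continuing through the noise burst,
    # even though it is physically absent during the gap.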

Continuity and Restoration Effects

Restoration of complex sounds (e.g., music, speech)

Listeners use higher-order sources of information, not just auditory information, to restore missing segments

Gaps in a sound stream are less detrimental if filled with noise rather than with silence

With noisy gaps, listeners can't even reliably specify where the gaps were

    Adding noise improves comprehension