Language Comprehension, Speech Perception, Naming Deficits


Page 1: Language Comprehension Speech Perception Naming Deficits.

Language Comprehension

Speech Perception

Naming Deficits

Page 2: Language Comprehension Speech Perception Naming Deficits.

Triangle Model

[Figure: connectionist framework for lexical processing, adapted from Seidenberg and McClelland (1989) and Plaut et al. (1996). Nodes for Semantics (meaning), Orthography (text), and Phonology (speech) are interconnected and feed thought / high-level understanding; orthography is the input for reading, phonology the input for speech perception.]

Page 3: Language Comprehension Speech Perception Naming Deficits.

Speech Perception

• The first step in comprehending spoken language is to identify the words being spoken. This happens in multiple stages (a toy sketch follows the list):

1. Phonemes are detected (/b/, /e/, /t/, /e/, /r/)

2. Phonemes are combined into syllables (/be/ /ter/)

3. Syllables are combined into words (“better”)

4. Word meaning retrieved from memory
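
To make the staged picture concrete, here is a minimal toy sketch in Python. The syllable inventory, lexicon, and meaning entries are invented for the example, and stage 1 (phoneme detection) is simply assumed to have already produced the phoneme list:

```python
SYLLABLES = {"be", "ter"}                 # toy syllable inventory (invented)
LEXICON = {("be", "ter"): "better"}       # toy lexicon (invented)
MEANINGS = {"better": "of superior quality"}  # toy semantic memory (invented)

def syllabify(phonemes):
    """Stage 2: greedily group detected phonemes into known syllables."""
    syllables, current = [], ""
    for p in phonemes:
        current += p
        if current in SYLLABLES:
            syllables.append(current)
            current = ""
    return tuple(syllables)

def identify_word(phonemes):
    syllables = syllabify(phonemes)       # stage 2: phonemes -> syllables
    word = LEXICON.get(syllables)         # stage 3: syllables -> word
    meaning = MEANINGS.get(word)          # stage 4: retrieve meaning from memory
    return word, meaning

# Stage 1 is assumed to have detected these phonemes:
print(identify_word(["b", "e", "t", "e", "r"]))   # ('better', 'of superior quality')
```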

Page 4: Language Comprehension Speech Perception Naming Deficits.

Spectrogram: I owe you a yo-yo

Page 5: Language Comprehension Speech Perception Naming Deficits.

Speech perception: two problems

• Words are not neatly segmented (e.g., by pauses)

• Difficult to identify phonemes
  – Coarticulation: consecutive speech sounds blend into each other due to mechanical constraints on the articulators
  – Speaker differences: pitch is affected by age and sex; different dialects, talking speeds, etc.

Page 6: Language Comprehension Speech Perception Naming Deficits.
Page 7: Language Comprehension Speech Perception Naming Deficits.

How Do Listeners Resolve Ambiguity in Acoustic Input?

• Use of context

– Cross-modal context, e.g., visual cues: the McGurk effect

– Semantic context, e.g., the phonemic restoration effect

Page 8: Language Comprehension Speech Perception Naming Deficits.

Effect of Semantic Context

• Pollack & Pickett (1964)– Recorded several conversations.– Subjects in their experiment had to identify the words

in the conversation.– When words were spliced out of the conversation and

then presented auditorily, subjects identified the correct word only 47% of the time.

– When context was provided, words were identified with higher accuracy

– clarity of speech is an illusion; we hear what we want to hear

Page 9: Language Comprehension Speech Perception Naming Deficits.

Phonemic restoration

Auditory presentation → Perception

  Legislature → “legislature”
  Legi_lature (phoneme replaced by silence) → “legi lature”
  Legi*lature (phoneme replaced by noise) → “legislature”

  It was found that the *eel was on the axle.   → heard as “wheel”
  It was found that the *eel was on the shoe.   → heard as “heel”
  It was found that the *eel was on the orange. → heard as “peel”
  It was found that the *eel was on the table.  → heard as “meal”

Warren, R. M. (1970). Perceptual restorations of missing speech sounds. Science, 167, 392-393.
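
As a rough illustration of how semantic context could drive the restoration, here is a toy Python sketch. The candidate words match the slide, but the association scores are invented and this is not a model from Warren (1970):

```python
# Toy phonemic-restoration sketch: the masked segment "*eel" is restored by
# picking the candidate word most associated with the sentence context.
CANDIDATES = ["wheel", "heel", "peel", "meal"]
ASSOCIATION = {                       # invented association strengths
    ("axle", "wheel"): 0.9, ("shoe", "heel"): 0.9,
    ("orange", "peel"): 0.9, ("table", "meal"): 0.9,
}

def restore(context_word):
    # Unlisted context-candidate pairs fall back to a small baseline score.
    return max(CANDIDATES, key=lambda w: ASSOCIATION.get((context_word, w), 0.1))

for context in ["axle", "shoe", "orange", "table"]:
    print(f"...the *eel was on the {context}.  ->  heard as '{restore(context)}'")
```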

Page 10: Language Comprehension Speech Perception Naming Deficits.

McGurk Effect: perception of an auditory event is affected by visual processing

Harry McGurk and John MacDonald in "Hearing lips and seeing voices", Nature 264, 746-748 (1976).

Demo 1
  AVI: http://psiexp.ss.uci.edu/research/teachingP140C/demos/McGurk_large.avi
  MOV: http://psiexp.ss.uci.edu/research/teachingP140C/demos/McGurk_large.mov

Demo 2
  MOV: http://psiexp.ss.uci.edu/research/teachingP140C/demos/McGurk3DFace.mov

Page 11: Language Comprehension Speech Perception Naming Deficits.

McGurk Effect

• McGurk effect in the video:
  – lip movements = “ga”
  – speech sound = “ba”
  – speech perception = “da” (for 98% of adults)

• Demonstrates parallel & interactive processing: speech perception is based on multiple sources of information, e.g. lip movements, auditory information.

• The brain makes the reasonable assumption that both sources are informative and “fuses” the information (a toy cue-fusion sketch follows).
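
A minimal sketch of that fusion idea, treating it as probabilistic cue combination over candidate syllables. All likelihood numbers are invented, and this is not the analysis from McGurk & MacDonald (1976):

```python
# Toy cue fusion: combine auditory and visual evidence for each candidate
# syllable by multiplying per-cue likelihoods, then normalise.
candidates = ["ba", "da", "ga"]
auditory = {"ba": 0.7, "da": 0.25, "ga": 0.05}   # sound resembles "ba" (invented)
visual   = {"ba": 0.05, "da": 0.25, "ga": 0.7}   # lips resemble "ga" (invented)

fused = {s: auditory[s] * visual[s] for s in candidates}
total = sum(fused.values())
posterior = {s: round(p / total, 3) for s, p in fused.items()}

print(posterior)                          # the intermediate "da" wins
print(max(posterior, key=posterior.get))  # -> 'da'
```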

Page 12: Language Comprehension Speech Perception Naming Deficits.

Models of Spoken Word Identification

• The Cohort Model
  – Marslen-Wilson & Welsh, 1978
  – Revised: Marslen-Wilson, 1989

• The TRACE Model
  – Similar to the Interactive Activation model
  – McClelland & Elman, 1986

Page 13: Language Comprehension Speech Perception Naming Deficits.

Online word recognition: the cohort model

Page 14: Language Comprehension Speech Perception Naming Deficits.

Recognizing Spoken Words: The Cohort Model

• All candidates considered in parallel

• Candidates eliminated as more evidence becomes available in the speech input

• The uniqueness point occurs when only one candidate remains (a toy sketch of this winnowing follows)
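
A toy sketch of that winnowing process, assuming a small invented lexicon and a simple prefix match; this is not the actual Marslen-Wilson implementation:

```python
# Cohort-model sketch: candidates consistent with the input so far stay in
# the cohort; the uniqueness point is reached when only one remains.
LEXICON = ["beaker", "beetle", "beak", "speaker", "bean", "better"]  # invented

def cohort(input_so_far, lexicon=LEXICON):
    return [w for w in lexicon if w.startswith(input_so_far)]

word = "beaker"
for i in range(1, len(word) + 1):
    heard = word[:i]
    candidates = cohort(heard)
    marker = "  <- uniqueness point" if len(candidates) == 1 else ""
    print(f"heard '{heard}': cohort = {candidates}{marker}")
```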

Page 15: Language Comprehension Speech Perception Naming Deficits.

Analyzing speech perception with eye tracking

‘Point to the beaker’

Allopenna, Magnuson & Tanenhaus (1998)

An eye-tracking device measures where subjects are looking.

Page 16: Language Comprehension Speech Perception Naming Deficits.

Human Eye Tracking Data

Plot shows the probability of fixating on an object as a function of time

Allopenna, Magnuson & Tanenhaus (1998)

Page 17: Language Comprehension Speech Perception Naming Deficits.

TRACE: a neural network model

• Similar to interactive activation model but applied to speech recognition

• Connections between levels are bidirectional and excitatory → top-down effects

• Connections within levels are inhibitory, producing competition between alternatives

(McClelland & Elman, 1986)

[Figure: TRACE network — feature, phoneme, and word layers unfolding over time, with competing word candidates “beaker” and “speaker”.]
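
The flavor of these dynamics can be sketched with a deliberately simplified interactive-activation loop. The phoneme codes, parameters, and update rule below are invented simplifications, not McClelland & Elman's actual model:

```python
import itertools

# Toy word-to-phoneme codes (invented); "beaker" and "speaker" share i, k, R.
WORDS = {"beaker": list("bikR"), "speaker": list("spikR")}
word_act = {w: 0.0 for w in WORDS}
phon_act = {p: 0.0 for p in set(itertools.chain(*WORDS.values()))}

def step(heard, excit=0.3, inhib=0.2, topdown=0.1, decay=0.1):
    # Phoneme layer: external input for phonemes heard so far plus top-down
    # support from word units containing the phoneme (bidirectional links).
    for p in phon_act:
        ext = 1.0 if p in heard else 0.0
        td = sum(word_act[w] for w, ph in WORDS.items() if p in ph)
        phon_act[p] = max(0.0, (1 - decay) * phon_act[p] + excit * ext + topdown * td)
    # Word layer: excitation from matching phonemes, inhibition from rivals
    # at the same level (competition between alternatives).
    for w, ph in WORDS.items():
        support = sum(phon_act[p] for p in ph) / len(ph)
        rivals = sum(word_act[v] for v in WORDS if v != w)
        word_act[w] = max(0.0, (1 - decay) * word_act[w] + excit * support - inhib * rivals)

# Present the input for "beaker" one phoneme at a time and watch the
# word-level competition; "beaker" pulls ahead of "speaker" as evidence accrues.
input_phonemes = list("bikR")
for t in range(1, 9):
    step(input_phonemes[:min(t, len(input_phonemes))])
    print(t, {w: round(a, 3) for w, a in word_act.items()})
```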

Page 18: Language Comprehension Speech Perception Naming Deficits.

TRACE Model Predictions

[Figure: TRACE model predictions — activation of word candidates such as “beaker” and “speaker” as the speech input unfolds over time.]

(McClelland & Elman, 1986)

Page 19: Language Comprehension Speech Perception Naming Deficits.

Semantic Representations and Naming Deficits

Page 20: Language Comprehension Speech Perception Naming Deficits.

Representing Meaning

• Mental representation of meaning as a network of interconnected features

• Evidence comes from patients with category-specific impairments
  – more difficulty activating semantic representations for some categories than for others

Page 21: Language Comprehension Speech Perception Naming Deficits.

Category Specific Semantic Deficits

Patients who have trouble naming living or non-living things

Page 22: Language Comprehension Speech Perception Naming Deficits.

Definitions given by patients JBR and SBY

Farah and McClelland (1991)

Page 23: Language Comprehension Speech Perception Naming Deficits.

Explanation

• One possibility is that there are two separate systems for living and non-living things

• More likely explanation:
  – Different types of objects depend on different types of encoding: perceptual information versus functional information

Page 24: Language Comprehension Speech Perception Naming Deficits.


Representing Meaning

Page 25: Language Comprehension Speech Perception Naming Deficits.

Sensory-Functional Approach

• Category-specific effects on recognition result from a correlated factor, such as the ratio of visual versus functional features of an object: living things are more visual, nonliving things more functional.

• How do we know that?
  – Farah & McClelland (1991) report a dictionary study showing the ratio of visual to functional features: 7.7:1 for living things versus 1.4:1 for nonliving things (a toy tally follows).
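
A toy tally of what such a feature ratio means. The example features below are invented; only the 7.7:1 and 1.4:1 figures come from the slide:

```python
# Count visual vs. functional features per concept, in the spirit of the
# Farah & McClelland (1991) dictionary study. Feature lists are invented.
FEATURES = {
    "tiger":  {"visual": ["striped", "four-legged", "large", "orange"],
               "functional": ["dangerous"]},
    "hammer": {"visual": ["wooden handle", "metal head"],
               "functional": ["used to drive nails", "used for building"]},
}

for concept, feats in FEATURES.items():
    ratio = len(feats["visual"]) / len(feats["functional"])
    print(f"{concept}: visual-to-functional ratio = {ratio:.1f}:1")
```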

Page 26: Language Comprehension Speech Perception Naming Deficits.

A neural network model of category-specific impairments

Farah and McClelland (1991)

A single semantic system with both functional and visual features.

The model was trained to associate a picture of an object with the object’s name via a distributed internal semantic representation.

Page 27: Language Comprehension Speech Perception Naming Deficits.

A neural network model of category-specific impairments

Farah and McClelland (1991)

The effect of brain lesions is simulated by damaging (“lesioning”) either the visual or the functional semantic units.

Page 28: Language Comprehension Speech Perception Naming Deficits.

Simulating the Effects of Brain Damage by “lesioning” the model

Farah and McClelland (1991)

Visual Lesions: selective impairment of living things

Functional Lesions: selective impairment of non-living things
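
As a rough sketch of how such a simulation hangs together (not Farah & McClelland's actual network): concepts are given visual and functional feature counts in roughly the ratios above, a lesion removes a fraction of one feature pool, and the “naming score” here is simply the proportion of a concept's semantic features that survives. All other numbers are invented.

```python
# Toy lesioning sketch in the spirit of Farah & McClelland (1991):
# living things rely mostly on visual features, nonliving things relatively
# more on functional ones; damage one pool and compare the categories.
def naming_scores(lesion_pool, lesion_frac):
    # (visual features, functional features) per concept type, roughly
    # matching the 7.7:1 and 1.4:1 ratios from the dictionary study.
    concept_types = {"living": (23, 3), "nonliving": (14, 10)}
    scores = {}
    for kind, (n_vis, n_fun) in concept_types.items():
        surviving_vis = n_vis * (1 - lesion_frac if lesion_pool == "visual" else 1)
        surviving_fun = n_fun * (1 - lesion_frac if lesion_pool == "functional" else 1)
        scores[kind] = round((surviving_vis + surviving_fun) / (n_vis + n_fun), 2)
    return scores

print("visual lesion (80%):    ", naming_scores("visual", 0.8))
print("functional lesion (80%):", naming_scores("functional", 0.8))
```

Even this crude version reproduces the qualitative pattern on the slide: visual lesions lower the score for living things far more than for nonliving things, while functional lesions hurt nonliving things more than living things.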