
Laura A. Michaelis
[email protected]
Department of Linguistics, Institute of Cognitive Science
University of Colorado at Boulder

Why We Believe that Syntax is Construction-Based

History of the issue

Why would anyone think that syntax isn’t construction-based?

Constructions became embarrassing once grammar came to be seen as a mechanism for creating and combining phrases.

Phrase-structure rules do not augment or otherwise alter what the words within them denote.

This makes sense: by altering the grouping of operations in an arithmetic expression you can alter the output (e.g., (2 + 3) × 4 = 20 but 2 + (3 × 4) = 14), but not what the numbers themselves denote.

Head-based semantic composition

All conceptual content comes from the lexicon. The head is the mediator between syntax and semantics: it specifies what elements can or must accompany it, it determines the denotation of its phrasal expansion, and it determines the distribution of its phrasal expansion.

Semantic information from the head percolates up to the phrase level, but not down from the phrase level.

Creeping constructionality: current implementations of EGB sneak constructional effects into ‘functional projections’.

An alternative view

Syntactic assembly and semantic composition are driven by grammatical constructions.

Constructions are “neither pure form nor pure meaning but Saussurean signs—a linking of the two” (Zwicky 1994).

Constructions and words differ only via internal complexity; both are schemas and both exhibit frequency effects.

Words and morphosyntactic patterns live together in the constructicon:

The heterogeneous set of linguistic forms that occur in any natural language [are] acquired and processed by a unified processing system […] that obeys a common set of activation and learning principles. (Bates & Goodman 1997: 510)

The constructicon includes: linking patterns, phrase-building constructions, sentence types, instantiation constructions (e.g., left isolation, null complementation), combinations thereof.

An alternative view

Structures are licensed by unification of constructions, both lexically headed and otherwise.

Word meaning and construction meaning can conflict.

Semantic or syntactic properties shared by two or more constructions are captured not by derivations but by ‘shared representational real estate’.

Constructions may invoke other constructions, including words, inflectional markers, derivational morphology.

Semantic dependency is separated from phrase building. Therefore: a semantic licensor need not be a syntactic head. A semantic licensor may in fact be a skeletal pattern rather than a daughter. Constructions are headless.
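To make the idea of licensing by unification concrete, here is a minimal sketch in Python (my own toy encoding with hypothetical feature names, not a formalization from the talk): two feature structures unify when none of their atomic values conflict, and the result pools their information.

    # Minimal feature-structure unification sketch (hypothetical feature names).
    def unify(fs1, fs2):
        """Return the merged structure, or None if any atomic values conflict."""
        if isinstance(fs1, dict) and isinstance(fs2, dict):
            result = dict(fs1)
            for key, value in fs2.items():
                if key in result:
                    merged = unify(result[key], value)
                    if merged is None:          # conflict somewhere below
                        return None
                    result[key] = merged
                else:
                    result[key] = value
            return result
        return fs1 if fs1 == fs2 else None      # atomic values must match exactly

    # A lexically headed structure and a skeletal (headless) pattern can both
    # contribute constraints; neither has to be the "head" of the other.
    verb_pepper = {"syn": {"cat": "V", "lex": "+"}, "sem": {"frame": "COVERAGE"}}
    passive_pattern = {"syn": {"cat": "V", "mfm": "PastParticiple"}}
    print(unify(verb_pepper, passive_pattern))
    # -> {'syn': {'cat': 'V', 'lex': '+', 'mfm': 'PastParticiple'}, 'sem': {'frame': 'COVERAGE'}}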

Linking constructions

Linking constructions are flat, verb-level templates. They denote coarse event types (Goldberg 1995). A verb must unify with one or more linking constructions to ensure that each of its theta roles receives grammatical expression.

A linking construction may augment the valence set of the verb with which it combines.

Linking constructions are minimal: each determines a single linking of a grammatical function to a theta role.

They combine with one another, and with phrase-building constructions, to yield constructs like VPs and sentences.

Linking constructions

The Applicative construction

Since Wednesday, Bush has been peppered with questions about drug use after he was asked in an interview with the Dallas Morning News whether he could pass a background check for federal employees. (cnn.com 8/20/99)

Attribute-value structure:

    syn  [cat V, lex +]
    sem  [index #1,
          frame COVERAGE [arg1 index #2, arg2 index #3, arg3 index #4],
          integrate (vtype, ctype) {instance, means}]
    val  { [sem [index #2], rel [θ Agent, rank DA+]],
           [sem [index #3], rel [θ Theme]],
           [sem [index #4], rel [θ Goal, gf Nonoblique]] }

Linking constructions

The Applicative and Oblique Theme constructions

Bush has been peppered with questions about drug use.

Attribute-value structure:

    syn  [cat V, lex +]
    sem  [index #1,
          frame COVERAGE [arg1 index #2, arg2 index #3, arg3 index #4],
          integrate (vtype, ctype) {instance, means}]
    val  { [sem [index #2], rel [θ Agent, rank DA+]],
           [sem [index #3], rel [θ Theme, gf Oblique/ø], syn P[with]],
           [sem [index #4], rel [θ Goal, gf Nonoblique]] }

Linking constructions

The Applicative, Oblique Theme, and Passive constructions

Bush has been peppered with questions about drug use.

Attribute-value structure:

    syn  [cat V, lex +, mfm Past Participle]
    sem  [index #1,
          frame COVERAGE [arg1 index #2, arg2 index #3, arg3 index #4],
          integrate (vtype, ctype) {instance, means}]
    val  { [sem [index #2], rel [θ Agent, rank DA+, gf Oblique/ø], syn N],
           [sem [index #3], rel [θ Theme, gf Oblique], syn P[with]],
           [sem [index #4], rel [θ Goal, gf Nonoblique]] }

Linking constructions

The Applicative, Oblique Theme, Passive, and Subject constructions

Since Wednesday, Bush has been peppered with questions about drug use.

Attribute-value structure:

    syn  [cat V, lex +, mfm Past Participle]
    sem  [index #1,
          frame COVERAGE [arg1 index #2, arg2 index #3, arg3 index #4],
          integrate (vtype, ctype) {instance, means}]
    val  { [sem [index #2], rel [θ Agent, rank DA+, gf Oblique], syn P[by]],
           [sem [index #3], rel [θ Theme, gf Oblique], syn P[with]],
           [sem [index #4], rel [θ Goal, gf Subject]] }
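How these minimal linking constructions stack up can be pictured with a toy sketch in Python (my own simplification, not the talk's feature-structure formalism): each construction contributes one grammatical-function linking, and their merged contributions yield the valence set licensing the passive applicative clause above.

    # Toy model: each linking construction links one theta role to one
    # grammatical function (and, where relevant, an oblique marker).
    applicative   = {"Agent": {}, "Theme": {}, "Goal": {"gf": "Nonoblique"}}
    oblique_theme = {"Theme": {"gf": "Oblique", "syn": "P[with]"}}
    passive       = {"Agent": {"gf": "Oblique", "syn": "P[by]"}}
    subject       = {"Goal": {"gf": "Subject"}}    # Subject refines Nonoblique

    def combine(*constructions):
        """Merge the role-to-linking contributions of several constructions."""
        valence = {}
        for cxn in constructions:
            for role, linking in cxn.items():
                valence.setdefault(role, {}).update(linking)
        return valence

    # 'Since Wednesday, Bush has been peppered with questions ...'
    print(combine(applicative, oblique_theme, passive, subject))
    # -> {'Agent': {'gf': 'Oblique', 'syn': 'P[by]'},
    #     'Theme': {'gf': 'Oblique', 'syn': 'P[with]'},
    #     'Goal': {'gf': 'Subject'}}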

A nominal construction

The Indefinite Determination construction licenses constructs like a penguin, but apparently not *a beer. It builds a phrasal constituent. A lexical noun with the appropriate feature values can unify directly with head-complement and subject-predicate constructions; therefore there is no non-branching domination (e.g., no NP exhaustively dominating N’).

Attribute-value structure:

    Mother:         syn  [cat N, rtg +]
                    sem  [index #1, frame [bounded +, config count, num sg]]
    Det daughter:   syn  [cat det], phon a
                    sem  [index #1, frame [bounded +, config count, num sg]]
                    val  { [syn [cat N, rtg -],
                            sem [index #2, frame [bounded +, config count, num sg]]] }
    Noun daughter:  syn  [cat N, rtg -]
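As a toy illustration (hypothetical feature values, not the construction's actual representation), the noun daughter must be compatible with the 'bounded +, config count, num sg' requirement: penguin unifies directly, while beer, lexically mass, does not, absent coercion.

    # Toy compatibility check for the construction's noun-daughter slot.
    REQUIRED = {"bounded": "+", "config": "count", "num": "sg"}

    def noun_daughter_ok(noun_frame):
        """A lexical noun fits the slot only if none of its frame values
        conflict with the construction's requirements."""
        return all(noun_frame.get(feat, val) == val for feat, val in REQUIRED.items())

    print(noun_daughter_ok({"bounded": "+", "config": "count", "num": "sg"}))  # penguin -> True
    print(noun_daughter_ok({"bounded": "-", "config": "mass"}))                # beer -> False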

What is the evidence?

The arguments come from both linguistic and extralinguistic phenomena. Linguistic phenomena to be discussed here:

Special cases, comsits (Fillmore & Kay 1999, Kay 2002, Zwicky 2002)
Paradigmatic effects in morphosyntax (Ackerman to appear)
Failure of transconstructional filters (‘parameters’) (Zwicky & Pullum 1991, Van Valin & LaPolla 2000)
Nonlocality (Zwicky 1995)
Category mismatch (Zwicky 1995, Michaelis & Lambrecht 1996)
Product-oriented generalizations (Bybee 1995, 2001)
Overlap (Fillmore 1999, Michaelis & Lambrecht 1996)
Coercion (Jackendoff 1997, De Swart 1998, Michaelis to appear)

Special cases

Nominal extraposition (Michaelis & Lambrecht 1996): It’s AMAZING the DIFFERENCE, It’s REMARKABLE the THINGS you can DO.

A focal, argument-position NP is ungoverned. It is a ‘hidden exclamative’ (Grimshaw 1979).

Postquotes (Ruppenhofer & Michaelis in prep): “Do begin, Wilson,” he encouraged her, as though she had promised an entertainment (BNC)

“Well,” nodded Joe after thinking for a moment, “I will.” (BNC)

“Let’s see,” he pondered to himself. (BNC)

Postquotes differ from prequotes, making a focus-fronting analysis (Suñer 2000) untenable: Postquotes cannot be negated, questioned, etc. (*‘My word!’, she didn’t say); postquotes welcome a greater range of verbs, including posture verbs and VPs; postquotes are more likely to have pronominal subjects.

Special cases

Comsits (Zwicky 2002): complex signs with polycentric use conditions, maintained by frequency, lexically indexed.

Assessments (Goodwin & Goodwin 1992, Brenier, Michaelis & Girand in prep): THAT’s interesting. That’s TERRIBLE. What predicts tonic-accent placement? Nature of the evaluation (positive/negative)? Frequency of the adjective? Length of the adjective? Discourse function (non-turn response/turn initiator)?

Copula doubling (Brenier & Michaelis 2004, Massam 1999): [[I, + it,] + it's] just that, the weird thing [is, + is] that Gorbachev is the one that opened the floodgates, as far as with glasnost and [poistro-, + perestroika] and stuff.

The condition most favorable to copula doubling is a prominent BE1 and the NP subject the thing. This suggests that ISIS is not an elliptical Pseudo-Cleft (cf. Massam 1999).

Special cases

The Latin correlative conditional: A single syntactic pattern has two readings: linked variables and equal constants (Michaelis 1994):

Quanto diutius abest, magis cupio tanto. (Terence) ‘The longer he is away, the more I miss him.’
Quanto altius elatus erat, tanto foedius conruit. (Livy) ‘To the extent that he had risen high, he fell badly.’

These two readings are not due to association or lexical ambiguities.

In the ‘equal constants’ reading comparative morphology makes no contribution to interpretation.

Since constructions mean what they mean in the way that words do, it makes sense that constructions, like words, might be polysemous.

Paradigmatic effects

Paradigmatic inference is central to the Gricean model.

For example, the speaker asserts

Leslie caused the train to stop.

The speaker could have asserted

Leslie stopped the train.

The speaker’s decision to employ the longer form implicates that the default situation (direct causation) did not apply.

The interpretation of the periphrastic form depends upon the existence of a synonymous unused form.

Paradigmatic effects

Paradigm-based inference also plays a role in morphosyntactic constraints and affordances:

The present perfect blocks past adverbial reference: *I have visited Rome in 1999 (Michaelis 1994, 1998). Not a necessary constraint but motivated by contrast with the simple past.

The frame [V PP] cannot express accompaniment to motion, but [V X’s way PP] can: She squinted *(her way) into the garden.

Inflection inside compounds: passers by, runners up, whoppers junior. The Head Application Principle (Ackerman & Stump forthcoming): when a word A is headed by a word B, each word in A’s inflectional/derivational paradigm will be headed by the corresponding word in B’s paradigm (see the sketch below).

Such effects require a ‘constructicon’ (cf. transderivational constraints).
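The Head Application Principle can be pictured with a small sketch in Python (a toy pluralization rule and hypothetical helper names, not Ackerman & Stump's formalism): inflecting a headed compound amounts to inflecting its head word.

    # Toy sketch of the Head Application Principle: the inflected form of a
    # headed compound is built from the inflected form of its head.
    def pluralize(noun):
        return noun + "s"                         # toy rule; real English is messier

    def inflect_compound(head, rest, inflect):
        """Apply an inflectional rule to the head word of a compound."""
        return f"{inflect(head)} {rest}"

    print(inflect_compound("passer", "by", pluralize))    # -> passers by
    print(inflect_compound("runner", "up", pluralize))    # -> runners up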

Places where parameters fail

The double-ing constraint (Pullum & Zwicky 1991). Examples which motivated the constraint include: *Robin was starting going to concerts more frequently. But Ross (1967) and others noticed systematic exceptions:

Robin was enjoying going to concerts more frequently.
Robin was not starting, nor did she intend to start, going to concerts.

Pullum & Zwicky propose that the double-ing constraint is not a ‘transconstructional filter’ but instead a constraint on a single constituent-defining rule:

“[The VP constituency construction] is inapplicable if its head V and an immediately following head of a complement VP are both in Present Participle form.” (P&Z p. 254)
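Restated this way, the constraint is just a condition on one constituent-defining construction; a toy Python rendering of the quoted condition (my own encoding) is below.

    # Toy rendering of the quoted condition: the VP construction is
    # inapplicable when its head V and the immediately following head of a
    # complement VP are both in present-participle form.
    def vp_construction_applies(head_is_ing, adjacent_complement_vp_head_is_ing):
        return not (head_is_ing and adjacent_complement_vp_head_is_ing)

    # *Robin was starting going to concerts.  (both heads -ing, adjacent)
    print(vp_construction_applies(True, True))    # -> False: construction blocked
    # Robin was not starting, nor did she intend to start, going to concerts.
    print(vp_construction_applies(True, False))   # -> True: the -ing heads are not adjacent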

Places where parameters fail

Neutralization via case (Van Valin & LaPolla 1997): the pattern of semantic neutralization which characterizes the pivotal syntactic argument in the clause varies not merely from language to language but also from construction to construction.

This fact accounts for ‘ergative undercurrents’ in nominative-accusative languages. Imperatives: the null-instantiated element represents an agent rather than a subject.

Examine that patient!
??Be examined by the doctor!

Resultatives: the argument of the secondary predicate must be an undergoer.

The cake fell flat.
She ran *(herself) tired.

Deviations from locality

Syntax-centered theories identify semantic licensing relationships with sisterhood relationships, e.g., [V NP].

Certain phenomena are therefore problematic for such theories: Free word order (‘discontinuous constituency’):

Quanto in pectore hanc rem meo magis voluto […]
how-much:ABL in heart:ABL this:ACC matter:ACC my:ABL more turn:1sg:pres:ind:act
“The more I turn this matter over in my heart […]” (Plautus, Captivi)

Niece licensing (Zwicky 1995)

Deviations from locality

[Tree diagram: a V’ whose head V is (progressive) takes a VP complement headed by V being (passive), which combines with the participle V cleaned.]

Your suite is being cleaned right now.
*White wine is being preferred now.

Deviations from locality

Exceptions motivate syntactic mechanisms including:

Movement

‘Scrambling’

Feature passing

These mechanisms are potentially dispensable in construction-based syntax because: valency and constituent-building are two different things, and constructions have daughters rather than a daughters feature like COMPS or ARG-ST (the HPSG way of enforcing locality).

Mismatches between internal and external syntax

The ‘as…as’ construction. The internal syntax is that of a ‘degree phrase’: He was as smart as she was. But the external syntax may be that of a concessive ‘clause’: As smart as she was, she couldn’t find work in the telecom industry.

The phrase’s syntactic matrix, and not its internal composition, determines its function and interpretation.

[Tree diagram: a DegP consisting of Deg as, AP smart, and the DegP as she was.]

Product-oriented generalizations

Bybee’s schema-based model of inflection (1998, 2001): the rule-rote distinction is abandoned in favor of a ‘superpositional memory’ in which like forms overlap, e.g., the past-tense forms sang/rang/drank.

Affixes, roots and stems do not have independent representations, but exist only as relations of similarity among words.

These relations are captured by product-oriented schemas. These schemas capture similarities among forms of a specific category, but do not specify how to derive that category from some other.

The main determinant of productivity is the type frequency of the schema.

The ‘basic-derived’ asymmetry is captured via frequency too.

Product-oriented generalizations: inflection

Do product-oriented schemas miss source-oriented generalizations?

No. Superimposition can be used to capture similarities among schemas which participate in an opposition:

sing sang sung = sVN

Further, product-oriented schemas are not derailed when we cannot find generalizations across the putative source forms.

Bybee (2001: 126-127) on English past tenses in [ʌ]: with the addition of new members (e.g., struck, stuck, dug, snuck), a source-oriented generalization becomes impossible: the present-tense counterparts lack a nasal coda and have a variety of vocalic nuclei ([i], [ai], [Q]).

However, a product-oriented generalization is possible, as captured by the schema C + [ʌ] + C[velar].
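A product-oriented schema is a template over output forms rather than a mapping from sources; a toy Python sketch (a crude orthographic proxy for the phonological schema, my own approximation) is given below.

    import re

    # Toy stand-in for the product-oriented schema C + [ʌ] + C[velar]:
    # the template describes the shape of the past-tense forms themselves
    # and says nothing about deriving them from their present-tense sources.
    VELAR_PAST = re.compile(r"^[^aeiou]+u(ck|g)$")   # orthographic approximation

    for form in ["struck", "stuck", "dug", "snuck", "walked"]:
        print(form, bool(VELAR_PAST.match(form)))
    # struck, stuck, dug, snuck -> True; walked -> False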

Product-oriented generalizations: argument structure

Construction grammarians (e.g., Goldberg 1995 and Michaelis & Ruppenhofer 2001) also use the lack of valid source-oriented generalizations to argue for product-oriented schemas.

In particular, they argue that verbal linking patterns are produced by constructions rather than by lexical rules. Their observations:

No uniform ‘source’. Lexical rules assume conservation of thematic structure, but the ‘input’ verb may (a) lack the necessary thematic roles (e.g., verbs of creation vis-à-vis the double-object pattern) or (b) lack thematic structure altogether (e.g., denominals).

Potentially no uniform ‘product’. Schema members need only bear a family resemblance to one another. Example: applicative verbs denote a variety of relations, including coverage, intensive action, repeated action, benefaction (Demuth 1998, Michaelis & Ruppenhofer 2001).

Overlap

Taxonomic hierarchies capture one-to-many mappings, e.g., SAI (Fillmore 1999):

Hortatives: May she live forever!
Exclamations: Man, was I discouraged!
Emphatic negative imperatives: Don’t you ever say that.
Appellatives: Aren’t they beautiful.

They also capture many-to-one mappings, e.g., the exclamative sentence type (Michaelis & Lambrecht 1996):

I can’t believe who they hired.
I can’t believe the noise they make.
The indignities the world heaps upon us!

Coercion effects

Coercion is reinterpretation triggered by the need to resolve semantic conflict between an operator and its argument.

Endocentric effects (selection-restriction violations, e.g., *I bent the glass.) contrast with exocentric effects (the element triggering the type shift is not a syntactic head, as in, e.g., a pudding).

Coercion is performed both by type-shifting operators (e.g., plural, progressive) and type-sensitive operators (e.g., the indefinite article).

What types trigger the effects?

Nominal constructions: some pillow, a water.

Aspectual constructions: She was severely depressed twice in her life. Fuimus Troes, fuit Ilium. ‘We were (perf) Trojans. There was (perf) Troy.’ (Vergilius, Aeneid 2.325)

Linking constructions: A gruff police monk barked them back to work. Beside it sparkles the community pool.

Sentence types: I can’t believe who showed up. (exclamative) Your CAR is red. (sentence-focus construction; Lambrecht 1995)

The relevance of coercion effects

Coercion means different things to different theorists.

Modularist perspective (Jackendoff 1990, 1997). Coercion is evidence against strictly syntactic composition. It suggests a syntax-semantics dissociation.

Constructionist perspective (Goldberg 1995, Michaelis to appear) Coercion is evidence that syntactic rules denote types. It suggests an integrative (top-down/bottom-up) model of composition rather than a head-driven (bottom-up) one.

The modularist approach

Interpolation of operators. When X is not a suitable argument for a function F, a ‘coercing function’ G creates the structure F(G(X)), where X is a suitable argument for G, and G(X) is a suitable argument for F. (Jackendoff 1997: 53)

Indexation. A general mechanism for indexing the coercion operators to their linguistic triggers (De Swart 1998).

A constructional approach

Override principle. If two schemas conflict, the less fully specified schema overrides conflicting values of the more fully specified schema.

Like the operator-based approach, the constructional approach is an improvement over lexical-head licensing.

It separates two head properties which prototypically accrue to a single daughter: syntactic headedness (determination of external syntax) and semantic headedness (licensing of arguments).

A constructional approach

Coercion is triggered by concord requirements: mutual semantic invocation by each daughter in a phrase-building construction, e.g., Indefinite Determination.

Crucially, coercion is also triggered by skeletal structures, e.g., linking constructions, since these have denotations just as words do.

[The attribute-value structure of the Indefinite Determination construction, shown earlier under ‘A nominal construction’, is repeated here.]
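A rough sketch of the Override Principle at work (my own toy encoding, with made-up feature dictionaries): when the noun's frame values conflict with those the Indefinite Determination construction imposes, the construction's values win, yielding the coerced unit or type reading of a pudding.

    # Toy sketch of the Override Principle: on conflict, the construction's
    # schema overrides the word's frame values.
    def coerce(word_frame, construction_frame):
        resolved = dict(word_frame)
        resolved.update(construction_frame)    # construction values win on conflict
        return resolved

    pudding = {"bounded": "-", "config": "mass"}                 # lexical default: mass
    indefinite_determination = {"bounded": "+", "config": "count", "num": "sg"}

    print(coerce(pudding, indefinite_determination))
    # -> {'bounded': '+', 'config': 'count', 'num': 'sg'}   ('a pudding' as a unit/type)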

Virtues of this approach

No indexing problem: we don’t need to prevent random interpolation of coercion operators because coercion is just a by-product of ordinary construction-word combination.

The model is not limited to operator-based coercion, but can apply as well to template-based coercion: valence augmentation and creation (e.g., nonce denominals) and semantic effects of sisterhood:

I added apple to the recipe.
Apple dries easily.

Supporting evidence for construction-based syntactic representation

Sentence processing (Bencini & Goldberg 2000, Goldberg & Hare 2000, Kaschak & Glenberg 2001)

Frame-based speech errors (Raymond 2000, Ferreira & Humphreys 2001)

Aphasic comprehension (Bates & Goodman 1997, Gahl et al. 2002, Piñango & Zurif 2001)

Frequency effects (Jurafsky 1996, Narayanan & Jurafsky 1998, Bybee & Thompson 1997, Bybee 1998, Bybee 2001)

Sentence processing: priming

Studies by Bock and colleagues suggest that syntactic priming is independent of semantics.

Using a picture-description task, Bock & Loebell (1990) found that oblique goals prime oblique recipients:

The widow drove an old Mercedes to the church. (church = goal)
The widow gave an old Mercedes to the church. (church = recipient)

Goldberg & Hare observed that recipients and goals are in fact semantically similar. They replicated the Bock & Loebell experiment, but added a third prime condition: The officer provided the soldiers with guns. (‘goal advancement’)

Use of ditransitive forms increased significantly after a ‘provide with’ prime. This suggests that a linking pattern is stored with its meaning.

Speech errors: are linking patterns in fact processed like words?

A linking pattern, or functional frame, associates semantic roles with grammatical functions.

Raymond (2000) uses syntactic speech-error data to investigate two fundamental questions about the production of functional frames:

What is the input to verb-frame encoding? Lexical guidance: a word lemma supplies the frame (Levelt 1989). Conceptual guidance: frame selection is determined by semantic and discourse-pragmatic factors, before lemma access (Schriefers et al. 1998).

How are verb frames accessed? Incremental access: each denotatum is assigned a grammatical role in the order that its discourse salience dictates, e.g., the discourse topic gets the subject role (Bock & Levelt 1994). Competitive access: functional relations are accessed as sets, as in, e.g., Goldberg’s (1995) argument-structure constructions.

Conceptual or lexical guidance?

Speech-error evidence suggests conceptual guidance. Restarts are as likely to involve a frame switch as a verb switch:

It’s not sure to me—clear to me (changed verb)
Those types of research should learn us—should allow us to learn (same verb)
I think Dykstra fin-[ished] they finished the Dykstra in ‘91 or so (same verb)

Many frame errors correspond to no known predicator:

I don’t know why they watch him that (for ‘…they let him watch that’)
I can’t be done that to (for ‘They can’t do that to me’)

Error frames are pragmatically appropriate: they tend to suppress predictable participants and promote accessible or animate ones: I want to say to you about x (for ‘I want to say something to you about x’)

Error frames are also biased toward valence reduction. These data jointly suggest that frames exist independently of verbs.

Competitive or incremental access?

Speech-error evidence points to competitive access. Many error frames involve dummy elements:

It’s glad you’ve marshalled your evidence.

If grammatical-function assignment relies on discourse salience, where would dummies come from?

Many frame errors don’t involve argument encoding per se but are instead splices of two incompatible sentence patterns:

Raising spliced with extraposition: They seem they know where the problem is.
Relative clause spliced with conjunction: To what extent am I responding to errors that I’m not conscious of it?
VP ellipsis spliced with conjunction: She was severely injured as well as her assistant was too.

Functional frames therefore appear to map to constructions.

Aphasic comprehension

Gahl et al. (2001) investigated the effects of syntactic frame and verb bias on aphasic subjects’ comprehension of undergoer-subject sentences.

Their point of departure was Kegl’s (1995) prediction of processing difficulty for sentences involving NP movement, e.g., passives and unaccusatives.

They elicited plausibility judgements from 8 aphasic subjects, using 4 different types of syntactic frames (T, P, IU, IA).

The verbs used were selected for their statistical biases toward one of the 4 patterns. The stimuli included:

Dora opened the box. (match: open is biased toward T)

The box opened. (mismatch: IA frame)

Aphasic comprehension

Gahl et al. (2001) found that P-sentences were indeed harder to process than T-sentences. However, P-sentences were also harder to process than IU sentences, and IU sentences were easier to process than IA sentences. In every category, a verb-bias mismatch increased, e.g., the error rate.

These results suggest that there are canonical sentence forms, irrespective of verb bias, and that verb bias nonetheless modulates the effect of canonicity.

Therefore verbs and syntactic frames have distinct but intersecting representations, as described by construction-based models.

Frequency effects

There is ample evidence for frequency effects (reduction, conservation) in collocations, idioms and words (Bybee 1998).

But is there evidence for constructional frequencies? Jurafsky (1996) and Narayanan & Jurafsky (1998) suggest that there is, at least in comprehension.

Parses are stored with their prior probabilities: the prior probability of a parse is the product of the probabilities of each rule it contains, as estimated from a training set.

The rule introducing a reduced-relative structure has a much lower probability than the rule introducing a main-clause structure.

This asymmetry is also found in corpus studies: Tabossi et al. (1994) find that reduced relatives account for only 8 percent of -ed forms in the Brown corpus.
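The prior-probability idea can be written out directly; the rule probabilities below are made up for illustration, and only the product formula comes from the text.

    import math

    def parse_prior(rule_probabilities):
        """Prior probability of a parse: the product of the probabilities of
        the rules it contains (computed via logs for numerical safety)."""
        return math.exp(sum(math.log(p) for p in rule_probabilities))

    # Illustrative numbers only: the rule introducing a reduced relative is far
    # less probable than the rule introducing a main clause, so the
    # reduced-relative parse of 'The horse raced past the barn ...' starts low.
    main_clause_parse      = [0.92, 0.30, 0.50]
    reduced_relative_parse = [0.08, 0.30, 0.50]
    print(parse_prior(main_clause_parse))        # -> ~0.138
    print(parse_prior(reduced_relative_parse))   # -> ~0.012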

Frequency effects

The bias against reduced relatives accounts for garden-path effects in certain sentences: The horse raced past the barn fell.

Compare, however: The bird found in the room died.

There is no apparent garden path here. Why not? Narayanan & Jurafsky suggest that the lack of a garden-path effect is a product of verb bias: found is preferentially transitive, while raced is preferentially intransitive.

The transitive preference of found promotes the reduced-relative parse above the main-verb interpretation, preempting the garden-path detour.

As in Gahl’s work on aphasic comprehension, construction preferences are modulated by lexical ones.

Conclusion

In Construction Grammar, syntax and semantics come together not via lexical projection but as the two poles of the sign.

While signs prototypically have phonetic substance, they can also take the form of templates, e.g., linking patterns.

If we assume that templates, like words, denote types, we can explain how templates might alter what words designate.

When we look at processes like coercion, we are looking at real linguistic creativity: not a property of the ‘generative engine’ but a property of people.

Sign-based syntax puts humans, and human achievement, back into the picture.

It does this by focusing on what humans do best: exploiting the expressive potentials inherent in form.