
Linguistica: Unsupervised Learning of Natural Language Morphology Using MDL

John Goldsmith, Department of Linguistics, The University of Chicago

The Goal:

To develop a program that learns the structure of words in any human language on the basis of a raw text.

No human supervision, except for the naïve creation of the text.

Value

To linguistic theory: reconstruct linguistic theory in a quantitative fashion

Practical value:
– Information retrieval on databases of unrestricted languages
– develop stochastic morphologies rapidly: necessary for automatic speech recognition

A step towards syntax

The product

Currently a C++ program that functions as a Windows-based tool for corpus-based linguistics.

Available in beta version on web site.

What do we want?

If you give the program a computer file containing Tom Sawyer, it should tell you that the language has a category of words that take the suffixes ing, s, ed, and NULL; another category that takes the suffixes 's, s, and NULL.

If you give it Jules Verne, it tells you there's a category with suffixes:

a aient ait ant (chanta, chantaient, chantait, chantant)

Immediate queries:

Do you tell it what language to expect? No.

Does it have access to meaning? No. Does that matter? No. How much data does it need? ...

How much data do you need? You get reasonable results fast with 5,000 words, but results are much better with 50,000, and much better still with 500,000 words (length of corpus).

100,000 word tokens ~ 12,000 distinct words.

Game plan

Overview of MDL = Minimum Description Length, where Description Length = Length of Analysis + Length of Compressed Data

Length of data as the optimal compressed length of the corpus, given probabilities derived from the morphology

Length of morphology in information theoretic terms

MDL is dead without heuristics…(then again, heuristics without MDL lack all finesse.)

Game plan (continued)

Heuristic 1: discover basic candidate suffixes of the language using weighted mutual information

Heuristic 2: use these to find regular signatures;

Now use MDL to correct errors generated by heuristics

Game plan (end)

Why using MDL is closely related to measuring the (log of the) size of the space of possible vocabularies.

For the purposes of version 1 of Linguistica, I will restrict myself to Indo-European languages, and in general to languages in which the average number of suffixes per word is not greater than 2. (We drop this requirement in Linguistica 2.)

Minimum Description Length (Rissanen 1989)

Basic idea:

A good analysis of a set of data is one that (1) extracts the structure found in the data, and (2) does so without overfitting the data.

If you have a set of pointers to a bunch of objects, and a probability distribution over those pointers, then

You may act as if the information-length of each pointer is $-\log \text{prob(pointer)}$.

The compressed length of each piece $w$ is $-\log \text{prob}(w)$; so for our entire corpus, the total compressed length is:

$$\text{Length(corpus)} = -\sum_{i=1}^{|\text{word list}|} \text{Count}(w_i) \cdot \log \text{prob}(w_i)$$
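The corpus-length formula above can be sketched in a few lines of Python; the function name is mine, and probabilities are simply the observed relative frequencies:

```python
import math
from collections import Counter

def compressed_corpus_length(tokens):
    """Optimal compressed length of a corpus in bits:
    -sum over distinct words of Count(w) * log2 prob(w),
    where prob(w) = Count(w) / total token count."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum(c * math.log2(c / total) for c in counts.values())
```

A corpus in which every token is the same word compresses to 0 bits; a corpus split evenly between two words costs 1 bit per token.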

Overfitting the data:

The Gettysburg Address can be compressed to 2 bits if you choose an eccentric encoding scheme.

But that encoding scheme (1) will be long, and (2) will do more poorly than an encoding scheme that does not waste its probability mass on the Gettysburg Address.

Even scientific theories bow to the exigencies of MDL... in a sense. A theory is penalized if it does not

capture generalizations within the observational data (e.g., predicting future observations on the basis of the initial conditions);

It is penalized if it is more complex than it needs to be (Ockham’s Razor).

Minimum Description Length:

For a given set of data D, choose the analysis Ai to minimize the function:

Length(Compression of D using Ai) + Length(Ai)

Compressed length of the data using Ai?

The data is the corpus.

The compressed length of the corpus is just (summing over the words):

$$-\sum_{w \in W} \text{Count}(w) \cdot \log \text{predicted freq}(w)$$

Our morphology has two necessary properties:

It must assign a probability to every word of the language (so that we can speak of its ability to compress the corpus) -- we'll return to this immediately;

And it must have a well-defined length.

Morphology assigns a frequency:

If the morphology assigns no internal structure to a word (John, the, …), it assigns the observed frequency to the word.

If the morphology analyzes a word (dog+s), it assigns a frequency to that word on the basis of 3 things:

1. The frequency of the suffixal pattern in which the word is found (dog-s, dog-’s, dog-NULL);

2. The frequency of the stem (dog);

3. The frequency of the suffix (-s) within that pattern (-s, -’s, -NULL)

Terminology:

The pattern of suffixes that a stem takes is its signature:

NULL.ed.ing.s NULL.er.est.ness

Frequency of analyzed word

W is analyzed as belonging to signature $\sigma$, with stem T and suffix F:

$$\text{freq}(W) = \text{freq}(\sigma) \cdot \text{freq}(T \mid \sigma) \cdot \text{freq}(F \mid \sigma) = \frac{[\sigma]}{[W]} \cdot \frac{[T]}{[\sigma]} \cdot \frac{[F \in \sigma]}{[\sigma]}$$

Actually what we care about is the log of this:

$$\text{Compressed length}(W) = -\log\frac{[\sigma]}{[W]} - \log\frac{[T]}{[\sigma]} - \log\frac{[F \in \sigma]}{[\sigma]}$$

Where [W] is the total number of words. [x] means the count of x's in the corpus (token count).
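A minimal sketch of this product of three factors; the function and argument names are mine, not Linguistica's:

```python
import math

def word_log2_prob(sig_count, stem_count, suffix_in_sig_count, total_words):
    """log2 prob of an analyzed word W = T + F in signature sigma:
    prob(W) = [sigma]/[W] * [T]/[sigma] * [F in sigma]/[sigma].
    The compressed length of the word, in bits, is the negative of this."""
    p = ((sig_count / total_words)
         * (stem_count / sig_count)
         * (suffix_in_sig_count / sig_count))
    return math.log2(p)
```

For example, with [sigma] = 8, [T] = 4, [F in sigma] = 2 and 64 word tokens, the probability is 1/64 and the word costs 6 bits.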

So far:

The behavior we demand of our morphology is that it assign a frequency to any given word; we need this so that we can evaluate the particular morphology’s goodness as an analysis, i.e., as a compressor.

Next, let's see how to measure the length of a morphology

A morphology is a set of 3 things:
– A list of stems;
– A list of suffixes;
– A list of signatures with the associated stems.

Let’s measure the list of suffixes

A list of suffixes consists of:
– a piece of punctuation telling us how long the list is (of length log(size));
– a list of pointers to the suffixes (each pointer of size -log(freq(suffix)));
– a concatenation of the letters of the suffixes (we could compress this, too, or just count the number of letters).
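The three pieces just listed add up as follows; a sketch with names of my own choosing, representing the NULL suffix as the empty string and assuming a flat cost of log2(26) bits per letter:

```python
import math

def suffix_list_length(suffix_counts, total_words, bits_per_letter=math.log2(26)):
    """Description length (bits) of a suffix list:
    punctuation of length log2(size), a pointer of length
    -log2 freq(suffix) per suffix, and the letters themselves."""
    punctuation = math.log2(len(suffix_counts))
    pointers = sum(-math.log2(c / total_words) for c in suffix_counts.values())
    letters = sum(len(s) * bits_per_letter for s in suffix_counts)
    return punctuation + pointers + letters
```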

Example: the suffix list {NULL, ed, ing, s} is stored as:
– punctuation, of length ~log(4);
– a pointer to each suffix (the pointer to ed is of length 3, because p(ed) = 1/8);
– the letters themselves (ed is of length 2, because it is 2 letters long).

Same for the stem list:
– Indication of the size of the list (of length log(size));
– A list of pointers to each stem, where each pointer is of length -log freq(stem);
– A concatenation of the stems (sum of the lengths of the stems in letters).

Size of the signature list

What is the size of an individual signature? It consists of two subparts:

a list of pointers to stems, and a list of pointers to suffixes.

And we already know how to measure the size of a list of pointers.

An individual signature

For the words dog, dogs, cat, cats, glove, gloves, the signature NULL.s consists of a list of pointers to its stems (dog, cat, glove) and a list of pointers to its suffixes (ptr(NULL), ptr(s)). A stem entry may be a simple stem, or a complex stem: a pointer to a signature, a stem, and a suffix.

Length of a signature

$$\underbrace{\sum_{t \in \text{stems}(\sigma)} \log\frac{[W]}{[t]}}_{\text{sum of the lengths of the pointers to the stems}} \;+\; \underbrace{\sum_{f \in \sigma} \log\frac{[\sigma]}{[f \in \sigma]}}_{\text{sum of the lengths of the pointers to the suffixes}} \;+\; \text{punctuation}$$

I'm glossing over an important natural-language complexity: recursive structure, as in [[find + ing] + s], where the inner bracketing is itself a word. (This has significant effects on the distribution of probability mass over all the words.)

So the total length of the morphology is:

(i) Punctuation for the three lists:

$$\log\langle\text{Suffixes}\rangle + \log\langle\text{Stems}\rangle + \log\langle\text{Signatures}\rangle$$

(ii) Suffix list:

$$\sum_{f \in \text{Suffixes}} \left( |f| \cdot A + \log\frac{[W]}{[f]} \right)$$

(iii) Stem list:

$$\sum_{t \in \text{Stems}} \left( |t| \cdot A + \log\frac{[W]}{[t]} \right)$$

(iv) Signature component: a list of pointers to the signatures,

$$\sum_{\sigma \in \text{Signatures}} \log\frac{[W]}{[\sigma]}$$

plus, for each signature, its punctuation and its pointers to stems and suffixes:

$$\log\langle\text{stems}(\sigma)\rangle + \log\langle\text{suffixes}(\sigma)\rangle + \sum_{t \in \text{stems}(\sigma)} \log\frac{[W]}{[t]} + \sum_{f \in \sigma} \log\frac{[\sigma]}{[f \in \sigma]}$$

(⟨X⟩ indicates the number of distinct elements in X; A is the cost in bits per letter.)
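The four components above can be summed in one function. This is a sketch under simplifying assumptions (a flat per-letter cost A, the NULL suffix as the empty string); the data layout and all names are illustrative, not Linguistica's own:

```python
from math import log2

A = log2(26)  # assumed cost in bits per letter

def morphology_length(suffix_counts, stem_counts, signatures, W):
    """Total description length of a morphology, in bits.
    suffix_counts / stem_counts: token counts keyed by string;
    signatures: list of (stems, suffix-in-signature counts) pairs;
    W: total number of word tokens."""
    # (i) punctuation for the three lists
    length = log2(len(suffix_counts)) + log2(len(stem_counts)) + log2(len(signatures))
    # (ii) suffix list: the letters of each suffix, plus a pointer
    length += sum(len(f) * A + log2(W / c) for f, c in suffix_counts.items())
    # (iii) stem list, same shape
    length += sum(len(t) * A + log2(W / c) for t, c in stem_counts.items())
    # (iv) signature component
    for stems, sig_suffix_counts in signatures:
        sig_count = sum(sig_suffix_counts.values())
        length += log2(W / sig_count)  # pointer to the signature
        length += log2(len(stems)) + log2(len(sig_suffix_counts))  # its punctuation
        length += sum(log2(W / stem_counts[t]) for t in stems)  # stem pointers
        length += sum(log2(sig_count / c) for c in sig_suffix_counts.values())  # suffix pointers
    return length
```

For the six words dog, dogs, cat, cats, glove, gloves, the whole morphology is one signature NULL.s over three stems.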

MDL needs heuristics

MDL does only one thing: it tells you which of two analyses is better.

It doesn't tell you how to find those analyses.

Overall strategy

Use initial heuristic to establish sets of signatures and sets of stems.

Use heuristics to propose various corrections.

Use MDL to decide on whether proposed corrections are to be accepted or refused.
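The overall strategy is a greedy accept-or-refuse loop; this sketch uses placeholder names of my own (a morphology object, a list of proposal functions, a description-length function), not Linguistica's API:

```python
def apply_repairs(morphology, proposals, description_length):
    """Try each heuristically proposed correction in turn; MDL decides:
    keep a correction only if it shortens the total description length."""
    best = description_length(morphology)
    for propose in proposals:
        candidate = propose(morphology)
        cand_len = description_length(candidate)
        if cand_len < best:  # accepted: the description got shorter
            morphology, best = candidate, cand_len
    return morphology
```

Any representation works as long as the two callbacks agree on it; below, a number stands in for a morphology and abs() for its description length.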

Initial Heuristic

1. Take the top 100 n-grams, ranked by weighted mutual information, as candidate morphemes of the language; e.g., for ing:

$$\frac{[ing]}{[3\text{-grams}]} \cdot \log\frac{[ing]}{[i][n][g]}$$
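A sketch of this ranking over word-final 2- and 3-grams, reading the bracketed counts as relative frequencies; the exact normalization in Linguistica may differ, and the function name is mine:

```python
import math
from collections import Counter

def candidate_suffixes(words, top_k=100):
    """Score word-final n-grams by weighted mutual information,
    freq(g) * log( freq(g) / (freq(x1) * ... * freq(xn)) ),
    and return the top_k as candidate suffixes."""
    letters = Counter(c for w in words for c in w)
    n_letters = sum(letters.values())
    scores = {}
    for n in (2, 3):
        finals = Counter(w[-n:] for w in words if len(w) > n)
        n_finals = sum(finals.values())
        for g, c in finals.items():
            p = c / n_finals  # frequency of this word-final n-gram
            indep = math.prod(letters[x] / n_letters for x in g)
            scores[g] = p * math.log2(p / indep)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```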

If a word ends in a candidate morpheme, split it there to form a candidate stem:

sanity: sanit + y or san + ity

How to choose in ambiguous cases?

This turns out to be a lot harder than you’d think, given what I’ve said so far.

Short answer is a heuristic: maximize the objective function

$$\text{length(stem)} \cdot \log[\text{stem}] + \text{length(suffix)} \cdot \log[\text{suffix}]$$

There's no good short explanation for this, except this: the frequency of a single letter is a very bad first approximation of its likelihood to be a morpheme.
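A sketch of choosing among ambiguous splits with this objective function; best_split and its argument names are mine, and unseen strings default to a count of 1:

```python
import math

def best_split(word, candidate_suffixes, counts):
    """Among candidate-suffix splits of a word, pick the one maximizing
    length(stem)*log[stem] + length(suffix)*log[suffix],
    where counts[x] is the corpus count of the string x."""
    best, best_score = None, float("-inf")
    for f in candidate_suffixes:
        if word.endswith(f) and len(word) > len(f):
            t = word[:-len(f)]
            score = (len(t) * math.log2(counts.get(t, 1))
                     + len(f) * math.log2(counts.get(f, 1)))
            if score > best_score:
                best, best_score = (t, f), score
    return best
```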

For each stem, find the suffixes it appears with. This forms its signature: NULL.ed.ing.s, for example.

Now eliminate all signatures that appear only once.

This gives us an excellent first guess for the morphology.
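The two steps just described (group each stem's suffixes into a signature, discard signatures seen on only one stem) can be sketched as follows; the NULL suffix is the literal string "NULL" here, and the names are mine:

```python
from collections import defaultdict

def build_signatures(splits):
    """splits: iterable of (stem, suffix) pairs. Each stem's signature
    is the sorted set of suffixes it takes; signatures borne by only
    one stem are eliminated."""
    suffixes_of = defaultdict(set)
    for stem, suffix in splits:
        suffixes_of[stem].add(suffix)
    signatures = defaultdict(list)
    for stem, sufs in suffixes_of.items():
        signatures[".".join(sorted(sufs))].append(stem)
    return {sig: stems for sig, stems in signatures.items() if len(stems) > 1}
```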

Stems with their signatures:

abrupt NULL.ly.ness
abs ence.ent
absent -minded.NULL.ia.ly
absent-minded NULL.ly
absentee NULL.ism

(French:)

absolu NULL.e.ment
absorb ait.ant.e.er.é.ée
abus ait.er
abîm e.es.ée

Now build up the signature collection... Top 10, 100K words:

1. .NULL.ed.ing.    65   1214
2. .NULL.ed.ing.s.  27   1464
3. .NULL.s.         290  8184
4. .'s.NULL.s.      27   2645
5. .NULL.ed.s.      26   541
6. .NULL.ly.        128  2124
7. .NULL.ed.        87   767
8. .'s.NULL.        75   3655
9. .NULL.d.s.       14   510
10. .NULL.ing.      62   983

Verbose signature: .NULL.ed.ing.  58

heap check revolt plunder look obtain escort proclaim arrest gain destroy stay suspect kill consent knock track succeed answer frighten glitter …

Find strictly regular signatures:

A signature is strictly regular if it contains more than one suffix, and is found on more than one stem.

A suffix found in a strictly regular signature is a regular suffix.

Keep only signatures composed of regular suffixes (=regular signatures).
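The filtering just defined, as a sketch over signatures represented as tuples of suffixes mapped to their stem lists (my representation, not Linguistica's):

```python
def regular_signatures(signatures):
    """Strictly regular = more than one suffix AND more than one stem;
    a regular suffix occurs in some strictly regular signature; keep
    only signatures all of whose suffixes are regular."""
    regular_sufs = {f for sig, stems in signatures.items()
                    if len(sig) > 1 and len(stems) > 1
                    for f in sig}
    return {sig: stems for sig, stems in signatures.items()
            if all(f in regular_sufs for f in sig)}
```

Note that a signature with a single stem survives if all of its suffixes were independently certified as regular.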

Examples of non-regular signatures

Only one stem for this signature: ch.e.erial.erials.rimony.rons.uring el.ezed.nce.reupon.ther

Prefixes

Just the same, in mirror-image style. Perform either on stems or on words.

English prefixes:

.NULL.re.      8
.NULL.dis.     7
.NULL.de.      4
.NULL.un.      4
.NULL.con.     3
.NULL.en.      3
.NULL.al.      3
.NULL.t.       3
.NULL.con.ex.  2

French prefix signatures:

NULL.d'.l'.
NULL.d'.NULL.l'.
NULL.dé.NULL.re.
d'.l'.NULL.qu'.
NULL.par.NULL.en.
NULL.in.NULL.di.
NULL.com.NULL.s'.
NULL.l'en.NULL.n'.
NULL.cou.NULL.pro.
NULL.ent.NULL.ré.
NULL.d'.s'.

Now use MDL to fix problems:Repair heuristics

Problems that arise:

1. “ments” problem: a suffix may really be two suffixes.

2. ted.ting.ts: a letter which occurs stem-finally with high frequency may get wrongly parsed (e.g., shou-ted, shou-ting, shou-ts).

3. Spurious signatures

4. Misplaced word-breaks

Repair heuristics: using MDL

We could compute the entire MDL in one state of the morphology; make a change; compute the whole MDL in the proposed (modified) state; and compare the two lengths:

Original morphology + compressed data  <>  Revised morphology + compressed data

But it's better to have a more thoughtful approach.

Let's define

$$\Delta x \equiv \log\frac{x_{\text{state 1}}}{x_{\text{state 2}}}$$

Then the change of the size of the punctuation for the 3 lists is:

$$\Delta\langle\text{Suffixes}\rangle + \Delta\langle\text{Stems}\rangle + \Delta\langle\text{Signatures}\rangle$$

Size of the suffix component, remember:

$$\sum_{f \in \text{Suffixes}} \left( |f| \cdot A + \log\frac{[W]}{[f]} \right)$$

Change in its size when we consider a modification to the morphology:
1. Global effects of the change in the number of suffixes;
2. Effects of the change in size of suffixes present in both states;
3. Suffixes present only in state 1;
4. Suffixes present only in state 2.

Suffix component change:

$$\Delta(\text{suffix component}) = \langle\text{Suffixes}_{1,2}\rangle \cdot \Delta[W] \;-\; \sum_{f \in \text{Suffixes}_{1,2}} \Delta[f] \;+\; \sum_{f \in \text{Suffixes}_{1,\tilde 2}} \left( |f| \cdot A + \log\frac{[W]_1}{[f]_1} \right) \;-\; \sum_{f \in \text{Suffixes}_{\tilde 1,2}} \left( |f| \cdot A + \log\frac{[W]_2}{[f]_2} \right)$$

The first term is the global effect of the change on all suffixes; the second, the suffixes whose counts change; the third, the contribution of suffixes that appear only in State 1; the fourth, the contribution of suffixes that appear only in State 2. (Suffixes_{1,2} are the suffixes present in both states; Suffixes_{1,~2} only in state 1; Suffixes_{~1,2} only in state 2.)

Why using MDL is closely related to measuring the complexity of the space of possible vocabularies

Entropy, MDL, and morphology

Consider the space of all words of length L, built from an alphabet of size b.

How many ways are there to build a vocabulary of size N? Call that U(b,L,N).

Clearly,

$$U(b,L,N) = \binom{b^L}{N} = \frac{b^L!}{(b^L - N)! \, N!}$$
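This binomial and the slide's later approximation can both be computed directly; a sketch using lgamma to avoid enormous factorials (function names are mine):

```python
import math

def log2_U(b, L, N):
    """log2 of U(b,L,N) = C(b**L, N): the number of ways to choose a
    vocabulary of N distinct words of length L over a b-letter alphabet."""
    m = b ** L
    return (math.lgamma(m + 1) - math.lgamma(m - N + 1) - math.lgamma(N + 1)) / math.log(2)

def log2_U_approx(b, L, N):
    """The Stirling-style approximation derived below:
    N*L*log2(b) + N*log2(1/N)."""
    return N * L * math.log2(b) - N * math.log2(N)
```

The approximation drops lower-order Stirling terms, so it is loose but tracks the exact value for large vocabularies.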

Compare that operation (choosing a set of N words of length L, alphabet size b) with the operation of choosing a set of T stems (of length t) and a set of F suffixes (of length f), where t + f = L.

If we take the complexity of each task to be measured by the log of its size, then we're asking for the sizes of:

$$\log U(b,L,N) = \log\binom{b^L}{N} \quad \text{vs.} \quad \log U(b,t,T)\,U(b,f,F) = \log\binom{b^t}{T}\binom{b^f}{F}$$

$$\log U(b,L,N) = \log\frac{b^L!}{(b^L - N)!} - \log N!$$

The first term is easy to approximate, however: remember that if $a \gg b$, then

$$\frac{a!}{(a-b)!} = a(a-1)\cdots(a-b+1) \approx a^b,$$

so $\log\frac{b^L!}{(b^L - N)!} \approx N \log b^L = NL \log b$; and by Stirling, $\log N! \approx N \log N$. Hence:

$$\log U(b,L,N) \approx NL \log b + N \log\frac{1}{N}$$

$NL \log b$ is the number of bits needed to list all the words (the analysis); $N \log\frac{1}{N}$ is the length of all the pointers to all the words (the compressed corpus).

Thus the log of the number of vocabularies = the description length of that vocabulary, in the terms we've been using.

That means that the difference in the sizes of the spaces of possible vocabularies is equal to the difference in the description length in the two cases; hence,

Difference of complexity of the "simplex word" analysis and complexity of the analyzed-word analysis =

$$\log U(b,L,N) - \log U(b,t,T) - \log U(b,f,F) = \underbrace{\log b\,(NL - tT - fF)}_{\text{difference in size of morphologies}} \;+\; \underbrace{N\log\frac{1}{N} - T\log\frac{1}{T} - F\log\frac{1}{F}}_{\text{difference in size of compressed data}}$$

But we’ve (over)simplified in this case by ignoring the frequencies inherent in real corpora. What’s of great interest in real life is the fact that some suffixes are used often, others rarely, and similarly for stems.

We know something about the distribution of words, but nothing about distribution of stems and especially suffixes.

But suppose we wanted to think about the statistics of vocabulary choice in which words could be selected more than once….

We want to select N words of length L, and the same word can be selected. How many ways of doing this are there?

These are like bosons: you can have any number of occurrences of a word, and 2 sets with the same numbers of them are indistinguishable. How many such vocabularies are there, then?

$$U(b,L,N) = \frac{b^{NL}}{\prod_{i=1}^{N} (i!)^{Z(i)}}$$

where Z(i) is the number of words of frequency i.

('Z' stands for "Zipf".)

We don't know much about the frequencies of suffixes, but Zipf's law says that

$$Z(i) = \frac{K}{i}$$

hence, for a morpheme set that obeyed the Zipf distribution:

$$\log U(b,L,N) = NL\log b - \sum_{i=1}^{N} Z(i)\log(i!) \approx NL\log b - \sum_{i=1}^{N} Z(i)\, i \log i = NL\log b - K\sum_{i=1}^{N} \log i$$

$$K \approx 0.1 \cdot \text{CorpusSize}$$

Since $\int \ln x \, dx = x \ln x - x + C$,

$$NL\log b - K\sum_{i=1}^{N}\log i \approx NL\log b - K \log e\,(N \ln N - N)$$
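The integral approximation of that final sum is easy to check numerically; a sketch with function names of my own:

```python
import math

def log2_factorial_exact(N):
    """Sum of log2 i for i = 1..N, i.e. log2 N! computed directly."""
    return sum(math.log2(i) for i in range(1, N + 1))

def log2_factorial_integral(N):
    """The integral approximation above, from the antiderivative of ln x:
    log2 N! ~ log2(e) * (N ln N - N)."""
    return math.log2(math.e) * (N * math.log(N) - N)
```

At N = 1000 the two already agree to better than a tenth of a percent, which is ample for comparing description lengths.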

End