Natural Language Processing Lecture 6—9/17/2013 Jim Martin

Page 1

Natural Language Processing

Lecture 6—9/17/2013

Jim Martin

Page 2

Today

More language modeling with N-grams
  Basic counting
  Probabilistic model
Independence assumptions
Simple smoothing methods

Page 3

N-Gram Models

We can use knowledge of the counts of N-grams to assess the conditional probability of candidate words as the next word in a sequence.

Or, we can use them to assess the probability of an entire sequence of words. As we’ll see, these are pretty much the same thing.

Page 4

Language Modeling

Back to word prediction. We can model the word prediction task as the ability to assess the conditional probability of a word given the previous words in the sequence: P(wn | w1, w2, …, wn-1).

We’ll call a statistical model that can assess this a Language Model.

Page 5

Language Modeling

How might we go about calculating such a conditional probability? One way is to use the definition of conditional probabilities and look for counts. So to get

P(the | its water is so transparent that)

by definition that’s

P(its water is so transparent that the) / P(its water is so transparent that)

Page 6

Very Easy Estimate

How to estimate P(the | its water is so transparent that)?

P(the | its water is so transparent that) = Count(its water is so transparent that the) / Count(its water is so transparent that)
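As a rough sketch of this direct counting estimate, here is what it might look like over a hypothetical pre-tokenized corpus (the function name and tokenization are illustrative assumptions, not from the slides):

```python
def raw_conditional_estimate(corpus_tokens, history, word):
    """Estimate P(word | history) by counting raw occurrences.

    corpus_tokens: a list of tokens (a hypothetical pre-tokenized corpus).
    history: a list of tokens, e.g. "its water is so transparent that".split()
    """
    n = len(history)
    # Count(history followed by word)
    numer = sum(1 for i in range(len(corpus_tokens) - n)
                if corpus_tokens[i:i + n] == history
                and corpus_tokens[i + n] == word)
    # Count(history)
    denom = sum(1 for i in range(len(corpus_tokens) - n + 1)
                if corpus_tokens[i:i + n] == history)
    # For most histories this is 0/denom, or even 0/0, as the next slide notes
    return numer / denom if denom else 0.0
```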

Page 7

Language Modeling

Unfortunately, for most sequences and for most text collections we won’t get good estimates from this method. What we’re likely to get is 0. Or worse, 0/0.

Clearly, we’ll have to be a little more clever. Let’s first use the chain rule of probability, and then apply a particularly useful independence assumption.

Page 8

The Chain Rule

Recall the definition of conditional probabilities: P(A|B) = P(A,B) / P(B)

Rewriting: P(A,B) = P(B) P(A|B)

For sequences: P(A,B,C,D) = P(A) P(B|A) P(C|A,B) P(D|A,B,C)

In general: P(x1,x2,x3,…,xn) = P(x1) P(x2|x1) P(x3|x1,x2) … P(xn|x1,…,xn-1)

Page 9

The Chain Rule

P(its water was so transparent) = P(its) * P(water|its) * P(was|its water) * P(so|its water was) * P(transparent|its water was so)

Page 10

Unfortunately

That doesn’t really help, since it relies on having N-gram counts for a sequence that’s only one word shorter than what we started with. That's not likely to help with getting counts.

In general, we will never be able to get enough data to compute the statistics for those longer prefixes. It’s the same problem we had for the strings themselves.

Page 11

Independence Assumption

Make a simplifying assumption:

P(lizard | the, other, day, I, was, walking, along, and, saw, a) = P(lizard | a)

Or maybe:

P(lizard | the, other, day, I, was, walking, along, and, saw, a) = P(lizard | saw, a)

That is, the probability in question is to some degree independent of its earlier history.

Page 12

Independence Assumption

This particular kind of independence assumption is called a Markov assumption after the Russian mathematician Andrei Markov.

Page 13

Markov Assumption

Replace each component in the product with a shorter approximation (assuming a prefix of N-1 words):

P(wn | w1, …, wn-1) ≈ P(wn | wn-N+1, …, wn-1)

Bigram (N=2) version:

P(wn | w1, …, wn-1) ≈ P(wn | wn-1)

Page 14

Bigram Example

Full chain rule:
P(its water was so transparent) = P(its) * P(water|its) * P(was|its water) * P(so|its water was) * P(transparent|its water was so)

Bigram approximation:
P(its water was so transparent) ≈ P(its) * P(water|its) * P(was|water) * P(so|was) * P(transparent|so)


Page 15

Estimating Bigram Probabilities

The Maximum Likelihood Estimate (MLE):

P(wn | wn-1) = C(wn-1 wn) / C(wn-1)

Page 16

An Example

<s> I am Sam </s>
<s> Sam I am </s>
<s> I do not like green eggs and ham </s>
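A minimal sketch of the counting behind the MLE bigram estimates, using the three sentences above (plain whitespace tokenization; no smoothing):

```python
from collections import Counter

corpus = [
    "<s> I am Sam </s>",
    "<s> Sam I am </s>",
    "<s> I do not like green eggs and ham </s>",
]

unigram_counts = Counter()
bigram_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    unigram_counts.update(tokens)
    bigram_counts.update(zip(tokens, tokens[1:]))

def p_mle(word, prev):
    """MLE bigram estimate: P(word | prev) = C(prev word) / C(prev)."""
    return bigram_counts[(prev, word)] / unigram_counts[prev]

print(p_mle("I", "<s>"))   # 2/3
print(p_mle("am", "I"))    # 2/3
print(p_mle("Sam", "am"))  # 1/2
```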

Page 17

Maximum Likelihood Estimates

The maximum likelihood estimate of some parameter of a model M from a training set T is the estimate that maximizes the likelihood of the training set T given the model M.

Suppose the word “Chinese” occurs 400 times in a corpus of a million words (the Brown corpus). What is the probability that a random word from some other text from the same distribution will be “Chinese”?

The MLE estimate is 400/1,000,000 = .0004. This may be a bad estimate for some other corpus, but it is the estimate that makes it most likely that “Chinese” will occur 400 times in a million-word corpus.

Page 18

Berkeley Restaurant Project Sentences

can you tell me about any good cantonese restaurants close by

mid priced thai food is what i’m looking for
tell me about chez panisse
can you give me a listing of the kinds of food that are available
i’m looking for a good place to eat breakfast
when is caffe venezia open during the day

Page 19

Bigram Counts

Out of 9222 sentences. E.g., “I want” occurred 827 times.

Page 20

Bigram Probabilities

Divide bigram counts by prefix unigram counts to get probabilities.

Page 21

Bigram Estimates of Sentence Probabilities

P(<s> I want english food </s>)
= P(i|<s>) * P(want|I) * P(english|want) * P(food|english) * P(</s>|food)
= .000031
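A sketch of that computation in code. Only P(i|<s>) and P(english|want) are quoted on these slides; the other three values are assumptions taken from the textbook’s BeRP bigram tables, so treat them as illustrative:

```python
import math

bigram_p = {
    ("<s>", "i"): 0.25,           # quoted on the next slide
    ("i", "want"): 0.33,          # assumed from the textbook's BeRP table
    ("want", "english"): 0.0011,  # quoted on the next slide
    ("english", "food"): 0.5,     # assumed
    ("food", "</s>"): 0.68,       # assumed
}

tokens = ["<s>", "i", "want", "english", "food", "</s>"]
prob = math.prod(bigram_p[pair] for pair in zip(tokens, tokens[1:]))
print(f"{prob:.6f}")  # ~ 0.000031
```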

Page 22

Kinds of Knowledge

P(english|want) = .0011
P(chinese|want) = .0065
P(to|want) = .66
P(eat|to) = .28
P(food|to) = 0
P(want|spend) = 0
P(i|<s>) = .25

As crude as they are, N-gram probabilities capture a range of interesting facts about language: world knowledge, syntax, and discourse.

Page 23

Shannon’s Game

Assigning probabilities to sentences is all well and good, but it’s not terribly entertaining. What if we turn these models around and use them to generate random sentences that are like the sentences from which the model was derived?

Page 24

Shannon’s Method

Sample a random bigram (<s>, w) according to its probability.
Now sample a random bigram (w, x) according to its probability, where the prefix w matches the suffix of the first choice.
And so on, until we randomly choose a (y, </s>). Then stop.
Then string the words together:

<s> I
I want
want to
to eat
eat Chinese
Chinese food
food </s>

If the model is good, then the generated strings should be pretty reasonable.
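A minimal sketch of Shannon’s method, assuming the bigram model is stored as a dictionary mapping each word to a distribution over its possible successors (names are hypothetical):

```python
import random

def shannon_sentence(successor_probs, max_len=20):
    """Generate a sentence by chaining sampled bigrams from <s> to </s>.

    successor_probs: dict like {"<s>": {"i": 0.25, ...}, "i": {...}, ...}
    """
    words = ["<s>"]
    while words[-1] != "</s>" and len(words) < max_len:
        nxt = successor_probs[words[-1]]
        choices, weights = zip(*nxt.items())
        # Sample the next word in proportion to its bigram probability
        words.append(random.choices(choices, weights=weights, k=1)[0])
    return " ".join(words)
```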

Page 25

Shakespeare

Page 26

Shakespeare as a Corpus

N = 884,647 tokens, V = 29,066 word types.

Shakespeare produced 300,000 bigram types out of V² = 844 million possible bigrams. So 99.96% of the possible bigrams were never seen (they have zero entries in the table). This is the biggest problem in language modeling; we’ll come back to it.

4-grams are worse: what’s coming out looks like Shakespeare because it is Shakespeare.

Page 27

Break

News:

Get the assignment in by Friday. I’ll post a test set later this week.

The first quiz (Chapters 1 to 6) is pushed back to October 3. The quiz is based on the readings and what we do in class. I will give specific readings for each chapter, and I’ll post some past quizzes on this material.

Page 28

Model Evaluation

How do we know if our models are any good? And in particular, how do we know if one model is better than another?

Shannon’s game gives us an intuition. The generated texts from the higher-order models sure look better; that is, they sound more like the text the model was obtained from. And the generated texts from the various Shakespeare models look different; that is, they look like they’re based on different underlying models.

But what does that mean? Can we make that notion operational?

Page 29

Evaluation

Standard method:

Train the parameters of our model on a training set.

Look at the model’s performance on some new data. This is exactly what happens in the real world; we want to know how our model performs on data we haven’t seen.

So use a test set: a dataset that is different from our training set, but is drawn from the same source.

Then we need an evaluation metric to tell us how well our model is doing on the test set. One such metric is perplexity.

Page 30

Perplexity

The intuition behind perplexity as a measure is the notion of surprise: how surprised is the language model when it sees the test set? Where surprise is a measure of “Gee, I didn’t see that coming...”

The more surprised the model is, the lower the probability it assigned to the test set. The higher the probability, the less surprised it was.

Page 31

Perplexity

Perplexity is the inverse probability of a test set (assigned by the language model), normalized by the number of words:

PP(W) = P(w1 w2 … wN)^(-1/N)

By the chain rule:

PP(W) = ( product over i of 1 / P(wi | w1 … wi-1) )^(1/N)

For bigrams:

PP(W) = ( product over i of 1 / P(wi | wi-1) )^(1/N)

Minimizing perplexity is the same as maximizing probability. The best language model is the one that best predicts an unseen test set.
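A sketch of the bigram perplexity computation, assuming bigram_prob(w, prev) returns a smoothed P(w | prev) (any zero probability would make perplexity infinite). The work is done in log space, which anticipates the "Practical Issues" slide below:

```python
import math

def perplexity(test_tokens, bigram_prob):
    """PP(W) = P(w1 ... wN)^(-1/N), computed for a bigram model."""
    log_prob = 0.0
    n = 0
    for prev, w in zip(test_tokens, test_tokens[1:]):
        log_prob += math.log(bigram_prob(w, prev))
        n += 1
    # exp(-(1/N) * sum of log probabilities)
    return math.exp(-log_prob / n)
```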

Page 32

Lower perplexity means a better model

Training: 38 million words; test: 1.5 million words (WSJ).

Page 33

Practical Issues

We do everything in log space: it avoids underflow, and adding is also faster than multiplying. log(p1 p2 p3 p4) = log p1 + log p2 + log p3 + log p4.
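A tiny illustration of the log-space trick, reusing the bigram probabilities from the earlier sentence-probability example:

```python
import math

probs = [0.25, 0.33, 0.0011, 0.5, 0.68]

# Multiplying many small probabilities risks numeric underflow...
direct = math.prod(probs)

# ...so we add logs instead, and exponentiate only if the raw probability is needed
log_total = sum(math.log(p) for p in probs)
assert math.isclose(direct, math.exp(log_total))
```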

Page 34

Unknown Words

In the training data, the vocabulary is what’s there. But once we start looking at test data, we will run into words that we haven’t seen in training.

With an open-vocabulary task:

Create an unknown word token <UNK>.

Create a fixed lexicon L of size V, from a dictionary or from a subset of terms from the training set.

At the text normalization phase, any training word not in L is changed to <UNK>; we then count it like a normal word.

At test time, use the <UNK> counts for any word not seen in training.
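One simple way this can be implemented. The choice of lexicon here (keep words seen at least min_count times) is an assumption for illustration, since the slide allows either a dictionary or a subset of the training vocabulary:

```python
from collections import Counter

def build_lexicon_and_normalize(training_sentences, min_count=2):
    """Pick a fixed lexicon L and replace out-of-lexicon words with <UNK>."""
    counts = Counter(w for sent in training_sentences for w in sent.split())
    lexicon = {w for w, c in counts.items() if c >= min_count}

    def normalize(sentence):
        # Any word not in L becomes <UNK>, and is then counted like a normal word
        return [w if w in lexicon else "<UNK>" for w in sentence.split()]

    return lexicon, [normalize(s) for s in training_sentences]
```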

Page 35

Smoothing: Dealing with Zero Counts

Back to Shakespeare. Recall that Shakespeare produced 300,000 bigram types out of V² = 844 million possible bigrams, so 99.96% of the possible bigrams were never seen (they have zero entries in the table).

Does that mean that any sentence that contains one of those bigrams should have a probability of 0?

For generation (the Shannon game) it means we’ll never emit those bigrams. But for analysis it’s problematic, because if we run across a new bigram in the future we have no choice but to assign it a probability of zero.

Page 36

Zero Counts

Some of those zeros are really zeros: things that really aren’t ever going to happen. On the other hand, some of them are just rare events: if the training corpus had been a little bigger, they would have had a count. What would that count be, in all likelihood?

Zipf’s Law (the long tail phenomenon):

A small number of events occur with high frequency.
A large number of events occur with low frequency.
You can quickly collect statistics on the high frequency events.
You might have to wait an arbitrarily long time to get valid statistics on low frequency events.

Result: our estimates are sparse! We have no counts at all for the vast number of things we want to estimate!

Answer: estimate the likelihood of unseen (zero count) N-grams!

Page 37

Laplace Smoothing

Also called Add-One smoothing: just add one to all the counts! Very simple.

MLE estimate (unigram): P(wi) = ci / N

Laplace estimate: P(wi) = (ci + 1) / (N + V)

For bigrams: P(wn | wn-1) = (C(wn-1 wn) + 1) / (C(wn-1) + V)
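A minimal sketch of the add-one estimate for bigrams, written against the same kind of count tables as the MLE sketch earlier (V is the vocabulary size):

```python
def laplace_bigram_prob(word, prev, bigram_counts, unigram_counts, vocab_size):
    """Add-one (Laplace) estimate: P(word | prev) = (C(prev word) + 1) / (C(prev) + V)."""
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + vocab_size)
```

Every unseen bigram now gets a small non-zero probability, 1 / (C(prev) + V), instead of zero.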

Page 38

Bigram Counts

Page 39

Laplace-Smoothed Bigram Counts

Page 40

Laplace-Smoothed Bigram Probabilities

Page 41

Reconstituted Counts

The smoothed probabilities can be turned back into effective counts: c*(wn-1 wn) = (C(wn-1 wn) + 1) × C(wn-1) / (C(wn-1) + V)

Page 42

Reconstituted Counts (2)


Page 43

Big Change to the Counts!

C(want to) went from 608 to 238! P(to|want) went from .66 to .26! With the discount d = c*/c, d for “chinese food” is .10, a 10x reduction!

So in general, Laplace is a blunt instrument. We could use a more fine-grained method (add-k). Because of this, Laplace smoothing is not often used for language models, as we have much better methods.

Despite its flaws, Laplace (add-1) is still used to smooth other probabilistic models in NLP and IR, especially for pilot studies, in document classification, and in domains where the number of zeros isn’t so huge.
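A sketch of where the 608 to 238 figure comes from via the Laplace reconstituted counts. C(want to) = 608 is quoted above, while C(want) = 927 and V = 1446 are assumptions taken from the textbook’s BeRP tables, not from this slide:

```python
def laplace_reconstituted_count(c_bigram, c_prev, vocab_size):
    """Effective count after add-one smoothing: c* = (c + 1) * C(prev) / (C(prev) + V)."""
    return (c_bigram + 1) * c_prev / (c_prev + vocab_size)

c_star = laplace_reconstituted_count(608, 927, 1446)   # C(want to), C(want), V
print(round(c_star))           # ~ 238
print(round(c_star / 608, 2))  # discount d = c*/c ~ 0.39
```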

Page 44

Foreshadowing

Just a hint of some stuff we’ll get to later

With Add-1 we counted, noted all the zeros, then added 1 to everything to get our new counts.

What if we did the Add-1 before we did any counting? What would that model look like?

What kind of distribution is it?

What happens to that model as we start doing our counting?


Page 45

Better Smoothing

An intuition used by many smoothing algorithms (Good-Turing, Kneser-Ney, Witten-Bell) is to use the count of things we’ve seen once to help estimate the count of things we’ve never seen.

Page 46

Types, Tokens and Squirrels

Much of what’s coming up was first studied by field biologists, who are often faced with two related problems: determining how many species occupy a particular area (types), and determining how many individuals of a given species are living in a given area (tokens).

Page 47

Good-Turing: Josh Goodman’s Intuition

Imagine you are fishing. You know there are 8 species: carp, perch, whitefish, trout, salmon, eel, catfish, bass. (Not exactly sure where such a situation would arise...)

You have caught so far: 10 carp, 3 perch, 2 whitefish, 1 trout, 1 salmon, 1 eel = 18 fish.

How likely is it that the next fish to be caught is an eel?

How likely is it that the next fish caught will be a member of a previously uncaptured species?

Now how likely is it that the next fish caught will be an eel?

Slide adapted from Josh Goodman

Page 48

Good-Turing

Notation: Nx is the frequency of frequency x. So N10 = 1: the number of fish species seen 10 times is 1 (carp). And N1 = 3: the number of fish species seen once is 3 (trout, salmon, eel).

To estimate the total probability of unseen species, use the number of species seen once: p0 = N1/N = 3/18.

All other estimates (counts) are adjusted downward to account for the unseen probability mass, using c* = (c+1) Nc+1 / Nc:

Eel: c*(1) = (1+1) × N2/N1 = 2 × 1/3 = 2/3

Slide from Josh Goodman
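A minimal sketch of the Good-Turing arithmetic for the fish example, using only the raw c* formula (the complications on the later slides, like smoothing the Nc values, are ignored):

```python
from collections import Counter

catch = ["carp"] * 10 + ["perch"] * 3 + ["whitefish"] * 2 + ["trout", "salmon", "eel"]

species_counts = Counter(catch)                   # c for each seen species
freq_of_freq = Counter(species_counts.values())   # Nc: how many species have count c
N = len(catch)                                    # 18 fish in total

# Probability the next fish belongs to a previously unseen species: N1/N
p_unseen = freq_of_freq[1] / N                    # 3/18

def gt_count(c):
    """Good-Turing adjusted count c* = (c + 1) * N(c+1) / N(c)."""
    return (c + 1) * freq_of_freq[c + 1] / freq_of_freq[c]

p_eel = gt_count(1) / N                           # (2 * 1/3) / 18 = 1/27
print(p_unseen, gt_count(1), p_eel)
```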

Page 49

GT Fish Example

Page 50

GT Smoothed Bigram Probabilities

Page 51

GT Complications

In practice, we assume large counts (c > k for some k) are reliable.

Also, we assume singleton counts (c = 1) are unreliable, so we treat N-grams with a count of 1 as if they had a count of 0.

Also, each Nk needs to be non-zero, so we need to smooth (interpolate) the Nk counts before computing c* from them.

Page 52

Next Time

On to Chapter 5:
Parts of speech
Part-of-speech tagging and HMMs