Text Categorization
CSC 575
Intelligent Information Retrieval
Classification Task
- Given:
  - A description of an instance, x ∈ X, where X is the instance language or instance space. Typically, x is a row in a table, with the instance/feature space described in terms of features or attributes.
  - A fixed set of class or category labels: C = {c1, c2, …, cn}
- The classification task is to determine:
  - The class/category of x: c(x) ∈ C, where c(x) is a function whose domain is X and whose range is C.
Learning for Classification
- A training example is an instance x ∈ X paired with its correct class label c(x), written <x, c(x)>, for an unknown classification function c.
- Given a set of training examples, D:
  - Find a hypothesized classification function, h(x), such that h(x) = c(x) for all training instances (i.e., for all <x, c(x)> in D). This is called consistency.
Example of Classification Learning
- Instance language: <size, color, shape>
  - size ∈ {small, medium, large}
  - color ∈ {red, blue, green}
  - shape ∈ {square, circle, triangle}
- C = {positive, negative}
- D:

  Example  Size   Color  Shape     Category
  1        small  red    circle    positive
  2        large  red    circle    positive
  3        small  red    triangle  negative
  4        large  blue   circle    negative

- Hypotheses? circle → positive? red → positive?
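To make the consistency check concrete, here is a minimal Python sketch (all names are illustrative, not from the slides) that encodes D and tests the two candidate hypotheses; neither is consistent with all four examples:

    # Training set D from the slide: (size, color, shape) -> category
    D = [
        (("small", "red",  "circle"),   "positive"),
        (("large", "red",  "circle"),   "positive"),
        (("small", "red",  "triangle"), "negative"),
        (("large", "blue", "circle"),   "negative"),
    ]

    def consistent(h, data):
        # h is consistent iff it reproduces c(x) on every training example
        return all(h(x) == label for x, label in data)

    # "circle -> positive" misclassifies example 4 (large blue circle is negative)
    h_circle = lambda x: "positive" if x[2] == "circle" else "negative"
    # "red -> positive" misclassifies example 3 (small red triangle is negative)
    h_red = lambda x: "positive" if x[1] == "red" else "negative"

    print(consistent(h_circle, D), consistent(h_red, D))  # False False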
General Learning Issues (All Predictive Modeling Tasks)
- Many hypotheses can be consistent with the training data.
- Bias: any criterion other than consistency with the training data that is used to select a hypothesis.
- Classification accuracy (% of instances classified correctly):
  - Measured on independent test data.
- Efficiency issues:
  - Training time (efficiency of the training algorithm).
  - Testing time (efficiency of subsequent classification).
- Generalization:
  - Hypotheses must generalize to correctly classify instances not in the training data.
  - Simply memorizing the training examples yields a consistent hypothesis that does not generalize.
  - Occam's razor: preferring a simple hypothesis helps ensure generalization.
    - The simplest models tend to be the best models (the KISS principle).
Text Categorization
- Assigning documents to a fixed set of categories.
- Applications:
  - Web pages: recommending, Yahoo-like classification
  - Newsgroup messages: recommending, spam filtering
  - News articles: personalized newspaper
  - Email messages: routing, folderizing, spam filtering
Learning for Text Categorization
- Manual development of text categorization functions is difficult.
- Learning algorithms:
  - Bayesian (naïve)
  - Neural network
  - Relevance feedback (Rocchio)
  - Rule-based (Ripper)
  - Nearest neighbor (case-based)
  - Support vector machines (SVM)
Using Relevance Feedback (Rocchio)
- Relevance feedback methods can be adapted for text categorization.
- Use standard TF/IDF weighted vectors to represent text documents (normalized by maximum term frequency).
- For each category, compute a prototype vector by summing the vectors of the training documents in that category.
- Assign test documents to the category with the closest prototype vector, based on cosine similarity.
Rocchio Text Categorization Algorithm (Training)

Assume the set of categories is {c1, c2, …, cn}
For i from 1 to n: let pi = <0, 0, …, 0> (initialize prototype vectors)
For each training example <x, c(x)> ∈ D:
    Let d be the normalized TF/IDF term vector for document x
    Let i = j such that cj = c(x)
    Let pi = pi + d   (sum all the document vectors in ci to get pi)
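A minimal Python sketch of this training loop, assuming documents are already represented as normalized TF/IDF dictionaries (function and variable names are illustrative):

    from collections import defaultdict

    def rocchio_train(examples):
        # examples: list of (vector, label) pairs, where vector is a dict
        # mapping term -> normalized TF/IDF weight
        prototypes = defaultdict(lambda: defaultdict(float))
        for d, label in examples:
            for term, weight in d.items():
                prototypes[label][term] += weight   # pi = pi + d
        return prototypes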
Rocchio Text Categorization Algorithm (Test)

Given test document x:
    Let d be the TF/IDF weighted term vector for x
    Let m = -2 (initialize the maximum cosSim)
    For i from 1 to n: (compute similarity to each prototype vector)
        Let s = cosSim(d, pi)
        If s > m:
            Let m = s
            Let r = ci (update the most similar class prototype)
    Return class r
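A matching test-phase sketch (again illustrative names). Together with rocchio_train above, rocchio_classify(d, prototypes) returns the predicted category for a test vector d:

    import math

    def cos_sim(u, v):
        # cosine similarity of two sparse term -> weight vectors
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def rocchio_classify(d, prototypes):
        # return the category whose prototype is most similar to d
        return max(prototypes, key=lambda c: cos_sim(d, prototypes[c]))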
Rocchio Properties
- Does not guarantee a consistent hypothesis.
- Forms a simple generalization of the examples in each class (a prototype).
- The prototype vector does not need to be averaged or otherwise normalized for length, since cosine similarity is insensitive to vector length.
- Classification is based on similarity to the class prototypes.
Nearest-Neighbor Learning Algorithm
- Learning is just storing the representations of the training examples in D.
- Testing instance x:
  - Compute the similarity between x and all examples in D.
  - Assign x the category of the most similar example in D.
- Does not explicitly compute a generalization or category prototypes.
- Also called:
  - Case-based
  - Memory-based
  - Lazy learning
K Nearest-Neighbor
- Using only the single closest example to determine the category is subject to errors due to:
  - A single atypical example.
  - Noise (i.e., error) in the category label of a single training example.
- A more robust alternative is to find the k most similar examples and return the majority category of these k examples.
- The value of k is typically odd to avoid ties; 3 and 5 are most common.
Similarity Metrics
- The nearest-neighbor method depends on a similarity (or distance) metric.
- The simplest metric for a continuous m-dimensional instance space is Euclidean distance.
- The simplest metric for an m-dimensional binary instance space is Hamming distance (the number of feature values that differ).
- For text, cosine similarity of TF-IDF weighted vectors is typically most effective.
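Sketches of the first two metrics in Python (cosine similarity over sparse TF-IDF vectors appears above as cos_sim in the Rocchio sketch):

    import math

    def euclidean(x, y):
        # Euclidean distance between two equal-length numeric vectors
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    def hamming(x, y):
        # number of positions at which two binary vectors differ
        return sum(a != b for a, b in zip(x, y))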
K Nearest Neighbor for Text

Training:
    For each training example <x, c(x)> ∈ D:
        Compute the corresponding TF-IDF vector, dx, for document x

Test instance y:
    Compute the TF-IDF vector d for document y
    For each <x, c(x)> ∈ D:
        Let sx = cosSim(d, dx)
    Sort the examples x in D by decreasing value of sx
    Let N be the first k examples in D (the k most similar neighbors)
    Return the majority class of the examples in N
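A compact Python sketch of this procedure, reusing the cos_sim helper from the Rocchio sketch above (examples is a list of (TF-IDF vector, label) pairs; names are illustrative):

    from collections import Counter

    def knn_classify(d, examples, k=3):
        # rank training examples by similarity to d, keep the top k
        neighbors = sorted(examples, key=lambda e: cos_sim(d, e[0]),
                           reverse=True)[:k]
        # return the majority class among the k neighbors
        return Counter(label for _, label in neighbors).most_common(1)[0][0]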
Nearest Neighbor with Inverted Index
- Determining the k nearest neighbors is the same as determining the k best retrievals, using the test document as a query against a database of training documents.
- Use standard vector-space retrieval (VSR) inverted index methods to find the k nearest neighbors.
- Testing time: O(B|Vt|), where Vt is the set of distinct words in the test document and B is the average number of training documents in which a test-document word appears.
- Therefore, overall classification is O(Lt + B|Vt|), where Lt is the length of the test document (the cost of building its term vector).
  - Typically B << |D|.
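One possible sketch of this idea (illustrative, not the only indexing scheme): build a postings list per term, then accumulate scores only over training documents that share at least one term with the test document. If the vectors are length-normalized, the accumulated dot product equals cosine similarity.

    from collections import Counter, defaultdict

    def build_index(examples):
        # postings: term -> list of (doc_id, weight) over the training docs
        index = defaultdict(list)
        for doc_id, (vec, _) in enumerate(examples):
            for term, w in vec.items():
                index[term].append((doc_id, w))
        return index

    def knn_with_index(d, index, examples, k=3):
        # score only the documents that share a term with d
        scores = defaultdict(float)
        for term, w in d.items():
            for doc_id, dw in index.get(term, []):
                scores[doc_id] += w * dw   # dot product == cosine if normalized
        top = sorted(scores, key=scores.get, reverse=True)[:k]
        return Counter(examples[i][1] for i in top).most_common(1)[0][0]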
Bayesian Methods
- Learning and classification methods based on probability theory.
- Bayes' theorem plays a critical role in probabilistic learning and classification.
- Uses the prior probability of each category given no information about an item.
- Categorization produces a posterior probability distribution over the possible categories, given a description of an item.
Axioms of Probability Theory
- All probabilities are between 0 and 1:
  0 ≤ P(A) ≤ 1
- A true proposition has probability 1; a false proposition has probability 0:
  P(true) = 1, P(false) = 0
- The probability of a disjunction is:
  P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
Conditional Probability
- P(A | B) is the probability of A given B.
- Assumes that B is all and only the information known.
- Defined by:
  P(A | B) = P(A ∧ B) / P(B)
Independence
- A and B are independent iff:
  P(A | B) = P(A)
  P(B | A) = P(B)
  (These two constraints are logically equivalent.)
- Therefore, if A and B are independent:
  P(A | B) = P(A ∧ B) / P(B) = P(A)
  and so P(A ∧ B) = P(A) P(B)
- Bayes' Rule:
  P(H | E) = P(E | H) P(H) / P(E)
Bayesian Categorization
- Let the set of categories be {c1, c2, …, cn}.
- Let E be a description of an instance.
- Determine the category of E by computing, for each ci:
  P(ci | E) = P(ci) P(E | ci) / P(E)
- P(E) can be determined, since the categories are complete and disjoint:
  Σi P(ci | E) = Σi P(ci) P(E | ci) / P(E) = 1   (summing over i = 1 to n)
  P(E) = Σi P(ci) P(E | ci)
Bayesian Categorization (cont.)
- Need to know:
  - Priors: P(ci)
  - Conditionals: P(E | ci)
- P(ci) is easily estimated from data:
  - If ni of the examples in D are in ci, then P(ci) = ni / |D|
- Assume an instance is a conjunction of binary features:
  E = e1 ∧ e2 ∧ … ∧ em
- There are too many possible instances (exponential in m) to estimate all of the P(E | ci) directly.
Naïve Bayesian Categorization
- If we assume the features of an instance are independent given the category ci (conditionally independent):
  P(E | ci) = P(e1 ∧ e2 ∧ … ∧ em | ci) = Πj=1..m P(ej | ci)
- Therefore, we only need to know P(ej | ci) for each feature and category.
Naïve Bayes Example
- C = {allergy, cold, well}
- e1 = sneeze; e2 = cough; e3 = fever
- E = {sneeze, cough, ¬fever} (the patient sneezes and coughs but has no fever)

  Prob            Well   Cold   Allergy
  P(ci)           0.9    0.05   0.05
  P(sneeze | ci)  0.1    0.9    0.9
  P(cough | ci)   0.1    0.8    0.7
  P(fever | ci)   0.01   0.7    0.4
Naïve Bayes Example (cont.)

Using the probability table from the previous slide, with E = {sneeze, cough, ¬fever}; the last factor in each product is 1 − P(fever | ci), since fever is absent:

P(well | E)    = (0.9)(0.1)(0.1)(0.99) / P(E) = 0.0089 / P(E)
P(cold | E)    = (0.05)(0.9)(0.8)(0.3) / P(E) = 0.0108 / P(E)
P(allergy | E) = (0.05)(0.9)(0.7)(0.6) / P(E) = 0.0189 / P(E)

Since the categories are complete and disjoint:
P(E) = 0.0089 + 0.0108 + 0.0189 = 0.0386

P(well | E) ≈ 0.23
P(cold | E) ≈ 0.28
P(allergy | E) ≈ 0.49

Most probable category: allergy
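A quick Python check of this computation (probabilities taken from the table above):

    probs = {
        # class: (prior, P(sneeze|c), P(cough|c), P(fever|c))
        "well":    (0.9,  0.1, 0.1, 0.01),
        "cold":    (0.05, 0.9, 0.8, 0.7),
        "allergy": (0.05, 0.9, 0.7, 0.4),
    }
    # E = {sneeze, cough, not fever}: fever contributes (1 - P(fever|c))
    joint = {c: p * ps * pc * (1 - pf)
             for c, (p, ps, pc, pf) in probs.items()}
    p_e = sum(joint.values())                       # P(E) = 0.0386
    print({c: round(v / p_e, 2) for c, v in joint.items()})
    # {'well': 0.23, 'cold': 0.28, 'allergy': 0.49}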
Estimating Probabilities
- Normally, probabilities are estimated from observed frequencies in the training data.
- If D contains ni examples in class ci, and nij of these ni examples contain feature/attribute ej, then:
  P(ej | ci) = nij / ni
- If the feature is continuous-valued, P(ej | ci) is usually computed from a Gaussian distribution with mean μ and standard deviation σ:
  g(x, μ, σ) = (1 / (√(2π) σ)) e^(−(x − μ)² / (2σ²))
  and P(ej | ci) = g(ej, μji, σji), where μji and σji are the mean and standard deviation of feature ej over the training examples of class ci.
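A direct Python transcription of the density above, plus a helper for estimating μ and σ from one class's feature values (names are illustrative):

    import math

    def gaussian(x, mu, sigma):
        # the density g(x, mu, sigma) above
        return (math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
                / (math.sqrt(2 * math.pi) * sigma))

    def estimate_params(values):
        # mean and standard deviation of one feature over one class's examples
        mu = sum(values) / len(values)
        sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
        return mu, sigma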
Smoothing
- Estimating probabilities from small training sets is error-prone:
  - If, due only to chance, a rare feature ek is always false in the training data, then for every ci: P(ek | ci) = 0.
  - If ek then occurs in a test example E, the result is that for every ci: P(E | ci) = 0, and hence for every ci: P(ci | E) = 0.
- To account for estimation from small samples, probability estimates are adjusted, or smoothed.
- Laplace smoothing using an m-estimate assumes that each feature is given a prior probability, p, that is assumed to have been previously observed in a "virtual" sample of size m:
  P(ej | ci) = (nij + m p) / (ni + m)
- For binary features, p is simply assumed to be 0.5.
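A one-line Python version of the m-estimate (the default p and m here are assumed values, not prescribed by the slide):

    def m_estimate(n_ij, n_i, p=0.5, m=1.0):
        # smoothed P(ej | ci); p = prior, m = virtual sample size
        return (n_ij + m * p) / (n_i + m)

    # An unsmoothed estimate would be 0/10 = 0; the m-estimate keeps it nonzero:
    print(m_estimate(0, 10))   # 0.045...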
Naïve Bayes Classification for Text
- Modeled as generating a bag of words for a document in a given category by repeatedly sampling with replacement from a vocabulary V = {w1, w2, …, wm}, based on the probabilities P(wj | ci).
- Smooth the probability estimates with Laplace m-estimates, assuming a uniform prior distribution over all words:
  - p = 1/|V| and m = |V|
  - Equivalent to a virtual sample in which each word is seen in each category exactly once.
Text Naïve Bayes Algorithm (Train)

Let V be the vocabulary of all words in the documents in D
For each category ci ∈ C:
    Let Di be the subset of documents in D in category ci
    P(ci) = |Di| / |D|
    Let Ti be the concatenation of all the documents in Di
    Let ni be the total number of word occurrences in Ti
    For each word wj ∈ V:
        Let nij be the number of occurrences of wj in Ti
        Let P(wj | ci) = (nij + 1) / (ni + |V|)
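A Python sketch of this training procedure, assuming each document is given as a list of tokens (all names are illustrative):

    from collections import Counter, defaultdict

    def nb_train(docs):
        # docs: list of (token_list, category) pairs
        V = {w for words, _ in docs for w in words}
        priors, cond = {}, defaultdict(dict)
        for c in {cat for _, cat in docs}:
            class_docs = [words for words, cat in docs if cat == c]
            priors[c] = len(class_docs) / len(docs)        # P(ci) = |Di| / |D|
            counts = Counter(w for words in class_docs for w in words)  # Ti
            n_i = sum(counts.values())        # total word occurrences in Ti
            for w in V:
                cond[c][w] = (counts[w] + 1) / (n_i + len(V))  # Laplace
        return priors, cond, V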
Text Naïve Bayes Algorithm (Test)

Given a test document X
Let n be the number of word occurrences in X
Return the category:
    argmax over ci ∈ C of  P(ci) Πi=1..n P(ai | ci)
where ai is the word occurring in the ith position of X
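A matching classification sketch. It works in log space to avoid floating-point underflow when multiplying many small probabilities; that trick is standard practice rather than something on the slide, and skipping words outside the training vocabulary is one common choice:

    import math

    def nb_classify(words, priors, cond, V):
        # return argmax_c P(c) * prod_i P(a_i | c), computed via log sums
        def log_score(c):
            s = math.log(priors[c])
            for w in words:
                if w in V:            # out-of-vocabulary words are skipped
                    s += math.log(cond[c][w])
            return s
        return max(priors, key=log_score)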
Text Naïve Bayes - Example

Training data:

        t1  t2  t3  t4  t5  Spam
  D1    1   1   0   1   0   no
  D2    0   1   1   0   0   no
  D3    1   0   1   0   1   yes
  D4    1   1   1   1   0   yes
  D5    0   1   0   1   0   yes
  D6    0   0   0   1   1   no
  D7    0   1   0   0   0   yes
  D8    1   1   0   1   0   yes
  D9    0   0   1   1   1   no
  D10   1   0   1   0   1   yes

Estimated probabilities (the fraction of documents in each class containing each term):

  Term  P(t|no)  P(t|yes)
  t1    1/4      4/6
  t2    2/4      4/6
  t3    2/4      3/6
  t4    3/4      3/6
  t5    2/4      2/6

  P(no) = 0.4, P(yes) = 0.6

New email x containing t1, t4, t5: x = <1, 0, 0, 1, 1>
Should it be classified as spam = "yes" or spam = "no"? We need to find P(yes | x) and P(no | x) …
Text Naïve Bayes - Example (cont.)

New email x containing t1, t4, t5: x = <1, 0, 0, 1, 1>

Using the probabilities estimated on the previous slide, a present term t contributes P(t | c) and an absent term contributes 1 − P(t | c) (terms are treated here as binary document-level features):

P(yes | x) = [4/6 × (1 − 4/6) × (1 − 3/6) × 3/6 × 2/6] × P(yes) / P(x)
           = [0.67 × 0.33 × 0.5 × 0.5 × 0.33] × 0.6 / P(x) ≈ 0.011 / P(x)

P(no | x)  = [1/4 × (1 − 2/4) × (1 − 2/4) × 3/4 × 2/4] × P(no) / P(x)
           = [0.25 × 0.5 × 0.5 × 0.75 × 0.5] × 0.4 / P(x) ≈ 0.0094 / P(x)

To get the actual probabilities we normalize, since P(yes | x) + P(no | x) must be 1:

0.011 / P(x) + 0.0094 / P(x) = 1, so P(x) = 0.011 + 0.0094 = 0.0204

So: P(yes | x) = 0.011 / 0.0204 ≈ 0.54
    P(no | x)  = 0.0094 / 0.0204 ≈ 0.46

The email is classified, narrowly, as spam = "yes".
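The arithmetic above can be verified with a short script (feature order t1..t5):

    p_t = {"no":  [1/4, 2/4, 2/4, 3/4, 2/4],   # P(t1..t5 | no)
           "yes": [4/6, 4/6, 3/6, 3/6, 2/6]}   # P(t1..t5 | yes)
    prior = {"no": 0.4, "yes": 0.6}
    x = [1, 0, 0, 1, 1]                        # new email: t1, t4, t5 present

    joint = {}
    for c in p_t:
        score = prior[c]
        for present, p in zip(x, p_t[c]):
            score *= p if present else (1 - p)
        joint[c] = score                       # {'no': 0.0094, 'yes': 0.0111}

    p_x = sum(joint.values())
    print({c: round(v / p_x, 2) for c, v in joint.items()})
    # {'no': 0.46, 'yes': 0.54}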