Flat Clustering
Transcript of Flat Clustering
Prasad L16FlatCluster 1
Flat Clustering
Adapted from Slides by
Prabhakar Raghavan, Christopher Manning,
Ray Mooney and Soumen Chakrabarti
Today’s Topic: Clustering
Document clustering: motivations, document representations, success criteria
Clustering algorithms: partitional (flat), hierarchical (tree)
What is clustering?
Clustering: the process of grouping a set of objects into classes of similar objects
Documents within a cluster should be similar. Documents from different clusters should be dissimilar.
The commonest form of unsupervised learning.
Unsupervised learning = learning from raw data, as opposed to supervised learning, where a classification of examples is given.
A common and important task that finds many applications in IR and other places
A data set with clear cluster structure
How would you design an algorithm for finding the three clusters in this case?
Classification vs. Clustering
Classification: supervised learning. Clustering: unsupervised learning.
Classification: classes are human-defined and part of the input to the learning algorithm.
Clustering: clusters are inferred from the data without human input.
However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .
Applications of clustering in IR
Application | What is clustered? | Benefit
Search result clustering | search results | more effective information presentation to user
Scatter-Gather | (subsets of) collection | alternative user interface: "search without typing"
Collection clustering | collection | effective information presentation for exploratory browsing
Cluster-based retrieval | collection | higher efficiency: faster search
Yahoo! Hierarchy isn’t clustering but is the kind of output you want from clustering
[Figure: a portion of the www.yahoo.com/Science hierarchy. Top-level categories: agriculture, biology, physics, CS, space; subcategories include dairy, crops, agronomy, forestry (agriculture); botany, evolution, cell (biology); magnetism, relativity (physics); AI, HCI, courses, craft (CS); missions (space); one node has 30 children.]
Introduction to Information Retrieval
Global navigation: Yahoo
Global navigation: MeSH (upper level)
Global navigation: MeSH (lower level)
Navigational hierarchies: Manual vs. automatic creation
Note: Yahoo/MeSH are not examples of clustering, but they are well-known examples of using a global hierarchy for navigation.
Some examples of global navigation/exploration based on clustering: Cartia ThemeScapes, Google News.
Search result clustering for better navigation
Google News: automatic clustering gives an effective news presentation metaphor
Selection metrics: Google News taps into its own unique ranking signals, which include user clicks, the estimated authority of a publication in a particular topic (possibly taking location into account), freshness/recency, geography, and more.
Scatter/Gather: Cutting, Karger, and Pedersen
Scatter/Gather ("Star")
S/G Example: query on "star"
Encyclopedia text:
  14 sports
  8 symbols
  47 film, tv
  68 film, tv (p)
  7 music
  97 astrophysics
  67 astronomy (p)
  12 stellar phenomena
  10 flora/fauna
  49 galaxies, stars
  29 constellations
  7 miscellaneous
Clustering and re-clustering is entirely automated.
Scatter/Gather
Cutting, Pedersen, Tukey & Karger 92, 93; Hearst & Pedersen 95
How it works:
  - Cluster sets of documents into general "themes", like a table of contents
  - Display the contents of the clusters by showing topical terms and typical titles
  - User chooses subsets of the clusters and re-clusters the documents within
  - Resulting new groups have different "themes"
For visualizing a document collection and its themes
Wise et al, “Visualizing the non-visual” PNNL ThemeScapes, Cartia
[Mountain height = cluster size]
For improving search recall
Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs.
Therefore, to improve search recall:
  Cluster docs in the corpus a priori.
  When a query matches a doc D, also return other docs in the cluster containing D.
Example: the query "car" will also return docs containing "automobile", because clustering grouped together docs containing "car" with those containing "automobile".
Why might this happen?
Issues for clustering
Representation for clustering:
  Document representation: vector space? Normalization?
  Need a notion of similarity/distance.
How many clusters? Fixed a priori? Completely data-driven?
Avoid "trivial" clusters - too large or small.
What makes docs “related”?
Ideal: semantic similarity. Practical: statistical similarity.
Docs as vectors.
For many algorithms, it is easier to think in terms of a distance (rather than a similarity) between docs.
We can use cosine similarity (alternatively, Euclidean distance).
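Both notions can be written down directly. A minimal sketch in Python (function names are my own, for illustration only):

```python
import math

def cosine_similarity(x, y):
    """Cosine of the angle between two equal-length term vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm if norm else 0.0

def euclidean_distance(x, y):
    """Straight-line distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
```

For unit-length vectors the two are monotonically related, so ranking by cosine similarity or by Euclidean distance gives the same nearest neighbors.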
More Applications of clustering …
Image Processing: cluster images based on their visual content.
Web: cluster groups of users based on their access patterns on webpages; cluster webpages based on their content.
Bioinformatics: cluster similar proteins together (similarity w.r.t. chemical structure and/or functionality, etc.)
…
Outliers
Outliers are objects that do not belong to any cluster or form clusters of very small cardinality.
In some applications we are interested in discovering outliers, not clusters (outlier analysis).
[Figure: a cluster alongside a few outlier points.]
Clustering Algorithms
Partitional (flat) algorithms: usually start with a random (partial) partition and refine it iteratively.
  K-means clustering (model-based clustering)
Hierarchical (tree) algorithms: bottom-up, agglomerative; (top-down, divisive)
Hard vs. soft clustering
Hard clustering: each document belongs to exactly one cluster. More common and easier to do.
Soft clustering: a document can belong to more than one cluster. Makes more sense for applications like creating browsable hierarchies.
You may want to put a pair of sneakers in two clusters: (i) sports apparel and (ii) shoes.
Partitioning Algorithms
Partitioning method: construct a partition of n documents into a set of K clusters.
Given: a set of documents and the number K.
Find: a partition into K clusters that optimizes the chosen partitioning criterion.
Globally optimal: exhaustively enumerate all partitions.
Effective heuristic methods: K-means and K-medoids algorithms.
K-Means
Assumes documents are real-valued vectors.
Clusters based on centroids (aka the center of gravity or mean) of the points in a cluster c:
  μ(c) = (1/|c|) Σ_{x ∈ c} x
Reassignment of instances to clusters is based on distance to the current cluster centroids.
(Or one can equivalently phrase it in terms of similarities.)
K-Means Algorithm
Select K random docs {s1, s2, …, sK} as seeds.
Until clustering converges or another stopping criterion is met:
  For each doc di: assign di to the cluster cj such that dist(xi, sj) is minimal.
  (Update the seeds to the centroid of each cluster.)
  For each cluster cj: sj = μ(cj)
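The loop above can be sketched in Python. This is a minimal illustration under my own naming choices (`docs` as dense vectors, squared Euclidean distance), not the lecture's reference implementation:

```python
import random

def kmeans(docs, k, iters=100, seed=0):
    """Minimal K-means sketch; docs are equal-length sequences of floats."""
    rng = random.Random(seed)
    centroids = [list(d) for d in rng.sample(list(docs), k)]  # random seeds

    def nearest(d):
        # Index of the centroid closest to doc d (squared Euclidean distance).
        return min(range(k), key=lambda j: sum((a - b) ** 2
                                               for a, b in zip(d, centroids[j])))

    assign = None
    for _ in range(iters):
        new_assign = [nearest(d) for d in docs]
        if new_assign == assign:      # partition unchanged -> converged
            break
        assign = new_assign
        for j in range(k):            # recompute each centroid as the mean
            members = [d for d, c in zip(docs, assign) if c == j]
            if members:               # keep the old centroid if a cluster empties
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids
```

Termination here uses "doc partition unchanged", one of the stopping criteria discussed later; a fixed iteration cap (`iters`) guards against long runs.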
K Means Example(K=2)
Pick seeds
Reassign clusters
Compute centroids
Reassign clusters
Compute centroids
Reassign clusters
Converged!
Worked Example: Set to be clustered
Worked Example: Random selection of initial centroids
Exercise: (i) guess what the optimal clustering into two clusters is in this case; (ii) compute the centroids of the clusters.
The algorithm then alternates two steps: assign points to the closest centroid, then recompute cluster centroids. This assign/recompute cycle repeats until the assignments stop changing.
Worked Example: Centroids and assignments after convergence
Two different K-means Clusterings
[Figure: the same set of original points plotted on x/y axes, clustered two ways: one sub-optimal clustering and one optimal clustering.]
Termination conditions
Several possibilities, e.g.:
  A fixed number of iterations.
  Doc partition unchanged.
  Centroid positions don't change.
Does this mean that the docs in a cluster are unchanged?
Convergence
Why should the K-means algorithm ever reach a fixed point, i.e., a state in which clusters don't change?
K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm.
EM is known to converge. The number of iterations could be large.
Convergence of K-Means
Define the goodness measure of cluster k as the sum of squared distances from the cluster centroid:
  G_k = Σ_i (d_i − c_k)²   (sum over all d_i in cluster k)
  G = Σ_k G_k
Intuition: reassignment monotonically decreases G, since each vector is assigned to the closest centroid.
Convergence of K-Means
Recomputation monotonically decreases each G_k, since (with m_k the number of members in cluster k):
  Σ_i (d_i − a)² reaches its minimum at the a for which
  Σ_i −2(d_i − a) = 0
  Σ_i d_i = m_k a
  a = (1/m_k) Σ_i d_i = c_k
K-means typically converges quickly
Sec. 16.4
Time Complexity
Computing the distance between two docs is O(m), where m is the dimensionality of the vectors.
Reassigning clusters: O(Kn) distance computations, i.e., O(Knm).
Computing centroids: each doc gets added once to some centroid: O(nm).
Assume these two steps are each done once for I iterations: O(IKnm).
Optimality of K-means
Convergence does not mean that we converge to the optimal clustering!
This is the great weakness of K-means.
If we start with a bad set of seeds, the resulting clustering can be horrible.
Seed Choice
Results can vary based on random seed selection.
Some seeds can result in a poor convergence rate, or convergence to sub-optimal clusterings.
Select good seeds using a heuristic (e.g., the doc least similar to any existing mean).
Try out multiple starting points.
Initialize with the results of another method.
Example showing sensitivity to seeds: if you start with B and E as centroids, you converge to {A,B,C} and {D,E,F}; if you start with D and F, you converge to {A,B,D,E} and {C,F}.
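One common guard against bad seeds is the "multiple starting points" option: run K-means from several random seed sets and keep the run with the lowest goodness G. A self-contained sketch (all names are my own; `lloyd` is a bare-bones K-means run):

```python
import random

def sq_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

def lloyd(docs, k, rng, iters=50):
    """One K-means run from a random seed set; returns (G, assignment)."""
    cents = [list(d) for d in rng.sample(docs, k)]
    assign = None
    for _ in range(iters):
        new = [min(range(k), key=lambda j: sq_dist(d, cents[j])) for d in docs]
        if new == assign:                     # partition unchanged
            break
        assign = new
        for j in range(k):
            m = [d for d, c in zip(docs, assign) if c == j]
            if m:
                cents[j] = [sum(col) / len(m) for col in zip(*m)]
    G = sum(sq_dist(d, cents[c]) for d, c in zip(docs, assign))
    return G, assign

def kmeans_restarts(docs, k, restarts=10, seed=0):
    """Keep the clustering with the lowest sum of squared distances G."""
    rng = random.Random(seed)
    return min(lloyd(docs, k, rng) for _ in range(restarts))
```

This does not make the result globally optimal, but with enough restarts a bad seed set is unlikely to win.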
K-means issues, variations, etc.
Recomputing the centroid after every assignment (rather than after all points are re-assigned) can improve speed of convergence of K-means
Assumes clusters are spherical in vector space
Sensitive to coordinate changes, weighting etc.
How Many Clusters?
Number of clusters K is given: partition n docs into a predetermined number of clusters.
Finding the "right" number of clusters is part of the problem.
E.g., for query results, the ideal value of K is not known up front, though the UI may impose limits.
Prasad L17HierCluster 75
Hierarchical Clustering
Adapted from Slides by Prabhakar Raghavan, Christopher Manning,
Ray Mooney and Soumen Chakrabarti
“The Curse of Dimensionality”
Why document clustering is difficult:
While clustering looks intuitive in 2 dimensions, many of our applications involve 10,000 or more dimensions.
High-dimensional spaces look different: the probability of random points being close drops quickly as the dimensionality grows.
Furthermore, random pairs of vectors are almost all nearly perpendicular.
Hierarchical Clustering
Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents.
One approach: recursive application of a partitional clustering algorithm.
[Example dendrogram: animal splits into vertebrate (fish, reptile, amphibian, mammal) and invertebrate (worm, insect, crustacean).]
Dendrogram: Hierarchical Clustering
• Clustering obtained by cutting the dendrogram at a desired level: each connected component forms a cluster.
Hierarchical Clustering algorithms
Agglomerative (bottom-up):
  Precondition: start with each document as a separate cluster.
  Postcondition: eventually all documents belong to the same cluster.
Divisive (top-down):
  Precondition: start with all documents belonging to the same cluster.
  Postcondition: eventually each document forms a cluster of its own.
Does not require the number of clusters k in advance.
Needs a termination/readout condition.
Hierarchical agglomerative clustering (HAC)
HAC creates a hierarchy in the form of a binary tree.
Assumes a similarity measure for determining the similarity of two clusters.
Up to now, our similarity measures were for documents.
We will look at four different cluster similarity measures.
Hierarchical Agglomerative Clustering (HAC) Algorithm
Start with all instances in their own cluster.
Until there is only one cluster:
  Among the current clusters, determine the two clusters, ci and cj, that are most similar.
  Replace ci and cj with a single cluster ci ∪ cj.
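The two steps above can be sketched as a naive O(n³) HAC in Python. The parameter names are my own: `sim` scores two documents; `linkage` reduces the pairwise scores between two clusters (pass `max` for single-link, `min` for complete-link, covered below):

```python
def hac(docs, sim, linkage=max):
    """Naive agglomerative clustering; returns the merge history."""
    clusters = [frozenset([i]) for i in range(len(docs))]
    merges = []
    while len(clusters) > 1:
        best = None
        # Find the most similar pair of current clusters.
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = linkage(sim(docs[i], docs[j])
                            for i in clusters[a] for j in clusters[b])
                if best is None or s > best[0]:
                    best = (s, a, b)
        _, a, b = best
        merges.append((clusters[a], clusters[b]))
        merged = clusters[a] | clusters[b]   # replace ci, cj with ci ∪ cj
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)]
        clusters.append(merged)
    return merges
```

The merge history is exactly the binary tree (dendrogram) the slides describe: n − 1 merges for n documents.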
Dendrogram: Document Example
As clusters agglomerate, docs are likely to fall into a hierarchy of "topics" or concepts.
[Figure: dendrogram over d1–d5: d1 and d2 merge; d4 and d5 merge; d3 joins d4,d5; the two remaining clusters merge last.]
Key notion: cluster representative
We want a notion of a representative point in a cluster, to represent the location of each cluster.
The representative should be some sort of "typical" or central point in the cluster, e.g.:
  the point inducing the smallest radius over docs in the cluster
  the point with the smallest squared distances to docs in the cluster
  the point that is the "average" of all docs in the cluster
Centroid or center of gravity.
Measure inter-cluster distances by the distances of centroids.
Example: n=6, k=3, closest pair of centroids
[Figure: six points d1–d6; the centroid after the first step and the centroid after the second step are marked.]
Outliers in centroid computation
Can ignore outliers when computing the centroid. What is an outlier?
Lots of statistical definitions, e.g., moment of point to centroid > M × some cluster moment, say M = 10.
[Figure: a centroid with one distant outlier point.]
Key question: how to define cluster similarity
Single-link: maximum similarity - the maximum similarity of any two documents.
Complete-link: minimum similarity - the minimum similarity of any two documents.
Centroid: average "intersimilarity" - the average similarity of all document pairs (but excluding pairs of docs in the same cluster); this is equivalent to the similarity of the centroids.
Group-average: average "intrasimilarity" - the average similarity of all document pairs, including pairs of docs in the same cluster.
Cluster similarity: Example
Single-link: Maximum similarity
Complete-link: Minimum similarity
Centroid: Average intersimilarity (intersimilarity = similarity of two documents in different clusters)
Group average: Average intrasimilarity (intrasimilarity = similarity of any pair, including cases where the two documents are in the same cluster)
Cluster similarity: Larger Example
Single-link: Maximum similarity
Complete-link: Minimum similarity
Centroid: Average intersimilarity
Group average: Average intrasimilarity
Single Link Agglomerative Clustering
Use maximum similarity of pairs:
  sim(ci, cj) = max_{x ∈ ci, y ∈ cj} sim(x, y)
After merging ci and cj, the similarity of the resulting cluster to another cluster, ck, is:
  sim((ci ∪ cj), ck) = max(sim(ci, ck), sim(cj, ck))
Single Link Example
Single-link: Chaining
Single-link clustering often produces long, straggly clusters. For most applications, these are undesirable.
Complete Link Agglomerative Clustering
Use minimum similarity of pairs:
  sim(ci, cj) = min_{x ∈ ci, y ∈ cj} sim(x, y)
Makes "tighter," spherical clusters that are typically preferable.
After merging ci and cj, the similarity of the resulting cluster to another cluster, ck, is:
  sim((ci ∪ cj), ck) = min(sim(ci, ck), sim(cj, ck))
Complete Link Example
Complete-link: Sensitivity to outliers
The complete-link clustering of this set splits d2 from its right neighbors, which is clearly undesirable.
The reason is the outlier d1.
This shows that a single outlier can negatively affect the outcome of complete-link clustering.
Single-link clustering does better in this case.
Computational Complexity
In the first iteration, all HAC methods need to compute the similarity of all pairs of n individual instances, which is O(n²).
In each of the subsequent n − 2 merging iterations, compute the distance between the most recently created cluster and all other existing clusters.
In order to maintain an overall O(n²) performance, computing similarity to each cluster must be done in constant time.
Often O(n³) if done naively, or O(n² log n) if done cleverly.
Group Average Agglomerative Clustering
Use the average similarity across all pairs within the merged cluster to measure the similarity of two clusters:
  sim(ci, cj) = (1 / (|ci ∪ cj| (|ci ∪ cj| − 1))) Σ_{x ∈ ci∪cj} Σ_{y ∈ ci∪cj, y ≠ x} sim(x, y)
Compromise between single and complete link.
Two options:
  averaged across all ordered pairs in the merged cluster
  averaged over all pairs between the two original clusters
No clear difference in efficacy.
Computing Group Average Similarity
Assume cosine similarity and normalized vectors with unit length.
Always maintain the sum of vectors in each cluster:
  s(cj) = Σ_{x ∈ cj} x
Compute the similarity of clusters in constant time:
  sim(ci, cj) = [ (s(ci) + s(cj)) · (s(ci) + s(cj)) − (|ci| + |cj|) ] / [ (|ci| + |cj|) (|ci| + |cj| − 1) ]
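Given the per-cluster vector sums, the formula above is a few lines of code. A sketch with my own names ("constant time" here means independent of cluster sizes; the dot product is still O(m) in the vector dimension):

```python
def group_avg_sim(sum_i, sum_j, n_i, n_j):
    """Group-average similarity of two clusters from their vector sums.

    Assumes all document vectors have unit length, so each self-similarity
    contributes exactly 1, which is why n_i + n_j is subtracted.
    """
    s = [a + b for a, b in zip(sum_i, sum_j)]   # s(ci) + s(cj)
    n = n_i + n_j                               # |ci| + |cj|
    dot = sum(a * a for a in s)                 # (s(ci)+s(cj)) . (s(ci)+s(cj))
    return (dot - n) / (n * (n - 1))
```

After a merge, the new cluster's sum is just the elementwise sum of the two old sums, which is what makes the O(n²) HAC bound attainable.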
Medoid as Cluster Representative
The centroid does not have to be a document.
Medoid: a cluster representative that is one of the documents (e.g., the document closest to the centroid).
One reason this is useful: consider the representative of a large cluster (>1K docs).
The centroid of this cluster will be a dense vector; the medoid of this cluster will be a sparse vector.
Compare: mean/centroid vs. median/medoid.
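A common alternative definition picks the document minimizing total distance to the rest of the cluster; a one-line sketch under that assumption (the slide's "closest to the centroid" variant would be equally short):

```python
def medoid(docs, dist):
    """The cluster member with the smallest total distance to all members."""
    return min(docs, key=lambda d: sum(dist(d, e) for e in docs))
```

Unlike the centroid, the result is always an actual document, so it stays sparse and can be shown to a user.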
Efficiency: “Using approximations”
In the standard algorithm, we must find the closest pair of centroids at each step.
Approximation: instead, find a nearly closest pair.
Use some data structure that makes this approximation easier to maintain.
Simplistic example: maintain the closest pair based on distances in a projection onto a random line.
Major issue - labeling
After the clustering algorithm finds clusters, how can they be useful to the end user?
Need a pithy label for each cluster: in search results, say "Animal" or "Car" in the jaguar example.
In topic trees (Yahoo), need navigational cues.
Often done by hand, a posteriori.
How to Label Clusters
Show titles of typical documents:
  Authors create titles for quick scanning!
  But you can only show a few titles, which may not fully represent the cluster.
  E.g., our NLDB-08 paper synthesizes titles for news clusters.
Show words/phrases prominent in the cluster:
  More likely to fully represent the cluster (tag cloud).
  Use distinguishing words/phrases (differential/contrast labeling).
Labeling
Common heuristic: list the 5-10 most frequent terms in the centroid vector. Drop stop-words; stem.
Differential labeling by frequent terms: within a collection "Computers", clusters all have the word computer as a frequent term; use discriminant analysis of centroids.
Perhaps better: a distinctive noun phrase.
What is a Good Clustering?
Internal criterion: a good clustering will produce high-quality clusters in which:
  the intra-class (that is, intra-cluster) similarity is high
  the inter-class similarity is low
The measured quality of a clustering depends on both the document representation and the similarity measure used.
External criteria for clustering quality
Quality is measured by the ability to discover some or all of the hidden patterns or latent classes in gold-standard data.
Assesses a clustering with respect to ground truth; requires labeled data.
Assume documents with C gold-standard classes, while our clustering algorithm produces K clusters, ω1, ω2, …, ωK, with ni members.
External Evaluation of Cluster Quality
Simple measure: purity, the ratio between the number of documents of the dominant class in cluster ωi and the size of cluster ωi:
  Purity(ωi) = (1/ni) max_j (nij), j ∈ C
Others are the entropy of classes in clusters (or the mutual information between classes and clusters).
Purity example
[Figure: three clusters of documents drawn from three gold classes.]
Cluster I: Purity = 1/6 × max(5, 1, 0) = 5/6
Cluster II: Purity = 1/6 × max(1, 4, 1) = 4/6
Cluster III: Purity = 1/5 × max(2, 0, 3) = 3/5
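The per-cluster purity computation in the example can be sketched directly (the input format of (cluster id, gold class) pairs is my own choice):

```python
from collections import Counter

def purity_per_cluster(assignments):
    """assignments: (cluster_id, gold_class) pair per document.

    Returns {cluster_id: purity}, following Purity(wi) = max_j(nij) / ni.
    """
    counts = {}
    for w, c in assignments:
        counts.setdefault(w, Counter())[c] += 1
    return {w: max(cnt.values()) / sum(cnt.values())
            for w, cnt in counts.items()}
```

Summing the per-cluster maxima and dividing by the total number of documents gives the overall purity of the clustering.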
Rand Index
Number of points | Same cluster in clustering | Different clusters in clustering
Same class in ground truth | A | C
Different classes in ground truth | B | D
Rand index: symmetric version
RI = (A + D) / (A + B + C + D)
P = A / (A + B)
R = A / (A + C)
Compare with standard precision and recall.
Rand Index example: RI = 0.68

Number of points | Same cluster in clustering | Different clusters in clustering
Same class in ground truth | 20 | 24
Different classes in ground truth | 20 | 72

RI = (20 + 72) / (20 + 24 + 20 + 72) = 92/136 ≈ 0.68
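The pairwise definition of the Rand index can be computed directly from the raw assignments rather than the A/B/C/D table; a minimal sketch (names are my own):

```python
from itertools import combinations

def rand_index(clusters, classes):
    """RI = (A + D) / (A + B + C + D) over all document pairs.

    A pair counts as agreement (A or D) when the clustering and the gold
    classes make the same same/different decision about it.
    """
    agree = total = 0
    for (w1, c1), (w2, c2) in combinations(list(zip(clusters, classes)), 2):
        total += 1
        if (w1 == w2) == (c1 == c2):
            agree += 1
    return agree / total
```

This is O(n²) in the number of documents, matching the number of entries A + B + C + D in the contingency table.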
Final word and resources
In clustering, clusters are inferred from the data without human input (unsupervised learning)
However, in practice, it’s a bit less clear: there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .