Web Search and Text Mining
Transcript of Web Search and Text Mining
Lecture 14: Clustering Algorithm
Today’s Topic: Clustering
Document clustering: motivations, document representations, success criteria
Clustering algorithms: partitional, hierarchical
What is clustering?
Clustering: the process of grouping a set of objects into classes of similar objects.
The commonest form of unsupervised learning. Unsupervised learning = learning from raw data, as opposed to supervised learning, where a classification of examples is given.
A common and important task that finds many applications in IR and elsewhere.
Why cluster documents?
Whole corpus analysis/navigation: better user interface.
Improving recall in search applications: better search results.
Better navigation of search results: effective “user recall” will be higher.
Speeding up vector space retrieval: faster search.
Yahoo! Hierarchy
[Figure: a browsable topic tree rooted at www.yahoo.com/Science, with top-level categories such as agriculture, biology, physics, CS, and space, and subtopics such as agronomy, forestry, crops, dairy, botany, evolution, cell biology, AI, HCI, craft, magnetism, relativity, missions, and courses.]
For improving search recall
Cluster hypothesis: documents with similar text are related.
Therefore, to improve search recall:
Cluster docs in the corpus a priori.
When a query matches a doc D, also return the other docs in the cluster containing D.
Hope: the query “car” will also return docs containing “automobile”, because clustering grouped together docs containing “car” with those containing “automobile”.
Why might this happen?
For better navigation of search results
For grouping search results thematically: clusty.com / Vivisimo.
Issues for clustering
Representation for clustering:
Document representation: vector space? Probabilistic/multinomial?
Need a notion of similarity/distance.
How many clusters? Fixed a priori? (In search results: space constraints.) Completely data driven?
Avoid “trivial” clusters, too large or too small. In an application, if a cluster is too large, then for navigation purposes you’ve wasted an extra user click without whittling down the set of documents much.
What makes docs “related”?
Ideal: semantic similarity (NLP). Practical: statistical similarity.
We will use cosine similarity, equivalently distance between normalized doc vectors. Docs as vectors.
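For unit-length vectors, cosine similarity reduces to a dot product. A minimal sketch (the term weights below are invented for illustration):

```python
import math

def normalize(v):
    # Scale a vector to unit length.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine_sim(u, v):
    # Cosine similarity of two vectors = dot product of their normalized forms.
    return sum(a * b for a, b in zip(normalize(u), normalize(v)))

# Toy term-weight vectors (hypothetical tf-idf weights):
d1 = [1.0, 2.0, 0.0]
d2 = [2.0, 4.0, 0.0]   # same direction as d1
d3 = [0.0, 0.0, 5.0]   # shares no terms with d1

print(cosine_sim(d1, d2))  # parallel docs -> 1.0
print(cosine_sim(d1, d3))  # orthogonal docs -> 0.0
```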
Clustering Algorithms
Partitional algorithms: usually start with a random (partial) partitioning, then refine it iteratively.
K-means clustering; model-based clustering (finite mixture models).
Hierarchical algorithms: bottom-up (agglomerative) and top-down (divisive).
Partitioning Algorithms
Partitioning method: construct a partition of n documents into a set of K clusters.
Given: a set of documents and the number K.
Find: a partition into K clusters that optimizes the chosen partitioning criterion.
Globally optimal: exhaustively enumerate all partitions (K^n partitions).
Effective heuristic methods: K-means and K-medoids algorithms.
K-Means
Assumes documents are real-valued vectors.
Clusters based on centroids (aka the center of gravity or mean) of the points in a cluster c:
μ(c) = (1/|c|) Σ_{x in c} x
Reassignment of instances to clusters is based on distance to the current cluster centroids.
(One can equivalently phrase it in terms of similarities.)
K-Means Algorithm
Select K random docs {c1, c2, …, cK} as seeds.
Until clustering converges:
  For each doc xi:
    Assign xi to the cluster Cj such that dist(xi, cj) is minimal.
  (Update the seeds to the centroid of each cluster.)
  For each cluster Cj:
    cj = μ(Cj)
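The loop above can be sketched in plain Python. This is an illustrative toy (squared Euclidean distance on small dense vectors), not an optimized implementation; all data below is invented for the example:

```python
import random

def dist2(x, c):
    # Squared Euclidean distance between two vectors.
    return sum((a - b) ** 2 for a, b in zip(x, c))

def centroid(cluster):
    # Mean of the vectors in a cluster.
    m = len(cluster)
    return [sum(x[d] for x in cluster) / m for d in range(len(cluster[0]))]

def kmeans(docs, k, max_iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(docs, k)          # K random docs as seeds
    assignment = None
    for _ in range(max_iters):
        # Assign each doc to the nearest current centroid.
        new_assignment = [min(range(k), key=lambda j: dist2(x, centers[j]))
                          for x in docs]
        if new_assignment == assignment:   # doc partition unchanged -> converged
            break
        assignment = new_assignment
        # Recompute each centroid from its members (keep old center if empty).
        for j in range(k):
            members = [x for x, a in zip(docs, assignment) if a == j]
            if members:
                centers[j] = centroid(members)
    return centers, assignment

docs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]]
centers, assignment = kmeans(docs, k=2)
print(assignment)  # the two nearby pairs end up in the same cluster
```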
K-Means Example (K=2)
[Figure: pick seeds → reassign clusters → compute centroids → reassign clusters → compute centroids → reassign clusters → converged!]
Termination conditions
Several possibilities, e.g.:
A fixed number of iterations.
Doc partition unchanged.
Centroid positions don’t change.
Does this mean that the docs in a cluster are unchanged?
Convergence
Why should the K-means algorithm ever reach a fixed configuration, i.e., a state in which the clusters don’t change?
K-means convergence in terms of the cost function: neither step increases the objective function value.
More: if one can show that each step strictly decreases the objective function, then no cycles can occur, and the algorithm converges to a fixed configuration.
Convergence of K-Means
Define the goodness measure of cluster k as the sum of squared distances from the cluster centroid:
Gk = Σi (xi – ck)^2 (sum over all xi in cluster k)
G = Σk Gk
Reassignment monotonically decreases G since each vector is assigned to the closest centroid.
Convergence of K-Means
Recomputation monotonically decreases each Gk since (mk is the number of members in cluster k):
Σ (xi – a)^2 reaches its minimum where
Σ –2(xi – a) = 0
Σ xi = mk a
a = (1/mk) Σ xi = ck
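That the mean minimizes Σ (xi – a)^2 can also be checked numerically on a toy 1-D cluster (the numbers are invented for illustration):

```python
def sse(points, a):
    # Sum of squared distances from candidate center a.
    return sum((x - a) ** 2 for x in points)

points = [1.0, 2.0, 6.0]
mean = sum(points) / len(points)  # 3.0

# The mean should beat any other candidate center.
for candidate in [0.0, 1.0, 2.5, 3.5, 6.0]:
    assert sse(points, mean) <= sse(points, candidate)

print(mean, sse(points, mean))  # 3.0 14.0
```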
Relating two consecutive configurations
Illustration in class
Time Complexity
Computing distance between two docs is O(m) where m is the dimensionality of the vectors.
Reassigning clusters: O(Kn) distance computations, or O(Knm).
Computing centroids: Each doc gets added once to some centroid: O(nm).
Assume these two steps are each done once for I iterations: O(IKnm).
However, fast algorithms exist (using KD-trees).
NP-Hard Problem
Consider the problem of
min G = Σk Gk
for fixed K. It has been proved that even for K = 2, the problem is NP-hard.
The total number of partitions is K^n, where n is the number of points.
K-means is a local search method for minimizing G, and its result depends on the initial seed selection.
Non-optimal stable configuration
Illustration using three points
Seed Choice
Results can vary based on random seed selection. Some seeds can result in a poor convergence rate, or convergence to sub-optimal clusterings.
Select good seeds using a heuristic (e.g., the doc least similar to any existing mean).
Try out multiple starting points.
Initialize with the results of another clustering method.
Example showing sensitivity to seeds: in the figure, if you start with B and E as centroids you converge to {A,B,C} and {D,E,F}; if you start with D and F you converge to {A,B,D,E} and {C,F}.
How to Choose Initial Seeds?
Kmeans++: let D(x) be the shortest distance from x to the current centers.
Choose the initial center c1 uniformly at random from X.
Choose the next center ci: pick ci = x’ from X with probability D(x’)^2 / Σx D(x)^2.
Repeat until we have K centers.
Run K-means from the chosen K centers.
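The seeding step can be sketched as follows, assuming squared Euclidean distance and plain Python lists (the data points are invented for illustration):

```python
import random

def dist2(x, c):
    # Squared Euclidean distance.
    return sum((a - b) ** 2 for a, b in zip(x, c))

def kmeanspp_seeds(points, k, rng=None):
    rng = rng or random.Random(0)
    # First center: chosen uniformly from X.
    centers = [rng.choice(points)]
    while len(centers) < k:
        # D(x)^2: squared distance to the closest already-chosen center.
        weights = [min(dist2(x, c) for c in centers) for x in points]
        # Sample the next center with probability D(x)^2 / sum D(x)^2.
        centers.append(rng.choices(points, weights=weights, k=1)[0])
    return centers

points = [[0.0, 0.0], [0.1, 0.1], [10.0, 10.0], [10.1, 9.9]]
seeds = kmeanspp_seeds(points, k=2)
print(seeds)  # the two seeds almost surely come from different clumps
```

Points already chosen as centers get weight D(x)^2 = 0, so far-away points dominate the sampling, which is what spreads the seeds out.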
Theorem on Kmeans++
Let G be the objective function value from Kmeans++, and G_opt the optimal objective function value. Then
E(G) <= 8(ln K + 2) G_opt
How Many Clusters?
Number of clusters K is given: partition n docs into a predetermined number of clusters.
Finding the “right” number of clusters is part of the problem: given docs, partition them into an “appropriate” number of subsets.
E.g., for query results, the ideal value of K is not known up front, though the UI may impose limits.
K not specified in advance
G(K) decreases as K increases.
Solve an optimization problem: penalize having lots of clusters; application dependent, e.g., a compressed summary of the search results list.
Tradeoff between having more clusters (better focus within each cluster) and having too many clusters.
BIC-type criteria
This is a difficult problem. A common approach is to minimize a BIC-type criterion:
G(# of clusters) + a * (dimension) * (# of clusters) * log(# of points)
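The criterion above can be applied once G(K) has been computed for a range of candidate K. In the sketch below, the G(K) values, the constant a, and the data sizes are all invented for illustration:

```python
import math

def bic_score(G_K, K, m, n, a=1.0):
    # G(K) plus a penalty growing with model size: a * m * K * log(n).
    return G_K + a * m * K * math.log(n)

# Hypothetical within-cluster costs G(K) from prior K-means runs
# on n = 1000 docs of dimension m = 2 (numbers invented for illustration).
G = {1: 5000.0, 2: 1200.0, 3: 400.0, 4: 380.0, 5: 370.0}
n, m = 1000, 2

# Pick the K that minimizes the penalized criterion: past K = 4 the small
# drop in G(K) no longer pays for the extra penalty term.
best_K = min(G, key=lambda K: bic_score(G[K], K, m, n))
print(best_K)  # -> 4
```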
K-means issues, variations, etc.
Recomputing the centroid after every assignment (rather than after all points are re-assigned) can improve the speed of convergence of K-means.
Assumes clusters are spherical in vector space: sensitive to coordinate changes, weighting, etc.
Disjoint and exhaustive.
Doesn’t have a notion of “outliers”.