Transcript of Course on Data Mining (581550-4) - cs.helsinki.fi

14.11.2001 Data mining: Clustering 1

Course schedule: Intro/Ass. Rules (24./26.10.), Episodes (30.10.), Text Mining (7.11.), Clustering (14.11.), KDD Process (21.11.), Appl./Summary (28.11.), Home Exam

Course on Data Mining (581550-4)

14.11.2001 Data mining: Clustering 2

Today 14.11.2001

• Today's subject:
o Classification, clustering

• Next week's program:
o Lecture: Data mining process
o Exercise: Classification, clustering
o Seminar: Classification, clustering

Course on Data Mining (581550-4)


14.11.2001 Data mining: Clustering 3

I. Classification and prediction

II. Clustering and similarity

Classification and clustering

14.11.2001 Data mining: Clustering 4

Cluster analysis - Overview

• What is cluster analysis?
• Similarity and dissimilarity
• Types of data in cluster analysis
• Major clustering methods
• Partitioning methods
• Hierarchical methods
• Outlier analysis
• Summary


14.11.2001 Data mining: Clustering 5

• Cluster: a collection of data objects

o similar to one another within the same cluster

o dissimilar to the objects in the other clusters

• Aim of clustering: to group a set of data objects into clusters

What is cluster analysis?

14.11.2001 Data mining: Clustering 6

Typical uses of clustering

• As a stand-alone tool to get insight into data distribution

• As a preprocessing step for other algorithms

Used as?


14.11.2001 Data mining: Clustering 7

Applications of clustering

• Marketing: discovering distinct customer groups in a purchase database

• Land use: identifying areas of similar land use in an earth observation database

• Insurance: identifying groups of motor insurance policy holders with a high average claim cost

• City-planning: identifying groups of houses according to their house type, value, and geographical location

14.11.2001 Data mining: Clustering 8

What is good clustering?

• A good clustering method will produce high quality clusters with

o high intra-class similarity

o low inter-class similarity

• The quality of a clustering result depends on

o the similarity measure used

o implementation of the similarity measure

• The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns


14.11.2001 Data mining: Clustering 9

Requirements of clustering in data mining (1)

• Scalability

• Ability to deal with different types of attributes

• Discovery of clusters with arbitrary shape

• Minimal requirements for domain knowledge to determine input parameters

14.11.2001 Data mining: Clustering 10

Requirements of clustering in data mining (2)

• Ability to deal with noise and outliers

• Insensitivity to order of input records

• High dimensionality

• Incorporation of user-specified constraints

• Interpretability and usability


14.11.2001 Data mining: Clustering 11

Similarity and dissimilarity between objects (1)

• There is no single definition of similarity or dissimilarity between data objects

• The definition of similarity or dissimilarity between objects depends on

o the type of the data considered

o what kind of similarity we are looking for

14.11.2001 Data mining: Clustering 12

Similarity and dissimilarity between objects (2)

• Similarity/dissimilarity between objects is often expressed in terms of a distance measure d(x,y)

• Ideally, every distance measure should be a metric, i.e., it should satisfy the following conditions:

1. d(x,y) ≥ 0
2. d(x,y) = 0 iff x = y
3. d(x,y) = d(y,x)
4. d(x,z) ≤ d(x,y) + d(y,z)


14.11.2001 Data mining: Clustering 13

Type of data in cluster analysis

• Interval-scaled variables
• Binary variables
• Nominal, ordinal, and ratio variables
• Variables of mixed types
• Complex data types

14.11.2001 Data mining: Clustering 14

Interval-scaled variables (1)

• Continuous measurements on a roughly linear scale

• For example, weight, height and age

• The measurement unit can affect the cluster analysis

• To avoid dependence on the measurement unit, we should standardize the data


14.11.2001 Data mining: Clustering 15

Interval-scaled variables (2)

To standardize the measurements:

• calculate the mean absolute deviation

s_f = (1/n)(|x_1f − m_f| + |x_2f − m_f| + ... + |x_nf − m_f|)

where m_f = (1/n)(x_1f + x_2f + ... + x_nf)

• calculate the standardized measurement (z-score)

z_if = (x_if − m_f) / s_f
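A minimal Python sketch of this standardization; the example heights are illustrative:

```python
def standardize(column):
    """Standardize one interval-scaled variable: z_if = (x_if - m_f) / s_f,
    where s_f is the mean absolute deviation defined above."""
    n = len(column)
    m = sum(column) / n                          # mean m_f
    s = sum(abs(x - m) for x in column) / n      # mean absolute deviation s_f
    return [(x - m) / s for x in column]

# Heights in cm; after standardization the measurement unit no longer matters.
print(standardize([170.0, 180.0, 165.0, 185.0]))
```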

14.11.2001 Data mining: Clustering 16

Interval-scaled variables (3)

• One group of popular distance measures for interval-scaled variables are Minkowski distances

d(i,j) = (|x_i1 − x_j1|^q + |x_i2 − x_j2|^q + ... + |x_ip − x_jp|^q)^(1/q)

where i = (x_i1, x_i2, ..., x_ip) and j = (x_j1, x_j2, ..., x_jp) are two p-dimensional data objects, and q is a positive integer


14.11.2001 Data mining: Clustering 17

Interval-scaled variables (4)

• If q = 1, the distance measure is Manhattan (or city block) distance

d(i,j) = |x_i1 − x_j1| + |x_i2 − x_j2| + ... + |x_ip − x_jp|

• If q = 2, the distance measure is Euclidean distance

d(i,j) = (|x_i1 − x_j1|^2 + |x_i2 − x_j2|^2 + ... + |x_ip − x_jp|^2)^(1/2)
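Both special cases fall out of a single Minkowski implementation; a minimal Python sketch (the example points are illustrative):

```python
def minkowski(i, j, q=2):
    """Minkowski distance between two p-dimensional objects;
    q = 1 gives Manhattan distance, q = 2 gives Euclidean distance."""
    assert len(i) == len(j) and q >= 1
    return sum(abs(a - b) ** q for a, b in zip(i, j)) ** (1.0 / q)

x, y = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
print(minkowski(x, y, q=1))  # Manhattan: 3 + 4 + 0 = 7.0
print(minkowski(x, y, q=2))  # Euclidean: sqrt(9 + 16) = 5.0
```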

14.11.2001 Data mining: Clustering 18

Binary variables (1)

• A binary variable has only two states: 0 or 1

• A contingency table for binary data

              Object j
               1     0    sum
Object i   1   a     b    a+b
           0   c     d    c+d
         sum  a+c   b+d    p


14.11.2001 Data mining: Clustering 19

Binary variables (2)

• Simple matching coefficient (invariant similarity, if the binary variable is symmetric):

d(i,j) = (b + c) / (a + b + c + d)

• Jaccard coefficient (noninvariant similarity, if the binary variable is asymmetric):

d(i,j) = (b + c) / (a + b + c)

14.11.2001 Data mining: Clustering 20

Binary variables (3)

Example: dissimilarity between binary variables:

• a patient record table

• eight attributes, of which

o gender is a symmetric attribute, and

o the remaining attributes are asymmetric binary

Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
Jack  M       Y      N      P       N       N       N
Mary  F       Y      N      P       N       P       N
Jim   M       Y      P      N       N       N       N


14.11.2001 Data mining: Clustering 21

Binary variables (4)

• Let the values Y and P be set to 1, and the value N be set to 0

• Compute distances between patients based on the asymmetric variables by using Jaccard coefficient

d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33

d(jack, jim) = (1 + 1) / (1 + 1 + 1) = 0.67

d(jim, mary) = (1 + 2) / (1 + 1 + 2) = 0.75
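A small Python check of these numbers, with Y/P mapped to 1 and N to 0 as above (the Jaccard formula is from the previous slide; variable names are illustrative):

```python
def jaccard_distance(i, j):
    """Jaccard (noninvariant) dissimilarity for asymmetric binary
    variables: d = (b + c) / (a + b + c), ignoring 0-0 matches (d)."""
    a = sum(1 for x, y in zip(i, j) if x == 1 and y == 1)
    b = sum(1 for x, y in zip(i, j) if x == 1 and y == 0)
    c = sum(1 for x, y in zip(i, j) if x == 0 and y == 1)
    return (b + c) / (a + b + c)

# Patient records over the six asymmetric attributes only.
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(round(jaccard_distance(jack, mary), 2))  # 0.33
print(round(jaccard_distance(jack, jim), 2))   # 0.67
print(round(jaccard_distance(jim, mary), 2))   # 0.75
```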

14.11.2001 Data mining: Clustering 22

Nominal variables

• A generalization of the binary variable in that it can take

more than 2 states, e.g., red, yellow, blue, green

• Method 1: simple matching

o m: # of matches, p: total # of variables

• Method 2: use a large number of binary variables

o create a new binary variable for each of the M nominal states

d(i,j) = (p − m) / p
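A minimal sketch of Method 1 in Python (the example states are illustrative):

```python
def nominal_distance(i, j):
    """Simple matching dissimilarity for nominal variables:
    d(i,j) = (p - m) / p, where m counts matching states."""
    p = len(i)
    m = sum(1 for x, y in zip(i, j) if x == y)
    return (p - m) / p

print(nominal_distance(["red", "blue", "green"],
                       ["red", "yellow", "green"]))  # 1/3
```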


14.11.2001 Data mining: Clustering 23

Ordinal variables

• An ordinal variable can be discrete or continuous• Order of values is important, e.g., rank

• Can be treated like interval-scaled

o replace x_if by their rank r_if

o map the range of each variable onto [0, 1] by replacing i-th object in the f-th variable by

o compute the dissimilarity using methods for interval-scaled variables

z_if = (r_if − 1) / (M_f − 1)

where r_if ∈ {1, ..., M_f}
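A small Python sketch of this rank mapping (the ordered state list is an assumed input):

```python
def ordinal_to_interval(values, ordered_states):
    """Map ordinal values onto [0, 1]: replace each value by its rank
    r_if in {1, ..., M_f}, then compute z_if = (r_if - 1) / (M_f - 1)."""
    rank = {state: r for r, state in enumerate(ordered_states, start=1)}
    M = len(ordered_states)
    return [(rank[v] - 1) / (M - 1) for v in values]

# The resulting z-values can be fed to the interval-scaled distances above.
print(ordinal_to_interval(["bronze", "gold", "silver"],
                          ["bronze", "silver", "gold"]))  # [0.0, 1.0, 0.5]
```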

14.11.2001 Data mining: Clustering 24

Ratio-scaled variables

• A positive measurement on a nonlinear scale, approximately at exponential scale

o for example, Ae^(Bt) or Ae^(−Bt)

• Methods:

o treat them like interval-scaled variables — not a good choice! (why?)

o apply logarithmic transformation y_if = log(x_if)

o treat them as continuous ordinal data and treat their rank as interval-scaled


14.11.2001 Data mining: Clustering 25

Variables of mixed types (1)

• A database may contain all six types of variables

• One may use a weighted formula to combine their effects:

d(i,j) = ( Σ_{f=1..p} δ_ij^(f) d_ij^(f) ) / ( Σ_{f=1..p} δ_ij^(f) )

where the indicator δ_ij^(f) = 0 if x_if or x_jf is missing, or if x_if = x_jf = 0; otherwise δ_ij^(f) = 1

14.11.2001 Data mining: Clustering 26

Variables of mixed types (2)

Contribution of variable f to distance d(i,j):

• if f is binary or nominal:

d_ij^(f) = 0 if x_if = x_jf; d_ij^(f) = 1 otherwise

• if f is interval-based: use the normalized distance

• if f is ordinal or ratio-scaled

o compute ranks r_if and z_if = (r_if − 1) / (M_f − 1), and

o treat z_if as interval-scaled
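A rough Python sketch of the weighted formula from these two slides; the type labels, the assumption that interval variables are already normalized to [0, 1], and the example record are all illustrative:

```python
def mixed_distance(i, j, types):
    """Weighted-formula dissimilarity for objects with mixed variable
    types. `types` gives each variable's type: 'nominal', 'binary'
    (asymmetric), or 'interval'. None marks a missing value."""
    num = den = 0.0
    for x, y, t in zip(i, j, types):
        # Indicator delta_ij^(f): skip missing values and 0-0 matches
        # of asymmetric binary variables.
        if x is None or y is None:
            continue
        if t == "binary" and x == 0 and y == 0:
            continue
        if t in ("nominal", "binary"):
            d = 0.0 if x == y else 1.0   # matching contributes 0, mismatch 1
        else:
            d = abs(x - y)               # interval, assumed normalized to [0, 1]
        num += d
        den += 1.0
    return num / den if den else 0.0

# Hypothetical record: (color, smoker flag, normalized age).
print(mixed_distance(("red", 1, 0.2), ("blue", 0, 0.5),
                     ["nominal", "binary", "interval"]))  # (1 + 1 + 0.3) / 3
```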


14.11.2001 Data mining: Clustering 27

Complex data types

• Not all objects considered in data mining are relational => complex types of data

o examples of such data are spatial data, multimedia data, genetic data, time-series data, text data, and data collected from the World-Wide Web

• Often totally different similarity or dissimilarity measures than above

o can, for example, mean using string and/or sequence matching, or methods of information retrieval

14.11.2001 Data mining: Clustering 28

Major clustering methods

• Partitioning methods
• Hierarchical methods
• Density-based methods
• Grid-based methods
• Model-based methods (conceptual clustering, neural networks)


14.11.2001 Data mining: Clustering 29

Partitioning methods

• A partitioning method: construct a partition of a database D of n objects into a set of k clusters such that

o each cluster contains at least one object

o each object belongs to exactly one cluster

• Given a k, find a partition of k clusters that optimizes the chosen partitioning criterion

14.11.2001 Data mining: Clustering 30

Criteria for judging the quality of partitions

• Global optimal: exhaustively enumerate all partitions

• Heuristic methods:

o k-means (MacQueen’67): each cluster is represented by the center of the cluster (centroid)

o k-medoids (Kaufman & Rousseeuw’87): each cluster is represented by one of the objects in the cluster (medoid)


14.11.2001 Data mining: Clustering 31

K-means clustering method (1)

• Input to the algorithm: the number of clusters k, and a database of n objects

• Algorithm consists of four steps:

1. partition objects into k nonempty subsets/clusters

2. compute a seed point as the centroid (the mean of the objects in the cluster) for each cluster in the current partition

3. assign each object to the cluster with the nearest centroid

4. go back to Step 2, stop when there are no more new assignments

14.11.2001 Data mining: Clustering 32

K-means clustering method (2)

Alternative algorithm also consists of four steps:

1. arbitrarily choose k objects as the initial cluster centers (centroids)

2. (re)assign each object to the cluster with the nearest centroid

3. update the centroids

4. go back to Step 2, stop when there are no more new assignments
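A compact Python sketch of this alternative (Lloyd-style) procedure, assuming points are given as coordinate tuples and using squared Euclidean distance for the nearest-centroid test:

```python
import random

def kmeans(points, k, max_iter=100):
    """Alternative k-means algorithm from the slide: pick k initial
    centroids, then alternate (re)assignment and centroid updates
    until assignments no longer change."""
    centroids = random.sample(points, k)          # step 1: initial centroids
    assignment = None
    for _ in range(max_iter):
        # Step 2: assign each object to the cluster with the nearest centroid.
        new_assignment = [
            min(range(k),
                key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        if new_assignment == assignment:          # step 4: stop when stable
            break
        assignment = new_assignment
        # Step 3: update each centroid to the mean of its cluster.
        for c in range(k):
            members = [pt for pt, a in zip(points, assignment) if a == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return assignment, centroids

data = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5), (1.0, 0.5)]
print(kmeans(data, k=2))
```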


14.11.2001 Data mining: Clustering 33

K-means clustering method - Example

[Figure: four scatter plots with axes 0-10 illustrating successive k-means iterations on example points: initial assignment, centroid update, and reassignment until convergence.]

14.11.2001 Data mining: Clustering 34

Strengths of K-means clustering method

• Relatively scalable in processing large data sets

• Relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations. Normally, k, t << n.

• Often terminates at a local optimum; the global optimum may be found using techniques such as genetic algorithms


14.11.2001 Data mining: Clustering 35

Weaknesses of K-means clustering method

• Applicable only when the mean of objects is defined

• Need to specify k, the number of clusters, in advance

• Unable to handle noisy data and outliers

• Not suitable for discovering clusters with non-convex shapes or clusters of very different size

14.11.2001 Data mining: Clustering 36

Variations of K-means clustering method (1)

• A few variants of the k-means, which differ in:

o selection of the initial k centroids

o dissimilarity calculations

o strategies for calculating cluster centroids


14.11.2001 Data mining: Clustering 37

Variations of K-means clustering method (2)

• Handling categorical data: k-modes (Huang'98)

o replacing means of clusters with modes

o using new dissimilarity measures to deal with categorical objects

o using a frequency-based method to update modes of clusters

• A mixture of categorical and numerical data: k-prototype method

14.11.2001 Data mining: Clustering 38

K-medoids clustering method

• Input to the algorithm: the number of clusters k, and a database of n objects

• Algorithm consists of four steps:

1. arbitrarily choose k objects as the initial medoids (representative objects)

2. assign each remaining object to the cluster with the nearest medoid

3. select a nonmedoid and replace one of the medoids with it if this improves the clustering

4. go back to Step 2, stop when there are no more new assignments
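A rough Python sketch of this scheme as a simplified PAM-style swap; using the total distance of objects to their nearest medoid as the improvement criterion is an assumption consistent with the usual formulation:

```python
def kmedoids(points, k, dist, max_iter=20):
    """Simplified k-medoids: greedily swap a medoid for a non-medoid
    whenever the swap lowers the total distance of objects to their
    nearest medoid (steps 3-4 on the slide)."""
    def cost(medoids):
        return sum(min(dist(p, m) for m in medoids) for p in points)

    medoids = points[:k]                     # step 1: arbitrary initial medoids
    best = cost(medoids)
    for _ in range(max_iter):
        improved = False
        for mi in range(k):                  # step 3: try replacing each medoid
            for p in points:
                if p in medoids:
                    continue
                candidate = medoids[:mi] + [p] + medoids[mi + 1:]
                c = cost(candidate)
                if c < best:
                    medoids, best, improved = candidate, c, True
        if not improved:                     # step 4: stop when no swap helps
            break
    return medoids

manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
print(kmedoids([(1, 1), (2, 1), (8, 8), (9, 9), (1, 2)], 2, manhattan))
```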


14.11.2001 Data mining: Clustering 39

Hierarchical methods

• A hierarchical method: construct a hierarchy of clusterings, not just a single partition of objects

• The number of clusters k is not required as an input

• Use a distance matrix as the clustering criterion

• A termination condition can be used (e.g., a number of clusters)

14.11.2001 Data mining: Clustering 40

A tree of cluster ings

• The hierarchy of clusterings is often given as a clustering tree, also called a dendrogram

o leaves of the tree represent the individual objects

o internal nodes of the tree represent the clusters


14.11.2001 Data mining: Clustering 41

Two types of hierarchical methods (1)

Two main types of hierarchical clustering techniques:

• agglomerative (bottom-up):
o place each object in its own cluster (a singleton)
o merge in each step the two most similar clusters until there is only one cluster left or the termination condition is satisfied

• divisive (top-down):
o start with one big cluster containing all the objects
o divide the most distinctive cluster into smaller clusters and proceed until there are n clusters or the termination condition is satisfied

14.11.2001 Data mining: Clustering 42

Two types of hierarchical methods (2)

[Figure: dendrogram over objects a, b, c, d, e. Reading left to right (steps 0-4, agglomerative): a and b merge into {a,b}, d and e into {d,e}, then c joins {d,e} to form {c,d,e}, and finally {a,b} and {c,d,e} merge into {a,b,c,d,e}. Reading right to left (steps 4-0) gives the divisive direction.]


14.11.2001 Data mining: Clustering 43

Inter-cluster distances

• Three widely used ways of defining the inter-cluster distance, i.e., the distance between two separate clusters, are

o single linkage method (nearest neighbor): d(i,j) = min { d(x,y) : x ∈ Ci, y ∈ Cj }

o complete linkage method (furthest neighbor): d(i,j) = max { d(x,y) : x ∈ Ci, y ∈ Cj }

o average linkage method (unweighted pair-group average): d(i,j) = avg { d(x,y) : x ∈ Ci, y ∈ Cj }
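A direct Python transcription of the three definitions, with clusters as lists of points and the base distance d(x,y) passed in as a parameter:

```python
def single_link(ci, cj, dist):
    """Single linkage: distance between the two nearest members."""
    return min(dist(x, y) for x in ci for y in cj)

def complete_link(ci, cj, dist):
    """Complete linkage: distance between the two furthest members."""
    return max(dist(x, y) for x in ci for y in cj)

def average_link(ci, cj, dist):
    """Average linkage: unweighted mean of all pairwise distances."""
    pairs = [dist(x, y) for x in ci for y in cj]
    return sum(pairs) / len(pairs)

euclid = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
ci, cj = [(0.0, 0.0), (1.0, 0.0)], [(4.0, 0.0), (6.0, 0.0)]
print(single_link(ci, cj, euclid))    # 3.0
print(complete_link(ci, cj, euclid))  # 6.0
print(average_link(ci, cj, euclid))   # (4 + 6 + 3 + 5) / 4 = 4.5
```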

14.11.2001 Data mining: Clustering 44

Strengths of hierarchical methods

• Conceptually simple

• Theoretical properties are well understood

• When clusters are merged/split, the decision is permanent => the number of different alternatives that need to be examined is reduced


14.11.2001 Data mining: Clustering 45

Weaknesses of hierarchical methods

• Merging/splitting of clusters is permanent => erroneous decisions are impossible to correct later

• Divisive methods can be computationally hard

• Methods are not (necessarily) scalable for large data sets

14.11.2001 Data mining: Clustering 46

Outlier analysis (1)

• Outliers

o are objects that are considerably dissimilar from the remainder of the data

o can be caused by a measurement or execution error, or

o are the result of inherent data variability

• Many data mining algorithms try

o to minimize the influence of outliers

o to eliminate the outliers


14.11.2001 Data mining: Clustering 47

Outlier analysis (2)

• Minimizing the effect of outliers and/or eliminating the outliers may cause information loss

• Outliers themselves may be of interest => outlier mining

• Applications of outlier mining

o Fraud detection

o Customized marketing

o Medical treatments

14.11.2001 Data mining: Clustering 48

• Cluster analysis groups objects based on their similarity

• Cluster analysis has wide applications

• Measures of similarity can be computed for various types of data

• Selection of the similarity measure depends on the data used and the type of similarity we are searching for

Summary (1)


14.11.2001 Data mining: Clustering 49

• Clustering algorithms can be categorized into
o partitioning methods,
o hierarchical methods,
o density-based methods,
o grid-based methods, and
o model-based methods

• There are still lots of research issues on cluster analysis

Summary (2)

14.11.2001 Data mining: Clustering 50

Seminar Presentations/Groups 7-8

Classification of spatial data

K. Koperski, J. Han, N. Stefanovic: "An Efficient Two-Step Method of Classification of Spatial Data", SDH'98


14.11.2001 Data mining: Clustering 51

Seminar Presentations/Groups 7-8

WEBSOM

K. Lagus, T. Honkela, S. Kaski, T. Kohonen: "Self-organizing Maps of Document Collections: A New Approach to Interactive Exploration", KDD'96

T. Honkela, S. Kaski, K. Lagus, T. Kohonen: "WEBSOM – Self-Organizing Maps of Document Collections", WSOM'97

14.11.2001 Data mining: Clustering 52

Thanks to Jiawei Han from Simon Fraser University for his slides, which greatly helped in preparing this lecture!

Course on Data Mining


14.11.2001 Data mining: Clustering 53

References - clustering

• R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98

• M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.

• M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99.

• P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.

• M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.

• M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.

• D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.

• D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. In Proc. VLDB’98.

14.11.2001 Data mining: Clustering 54

References - clustering

• S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for large databases. SIGMOD'98.

• A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.

• L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. John Wiley & Sons, 1990.

• E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB’98.

• G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley and Sons, 1988.

• P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.

• R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.

• E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.

• G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB’98.

• W. Wang, J. Yang, and R. Muntz. STING: A statistical information grid approach to spatial data mining. VLDB'97.

• T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: an efficient data clustering method for very large databases. SIGMOD'96.