Design and Evaluation of a Parallel Execution Framework for the CLEVER Clustering Algorithm


PAR-CLEVER : A Faster Clustering Algorithm

Chung Sheng CHEN, Nauful SHAIKH, Panitee CHAROENRATTANARUK, Christoph F. EICK, Nouhad RIZK and Edgar GABRIEL
Department of Computer Science, University of Houston

Talk Organization
1. Randomized Hill Climbing
2. CLEVER: A Prototype-based Clustering Algorithm which Supports Fitness Functions
3. OpenMP and CUDA Versions of CLEVER
4. Experimental Results
5. Summary

In this talk, a framework for region discovery in spatial datasets will be introduced. The first part of the talk motivates the need for such a framework. Next, more details about the framework are given and its use for solving hotspot discovery problems is discussed. Finally, the generalizability of the framework is discussed.

1. Randomized Hill Climbing

Randomized Hill Climbing: Sample p points randomly in the neighborhood of the currently best solution; determine the best solution of the p sampled points. If it is better than the current solution, make it the new current solution and continue the search; otherwise, terminate, returning the current solution.

Advantages: easy to apply, does not need many resources, usually fast.
Problems: How do I define my neighborhood? What parameter p should I choose?


Example: Randomized Hill Climbing

Maximize f(x,y,z) = |x-y-0.2| * |x*z-0.8| * |0.3-z*z*y| with x, y, z in [0,1]

Neighborhood Design: Create 50 solutions s, such that:

s = (min(1, max(0, x+r1)), min(1, max(0, y+r2)), min(1, max(0, z+r3)))

with r1, r2, r3 being random numbers in [-0.05, +0.05].
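A minimal C++ sketch of this example, assuming the maximization variant of randomized hill climbing described above (the objective function, the 50-neighbor sampling rate, and the step range are taken from the slide; everything else is illustrative):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <random>

// Objective from the slide: f(x,y,z) = |x-y-0.2| * |x*z-0.8| * |0.3-z*z*y|
double f(const std::array<double, 3>& s) {
    return std::fabs(s[0] - s[1] - 0.2) * std::fabs(s[0] * s[2] - 0.8) *
           std::fabs(0.3 - s[2] * s[2] * s[1]);
}

int main() {
    std::mt19937 gen(std::random_device{}());
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::uniform_real_distribution<double> step(-0.05, 0.05);

    std::array<double, 3> cur = {unit(gen), unit(gen), unit(gen)};
    const int p = 50;  // neighbors sampled per iteration, as on the slide

    while (true) {
        std::array<double, 3> best = cur;
        for (int i = 0; i < p; ++i) {
            std::array<double, 3> n;
            for (int d = 0; d < 3; ++d)  // clip each coordinate to [0,1]
                n[d] = std::min(1.0, std::max(0.0, cur[d] + step(gen)));
            if (f(n) > f(best)) best = n;
        }
        if (f(best) <= f(cur)) break;  // no sampled neighbor improves: stop
        cur = best;                    // otherwise continue from the best neighbor
    }
    std::printf("f(%.3f, %.3f, %.3f) = %.5f\n", cur[0], cur[1], cur[2], f(cur));
    return 0;
}
```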

2. CLEVER: Clustering with Plug-in Fitness Functions

Over the last five years, the UH-DMML Research Group at the University of Houston has developed families of clustering algorithms that find contiguous spatial clusters by maximizing a plug-in fitness function. This work is motivated by a mismatch between the evaluation measures of traditional clustering algorithms (such as cluster compactness) and what domain experts are actually looking for. Plug-in fitness functions allow domain experts to instruct clustering algorithms with respect to the desirable properties of good clusters that the algorithm should seek.


Region Discovery Framework


The algorithms we currently investigate solve the following problem.

Given:
- A dataset O with a schema R
- A distance function d defined on instances of R
- A fitness function q(X) that evaluates clusterings X = {c1, ..., ck} as follows:

q(X) = Σ_{c∈X} reward(c) = Σ_{c∈X} i(c) * size(c)^β, with β > 1

Objective: Find c1, ..., ck ⊆ O such that:
1. ci ∩ cj = ∅ if i ≠ j
2. X = {c1, ..., ck} maximizes q(X)
3. All clusters ci ∈ X are contiguous (each pair of objects belonging to ci has to be Delaunay-connected with respect to ci and to d)
4. c1 ∪ ... ∪ ck ⊆ O

The clusters c1, ..., ck are usually ranked based on the reward each cluster receives, and low-reward clusters are frequently not reported.
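The reward computation is simple enough to state in code. A minimal C++ sketch of the fitness function above (the Cluster record is illustrative; how i(c) is computed depends on the plug-in interestingness function):

```cpp
#include <cmath>
#include <vector>

// A cluster reduced to what q(X) needs: its size and its interestingness i(c).
struct Cluster {
    int size;
    double interestingness;  // i(c), supplied by the plug-in function
};

// q(X) = sum over c in X of i(c) * size(c)^beta, beta > 1, so larger clusters
// of equal interestingness earn disproportionately higher rewards.
double fitness(const std::vector<Cluster>& X, double beta) {
    double q = 0.0;
    for (const Cluster& c : X)
        q += c.interestingness * std::pow(static_cast<double>(c.size), beta);
    return q;
}
```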


Our approach employs reward-based fitness functions of the form defined above: clusters receive rewards based on their interestingness, and rewards increase nonlinearly with cluster size depending on the value of β, favoring clusters c with more objects. This is important because we want to encourage region discovery algorithms to merge neighboring clusters that have similar characteristics. The quality of a clustering is the sum of the rewards its individual clusters receive.

Example 1: Finding Regional Co-location Patterns in Spatial Data

Objective: Find co-location regions using various clustering algorithms and novel fitness functions.

Applications:
1. Finding regions on planet Mars where shallow and deep ice are co-located, using point and raster datasets. In Figure 1, regions in red have very high co-location and regions in blue have anti co-location.
2. Finding co-location patterns involving chemical concentrations with values on the wings of their statistical distribution in the Texas ground water supply. Figure 2 indicates the discovered regions and their associated chemical patterns.

Figure 1: Co-location regions involving deep and shallow ice on Mars
Figure 2: Chemical co-location patterns in the Texas Water Supply

Example 2: Regional Regression

Geo-regression approaches: multiple regression functions are used that vary depending on location.

Regional Regression:
- Discover regions with strong relationships between the dependent and independent variables
- Construct a regional regression function for each region
- When predicting the dependent variable of an object, use the regression function associated with the location of the object (see the sketch below)
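A minimal C++ sketch of the prediction step, assuming the regions and their per-region linear models have already been learned (both types are illustrative, not part of the original system):

```cpp
#include <cstddef>
#include <vector>

// Illustrative per-region linear model: y = b + w . x
struct LinearModel {
    std::vector<double> w;  // weights for the independent variables
    double b = 0.0;         // intercept

    double predict(const std::vector<double>& x) const {
        double y = b;
        for (std::size_t j = 0; j < w.size(); ++j) y += w[j] * x[j];
        return y;
    }
};

// Dispatch on the region containing the object's location; the mapping from
// location to region comes from the discovered clustering and is left abstract.
double predictRegional(const std::vector<LinearModel>& models,
                       int region,  // index of the region containing the object
                       const std::vector<double>& x) {
    return models[region].predict(x);
}
```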


Representative-based Clustering

[Figure: objects in a two-dimensional attribute space (Attribute 1 vs. Attribute 2) partitioned into four clusters around their representatives]

Objective: Find a set of objects O_R such that the clustering X obtained by using the objects in O_R as representatives minimizes q(X).
Characteristic: clusters are formed by assigning each object to the closest representative.
Popular algorithms: K-means, K-medoids/PAM, CLEVER, ...


The CLEVER Algorithm

- A prototype-based clustering algorithm which supports plug-in fitness functions
- Uses a randomized hill climbing procedure to find a good set of prototype data objects that represent clusters; "good" means maximizing the plug-in fitness function
- Searches for the correct number of clusters
- CLEVER is powerful but usually slow

Hill Climbing Procedure

[Figure: CLEVER's hill climbing procedure built from three components: the plug-in fitness function, the neighboring-solutions generator, and the cluster-member assignment step]

Pseudo Code of CLEVER

Inputs: Dataset O, k, neighborhood-size, p, q, β, object-distance-function d or distance matrix D, i-max
Outputs: Clustering X, fitness q(X), rewards for clusters in X

Algorithm:
1. Create a current solution by randomly selecting k representatives from O.
2. If i-max iterations have been done, terminate with the current solution.
3. Create p neighbors of the current solution randomly, using the given neighborhood definition.
4. If the best neighbor improves the fitness q, it becomes the current solution. Go back to step 2.
5. If the fitness does not improve, the solution neighborhood is re-sampled by generating more neighbors (more precisely, first 2*p solutions and then (q-2)*p solutions are re-sampled). If re-sampling does not lead to a better solution, terminate, returning the current solution (however, clusters that receive a reward of 0 are considered outliers and are therefore not returned); otherwise, go back to step 2, replacing the current solution with the best solution found by re-sampling.
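The control flow of steps 2-5 is compact enough to sketch. A generic C++ rendering of the search loop, assuming the plug-in pieces (fitness evaluation and neighbor generation) are supplied by the caller; this illustrates the pseudocode, it is not the authors' implementation:

```cpp
#include <functional>

template <typename Solution>
Solution cleverSearch(Solution current, int p, int q, int iMax,
                      std::function<double(const Solution&)> fitness,
                      std::function<Solution(const Solution&)> neighbor) {
    // Best of 'count' randomly generated neighbors of the current solution.
    auto bestOf = [&](int count) {
        Solution best = neighbor(current);
        for (int i = 1; i < count; ++i) {
            Solution n = neighbor(current);
            if (fitness(n) > fitness(best)) best = n;
        }
        return best;
    };

    for (int iter = 0; iter < iMax; ++iter) {        // step 2
        Solution best = bestOf(p);                    // step 3
        if (fitness(best) > fitness(current)) {       // step 4
            current = best;
            continue;
        }
        best = bestOf(2 * p);                         // step 5: re-sample 2*p ...
        if (fitness(best) <= fitness(current))
            best = bestOf((q - 2) * p);               // ... then (q-2)*p more
        if (fitness(best) <= fitness(current))
            break;                                    // re-sampling failed: stop
        current = best;
    }
    return current;  // zero-reward clusters would be pruned as outliers
}
```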

3. PAR-CLEVER: A Faster Clustering Algorithm

Parallelization approaches considered:
- OpenMP
- CUDA (GPU computing)
- MPI
- Map/Reduce

Benchmark Data Sets Used

- 10Ovals: size 3,359; fitness function: purity
- Earthquake: size 330,561; fitness function: find clusters with high variance with respect to earthquake depth
- Yahoo Ads Clicks: full size 3,009,071,396; subset used: 2,910,613; fitness function: minimum intra-cluster distance

Parallelization Targets

- Assigning cluster members: O(n*k); data-parallel and highly independent, hence the first priority for parallelization (see the OpenMP sketch after the hardware specification)
- Fitness value calculation: ~O(n)
- Neighboring solution generation: ~O(p)

Here n is the number of objects in the dataset, k the number of clusters in the current solution, and p the sampling rate (how many neighbors of the current solution are sampled).

Hardware Specification

crill-001 to crill-016 (OpenMP):
- Processor: 4 x AMD Opteron 6174
- CPU cores: 48
- Core speed: 2200 MHz
- Memory: 64 GB

crill-101 and crill-102 (GPU computing, NVIDIA CUDA):
- Processor: 2 x AMD Opteron 6174
- CPU cores: 24
- Core speed: 2200 MHz
- Memory: 32 GB
- GPU devices: 4 x Tesla M2050, 3 GB memory, 448 CUDA cores each
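Because cluster-member assignment is data-parallel across objects, the loop-level OpenMP parallelization amounts to one pragma on the assignment loop. A minimal C++/OpenMP sketch, assuming a vector-of-points layout and a caller-supplied distance function (illustrative, not the authors' code):

```cpp
#include <omp.h>
#include <cfloat>
#include <vector>

// Assign each of the n objects to its closest of the k representatives:
// the O(n*k) hot loop that PAR-CLEVER parallelizes first.
std::vector<int> assignClusterMembers(
        const std::vector<std::vector<double>>& points,
        const std::vector<std::vector<double>>& reps,
        double (*dist)(const std::vector<double>&, const std::vector<double>&)) {
    std::vector<int> membership(points.size());

    // Objects are independent of one another, so iterations can run in parallel.
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < static_cast<long>(points.size()); ++i) {
        double bestD = DBL_MAX;
        int bestK = 0;
        for (int k = 0; k < static_cast<int>(reps.size()); ++k) {
            double d = dist(points[i], reps[k]);
            if (d < bestD) { bestD = d; bestK = k; }
        }
        membership[i] = bestK;
    }
    return membership;
}
```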

4. Experimental Results

10Ovals Dataset (size = 3,359); times in seconds
p=100, q=27, k=10, β=1.1, th=0.6, η=1.6, Interestingness Function = Purity

Threads:                              1        6       12      24      48
Loop-level            Time (s)   248.49    50.52   30.09   20.58   16.39
                      Speedup      1.00     4.92    8.26   12.07   15.16
                      Efficiency   1.00     0.82    0.69    0.50    0.32
Loop-level +          Time (s)   229.88    49.43   29.99   20.28   15.61
Incremental Updating  Speedup      1.00     4.65    7.67   11.34   14.73
                      Efficiency   1.00     0.78    0.64    0.47    0.31
Task-level            Time (s)   248.49    41.83   21.67   11.44    6.40
                      Speedup      1.00     5.94   11.47   21.72   38.84
                      Efficiency   1.00     0.99    0.96    0.90    0.81

Iterations = 14, evaluated neighbor solutions = 15,200, k = 5, fitness = 77187.7

Earthquake Dataset (size = 330,561); times in hours
p=50, q=12, k=100, β=2, th=1.2, η=1.4, Interestingness Function = Variance High

Threads:                              1        6       12      24      48
Loop-level            Time (h)   185.39    35.27   23.17   12.38   10.20
                      Speedup      1.00     5.26    8.00   14.97   18.18
                      Efficiency   1.00     0.88    0.67    0.62    0.38
Loop-level +          Time (h)    30.24     9.18    6.89    6.06    6.84
Incremental Updating  Speedup      1.00     3.29    4.39    4.99    4.42
                      Efficiency   1.00     0.55    0.37    0.21    0.09
Task-level            Time (h)   185.39    31.95   17.19    9.76    6.14
                      Speedup      1.00     5.80   10.79   19.00   30.18
                      Efficiency   1.00     0.97    0.90    0.79    0.63

Iterations = 216, evaluated neighbor solutions = 21,950, k = 115

Yahoo Reduced Dataset (size = 2,910,613); times in hours
p=48, q=7, k=80, β=1.2, th=0, η=1.000001, Interestingness Function = Average Distance to Medoid

Threads:                              1        6       12      24      48
Loop-level            Time (h)   154.62    29.25   16.74   12.12    9.94
                      Speedup      1.00     5.29    9.24   12.75   15.55
                      Efficiency   1.00     0.88    0.77    0.53    0.32
Loop-level +          Time (h)    28.30     8.15    6.71    5.55    5.68
Incremental Updating  Speedup      1.00     3.47    4.22    5.10    4.98
                      Efficiency   1.00     0.58    0.35    0.21    0.10
Task-level            Time (h)   154.62    25.78   12.97    6.63    3.42
                      Speedup      1.00     6.00   11.92   23.33   45.21
                      Efficiency   1.00     1.00    0.99    0.97    0.94

Iterations = 10, evaluated neighbor solutions = 480, k = 94

CUDA Results: 10Ovals Dataset (size = 3,359)
p=100, q=27, k=10, β=1.1, th=0.6, η=1.6, Interestingness Function = Purity

Run times (s): 1.33, 1.32, 1.34, 1.32, 1.33, 1.32; average: 1.327
Iterations = 12, evaluated neighbor solutions = 5,100, k = 5

The CUDA version evaluates 5,100 solutions in 1.327 seconds, i.e., 15,200 solutions in 3.95 seconds. With speedup = Time(CPU) / Time(GPU), this is a 63x speedup compared to the sequential version (248.49 s) and a 1.62x speedup compared to 48-thread task-level OpenMP (6.40 s).

CUDA Results: Earthquake Dataset (size = 330,561) (preliminary!)
p=50, q=12, k=100, β=2, th=1.2, η=1.4, Interestingness Function = Variance High

Run times (s): 138.95, 146.56, 143.82, 139.10, 146.19, 147.03; average: 143.61
Iterations = 158, evaluated neighbor solutions = 28,900, k = 92

The CUDA version evaluates 28,900 solutions in 143.61 seconds, i.e., 21,950 solutions in 109.07 seconds. This is a 6,119x speedup compared to the sequential version (185.39 hours) and a 202x speedup compared to 48-thread task-level OpenMP (6.14 hours).

CUDA Implementation: Caching Representatives in Shared Memory

The representatives are read frequently in the computation that assigns objects to clusters, so the results presented above cached the representatives in shared memory for faster access. The table below compares the performance of CLEVER with and without caching the representatives on the Earthquake dataset; the representative data being cached amounts to 2 MB. The result shows that caching the representatives improves the runtime very little (0.09% based on the average run times).
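To make the caching idea concrete, here is an illustrative CUDA kernel that stages the representatives in on-chip shared memory before the assignment loop. It assumes the representatives fit in one block's shared memory (for the Earthquake configuration the 2 MB of representative data would not, so a tiled variant would be needed); this is a sketch, not the authors' kernel:

```cpp
#include <cfloat>

// Each thread assigns one object to its closest representative. Representatives
// are copied to shared memory because every thread reads all of them.
__global__ void assignMembers(const float* points,  // n x dim, row-major
                              const float* reps,    // k x dim, row-major
                              int* membership, int n, int k, int dim) {
    extern __shared__ float sReps[];  // k * dim floats per block

    // Cooperatively stage the representatives into shared memory.
    for (int j = threadIdx.x; j < k * dim; j += blockDim.x)
        sReps[j] = reps[j];
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float bestD = FLT_MAX;
    int bestK = 0;
    for (int c = 0; c < k; ++c) {
        float d = 0.0f;
        for (int j = 0; j < dim; ++j) {  // squared Euclidean distance
            float diff = points[i * dim + j] - sReps[c * dim + j];
            d += diff * diff;
        }
        if (d < bestD) { bestD = d; bestK = c; }
    }
    membership[i] = bestK;
}

// Launch (host side): assignMembers<<<(n + 255) / 256, 256,
//                                     k * dim * sizeof(float)>>>(...);
```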

Earthquake Dataset (size = 330,561)
p=50, q=12, k=100, β=2, th=1.2, η=1.4, Interestingness Function = Variance High

Run times (s):
Cache:     138.95  146.56  143.82  139.10  146.19  147.03   Avg: 143.61
No-cache:  144.63  139.90  144.27  144.50  144.71  144.44   Avg: 143.74

Iterations = 158, evaluated neighbor solutions = 28,900, k = 92

Why do the OpenMP and CUDA implementations differ so much in speed?
- The OpenMP version uses an object-oriented (OOP) design inherited from its original implementation, whereas the redesigned CUDA version is closer to a procedural implementation.
- The CUDA hardware has higher memory bandwidth, which contributed a little to the speedup.
- Caching the representatives contributes little to the speedup (as analyzed above).

5. Summary

The CUDA and OpenMP results indicate good scalability of the parallel algorithm on multi-core processors: computations which used to take days can now be performed in minutes or hours.

OpenMP:
- Easy to implement
- Good speedup
- Limited by the number of cores and the amount of RAM

CUDA (GPU):
- Extra attention needed for CUDA programming
- Lower-level programming: registers, cache memory
- The GPU memory hierarchy is different from the CPU's
- Only some data structures are supported
- Synchronization between threads in different blocks is not possible
- Super speedups, some of which are still the subject of investigation


Future Work

- More work on the CUDA version
- Conduct more experiments which explain what works well, what doesn't, and why it does or does not work well
- Analyze in more depth the impact of the capability to search many more solutions on solution quality
- Implement a version of CLEVER which conducts multiple randomized hill climbing searches in parallel and employs dynamic load balancing: more resources are allocated to the more promising searches
- Reuse the code to speed up other data mining algorithms which use randomized hill climbing