Customer analysis at scale

Customer Behavior Analysis with Large Scale k-means Analysis

Description

Ted Dunning describes an implementation of recent results that provide high quality k-means clustering at very high speed. "For well clusterable data, this algorithm provides good bounds on quality, but practically speaking, it makes clustering practical in many applications by providing roughly 3 orders of magnitude speedup relative to the standard algorithm based on Lloyd's initial efforts. In addition, the algorithm is highly amenable to implementation using map-reduce and shows essentially linear speedup. Just as significant, this new algorithm allows clustering with a very large number of clusters which makes it practical to use as a feature extraction algorithm or set up for a nearest neighbor search." - Ted Dunning

Transcript of Customer analysis at scale

Page 1: Customer analysis at scale

Customer Behavior Analysis with Large Scale k-means Analysis

Page 2: Customer analysis at scale

whoami – Ted Dunning

• Chief Application Architect, MapR Technologies
• Committer, member, Apache Software Foundation
  – particularly Mahout, Zookeeper and Drill

• Contact me: [email protected], [email protected], @ted_dunning

• Get slides and more info at http://www.mapr.com/company/events/speaking/strata-10-2-12

Page 3: Customer analysis at scale

Agenda

• Nearest neighbor models
• K-means algorithms
  – O(k d log n) per point for Lloyd’s algorithm
  – Surrogate (sketch) methods

• Results

Page 4: Customer analysis at scale

Context

• Digital transformation.

• Data helps us better serve our customers.

• Privacy is paramount.

Page 5: Customer analysis at scale

The Business Case

• Our customer has 100 million cards in circulation

• Quick and accurate decision-making is key.
  – Marketing offers
  – Fraud prevention

Page 6: Customer analysis at scale

Opportunity

• Demand for modeling is increasing rapidly

• So we are testing something simpler and more agile

• Like k-nearest neighbor

Page 7: Customer analysis at scale

What’s that?

• Find the k nearest training examples – lookalike customers

• This is easy … but hard
  – easy because it is so conceptually simple and you don’t have knobs to turn or models to build
  – hard because of the stunning amount of math
  – also hard because we need top 50,000 results

• Initial rapid prototype was massively too slow
  – 3K queries x 200K examples takes hours
  – needed 20M x 25M in the same time

Page 8: Customer analysis at scale

K-Nearest Neighbor Example

Page 9: Customer analysis at scale

Required Scale and Speed and Accuracy

• Want 20 million queries against 25 million references in 10,000 s

• Should be able to search > 100 million references

• Should be linearly and horizontally scalable
• Must have >50% overlap against reference search
• Evaluation by sub-sampling is viable, but tricky

Page 10: Customer analysis at scale

How Hard is That?

• 20 M x 25 M x 100 Flop = 50 P Flop

• 1 CPU = 5 Gflops

• We need 10 M CPU seconds => 1,000 CPUs to finish in the 10,000 s budget

• Real-world efficiency losses may increase that by 10x

• Not good!
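
The arithmetic above, spelled out as a runnable back-of-envelope check (a Java sketch; the 100-flop distance cost and 5 Gflops per CPU are the slide's own assumptions):

    // Back-of-envelope check of the numbers on this slide.
    public class ScaleCheck {
        public static void main(String[] args) {
            double queries = 20e6;         // 20 M query vectors
            double references = 25e6;      // 25 M reference vectors
            double flopPerDistance = 100;  // ~100 flops per distance computation (assumed)
            double totalFlop = queries * references * flopPerDistance;  // 5e16 = 50 PFlop
            double cpuFlops = 5e9;         // one CPU at 5 Gflops
            double cpuSeconds = totalFlop / cpuFlops;                   // 1e7 = 10 M CPU seconds
            double budget = 10_000;        // wall-clock budget in seconds, from slide 9
            System.out.printf("%.0f PFlop, %.0f M CPU seconds, %.0f CPUs before efficiency losses%n",
                    totalFlop / 1e15, cpuSeconds / 1e6, cpuSeconds / budget);
        }
    }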

Page 11: Customer analysis at scale

How Can We Search Faster?

• First rule: don’t do it
  – If we can eliminate most candidates, we can do less work
  – Projection search and k-means search

• Second rule: don’t do it
  – We can convert big floating point math to clever bit-wise integer math
  – Locality sensitive hashing

• Third rule: reduce dimensionality
  – Projection search
  – Random projection for very high dimension

Page 12: Customer analysis at scale

Projection Search

total ordering!
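
A minimal sketch of projection search (illustrative Java, not Mahout's ProjectionSearch; all names here are made up): one random projection induces a total ordering of the references, so a query only scans a small window around its own position in that ordering. Practical use combines several projections, which is what the next slide asks about.

    import java.util.*;

    public class ProjectionSearchSketch {
        static double dot(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += a[i] * b[i];
            return s;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            int dim = 10, nRefs = 100_000, window = 100;
            double[][] refs = new double[nRefs][dim];
            for (double[] v : refs)
                for (int i = 0; i < dim; i++) v[i] = rnd.nextGaussian();

            // one random direction gives a total ordering of the references
            double[] direction = new double[dim];
            for (int i = 0; i < dim; i++) direction[i] = rnd.nextGaussian();
            double[] proj = new double[nRefs];
            Integer[] order = new Integer[nRefs];
            for (int i = 0; i < nRefs; i++) { proj[i] = dot(refs[i], direction); order[i] = i; }
            Arrays.sort(order, Comparator.comparingDouble(i -> proj[i]));

            // query: binary search the projection, then scan only 2*window candidates
            double[] query = new double[dim];
            for (int i = 0; i < dim; i++) query[i] = rnd.nextGaussian();
            double q = dot(query, direction);
            int lo = 0, hi = nRefs;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (proj[order[mid]] < q) lo = mid + 1; else hi = mid;
            }
            int best = -1;
            double bestDist = Double.MAX_VALUE;
            for (int i = Math.max(0, lo - window); i < Math.min(nRefs, lo + window); i++) {
                int r = order[i];
                double d = 0;
                for (int j = 0; j < dim; j++) d += (refs[r][j] - query[j]) * (refs[r][j] - query[j]);
                if (d < bestDist) { bestDist = d; best = r; }
            }
            System.out.println("approximate nearest reference = " + best);
        }
    }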

Page 13: Customer analysis at scale

How Many Projections?

Page 14: Customer analysis at scale

LSH Search

• Each random projection produces an independent sign bit
• If two vectors have the same projected sign bits, they probably point in the same direction (i.e. cos θ ≈ 1)
• Distance in L2 is closely related to cosine

• We can replace (some) vector dot products with long integer XOR
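
The sign-bit trick as a Java sketch (illustrative, not Mahout's LshSearch): 64 random projections yield one 64-bit signature per vector, and the number of mismatched bits, computed with one XOR and a popcount, estimates the angle between the vectors (each bit disagrees with probability θ/π).

    import java.util.Random;

    public class SignBitLshSketch {
        static final int DIM = 10;
        static final double[][] PROJECTIONS = new double[64][DIM];
        static {
            Random rnd = new Random(1);
            for (double[] p : PROJECTIONS)
                for (int i = 0; i < DIM; i++) p[i] = rnd.nextGaussian();
        }

        // one sign bit per random projection, packed into a long
        static long signature(double[] v) {
            long sig = 0;
            for (int b = 0; b < 64; b++) {
                double dot = 0;
                for (int i = 0; i < DIM; i++) dot += PROJECTIONS[b][i] * v[i];
                if (dot > 0) sig |= 1L << b;
            }
            return sig;
        }

        // Hamming distance: one XOR and a popcount instead of a dot product
        static int bitDistance(long s1, long s2) {
            return Long.bitCount(s1 ^ s2);
        }

        public static void main(String[] args) {
            Random rnd = new Random(2);
            double[] a = new double[DIM], b = new double[DIM];
            for (int i = 0; i < DIM; i++) {
                a[i] = rnd.nextGaussian();
                b[i] = a[i] + 0.1 * rnd.nextGaussian();  // nearby vector
            }
            // nearby vectors agree on most bits; expected mismatches = 64 * θ / π
            System.out.println("bit distance = " + bitDistance(signature(a), signature(b)));
        }
    }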

Page 15: Customer analysis at scale

LSH Bit-match Versus Cosine

Page 16: Customer analysis at scale

Results with 32 Bits

Page 17: Customer analysis at scale

K-means Search

• First do clustering with lots (thousands) of clusters

• Then search nearest clusters to find nearest points

• We win if we find >50% overlap with “true” answer

• We lose if we can’t cluster super-fast
  – more on this later
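
A sketch of the search step (illustrative Java, not Mahout's KmeansSearch), assuming a clustering of the references is already available:

    import java.util.*;

    public class KmeansSearchSketch {
        static double dist2(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
            return s;
        }

        /** Returns the index of the (approximately) nearest reference. */
        static int search(double[] query, double[][] refs, double[][] centroids,
                          List<List<Integer>> members, int clustersToProbe) {
            Integer[] byDist = new Integer[centroids.length];
            for (int i = 0; i < byDist.length; i++) byDist[i] = i;
            // rank clusters by centroid distance, probe only the closest few
            Arrays.sort(byDist, Comparator.comparingDouble(c -> dist2(query, centroids[c])));
            int best = -1;
            double bestDist = Double.MAX_VALUE;
            for (int c = 0; c < Math.min(clustersToProbe, byDist.length); c++) {
                for (int r : members.get(byDist[c])) {   // brute force inside the cluster
                    double d = dist2(query, refs[r]);
                    if (d < bestDist) { bestDist = d; best = r; }
                }
            }
            return best;
        }

        public static void main(String[] args) {
            double[][] refs = {{0, 0}, {0.1, 0}, {5, 5}, {5.2, 4.9}};
            double[][] centroids = {{0.05, 0}, {5.1, 4.95}};
            List<List<Integer>> members = List.of(List.of(0, 1), List.of(2, 3));
            System.out.println(search(new double[]{4.8, 5.1}, refs, centroids, members, 1));  // prints 2
        }
    }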

Page 18: Customer analysis at scale

Lots of Clusters Are Fine

Page 19: Customer analysis at scale

Lots of Clusters Are Fine

Page 20: Customer analysis at scale

Some Details

• Clumpy data works better
  – Real data is clumpy

• Speedups of 100-200x seem practical with 50% overlap
  – Projection search and LSH can be used to accelerate that (some)

• More experiments needed

• Definitely need fast search

Page 21: Customer analysis at scale

Lloyd’s Algorithm

• Part of CS folk-lore
• Developed in the late 50’s for signal quantization, published in 80’s

initialize k cluster centroids somehow
for each of many iterations:
    for each data point:
        assign point to nearest cluster
    recompute cluster centroids from points assigned to clusters

• Highly variable quality, several restarts recommended
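
Made concrete, the loop above looks roughly like this (a plain Java sketch, not Mahout's implementation; random initialization stands in for "somehow"):

    import java.util.Arrays;
    import java.util.Random;

    public class LloydsSketch {
        static double[][] cluster(double[][] points, int k, int iterations, long seed) {
            Random rnd = new Random(seed);
            int d = points[0].length;
            double[][] centroids = new double[k][];
            // "initialize k cluster centroids somehow": here, random data points
            for (int i = 0; i < k; i++)
                centroids[i] = points[rnd.nextInt(points.length)].clone();

            int[] assignment = new int[points.length];
            for (int iter = 0; iter < iterations; iter++) {
                // assign each point to the nearest cluster
                for (int p = 0; p < points.length; p++) {
                    double best = Double.MAX_VALUE;
                    for (int c = 0; c < k; c++) {
                        double dist = 0;
                        for (int j = 0; j < d; j++) {
                            double diff = points[p][j] - centroids[c][j];
                            dist += diff * diff;
                        }
                        if (dist < best) { best = dist; assignment[p] = c; }
                    }
                }
                // recompute cluster centroids from the points assigned to them
                double[][] sums = new double[k][d];
                int[] counts = new int[k];
                for (int p = 0; p < points.length; p++) {
                    counts[assignment[p]]++;
                    for (int j = 0; j < d; j++) sums[assignment[p]][j] += points[p][j];
                }
                for (int c = 0; c < k; c++)
                    if (counts[c] > 0)
                        for (int j = 0; j < d; j++) centroids[c][j] = sums[c][j] / counts[c];
            }
            return centroids;
        }

        public static void main(String[] args) {
            double[][] points = {{0, 0}, {0.1, 0.2}, {5, 5}, {5.1, 4.8}};
            System.out.println(Arrays.deepToString(cluster(points, 2, 10, 42)));
        }
    }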

Page 22: Customer analysis at scale

Ball k-means

• Provably better for highly clusterable data
• Tries to find initial centroids in the “core” of real clusters
• Avoids outliers in centroid computation

initialize centroids randomly with distance-maximizing tendency
for each of a very few iterations:
    for each data point:
        assign point to nearest cluster
    recompute centroids using only points much closer than the closest cluster
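
The step that distinguishes ball k-means from Lloyd's is the trimmed centroid update: only points well inside their cluster's "ball" are averaged, so outliers between clusters cannot drag the centroids. A Java sketch follows; the 0.5 distance ratio (0.25 on squared distances) is an illustrative assumption, not a value given in the slides.

    import java.util.*;

    public class BallUpdateSketch {
        static double dist2(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
            return s;
        }

        // trimmed centroid update; assumes at least two centroids
        static double[][] recompute(double[][] points, double[][] centroids) {
            int k = centroids.length, d = points[0].length;
            double[][] sums = new double[k][d];
            int[] counts = new int[k];
            for (double[] p : points) {
                // find the two nearest centroids
                int best = 0, second = 1;
                if (dist2(p, centroids[1]) < dist2(p, centroids[0])) { best = 1; second = 0; }
                for (int c = 2; c < k; c++) {
                    double dc = dist2(p, centroids[c]);
                    if (dc < dist2(p, centroids[best])) { second = best; best = c; }
                    else if (dc < dist2(p, centroids[second])) { second = c; }
                }
                // keep the point only if it is much closer to its own centroid
                // than to the runner-up (ratio 0.5 on distances = 0.25 on squares)
                if (dist2(p, centroids[best]) < 0.25 * dist2(p, centroids[second])) {
                    counts[best]++;
                    for (int j = 0; j < d; j++) sums[best][j] += p[j];
                }
            }
            for (int c = 0; c < k; c++)
                if (counts[c] > 0)
                    for (int j = 0; j < d; j++) centroids[c][j] = sums[c][j] / counts[c];
            return centroids;
        }

        public static void main(String[] args) {
            double[][] points = {{0, 0}, {0.1, 0}, {2.5, 0}, {5, 0}, {5.1, 0}};  // (2.5, 0) is an outlier
            double[][] centroids = {{0.2, 0}, {4.9, 0}};
            System.out.println(Arrays.deepToString(recompute(points, centroids)));
            // the outlier is ignored; centroids land at ~(0.05, 0) and ~(5.05, 0)
        }
    }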

Page 23: Customer analysis at scale

Surrogate Method

• Start with sloppy clustering into κ = k log n clusters
• Use this sketch as a weighted surrogate for the data
• Cluster the surrogate data using ball k-means
• Results are provably good for highly clusterable data
• Sloppy clustering is on-line
• Surrogate can be kept in memory
• Ball k-means pass can be done at any time

Page 24: Customer analysis at scale

Algorithm Costs

• O(k d log n) per point for Lloyd’s algorithm … not so good for k = 2000, n = 10^8

• Surrogate methods
  – fast, sloppy single-pass clustering with κ = k log n
  – fast sloppy search for nearest cluster, O(d log κ) = O(d (log k + log log n)) per point
  – fast, in-memory, high-quality clustering of κ weighted centroids
  – result consists of k high-quality centroids

• This is a big deal:
  – Lloyd’s: k log n = 2000 x 26 = 52,000 per point (both methods share the factor of d)
  – surrogate: log k + log log n = 11 + 5 = 16 per point
  – roughly 3000 times faster makes the grade as a bona fide big deal

Page 25: Customer analysis at scale

The Internals

• Mechanism for extending Mahout Vectors
  – DelegatingVector, WeightedVector, Centroid

• Searcher interface
  – ProjectionSearch, KmeansSearch, LshSearch, Brute

• Super-fast clustering
  – Kmeans, StreamingKmeans

Page 26: Customer analysis at scale

How It Works

• For each point
  – Find approximately nearest centroid (distance = d)
  – If d > threshold, start a new centroid
  – Else possibly start a new centroid anyway (probabilistically, based on d)
  – Else add the point to the nearest centroid

• If the number of centroids grows beyond K ~ C log N
  – Recursively cluster the centroids with a higher threshold

• Result is a large set of centroids
  – these provide an approximation of the original distribution
  – we can cluster the centroids to get a close approximation of clustering the original data
  – or we can just use the result directly
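
A compact Java sketch of that loop (illustrative: Mahout's StreamingKmeans searches for the nearest centroid only approximately, and the probabilistic new-centroid rule and threshold growth factor used here are assumptions):

    import java.util.*;

    public class StreamingSketch {
        final List<double[]> centroids = new ArrayList<>();
        final List<Double> weights = new ArrayList<>();
        final Random rnd = new Random(42);
        final int maxCentroids;     // the K ~ C log N limit from this slide
        double threshold = 1e-3;    // starting distance threshold (assumed)

        StreamingSketch(int maxCentroids) { this.maxCentroids = maxCentroids; }

        static double dist2(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
            return s;
        }

        void add(double[] p, double weight) {
            // find the nearest centroid (exactly here; approximately in real code)
            int best = -1;
            double d = Double.MAX_VALUE;
            for (int c = 0; c < centroids.size(); c++) {
                double dc = dist2(p, centroids.get(c));
                if (dc < d) { d = dc; best = c; }
            }
            // d > threshold forces a new centroid; otherwise one forms with probability d/threshold
            if (best < 0 || rnd.nextDouble() < Math.min(1.0, d / threshold)) {
                centroids.add(p.clone());
                weights.add(weight);
            } else {  // close point: fold into the nearest centroid
                double w = weights.get(best);
                double[] c = centroids.get(best);
                for (int j = 0; j < c.length; j++) c[j] = (w * c[j] + weight * p[j]) / (w + weight);
                weights.set(best, w + weight);
            }
            // too many centroids: recursively recluster them with a higher threshold
            if (centroids.size() > maxCentroids) {
                threshold *= 1.5;
                List<double[]> old = new ArrayList<>(centroids);
                List<Double> oldWeights = new ArrayList<>(weights);
                centroids.clear();
                weights.clear();
                for (int i = 0; i < old.size(); i++) add(old.get(i), oldWeights.get(i));
            }
        }

        public static void main(String[] args) {
            StreamingSketch sketch = new StreamingSketch(100);
            Random data = new Random(7);
            for (int i = 0; i < 100_000; i++) {   // a single pass over the stream
                double center = (i % 2 == 0) ? 0 : 10;
                sketch.add(new double[]{center + data.nextGaussian()}, 1.0);
            }
            System.out.println(sketch.centroids.size() + " weighted centroids in the surrogate");
        }
    }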

Page 27: Customer analysis at scale

Parallel Speedup?

Page 28: Customer analysis at scale

What About Map-Reduce

• Map-reduce implementation is nearly trivial
  – Compute surrogate on each split
  – Total surrogate is union of all partial surrogates
  – Do in-memory clustering on total surrogate

• Threaded version shows linear speedup already
  – Map-reduce speedup is likely, not entirely guaranteed
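
The same decomposition in plain Java, with parallel streams standing in for map-reduce (this reuses the StreamingSketch class from the previous sketch; the final in-memory ball k-means pass over the union is elided):

    import java.util.*;
    import java.util.stream.*;

    public class ParallelSurrogate {
        public static void main(String[] args) {
            Random rnd = new Random(3);
            // fake input: four "splits" of one-dimensional points
            List<double[][]> splits = new ArrayList<>();
            for (int s = 0; s < 4; s++) {
                double[][] split = new double[25_000][1];
                for (double[] p : split) p[0] = (rnd.nextBoolean() ? 0 : 10) + rnd.nextGaussian();
                splits.add(split);
            }
            // "map": build a partial surrogate for each split, in parallel
            List<StreamingSketch> partials = splits.parallelStream().map(split -> {
                StreamingSketch sketch = new StreamingSketch(100);
                for (double[] p : split) sketch.add(p, 1.0);
                return sketch;
            }).collect(Collectors.toList());
            // "reduce": the total surrogate is just the union of the partial ones
            int total = partials.stream().mapToInt(s -> s.centroids.size()).sum();
            System.out.println("total surrogate: " + total + " weighted centroids");
            // ...then run in-memory ball k-means over the union (not shown)
        }
    }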

Page 29: Customer analysis at scale

How Well Does it Work?

• Theoretical guarantees for well clusterable data
  – Shindler, Wong and Meyerson, NIPS, 2011

• Evaluation on synthetic data
  – Rough clustering produces correct surrogates
  – Possible issue in ball k-means initialization (still produces good clustering on test data)

Page 30: Customer analysis at scale

Summary

• Nearest neighbor algorithms can be blazing fast

• But you need blazing fast clustering
  – Which we now have

Page 31: Customer analysis at scale

Contact Us!

• We’re hiring at MapR in US and Europe

• Amex is hiring in Phoenix and New York

• Come get the slides at http://www.mapr.com/company/events/speaking/strata-10-2-12

• Contact Ted at [email protected] or @ted_dunning