
ECS289: Scalable Machine Learning

Cho-Jui Hsieh, UC Davis

Oct 25, 2016


Outline

Nearest Neighbor Search

Maximum Inner Product Search


Nearest Neighbor Classification

K-nearest neighbor classification

Predict the label by voting among K-nearest neighbors.

Prediction time: Find the nearest training sample

1 billion samples, each distance evaluation takes 1 microsecond

⇒ 1000 secs per prediction


Nearest Neighbor Search

Given a database with n vectors {x1, . . . , xn} ⊂ R^d.

Given a query point v, how do we find the closest points in the database?

Linear search:

Compute ‖xi − v‖ for all i.

Needs O(nd) time.

Can we do better?
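As a concrete baseline, the linear scan is a few lines of NumPy (a minimal sketch; array names and sizes are illustrative, not from the slides):

```python
import numpy as np

def linear_nn(X, v):
    """Brute-force nearest neighbor: O(nd) per query."""
    dists = np.linalg.norm(X - v, axis=1)   # distance to every database point
    return int(np.argmin(dists))

X = np.random.randn(10000, 64)   # database: n points in R^d
v = np.random.randn(64)          # query
print(linear_nn(X, v))
```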


Tree-based Nearest Neighbor Search


Tree-based Approaches

KD-tree (Bentley, 1975)

Preprocessing:

Divide the points in half by a line (hyperplane) perpendicular to one of the axes.

Stop when there is only one (or a few) points in a leaf.

(Picture from http://andrewd.ces.clemson.edu/courses/cpsc805/references/nearest_search.pdf)


KD-tree: Construction

Sort the data points in each dimension

Requires O(dn) time to find the best split in each level

(Scan over the sorted list for each dimension)

Balanced construction: O(log n) levels

Total time: O(dn log n)
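For reference, SciPy ships a k-d tree; a minimal build-then-query sketch (not from the slides, and SciPy's exact splitting rule may differ from the median split described here):

```python
import numpy as np
from scipy.spatial import cKDTree

X = np.random.randn(100000, 8)   # n points in R^d
tree = cKDTree(X)                # O(dn log n)-style construction

v = np.random.randn(8)
dist, idx = tree.query(v, k=1)   # nearest neighbor via tree traversal
print(idx, dist)
```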


KD-tree: Query

Recursively traverse down the tree.

Assume the current nearest distance is r and let v[split dim] denote the query's coordinate along the splitting dimension. If

|split value − v[split dim]| ≥ r

⇒ only the branch containing the query needs to be searched.

Otherwise, both branches need to be checked.

May need to check O(n) nodes in high dimensional cases
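To make the pruning rule concrete, here is a small from-scratch sketch (illustrative, not the slide's code): median splits over cycling dimensions, and the far branch is visited only when the splitting hyperplane lies within the current nearest distance r.

```python
import numpy as np

class Node:
    def __init__(self, point=None, dim=None, split=None, left=None, right=None):
        self.point, self.dim, self.split = point, dim, split
        self.left, self.right = left, right

def build(points, depth=0):
    """Median split along cycling dimensions; leaves hold a single point."""
    if len(points) == 0:
        return None
    if len(points) == 1:
        return Node(point=points[0])
    d = depth % points.shape[1]
    points = points[np.argsort(points[:, d])]
    mid = len(points) // 2
    return Node(dim=d, split=float(points[mid, d]),
                left=build(points[:mid], depth + 1),
                right=build(points[mid:], depth + 1))

def nearest(node, v, best_d=np.inf, best_p=None):
    """Recursive search with the pruning rule from this slide."""
    if node is None:
        return best_d, best_p
    if node.point is not None:                       # leaf
        d = float(np.linalg.norm(node.point - v))
        return (d, node.point) if d < best_d else (best_d, best_p)
    near, far = ((node.left, node.right) if v[node.dim] < node.split
                 else (node.right, node.left))
    best_d, best_p = nearest(near, v, best_d, best_p)
    # Only descend into the far branch if the splitting hyperplane is closer
    # to the query than the current nearest distance r.
    if abs(node.split - v[node.dim]) < best_d:
        best_d, best_p = nearest(far, v, best_d, best_p)
    return best_d, best_p

X = np.random.randn(2000, 3)
v = np.random.randn(3)
d, p = nearest(build(X), v)
print(d, np.min(np.linalg.norm(X - v, axis=1)))      # should match the brute-force distance
```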


PCA Tree: Handling the high-dimensional case

Partition by PCA directions (not perpendicular to the axes).

Works much better on real data.

(Picture from http://andrewd.ces.clemson.edu/courses/cpsc805/references/nearest_search.pdf)


Locality Sensitive Hashing


Locality Sensitive Hashing (LSH)

Main Idea:

Randomly hash the data into a small number of bins.

Nearby points have a high probability of landing in the same bin.

For a query point, search only the points within the same bin.


Definition of LSH

Let F be a family of functions that map elements of the input space to buckets {1, 2, . . . , s}.

F is an LSH family if, for a hash function h chosen uniformly at random from F:

If d(p, q) ≤ R, then h(p) = h(q) with probability ≥ P1;

If d(p, q) ≥ cR, then h(p) = h(q) with probability ≤ P2.

Such a family F is called (R, cR, P1, P2)-sensitive (with P1 > P2).


LSH-amplification

Amplification:

AND operation of k functions:

AND_{h1,...,hk}(p) = AND_{h1,...,hk}(q) iff hi(p) = hi(q) for all i = 1, . . . , k

(R, cR, P1, P2)-LSH ⇒ (R, cR, P1^k, P2^k)-LSH

OR operation of m functions:

OR_{h1,...,hm}(p) = OR_{h1,...,hm}(q) iff hi(p) = hi(q) for some i

(R, cR, P1, P2)-LSH ⇒ (R, cR, 1 − (1 − P1)^m, 1 − (1 − P2)^m)-LSH

Combined together:

(R, cR, P1, P2)-LSH ⇒ (R, cR, 1 − (1 − P1^k)^m, 1 − (1 − P2^k)^m)-LSH

Example (k = 8, m = 15):

(P1, P2) = (0.8, 0.5) ⇒ (1 − (1 − 0.8^8)^15, 1 − (1 − 0.5^8)^15) ≈ (0.93, 0.057)
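A quick numeric check of the combined formula (function name illustrative):

```python
def amplified(p, k, m):
    """Collision probability after AND-ing k hashes and OR-ing m such groups."""
    return 1 - (1 - p ** k) ** m

# Example from the slide with k = 8, m = 15:
print(amplified(0.8, 8, 15))   # ≈ 0.936: nearby points still collide often
print(amplified(0.5, 8, 15))   # ≈ 0.057: far points rarely collide
```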


LSH: Hamming distance

Each point x ∈ {0, 1}^d

Hash function: sample a bit and map to 0/1


LSH: Hamming distance

For each hi, sample k elements (bit positions) and map them to 2^k bins

OR-function of h1, . . . , hm

Two points “collide” if hi (p) = hi (q) for some i = 1, . . . ,m

Linear search on all the points that collide with the query
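A minimal sketch of this scheme (all names illustrative): m tables, each keyed by k randomly sampled bit positions, followed by a linear search over the union of colliding points.

```python
import numpy as np

def build_tables(X, k, m, rng):
    """X: (n, d) binary matrix. Build m hash tables, each keyed by k sampled bits."""
    tables = []
    for _ in range(m):
        bits = rng.choice(X.shape[1], size=k, replace=False)   # k bit positions
        table = {}
        for i, key in enumerate(map(tuple, X[:, bits])):
            table.setdefault(key, []).append(i)
        tables.append((bits, table))
    return tables

def lsh_query(X, tables, q):
    """Linear search restricted to points that collide with q in at least one table."""
    candidates = set()
    for bits, table in tables:
        candidates.update(table.get(tuple(q[bits]), []))
    if not candidates:
        return None
    cand = np.array(sorted(candidates))
    dists = np.count_nonzero(X[cand] != q, axis=1)             # Hamming distances
    return int(cand[np.argmin(dists)])

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(10000, 64))
tables = build_tables(X, k=8, m=15, rng=rng)
print(lsh_query(X, tables, X[123]))   # 123, or another point at Hamming distance 0
```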



LSH: Random Projections

Hash function that preserves Euclidean distance:

h_{u,b}(x) = ⌊(u^T x + b) / w⌋

where ⌊·⌋ is the floor operation, w is the width of a bucket, u is sampled from N(0, I) (each coordinate i.i.d. standard normal), and b is sampled uniformly from [0, w).

Amplified by AND/OR operations.
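A sketch of one such hash function (parameter names illustrative); in practice it would be amplified with the AND/OR construction above.

```python
import numpy as np

def make_hash(d, w, rng):
    """One p-stable LSH function h(x) = floor((u.x + b) / w)."""
    u = rng.standard_normal(d)   # random Gaussian direction
    b = rng.uniform(0, w)        # random offset in [0, w)
    return lambda x: int(np.floor((u @ x + b) / w))

rng = np.random.default_rng(0)
h = make_hash(d=16, w=1.0, rng=rng)
x = rng.standard_normal(16)
print(h(x), h(x + 0.01))         # small perturbations usually keep the same bucket
```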


Summary

Method      Query Time                 Space Used    Preprocessing Time
KD-Tree     O(2^d log n)               O(n)          O(nd log n)
LSH         O(kmd + linear search)     O(mkn)        O(mknd)


Maximum Inner Product Search (MIPS)


MIPS: Motivation

Matrix factorization models for recommender systems:

ui: latent features for user i = 1, . . . , m

vj: latent features for item j = 1, . . . , n

ui^T vj approximates the rating Rij

Prediction: recommend an item to user 1.

Select j* = argmax_{j=1,...,n} u1^T vj

Linear search: O(nk) time

Similar application in multiclass classification.


Maximum Inner Product Search (MIPS)

Given a database with n candidate vectors X := {xi ∈ R^d, i = 1, . . . , n} and a query vector v, find

argmax_{xi ∈ X} v^T xi

Linear search: O(nd) time

Can we reduce the query time?
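As before, the brute-force baseline is a one-line matrix-vector product (a sketch; names illustrative):

```python
import numpy as np

def linear_mips(X, v):
    """Exact MIPS by scoring every candidate: O(nd)."""
    return int(np.argmax(X @ v))

X = np.random.randn(100000, 32)   # candidate vectors (e.g., item factors)
v = np.random.randn(32)           # query (e.g., a user factor)
print(linear_mips(X, v))
```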


Maximum Inner Product Search (MIPS)

Cannot directly use LSH for Euclidean distance:

‖v − xi‖² = ‖v‖² + ‖xi‖² − 2 v^T xi

‖v‖² is fixed for a query point, but ‖xi‖² can vary, so nearest neighbor search returns

argmax_{xi} 2 v^T xi − ‖xi‖²

Asymmetric reduction to NN: define mappings P(·), Q(·) such that

‖P(xi) − Q(v)‖² equals −2 xi^T v plus terms that do not depend on i.

We can then find the MIPS solution by

argmin_{xi} ‖P(xi) − Q(v)‖²



Reduction to NN: L2-ALSH

Proposed in Shrivastava and Li, “Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS)”, NIPS 2014.

Define

P(x) = [x; ‖x‖²; ‖x‖⁴; · · · ; ‖x‖^(2^m)]    Q(v) = [v; 1/2; 1/2; · · · ; 1/2]

So we have

‖P(xi) − Q(v)‖² = −2 v^T xi + ‖v‖² + m/4 + ‖xi‖^(2^(m+1))

By normalizing the database, we can make max_i ‖xi‖ < 1, so

‖xi‖^(2^(m+1)) → 0 as m → ∞

Using this mapping, MIPS ≈ NN search when m is large

(apply LSH to P(x1), · · · , P(xn) and the query Q(v))
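A sketch of the two mappings under the assumptions above; the constants m = 3 and the rescaling target U = 0.83 are illustrative choices, not prescribed by the slide, and the argmax is only preserved approximately for finite m.

```python
import numpy as np

def l2_alsh(X, v, m=3, U=0.83):
    """P/Q mappings from the slide, after rescaling the database to max norm U < 1."""
    X = X * (U / np.linalg.norm(X, axis=1).max())
    norms = np.linalg.norm(X, axis=1)
    # P(x) = [x; ||x||^2; ||x||^4; ...; ||x||^(2^m)]
    P = np.hstack([X] + [(norms ** (2 ** j))[:, None] for j in range(1, m + 1)])
    # Q(v) = [v; 1/2; 1/2; ...; 1/2]
    Q = np.concatenate([v, np.full(m, 0.5)])
    return P, Q

X = np.random.randn(1000, 16)
v = np.random.randn(16)
P, Q = l2_alsh(X, v)
print(np.argmin(np.linalg.norm(P - Q, axis=1)), np.argmax(X @ v))   # approximate vs exact MIPS
```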


Reduction to NN: A better approach

SIMPLE-LSH (discussed in Neyshabur and Srebro, “On Symmetric and Asymmetric LSHs for Inner Product Search”, ICML 2015).

Originally proposed in Bachrach et al., “Speeding Up the Xbox Recommender System Using a Euclidean Transformation for Inner-Product Spaces”, RecSys 2014.

A simple way to define P(·) and Q(·):

P(x) = [x; √(M − ‖x‖²)]    Q(v) = [v; 0]

where M = max_i ‖xi‖²

Easy to see that ‖P(xi) − Q(v)‖² = M + ‖v‖² − 2 xi^T v

LSH with P(x1), · · · ,P(xn) and query Q(v)
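The corresponding transform is even shorter (a sketch; names illustrative). Unlike L2-ALSH, the argmax is preserved exactly; only the subsequent LSH lookup is approximate.

```python
import numpy as np

def simple_reduction(X, v):
    """Append one coordinate so Euclidean NN on (P, Q) matches MIPS on (X, v)."""
    sq_norms = np.linalg.norm(X, axis=1) ** 2
    M = sq_norms.max()
    P = np.hstack([X, np.sqrt(M - sq_norms)[:, None]])   # P(x) = [x; sqrt(M - ||x||^2)]
    Q = np.concatenate([v, [0.0]])                        # Q(v) = [v; 0]
    return P, Q

X = np.random.randn(1000, 16)
v = np.random.randn(16)
P, Q = simple_reduction(X, v)
print(np.argmin(np.linalg.norm(P - Q, axis=1)) == np.argmax(X @ v))   # True (up to ties)
```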


Some Comparisons

Reduction to NN + PCA-tree (Bachrach et al., 2014).

(Figure from Bachrach et al., 2014)


Directly Solving MIPS: Sampling Approach

Assume v and {x1, · · · , xn} are all nonnegative.

Sampling approach: sample index i with probability

P(i) ∝ xi^T v = Σ_t Xit vt

We can sample from the joint distribution

P(i, t) ∝ vt Xit,   i.e.   P(i, t) = vt Xit / (Σ_i Σ_t Xit vt)

= [ vt (Σ_i Xit) / Σ_t vt (Σ_i Xit) ] · [ Xit / Σ_i Xit ]

= Pv(t) Pt(i)

And then P(i) = Σ_t P(i, t).


Directly Solving MIPS: Sampling Approach

Sampling MIPS: preprocessing

Compute Rt = Σ_i Xit for all t

Construct the distributions Pt(i) ∝ Xit / Σ_i Xit for all t

Query (normalize v):

Construct the distribution Pv(t) ∝ vt Rt

Sample many (i, t) pairs by (1) sampling t from Pv(t), then (2) sampling i from Pt(i)

Output the top-T most frequently sampled indices i (see the sketch below).
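A sketch of the whole pipeline (names and sample counts illustrative; assumes nonnegative X and v as stated above):

```python
import numpy as np

def sample_mips(X, v, num_samples=20000, top=5, seed=0):
    """Candidate MIPS indices via sampling; assumes X and v are nonnegative."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    col_sums = X.sum(axis=0)                      # R_t = sum_i X_it  (preprocessing)
    Pt = X / col_sums                             # column t is the distribution P_t(i)
    Pv = v * col_sums
    Pv = Pv / Pv.sum()                            # P_v(t) ∝ v_t R_t  (query time)
    counts = np.zeros(X.shape[0], dtype=int)
    ts, reps = np.unique(rng.choice(X.shape[1], size=num_samples, p=Pv), return_counts=True)
    for t, r in zip(ts, reps):                    # sample i given each sampled t
        np.add.at(counts, rng.choice(X.shape[0], size=r, p=Pt[:, t]), 1)
    return np.argsort(-counts)[:top]              # top-T most frequently sampled indices

X = np.random.rand(5000, 50)
v = np.random.rand(50)
print(sample_mips(X, v), int(np.argmax(X @ v)))   # sampled candidates vs the exact answer
```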

What’s the time complexity?


Diamond Sampling

Proposed in Ballard et al., “Diamond Sampling for Approximate Maximum All-pairs Dot-product (MAD) Search”, ICDM 2015.

A sampling approach for the MAD problem:

max_{i,j} |xi^T xj|

Can handle negative values in xi, xj.


Fast Sampling Algorithms


Sampling from a given distribution

Sampling from a discrete probability distribution over n indices with probabilities

p1, p2, . . . , pn

(Assume it is normalized, so Σ_i pi = 1.)

What’s the time complexity for generating one sample?


Linear Search

Randomly generate a number s ∈ [0, 1] uniformly

Constant Time

Construct the cumulative distribution

c0 = 0, c1 = c0 + p1, c2 = c1 + p2, . . . , cn = cn−1 + pn

Find the index i such that s ∈ [ci−1, ci ].

O(n) time
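A minimal sketch of the linear scan (names illustrative):

```python
import numpy as np

def sample_linear(p, rng):
    """Draw one index by scanning the running sum: O(n) per sample."""
    s = rng.random()                  # uniform in [0, 1]
    c = 0.0
    for i, pi in enumerate(p):
        c += pi
        if s <= c:
            return i
    return len(p) - 1                 # guard against floating-point round-off

rng = np.random.default_rng(0)
print(sample_linear([0.1, 0.2, 0.3, 0.4], rng))
```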


Binary Search

Preprocessing:

Construct the cumulative distribution c0, c1, . . . , cn

O(n)

Sampling:

Randomly generate a number s ∈ [0, 1] uniformly

Binary search to find i such that s ∈ [ci−1, ci ]

O(log n) time
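With NumPy, the cumulative array plus a binary search is a couple of lines (a sketch):

```python
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])
c = np.cumsum(p)                      # preprocessing: cumulative distribution, O(n)

rng = np.random.default_rng(0)
s = rng.random()                      # uniform in [0, 1]
i = int(np.searchsorted(c, s))        # binary search for s in [c_{i-1}, c_i]: O(log n)
print(i)
```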


Alias Method

Preprocessing:

Create an Alias Table

O(n) time (Vose’s algorithm)

Sampling:

Generate a uniform random number in {1, 2, . . . , n}.

Generate another random number to determine the index.

Constant time!
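A sketch of the alias table via Vose's algorithm (names illustrative):

```python
import numpy as np

def build_alias_table(p):
    """Vose's algorithm: O(n) preprocessing for O(1) sampling."""
    n = len(p)
    prob = np.array(p, dtype=float) * n            # rescale so the average entry is 1
    alias = np.zeros(n, dtype=int)
    small = [i for i in range(n) if prob[i] < 1.0]
    large = [i for i in range(n) if prob[i] >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                                # l covers s's leftover mass
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def alias_sample(prob, alias, rng):
    """Constant time: pick a column uniformly, then a biased coin flip."""
    i = rng.integers(len(prob))
    return int(i) if rng.random() < prob[i] else int(alias[i])

prob, alias = build_alias_table([0.1, 0.2, 0.3, 0.4])
rng = np.random.default_rng(0)
draws = [alias_sample(prob, alias, rng) for _ in range(100000)]
print(np.bincount(draws) / len(draws))              # ≈ [0.1, 0.2, 0.3, 0.4]
```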



F+ Tree

Sampling from distribution p1, . . . , pn

But one of the pi changes at each step (or once every few iterations).

The alias table does not work here (we would need to reconstruct the whole table).

F+ Tree (Fenwick Tree)

Sampling: O(log n) time

Update: O(log n) time
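A sketch of an updatable sampler built on a Fenwick tree (illustrative; one standard way to realize the O(log n) sample / O(log n) update trade-off, not necessarily the exact F+ tree layout of the slide):

```python
import numpy as np

class FenwickSampler:
    """Sample i with probability w[i] / sum(w); sample and update are both O(log n)."""
    def __init__(self, w):
        self.n = len(w)
        self.tree = [0.0] * (self.n + 1)          # 1-indexed Fenwick tree of prefix sums
        self.w = [0.0] * self.n
        for i, wi in enumerate(w):
            self.update(i, wi)

    def update(self, i, new_weight):              # O(log n) per weight change
        delta, self.w[i] = new_weight - self.w[i], new_weight
        j = i + 1
        while j <= self.n:
            self.tree[j] += delta
            j += j & (-j)

    def total(self):                              # sum of all weights
        j, s = self.n, 0.0
        while j > 0:
            s += self.tree[j]
            j -= j & (-j)
        return s

    def sample(self, rng):                        # O(log n): descend by powers of two
        s = rng.random() * self.total()
        idx, bit = 0, 1 << self.n.bit_length()
        while bit:
            nxt = idx + bit
            if nxt <= self.n and self.tree[nxt] < s:
                s -= self.tree[nxt]
                idx = nxt
            bit >>= 1
        return idx                                # 0-based sampled index

sampler = FenwickSampler([0.1, 0.2, 0.3, 0.4])
rng = np.random.default_rng(0)
print(np.bincount([sampler.sample(rng) for _ in range(100000)], minlength=4) / 100000)
sampler.update(3, 0.0)                            # change one weight in O(log n)
print(np.bincount([sampler.sample(rng) for _ in range(100000)], minlength=4) / 100000)
```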


Coming up

Questions?