Daniel A. Spielman - Yale University
-
The Laplacian Matrices of Graphs
ISIT, July 12, 2016
Daniel A. Spielman
-
Outline

Laplacians
  Interpolation on graphs
  Resistor networks
  Spring networks
  Graph drawing
  Clustering
  Linear programming
Sparsification
Solving Laplacian equations
  Best results
  The simplest algorithm
-
Interpolation on Graphs (Zhu, Ghahramani, Lafferty '03)

Interpolate the values of a function at all vertices from given values at a few vertices.

Minimize \sum_{(a,b) \in E} (x(a) - x(b))^2
subject to the given values.

[Figure: protein network on ANAPC10, CDC27, ANAPC5, UBE2C, ANAPC2, CDC20, with two vertices fixed to the values 1 and 0.]
-
Interpolation on Graphs (Zhu, Ghahramani, Lafferty '03)

Interpolate the values of a function at all vertices from given values at a few vertices.

Minimize \sum_{(a,b) \in E} (x(a) - x(b))^2 = x^T L x
subject to the given values.

Take derivatives. Minimize by solving a Laplacian system.

[Figure: the protein network with fixed values 1 and 0 and interpolated values 0.51, 0.61, 0.53, 0.30.]
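Setting the derivative of x^T L x to zero at the free vertices reduces interpolation to a single Laplacian solve. A minimal NumPy sketch (not from the talk; the path graph and the name `interpolate` are illustrative):

```python
import numpy as np

def interpolate(L, fixed):
    """Minimize x^T L x subject to fixed values: the gradient condition at
    the free vertices U is L[U,U] x_U = -L[U,B] x_B for fixed vertices B."""
    n = L.shape[0]
    B = sorted(fixed)
    U = [i for i in range(n) if i not in fixed]
    xB = np.array([fixed[i] for i in B], dtype=float)
    x = np.empty(n)
    x[B] = xB
    x[U] = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, B)] @ xB)
    return x

# Path graph on 4 vertices with its endpoints fixed to 0 and 1;
# harmonic interpolation fills in linearly increasing values.
L = np.array([[ 1.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  1.0]])
x = interpolate(L, {0: 0.0, 3: 1.0})
```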
-
The Laplacian Quadratic Form of G = (V,E)

For x : V -> R:

\sum_{(a,b) \in E} (x(a) - x(b))^2

[Figure: a graph with vertex values 1, 0, 0.51, 0.61, 0.53, 0.30.]
-
The Laplacian Quadratic Form of G = (V,E)

\sum_{(a,b) \in E} (x(a) - x(b))^2

[Figure: the same graph, with each edge labeled by its squared difference: (0.53)^2, (0.49)^2, (0.39)^2, (0.31)^2.]
-
For graphs with positive edge weights:

\sum_{(a,b) \in E} w_{a,b} (x(a) - x(b))^2 = x^T L_G x
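The edge-by-edge sum and the matrix quadratic form are the same number, which is easy to check numerically. A small sketch (the weighted triangle is an illustrative example, not from the talk):

```python
import numpy as np

def laplacian(n, edges):
    """Weighted Laplacian: diagonal accumulates weighted degree,
    off-diagonal (a,b) is -w_ab."""
    L = np.zeros((n, n))
    for a, b, w in edges:
        L[a, a] += w
        L[b, b] += w
        L[a, b] -= w
        L[b, a] -= w
    return L

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]
L = laplacian(3, edges)
x = np.array([1.0, 0.5, -2.0])

# Sum over edges of w_ab * (x(a) - x(b))^2; equals x^T L_G x.
quad = sum(w * (x[a] - x[b]) ** 2 for a, b, w in edges)
```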
-
Resistor Networks

View edges as resistors connecting vertices.
Apply voltages at some vertices. Measure the induced voltages and current flow.

[Figure: a network with 1V and 0V applied at two vertices.]
-
Resistor Networks

Induced voltages minimize \sum_{(a,b) \in E} (x(a) - x(b))^2 subject to the constraints.

Effective resistance = 1 / (current flow at one volt)

[Figure: network between terminals at 0V and 1V; induced voltages 0.375V, 0.5V, 0.5V, 0.625V; edge currents 0.25, 0.375, 0.5, 0.5.]
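Effective resistance can be computed from the pseudoinverse of the Laplacian as R_eff(a,b) = (e_a - e_b)^T L^+ (e_a - e_b). A sketch (not from the talk) checking the familiar series and parallel rules:

```python
import numpy as np

def effective_resistance(L, a, b):
    """R_eff(a, b) = (e_a - e_b)^T L^+ (e_a - e_b), via the pseudoinverse."""
    u = np.zeros(L.shape[0])
    u[a], u[b] = 1.0, -1.0
    return u @ np.linalg.pinv(L) @ u

# Two unit resistors in series (a path a-c-b): resistances add, R_eff = 2.
L_path = np.array([[ 1.0, -1.0,  0.0],
                   [-1.0,  2.0, -1.0],
                   [ 0.0, -1.0,  1.0]])

# Two parallel unit resistors (a single edge of weight 2): R_eff = 1/2.
L_par = np.array([[ 2.0, -2.0],
                  [-2.0,  2.0]])
```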
-
Spring Networks

View edges as rubber bands or ideal linear springs.
Nail down some vertices, and let the rest settle.

When stretched to length \ell, the potential energy is \ell^2 / 2.
-
Spring Networks

Nail down some vertices, and let the rest settle.

Physics: the position minimizes the total potential energy
(1/2) \sum_{(a,b) \in E} (x(a) - x(b))^2
subject to the boundary constraints (the nails).
-
Drawing by Spring Networks (Tutte '63)

Nail the vertices of one face to a convex polygon, and let the rest settle.

If the graph is planar, then the spring drawing has no crossing edges!
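At equilibrium each free vertex sits at the average of its neighbors, so the drawing is one Laplacian solve per coordinate. A minimal sketch (not from the talk; the wheel graph is an illustrative example):

```python
import numpy as np

def tutte_embedding(n, edges, nailed):
    """Spring drawing: nail some vertices, let the rest settle so each free
    vertex is the average of its neighbors (a Laplacian solve per coordinate)."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1.0; L[b, b] += 1.0
        L[a, b] -= 1.0; L[b, a] -= 1.0
    B = sorted(nailed)
    U = [i for i in range(n) if i not in nailed]
    pos = np.zeros((n, 2))
    pos[B] = [nailed[i] for i in B]
    pos[U] = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, B)] @ pos[B])
    return pos

# A wheel: the outer square nailed to a convex polygon, center vertex free.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
corners = {0: (1.0, 1.0), 1: (-1.0, 1.0), 2: (-1.0, -1.0), 3: (1.0, -1.0)}
pos = tutte_embedding(5, edges, corners)
```

By symmetry the free center vertex settles at the centroid of the square.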
-
Spectral Graph Theory

An n-by-n symmetric matrix has n real eigenvalues \lambda_1 <= \lambda_2 <= ... <= \lambda_n and eigenvectors v_1, ..., v_n such that L v_i = \lambda_i v_i.

For a Laplacian, \lambda_1 = 0 and v_1 is the constant vector.

These eigenvalues and eigenvectors (excluding \lambda_1 and v_1) tell us a lot about a graph!
-
Spectral Graph Drawing (Hall '70)

[Figure: original drawing of a graph on vertices 1-9.]
-
Spectral Graph Drawing (Hall '70)

Plot vertex a at (v_2(a), v_3(a)); draw edges as straight lines.

[Figure: the original drawing and the spectral drawing of the graph on vertices 1-9.]
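A sketch of Hall's recipe (not from the talk): take the eigenvectors of the two smallest nonzero eigenvalues as coordinates. For a cycle these eigenvectors span sampled sines and cosines, so the drawing is a regular polygon.

```python
import numpy as np

def spectral_layout(L):
    """Hall's spectral drawing: vertex a is plotted at (v2(a), v3(a))."""
    lam, V = np.linalg.eigh(L)   # eigenvalues in increasing order
    return V[:, 1:3]             # skip v1, the constant vector for lambda_1 = 0

# Cycle on 6 vertices: its spectral drawing is a regular hexagon.
n = 6
L = np.zeros((n, n))
for a in range(n):
    b = (a + 1) % n
    L[a, a] += 1.0; L[b, b] += 1.0
    L[a, b] -= 1.0; L[b, a] -= 1.0

coords = spectral_layout(L)
radii = np.linalg.norm(coords, axis=1)   # all vertices equidistant from 0
```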
-
Dodecahedron

Best embedded by its first three eigenvectors.
-
Erdős's co-authorship graph
-
When there is a "nice" drawing (most edges are short; vertices are spread out and don't clump too much), \lambda_2 is close to 0.

When \lambda_2 is big, say \lambda_2 > 10 / |V|^{1/2}, there is no nice picture of the graph.
-
Measuring Boundaries of Sets

Boundary: edges leaving a set S.

[Figure: a graph with a subset S of vertices circled.]
-
Measuring Boundaries of Sets

Boundary: edges leaving a set S.

Characteristic vector of S: x(a) = 1 if a is in S, and 0 if a is not in S.

Then \sum_{(a,b) \in E} (x(a) - x(b))^2 = |boundary(S)|.

[Figure: the graph with vertices in S labeled 1 and all others labeled 0.]
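Each boundary edge contributes (1 - 0)^2 = 1 to the sum and each internal edge contributes 0, which is easy to verify numerically (the 4-vertex graph below is illustrative, not from the talk):

```python
import numpy as np

# x^T L x at a 0/1 characteristic vector counts the edges leaving S.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
n = 4
L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1.0; L[b, b] += 1.0
    L[a, b] -= 1.0; L[b, a] -= 1.0

S = {0, 1}
x = np.array([1.0 if a in S else 0.0 for a in range(n)])
boundary = int(round(x @ L @ x))   # edges (1,2), (3,0), (1,3) leave S
```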
-
Spectral Clustering and Partitioning

Find large sets of small boundary.

Heuristic to find an x with small x^T L_G x:
compute the eigenvector v_2, with L_G v_2 = \lambda_2 v_2, and consider its level sets.

[Figure: a graph with vertices labeled by their entries of v_2 (values such as -1.1, -0.4, ..., 1.6); the level set at 0.5 cuts out a set S.]
-
Spectral Partitioning (Donath-Hoffman '72, Barnes '82, Hagen-Kahng '92)

Take S = {a : v_2(a) >= t} for some threshold t. Cheeger's inequality implies a good approximation.
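A minimal sweep-cut sketch (not from the talk; the ratio objective and the barbell example are illustrative): sort vertices by v_2 and score each level set by its boundary size over the smaller side.

```python
import numpy as np

def sweep_cut(L):
    """Spectral partitioning heuristic: sweep over the level sets
    S_t = {a : v2(a) >= t} and return the set minimizing
    (edges leaving S) / min(|S|, |V \ S|)."""
    n = L.shape[0]
    v2 = np.linalg.eigh(L)[1][:, 1]
    order = np.argsort(-v2)                  # vertices by decreasing v2
    best, best_ratio = None, np.inf
    for k in range(1, n):
        x = np.zeros(n)
        x[order[:k]] = 1.0                   # characteristic vector of S
        ratio = (x @ L @ x) / min(k, n - k)  # x^T L x = |boundary(S)|
        if ratio < best_ratio:
            best, best_ratio = set(order[:k].tolist()), ratio
    return best

# Two triangles joined by a single edge: the sweep finds one triangle.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
L = np.zeros((6, 6))
for a, b in edges:
    L[a, a] += 1.0; L[b, b] += 1.0
    L[a, b] -= 1.0; L[b, a] -= 1.0

S = sweep_cut(L)
```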
-
The Laplacian Matrix of a Graph

[Figure: a graph on vertices 1-6.]

L = (  3  -1  -1  -1   0   0
      -1   2   0   0   0  -1
      -1   0   3  -1  -1   0
      -1   0  -1   4  -1  -1
       0   0  -1  -1   3  -1
       0  -1   0  -1  -1   3 )

Symmetric
Non-positive off-diagonals
Diagonally dominant
-
The Laplacian Matrix of a Graph

L_G = \sum_{(a,b) \in E} w_{a,b} L_{a,b}

x^T L_G x = \sum_{(a,b) \in E} w_{a,b} (x(a) - x(b))^2

L_{1,2} = (  1  -1
            -1   1 )  =  (  1
                           -1 ) ( 1  -1 )
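Each edge contributes the rank-1 matrix (e_a - e_b)(e_a - e_b)^T, and the Laplacian is their weighted sum. A small sketch (the weighted triangle is an illustrative example, not from the talk):

```python
import numpy as np

def edge_laplacian(n, a, b):
    """The rank-1 edge Laplacian L_{a,b} = (e_a - e_b)(e_a - e_b)^T."""
    u = np.zeros(n)
    u[a], u[b] = 1.0, -1.0
    return np.outer(u, u)

# L_G is the weighted sum of its edge Laplacians.
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]
LG = sum(w * edge_laplacian(3, a, b) for a, b, w in edges)
```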
-
Quickly Solving Laplacian Equations

S, Teng '04 (using low-stretch trees and sparsifiers): O(m \log^c n \log(1/\epsilon))
Koutis, Miller, Peng '11 (low-stretch trees and sampling): \tilde{O}(m \log n \log(1/\epsilon))
Cohen, Kyng, Pachocki, Peng, Rao '14: \tilde{O}(m \log^{1/2} n \log(1/\epsilon))

where m is the number of non-zeros and n is the dimension.

Good code:
LAMG (lean algebraic multigrid), by Livne and Brandt
CMG (combinatorial multigrid), by Koutis
-
Quickly Solving Laplacian Equations

An \epsilon-accurate solution to L_G x = b is an \tilde{x} satisfying

\| \tilde{x} - x \|_{L_G} <= \epsilon \| x \|_{L_G},

where \| v \|_{L_G} = \sqrt{v^T L_G v} = \| L_G^{1/2} v \|.
-
Quickly Solving Laplacian Equations

Such solvers also allow fast computation of the eigenvectors corresponding to small eigenvalues.
-
Laplacians in Linear Programming

Laplacians appear when solving linear programs on graphs by interior point methods:

Maximum and min-cost flow (Daitch, S '08, Mądry '13)
Shortest paths (Cohen, Mądry, Sankowski, Vladu '16)
Isotonic regression (Kyng, Rao, Sachdeva '15)
Lipschitz learning: regularized interpolation on graphs (Kyng, Rao, Sachdeva, S '15)
-
Interior Point Method for Maximum s-t Flow

maximize    f^{out}(s)
subject to  f^{out}(a) = f^{in}(a)        for all a not in {s, t}
            0 <= f(a,b) <= c(a,b)         for all (a,b) in E

[Figure: a small network from s to t with edge capacities 1, 3, 4, 1, 1.]
-
Interior Point Method for Maximum s-t Flow

The method replaces the capacity constraints with a weighted quadratic constraint:

maximize    f^{out}(s)
subject to  f^{out}(a) = f^{in}(a)                       for all a not in {s, t}
            \sum_{(a,b) \in E} w_{a,b} f(a,b)^2 <= C

and makes multiple calls with varying weights w_{a,b}.
-
Spectral Sparsification

Every graph can be approximated by a sparse graph with a similar Laplacian.
-
Approximating Graphs

A graph H is an \epsilon-approximation of G, written L_H \approx_\epsilon L_G, if for all x:

1/(1+\epsilon) <= (x^T L_H x) / (x^T L_G x) <= 1 + \epsilon
-
Approximating Graphs

An \epsilon-approximation preserves the boundaries of every set.
-
Approximating Graphs

If L_H \approx_\epsilon L_G, then solutions to linear equations in the two Laplacians are similar, as are effective resistances:

L_H \approx_\epsilon L_G  <=>  L_H^{-1} \approx_\epsilon L_G^{-1}
-
Expanders Sparsify Complete Graphs

Every set of vertices has large boundary.
\lambda_2 is large.
Random regular graphs are usually expanders.
Expanders yield good LDPC codes.
-
Sparsification by Random Sampling

Assign a probability p_{a,b} to each edge (a,b).
Include edge (a,b) in H with probability p_{a,b}.
If edge (a,b) is included, give it weight w_{a,b} / p_{a,b}.

Then E[L_H] = \sum_{(a,b) \in E} p_{a,b} (w_{a,b} / p_{a,b}) L_{a,b} = L_G.
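The reweighting by 1/p_{a,b} is exactly what makes the sampled Laplacian unbiased. A minimal sketch of the sampling step (not from the talk; the probabilities are taken as given input):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sparsifier(edges, probs):
    """Keep edge (a, b, w) with probability p; reweight kept edges to w/p,
    so that the sampled Laplacian satisfies E[L_H] = L_G."""
    H = []
    for (a, b, w), p in zip(edges, probs):
        if rng.random() < p:
            H.append((a, b, w / p))
    return H

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]
# With all probabilities 1, every edge is kept with its original weight.
H = sample_sparsifier(edges, [1.0, 1.0, 1.0])
```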
-
Sparsification by Random Sampling (S, Srivastava '08)

Choose p_{a,b} to be w_{a,b} times the effective resistance between a and b.

Low resistance between a and b means there are many alternate routes for current to flow, so the edge is not critical.

Only O(n \log n / \epsilon^2) edges are needed.

The proof is by random matrix concentration bounds (Rudelson, Ahlswede-Winter, Tropp, etc.).
-
Optimal Graph Sparsification? (Batson, S, Srivastava '09)

For every G = (V,E,w) there is an H = (V,F,z) such that L_G \approx_\epsilon L_H and |F| <= (2+\epsilon)^2 n / \epsilon^2.

This is within a factor of 2 of how well Ramanujan expanders approximate complete graphs.
-
Approximate Gaussian Elimination (Kyng & Sachdeva '16)

Gaussian elimination: compute an upper triangular U so that L_G = U^T U.

Approximate Gaussian elimination: compute a sparse upper triangular U so that L_G \approx U^T U.
-
Additive View of Gaussian Elimination

Find an upper triangular matrix U such that U^T U = A, where

A = ( 16  -4  -8  -4
      -4   5   0  -1
      -8   0  14   0
      -4  -1   0   7 )
-
Additive View of Gaussian Elimination

Find the rank-1 matrix that agrees with A on the first row and column:

( 4, -1, -2, -1 )^T ( 4, -1, -2, -1 )  =  ( 16  -4  -8  -4
                                            -4   1   2   1
                                            -8   2   4   2
                                            -4   1   2   1 )
-
Additive View of Gaussian Elimination

Subtract the rank-1 matrix. We have eliminated the first variable:

( 16  -4  -8  -4 )     ( 16  -4  -8  -4 )     (  0   0   0   0 )
( -4   5   0  -1 )  -  ( -4   1   2   1 )  =  (  0   4  -2  -2 )
( -8   0  14   0 )     ( -8   2   4   2 )     (  0  -2  10  -2 )
( -4  -1   0   7 )     ( -4   1   2   1 )     (  0  -2  -2   6 )
-
Additive View of Gaussian Elimination

Find the rank-1 matrix that agrees on the next row and column:

( 0, 2, -1, -1 )^T ( 0, 2, -1, -1 )  =  ( 0   0   0   0
                                          0   4  -2  -2
                                          0  -2   1   1
                                          0  -2   1   1 )
-
Additive View of Gaussian Elimination

Subtract the rank-1 matrix. We have eliminated the second variable:

( 0   0   0   0 )     ( 0   0   0   0 )     ( 0   0   0   0 )
( 0   4  -2  -2 )  -  ( 0   4  -2  -2 )  =  ( 0   0   0   0 )
( 0  -2  10  -2 )     ( 0  -2   1   1 )     ( 0   0   9  -3 )
( 0  -2  -2   6 )     ( 0  -2   1   1 )     ( 0   0  -3   5 )
-
Additive View of Gaussian Elimination

Continuing, A = v_1 v_1^T + v_2 v_2^T + v_3 v_3^T + v_4 v_4^T with

v_1 = (4, -1, -2, -1)^T,  v_2 = (0, 2, -1, -1)^T,  v_3 = (0, 0, 3, -1)^T,  v_4 = (0, 0, 0, 2)^T.

The running time is proportional to the sum of the squares of the numbers of non-zeros in these vectors.
-
Additive View of Gaussian Elimination

The vectors v_1, ..., v_4 are the rows of an upper triangular matrix U, so

A = v_1 v_1^T + v_2 v_2^T + v_3 v_3^T + v_4 v_4^T = U^T U,  with

U = ( 4  -1  -2  -1        U^T = (  4   0   0   0
      0   2  -1  -1               -1   2   0   0
      0   0   3  -1               -2  -1   3   0
      0   0   0   2 )             -1  -1  -1   2 )
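The additive view above is a few lines of code (a sketch, not the talk's implementation; it assumes the matrix is positive definite, as this example is):

```python
import numpy as np

def additive_cholesky(A):
    """Gaussian elimination in the additive view: repeatedly peel off the
    rank-1 matrix v v^T that agrees with the remaining matrix on its first
    row and column.  The vectors v form the rows of U, and A = U^T U."""
    R = A.astype(float).copy()
    n = R.shape[0]
    U = np.zeros((n, n))
    for i in range(n):
        v = R[i] / np.sqrt(R[i, i])   # agrees with R on row/column i
        U[i] = v
        R -= np.outer(v, v)           # eliminate variable i
    return U

# The 4x4 matrix from the slides:
A = np.array([[16.0, -4.0, -8.0, -4.0],
              [-4.0,  5.0,  0.0, -1.0],
              [-8.0,  0.0, 14.0,  0.0],
              [-4.0, -1.0,  0.0,  7.0]])
U = additive_cholesky(A)
```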
-
Gaussian Elimination of Laplacians

( 16  -4  -8  -4 )     (  4 )                        (  0   0   0   0 )
( -4   5   0  -1 )  -  ( -1 ) ( 4  -1  -2  -1 )   =  (  0   4  -2  -2 )
( -8   0  14   0 )     ( -2 )                        (  0  -2  10  -2 )
( -4  -1   0   7 )     ( -1 )                        (  0  -2  -2   6 )

If the matrix on the left is a Laplacian, then so is the result.
-
Gaussian Elimination of Laplacians

When we eliminate a node, we add a clique on its neighbors.

[Figure: eliminating a node of a weighted graph replaces its incident edges with a weighted clique on its neighbors.]
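The clique is the Schur complement onto the remaining vertices. A sketch (not from the talk; the star graph is an illustrative example):

```python
import numpy as np

def eliminate(L, v):
    """Schur complement: eliminating vertex v of a Laplacian adds a clique
    on its neighbors, the edge between a and b receiving weight
    w_av * w_bv / (total weight at v)."""
    keep = [i for i in range(L.shape[0]) if i != v]
    return L[np.ix_(keep, keep)] - np.outer(L[keep, v], L[v, keep]) / L[v, v]

# A star: center 0 joined to leaves 1, 2, 3 by unit edges.  Eliminating
# the center yields a triangle on the leaves with edge weights 1/3.
L = np.array([[ 3.0, -1.0, -1.0, -1.0],
              [-1.0,  1.0,  0.0,  0.0],
              [-1.0,  0.0,  1.0,  0.0],
              [-1.0,  0.0,  0.0,  1.0]])
Ls = eliminate(L, 0)
```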
-
Approximate Gaussian Elimination (Kyng & Sachdeva '16)

1. When we eliminate a node, add a clique on its neighbors.
2. Sparsify that clique, without ever constructing it.
-
Approximate Gaussian Elimination (Kyng & Sachdeva '16)

0. Initialize by randomly permuting the vertices, and by making O(log^2 n) copies of every edge.
1. When eliminating a node of degree d, add d edges at random between its neighbors, sampled with probability proportional to the weight of the edge to the eliminated node.

Total time is O(m log^3 n). This can be improved by sacrificing some simplicity.
-
Approximate Gaussian Elimination (Kyng & Sachdeva '16)

Analysis by random matrix theory:

Write U^T U as a sum of random matrices, with E[U^T U] = L_G.
The random permutation and copying control the variances of the random matrices.
Apply the matrix Freedman inequality (Tropp '11).
-
Recent Developments (Kyng, Lee, Peng, Sachdeva, S '16)

Solvers extend to other families of linear systems:

complex-weighted Laplacians, with blocks ( 1             e^{i\theta}
                                           e^{-i\theta}  1 )

connection Laplacians, with blocks ( I    Q
                                     Q^T  I )

Code: Laplacians.jl
-
To Learn More

My web page on Laplacian linear equations, sparsification, local graph clustering, low-stretch spanning trees, and so on.
My class notes from "Graphs and Networks" and "Spectral Graph Theory".
"Lx = b", by Nisheeth Vishnoi.