Sums of Squares, Moments and Applications in Polynomial Optimization
Monique Laurent
Fields Distinguished Lecture Series - May 10, 2021
What is polynomial optimization?
(P)
Minimize a polynomial function $f$ over a region
$$K = \{x \in \mathbb{R}^n : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$$
defined by polynomial inequalities (and equations)
Some instances
Testing nonnegativity of polynomials
The unconstrained quadratic case is easy

The quadratic form $x^TMx$ is nonnegative over $\mathbb{R}^n$ if and only if the matrix $M$ is positive semidefinite ($M \succeq 0$).
This can be tested in polynomial time, using Gaussian elimination.

Constrained quadratic / unconstrained quartic is hard

Testing matrix copositivity: co-NP-complete [Kabadi-Murty 1987]
A symmetric matrix $M$ is copositive if $x^TMx = \sum_{i,j} M_{ij}x_ix_j \ge 0$ for all $x \ge 0$.
Equivalently, the quartic polynomial $\sum_{i,j} M_{ij}x_i^2x_j^2$ is nonnegative over $\mathbb{R}^n$.

Testing convexity: NP-hard [Ahmadi et al. 2013]
A polynomial $f(x)$ is convex if and only if its Hessian matrix $H(f)(x)$ is positive semidefinite.
Equivalently, $g(x,y) = y^T H(f)(x)\, y$ is nonnegative on $\mathbb{R}^n \times \mathbb{R}^n$.
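Since copositivity is co-NP-complete, no efficient exact test is known, but sampling the nonnegative orthant gives a quick numerical sanity check that can refute copositivity. A minimal NumPy sketch (the helper name `maybe_copositive` and the sampling scheme are illustrative, not from the lecture):

```python
import numpy as np

def maybe_copositive(M, trials=20000, seed=0):
    """Sampled necessary check: returns False if some x >= 0 with x^T M x < 0
    is found; True only means no violation was found (NOT a proof)."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    X = rng.random((trials, n))                 # nonnegative sample points
    vals = np.einsum('ti,ij,tj->t', X, M, X)    # x^T M x for each sample row
    return bool(vals.min() >= -1e-9)            # small tolerance for rounding

# The Horn matrix: a classical copositive matrix that is not PSD
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
assert maybe_copositive(H)                      # consistent with copositivity
assert not maybe_copositive(-np.eye(3))         # -I is clearly not copositive
```

A passing check is only evidence; a sampled violation, however, is a conclusive refutation.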
Example from distance geometry
Reconstruct the locations of objects (say, in 3D) from partial measurements of mutual distances.

Given (partial) pairwise distances $d = (d_{ij})_{\{i,j\} \in E}$, find (if possible) locations $u_1, \dots, u_n \in \mathbb{R}^k$ in a given dimension $k$ ($k = 1, 2, 3, \dots$) such that
$$\|u_i - u_j\|^2 = d_{ij} \quad \text{for all } \{i,j\} \in E$$
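As a concrete illustration (not from the lecture), the realization problem can be attacked numerically by minimizing the total squared violation of these constraints, which is exactly the quartic objective discussed on the next slide. A minimal SciPy sketch on a toy 1-dimensional instance, with illustrative names:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: points 0, 2, 5 on a line; d_ij = ||u_i - u_j||^2
edges = {(0, 1): 4.0, (1, 2): 9.0, (0, 2): 25.0}

def stress(u):
    # Sum of squared violations of ||u_i - u_j||^2 = d_ij
    return sum((d - (u[i] - u[j]) ** 2) ** 2 for (i, j), d in edges.items())

# The exact configuration achieves objective 0
assert stress(np.array([0.0, 2.0, 5.0])) == 0.0

# A local solver from a nearby start recovers a realization (up to
# translation/reflection); for hard instances this can get stuck.
res = minimize(stress, x0=np.array([0.1, 1.9, 5.2]), method='BFGS')
assert res.fun < 1e-6
```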
Formulations via SDP and polynomial optimization
Find (if possible) vectors $u_1, \dots, u_n \in \mathbb{R}^k$ ($k = 1, 2, 3, \dots$) such that
$$\|u_i - u_j\|^2 = d_{ij} \quad (\{i,j\} \in E)$$

⇕  [setting $X = (\langle u_i, u_j \rangle)$]

Find (if possible) a solution $X$ with rank $\le k$ to the semidefinite program
$$X \succeq 0, \quad X_{ii} + X_{jj} - 2X_{ij} = d_{ij} \quad (\{i,j\} \in E)$$

⇕

Decide if $p_{\min} = 0$ and find a global minimizer of the quartic polynomial
$$\min_{x \in \mathbb{R}^{kn}} \; p(x) = \sum_{\{i,j\} \in E} \Big( d_{ij} - \sum_{h=1}^{k} (x_{ih} - x_{jh})^2 \Big)^2$$

Hard problem, already in dimension $k = 1$ when $G$ is the cycle $C_n$ [Saxe 1979]. Given $a_1, \dots, a_n \in \mathbb{N}$, assign distance $d_{i,i+1} = a_i$ to the edges of $C_n$. Then
$$\exists \text{ locations in } \mathbb{R} \iff \exists\, \varepsilon \in \{\pm 1\}^n \text{ s.t. } \sum_{i=1}^{n} \varepsilon_i a_i = 0$$
(the hard partition problem)
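The reduction above can be checked by brute force for small $n$; a minimal sketch (function name illustrative):

```python
from itertools import product

def realizable_on_line(a):
    """Cycle C_n with edge lengths a_i embeds in R iff some signing sums to 0."""
    return any(sum(e * x for e, x in zip(eps, a)) == 0
               for eps in product((1, -1), repeat=len(a)))

assert realizable_on_line([1, 2, 3])      # +1 +2 -3 = 0
assert not realizable_on_line([1, 2, 4])  # total sum is odd, so no signing is 0
```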
Examples from combinatorial problems in graphs
[Graph example: α = 4, χ = 3]

• stability number α(G): maximum cardinality of a set of pairwise non-adjacent vertices (a stable set)

• coloring number χ(G): minimum number of colors needed to properly color the vertices of G

Computing α(G), χ(G) is NP-hard [Karp 1972]

Chvátal's reduction of coloring to the stability number: χ(G) is the smallest integer $c$ such that $\alpha(G \square K_c) = |V(G)|$
Polynomial optimization formulations for α(G )
• Basic 0/1 formulation:
$$\alpha(G) = \max \sum_{i \in V} x_i \quad \text{s.t.} \quad x_ix_j = 0\ (\{i,j\} \in E), \quad x_i^2 = x_i\ (i \in V)$$

• Motzkin-Straus formulation:
$$\frac{1}{\alpha(G)} = \min\, x^T(I + A_G)x \quad \text{s.t.} \quad \sum_{i \in V} x_i = 1, \quad x_i \ge 0\ (i \in V)$$
$$\frac{1}{\alpha(G)} = \min\, (x^{\circ 2})^T(I + A_G)\,x^{\circ 2} \quad \text{s.t.} \quad \sum_{i \in V} x_i^2 = 1$$

• Copositive formulation:
$$\alpha(G) = \min \lambda \quad \text{s.t.} \quad \lambda(I + A_G) - J \text{ is copositive}$$

Optimization over the boolean cube $\{0,1\}^n$, the standard simplex $\Delta_n$, the unit sphere $S^{n-1}$, the copositive cone $\mathrm{COP}_n$
More in Lecture 3
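For tiny graphs the basic 0/1 formulation can be solved by exhaustive enumeration, a useful reference point for the relaxations that follow; an illustrative sketch:

```python
from itertools import combinations

def alpha(n, edges):
    """Stability number via the 0/1 formulation: enumerate x in {0,1}^n with
    x_i x_j = 0 on edges, and maximize sum(x)."""
    E = set(map(frozenset, edges))
    for k in range(n, 0, -1):           # largest candidate size first
        for S in combinations(range(n), k):
            if not any(frozenset(p) in E for p in combinations(S, 2)):
                return k                # S is stable
    return 0

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # 5-cycle
assert alpha(5, c5) == 2
assert alpha(4, []) == 4                # empty graph: every set is stable
```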
Basic semidefinite bounds for α(G ) and χ(G )
For a stable set $S$ with indicator vector $x$ (e.g. $x = (1,0,0,1,0)^T$), the matrix $X = \binom{1}{x}\binom{1}{x}^T$ satisfies:
$X \succeq 0$ (positive semidefinite) and $X \ge 0$ (entry-wise nonnegative)

Theta number [Lovász 1979]:
$$\vartheta(G) = \max_{X \succeq 0} \sum_{i \in V} X_{0i} \quad \text{s.t.} \quad X_{00} = 1,\; X_{0i} = X_{ii}\ (i \in V),\; X_{ij} = 0\ (\{i,j\} \in E)$$

Strengthen with nonnegativity [McEliece et al. 1978] [Schrijver 1979]:
$$\vartheta'(G) = \max_{X \succeq 0,\, X \ge 0} \sum_{i \in V} X_{ii} \quad \text{s.t.} \quad X_{00} = 1,\; X_{0i} = X_{ii}\ (i \in V),\; X_{ij} = 0\ (\{i,j\} \in E)$$

'Sandwich' inequalities: $\alpha(G) \le \vartheta'(G) \le \vartheta(G) \le \chi(\overline{G})$
Stronger bounds?
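The rank-one construction $X = (1,x)(1,x)^T$ for a stable set's indicator vector $x$ already shows $\alpha(G) \le \vartheta(G)$: such an $X$ is feasible for the $\vartheta$ program with objective value $|S|$. A minimal NumPy check on the 5-cycle (graph and stable set chosen for illustration):

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # C5
x = np.array([1, 0, 1, 0, 0], dtype=float)          # stable set S = {0, 2}
v = np.concatenate(([1.0], x))                      # the vector (1, x)
X = np.outer(v, v)                                  # X indexed by {0} ∪ V

# Feasibility for the theta program:
assert np.linalg.eigvalsh(X).min() >= -1e-12        # X is PSD (rank one)
assert X[0, 0] == 1.0                               # X_00 = 1
assert all(X[0, i + 1] == X[i + 1, i + 1] for i in range(5))  # X_0i = X_ii
assert all(X[i + 1, j + 1] == 0 for i, j in edges)  # X_ij = 0 on edges
assert X[1:, 0].sum() == x.sum() == 2               # objective equals |S| = 2
```

Hence every stable set yields a feasible point of value $|S|$, so the SDP maximum is at least $\alpha(G)$.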
Some key ideas to get stronger bounds
• Lift to a higher-dimensional space: add new variables modeling products of the original variables, such as $x_ix_j$, $x_ix_jx_k$, $x_ix_jx_kx_l$, ...

• Use sums of squares of polynomials as a 'proxy' for nonnegativity of polynomials, to get tractable relaxations

Key fact: One can model sums of squares of polynomials efficiently using semidefinite programming (SDP)
Model sums of squares of polynomials with SDP
$f(x) = \sum_{|\alpha| \le 2d} f_\alpha x^\alpha$ is a sum of squares of polynomials, i.e. $f(x) = \sum_i p_i(x)^2$  $\big[$write $p_i(x) = p_i^T [x]_d$, where $[x]_d = (x^\alpha)_{|\alpha| \le d}\big]$

⇕

$$f(x) = \sum_i [x]_d^T\, p_i p_i^T\, [x]_d = [x]_d^T \underbrace{\Big(\sum_i p_i p_i^T\Big)}_{M \succeq 0} [x]_d = \sum_{\beta,\gamma} M_{\beta,\gamma}\, x^{\beta+\gamma}$$

⇕

The SDP
$$\sum_{\beta,\gamma\,:\,\beta+\gamma=\alpha} M_{\beta,\gamma} = f_\alpha \quad (|\alpha| \le 2d), \qquad M \succeq 0$$
is feasible

Gram-matrix method [Powers-Wörmann 1998]
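The Gram-matrix method can be carried out by hand for a univariate example; the sketch below (example and the particular Gram matrix chosen are illustrative) verifies that $f(x) = x^4 + 2x^2 + 1$ admits a PSD Gram matrix, hence is SoS (indeed $f = (x^2+1)^2$):

```python
import numpy as np

# f(x) = x^4 + 2x^2 + 1 with monomial basis [x]_2 = (1, x, x^2).
# Matching coefficients of f(x) = [x]_2^T M [x]_2 gives M00 = 1, 2*M01 = 0,
# 2*M02 + M11 = 2, 2*M12 = 0, M22 = 1; the choice M02 = 1 yields:
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])
assert np.linalg.eigvalsh(M).min() >= -1e-12   # M is PSD, so f is SoS

# Sanity check that [x]_2^T M [x]_2 reproduces f at a few points
f = lambda x: x**4 + 2*x**2 + 1
for x in (-1.5, 0.0, 0.7, 2.0):
    m = np.array([1.0, x, x**2])
    assert abs(m @ M @ m - f(x)) < 1e-9
# Here M = v v^T with v = (1, 0, 1), matching the decomposition f = (x^2 + 1)^2.
```

In general the admissible Gram matrices form a spectrahedron, and an SDP solver searches it for a PSD element.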
Linear Programming vs Semidefinite Programming

Optimize a linear function over:

• LP: a polyhedron, given by $a_j^Tx = b_j$, $x \ge 0$
• SDP: a convex set (spectrahedron), given by $\langle A_j, X \rangle = b_j$, $X \succeq 0$

There are efficient algorithms to solve LP and SDP (up to any precision)
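On the LP side, an off-the-shelf solver illustrates "efficient up to any precision"; a minimal SciPy sketch (problem data invented for illustration):

```python
from scipy.optimize import linprog

# Maximize x + y  s.t.  x + 2y <= 4, x, y >= 0  (linprog minimizes, so negate c)
res = linprog(c=[-1, -1], A_ub=[[1, 2]], b_ub=[4],
              bounds=[(0, None), (0, None)])
assert res.success
assert abs(res.fun - (-4.0)) < 1e-8    # optimum x + y = 4, attained at (4, 0)
```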
About the complexity of SDP
• 1980s: There are efficient algorithms to find an almost optimal solution, under some assumptions. Roughly: one needs a feasible point, an inscribed ball and a circumscribed ball for the feasible region.
  - Grötschel-Lovász-Schrijver: based on Khachiyan's ellipsoid method
  - Karmarkar, Nesterov-Nemirovski: interior point algorithms

• Testing feasibility of SDP: Given rational $A_j, b_j$, decide
$$(F)\quad \exists X \succeq 0 \ \text{ s.t. } \ \langle A_j, X \rangle = b_j \ (j \in [m])\,?$$
  - Ramana (1997): $(F) \in \text{NP} \iff (F) \in \text{co-NP}$
  - Porkolab-Khachiyan (1997): $(F) \in \text{P}$ for fixed $n$ or $m$: $mn^{O(\min\{m,n^2\})}$ arithmetic operations on numbers of bit length $Ln^{O(\min\{m,n^2\})}$

• Well-developed duality theory for LP, SDP, conic programs (with no duality gap under some strict feasibility conditions)
General approach to polynomial optimization
Strategy
$$(P)\qquad f_{\min} = \min_{x \in K} f(x)$$

Approximate (P) by a hierarchy of convex (semidefinite) relaxations.

These relaxations can be constructed using sums of squares of polynomials and the dual theory of moments.

Shor (1987), Nesterov (2000), Lasserre, Parrilo (2000–)
Sums of squares approach
Strategy (use sums of squares)
$$(P)\qquad f_{\min} = \min_{x \in K} f(x) = \sup_{\lambda \in \mathbb{R}} \lambda \quad \text{s.t.} \quad f(x) - \lambda \ge 0 \ \ \forall x \in K$$

Testing whether a polynomial $f$ is nonnegative is hard, but one can test the sufficient condition that $f$ is a sum of squares of polynomials (SoS), using semidefinite programming.
Are all nonnegative polynomials SoS?
Hilbert [1888]: Every nonnegative polynomial in $n$ variables and even degree $d$ is a sum of squares of polynomials $\iff$ $n = 1$, or $d = 2$, or ($n = 2$ and $d = 4$)

Hilbert's 17th problem [1900]: Is every nonnegative polynomial a sum of squares of rational functions?

Artin [1927]: Yes

Motzkin [1967]: $p = x^4y^2 + x^2y^4 + 1 - 3x^2y^2$ is nonnegative, not a sum of squares, but $(x^2 + y^2)^2\, p$ is SoS
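Motzkin's claims are easy to probe numerically (nonnegativity, with zeros at $x^2 = y^2 = 1$); a minimal NumPy sketch, for illustration only (a grid check is of course not a proof):

```python
import numpy as np

p = lambda x, y: x**4 * y**2 + x**2 * y**4 + 1 - 3 * x**2 * y**2

# Nonnegativity on a grid (by AM-GM the true minimum is 0, at x^2 = y^2 = 1)
xs = np.linspace(-2, 2, 201)
X, Y = np.meshgrid(xs, xs)
assert p(X, Y).min() >= -1e-12
assert p(1.0, 1.0) == 0.0 and p(-1.0, 1.0) == 0.0   # zeros of p
```

The same grid would find no SoS certificate to refute, since $p$ is nonnegative yet provably not SoS; multiplying by $(x^2+y^2)^2$ restores an SoS representation.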
Positivity certificates over K

K = {x | g1(x) ≥ 0, . . . , gm(x) ≥ 0}

Quadratic module: Q(g) = {s0 + s1 g1 + . . . + sm gm | sj SoS}

Preordering: P(g) = {∑_{e∈{0,1}^m} s_e g1^{e1} · · · gm^{em} | s_e SoS} ⊃ Q(g)

Theorem: Assume K compact.
• [Schmüdgen 1991] f > 0 on K =⇒ f ∈ P(g)
• [Putinar 1993] Under the Archimedean condition ∃R : R − ∑_i x_i^2 ∈ Q(g), f > 0 on K =⇒ f ∈ Q(g)

Observation: If we know a ball of radius R containing K, then just add the (redundant) constraint R^2 − ∑_i x_i^2 ≥ 0 to the description of K
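A hand-built toy certificate (illustrative example, not from the slides): on K = {x ∈ R : g(x) = 1 − x^2 ≥ 0}, the polynomial f(x) = 2 − x^2 is positive, and the polynomial identity f = s0 + s1·g with SoS multipliers s0 = 1 and s1 = 1 exhibits f as a member of the quadratic module Q(g):

```python
import numpy as np

# Toy Putinar-style certificate: f = s0 + s1 * g with SoS multipliers
# s0 = 1 (= 1^2) and s1 = 1, so f lies in Q(g).
f = lambda x: 2 - x**2
g = lambda x: 1 - x**2
s0 = lambda x: np.ones_like(x)
s1 = lambda x: np.ones_like(x)

x = np.linspace(-3, 3, 101)
print(np.allclose(f(x), s0(x) + s1(x) * g(x)))  # -> True: the identity holds
```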
![Page 60: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/60.jpg)
SoS relaxations for (P)

Truncated quadratic module:
Q(g)_t := {s0 + s1 g1 + . . . + sm gm | sj SoS, each term of degree ≤ 2t}

Replace
(P) fmin = inf_{x∈K} f(x) = sup λ s.t. f − λ ≥ 0 on K
by
(SOS_t) f^sos_t = sup λ s.t. f − λ ∈ Q(g)_t

• Each bound f^sos_t can be computed with SDP
• f^sos_t ≤ f^sos_{t+1} ≤ fmin
• Asymptotic convergence: lim_{t→∞} f^sos_t = fmin [Lasserre 2001]
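A minimal worked instance (our toy example, assuming K = R so no gj constraints): f(x) = x^2 − 2x + 3 has fmin = 2 at x = 1. In the basis [1, x], f − λ = [1 x] G(λ) [1 x]^T with the (here unique) Gram matrix G(λ) = [[3 − λ, −1], [−1, 1]], and f − λ is SoS iff G(λ) is PSD. A bisection on λ stands in for the SDP a real solver would handle:

```python
import numpy as np

# f^sos = sup{lam : G(lam) PSD}, with G(lam) the Gram matrix of f - lam.
def gram_is_psd(lam):
    G = np.array([[3.0 - lam, -1.0], [-1.0, 1.0]])
    return np.linalg.eigvalsh(G)[0] >= -1e-12

lo, hi = 0.0, 3.0     # gram_is_psd(0) holds, gram_is_psd(3) fails
for _ in range(60):   # bisect on the largest feasible lam
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gram_is_psd(mid) else (lo, mid)
f_sos = lo
print(f_sos)  # ~2.0: on this instance the SoS bound equals fmin
```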
![Page 65: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/65.jpg)
Moment approach
![Page 66: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/66.jpg)
fmin = inf_{x∈K} f(x) = inf_µ ∫_K f(x) dµ s.t. µ is a probability measure on K
     = inf_{L∈R[x]*} L(f) s.t. L has a representing measure µ on K

Deciding if a linear functional L ∈ R[x]* has a representing measure µ on K is the (difficult) classical moment problem.

But one can use the necessary condition: L is nonnegative on the quadratic module Q(g) = {s0 + ∑_j sj gj : sj SoS}:

L(p^2) ≥ 0 ∀p, i.e., M(L) = (L(x^{α+β}))_{α,β∈N^n} ⪰ 0
and L(gj p^2) ≥ 0 ∀p, i.e., M(gj L) = (L(gj x^{α+β}))_{α,β∈N^n} ⪰ 0

[since L(p^2) = L((∑_α pα x^α)^2) = ∑_{α,β} pα pβ L(x^{α+β}) = p^T M(L) p]

M(L) is a moment matrix and the M(gj L) are localizing moment matrices
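A small sketch of the moment matrix in one variable (example data is ours): for L(p) = ∑_ℓ w_ℓ p(a_ℓ), a functional that does have an atomic representing measure, M_t(L)[i, j] = L(x^{i+j}) and M_t(L) must be PSD:

```python
import numpy as np

# Moment matrix M_t(L) in one variable for an atomic functional L.
def moment_matrix(atoms, weights, t):
    L = lambda k: sum(w * a**k for a, w in zip(atoms, weights))  # L(x^k)
    return np.array([[L(i + j) for j in range(t + 1)] for i in range(t + 1)])

M2 = moment_matrix(atoms=[0.0, 1.0, 2.0], weights=[0.25, 0.5, 0.25], t=2)
# L has a representing probability measure, so M_2(L) must be PSD:
print(M2[0, 0], np.linalg.eigvalsh(M2).min() >= -1e-12)  # -> 1.0 True
```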
![Page 72: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/72.jpg)
Moment relaxations for (P)

(P) fmin = inf_{L∈R[x]*} L(f) s.t. L has a representing measure µ on K

Truncate at degree 2t:
(MOM_t) f^mom_t = inf_{L∈R[x]*_{2t}} L(f) s.t. L(1) = 1, L ≥ 0 on Q(g)_t,
        i.e., M_t(L) ⪰ 0, M_{t−dj}(gj L) ⪰ 0 ∀j

(SOS_t) f^sos_t = sup λ s.t. f − λ ∈ Q(g)_t

f^sos_t ≤ f^mom_t ≤ fmin [dual SDP bounds]
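The same toy instance f(x) = x^2 − 2x + 3 on K = R, now from the moment side (our example, solved in closed form rather than by an SDP solver): (MOM_1) minimizes L(f) = m2 − 2 m1 + 3 over L(1) = 1, L(x) = m1, L(x^2) = m2 with M_1(L) = [[1, m1], [m1, m2]] ⪰ 0, i.e. m2 ≥ m1^2. At the optimum m2 = m1^2, so f^mom_1 = min_{m1} (m1^2 − 2 m1 + 3) = 2 = fmin:

```python
import numpy as np

# Crude scan over m1 (with m2 = m1^2 at the optimum) in place of an SDP solver.
m1 = np.linspace(-3, 3, 6001)
f_mom = (m1**2 - 2 * m1 + 3).min()
print(f_mom)  # -> 2.0, matching f^sos and fmin: the relaxation is exact here
```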
![Page 77: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/77.jpg)
Some results on the full/truncated moment problem

Theorem [Putinar 1997] Assume L ∈ R[x]* is nonnegative on the (Archimedean) quadratic module Q(g).
• Then L has a representing measure µ supported by K: L(f) = ∫ f(x) µ(dx)
• [Tchakaloff 1957] For any fixed degree k, the restriction of L to R[x]_k has a representing measure supported by K, which is finite atomic.

Theorem [Curto-Fialkow 1996 - L 2005: short algebraic proof]
Assume L ∈ R[x]*_{2t} is nonnegative on Q(g)_t, i.e., M_t(L) ⪰ 0, and rank M_t(L) = rank M_{t−1}(L) [flatness condition].
Then L has a finite atomic representing measure µ on K.

Main steps of proof:
• Extend L to L̃ ∈ R[x]* with rank M(L̃) = rank M_t(L) =: r
• M(L̃) ⪰ 0 with finite rank r =⇒ L̃ has an r-atomic representing measure µ
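Flatness in action on a univariate example of our own: for an r-atomic measure the rank of M_t(L) stabilizes at r, so rank M_t(L) = rank M_{t−1}(L) certifies a finite atomic representing measure:

```python
import numpy as np

# Ranks of successive moment matrices for a 3-atomic measure.
def moment_matrix(atoms, weights, t):
    L = lambda k: sum(w * a**k for a, w in zip(atoms, weights))
    return np.array([[L(i + j) for j in range(t + 1)] for i in range(t + 1)])

atoms, weights = [0.0, 1.0, 2.0], [0.2, 0.5, 0.3]   # r = 3 atoms
ranks = [np.linalg.matrix_rank(moment_matrix(atoms, weights, t))
         for t in range(1, 5)]
print(ranks)  # -> [2, 3, 3, 3]: the rank stabilizes at r = 3 (flatness)
```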
![Page 79: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/79.jpg)
Optimality criterion for moment relaxation (MOM_t)

K = {x | g1(x) ≥ 0, . . . , gm(x) ≥ 0},  dK = max_j ⌈deg(gj)/2⌉

f^mom_t = inf_{L∈R[x]*_{2t}} L(f) s.t. L(1) = 1, M_t(L) ⪰ 0, M_{t−dj}(gj L) ⪰ 0 ∀j

Theorem [CF 2000 + Henrion-Lasserre 2005 + Lasserre-L-Rostalski 2008]
Assume L is an optimal solution of (MOM_t) such that
rank M_s(L) = rank M_{s−dK}(L) for some dK ≤ s ≤ t.
• Then the relaxation is exact: f^mom_t = fmin.
• Moreover, one can compute the global minimizers:
V(Ker M_s(L)) ⊆ {global minimizers of f on K},
with equality if rank M_t(L) is maximum (rank = # minimizers).
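In the univariate case, one common route to extract the atoms from a flat moment sequence reduces to an eigenvalue problem on Hankel matrices (the example data here is ours, not from the slides): with H0 = (L(x^{i+j})) and the shifted matrix H1 = (L(x^{i+j+1})), the atoms of the representing measure are the eigenvalues of H0^{-1} H1:

```python
import numpy as np

# Recover the atoms {1, 2} from their moments via the Hankel pencil (H1, H0).
atoms, weights = [1.0, 2.0], [0.5, 0.5]
L = lambda k: sum(w * a**k for a, w in zip(atoms, weights))
H0 = np.array([[L(i + j) for j in range(2)] for i in range(2)])
H1 = np.array([[L(i + j + 1) for j in range(2)] for i in range(2)])
recovered = np.sort(np.linalg.eig(np.linalg.solve(H0, H1))[0].real)
print(recovered)  # -> [1. 2.]: the atoms (candidate minimizers) are recovered
```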
![Page 84: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/84.jpg)
Some properties

• Interior point algorithms for SDP give a maximum rank optimal solution
• Finite convergence holds in the finite variety case [L 2007, Nie 2013], in the convex case [Lasserre 2009, de Klerk-L 2011], and generically [Nie 2014]
• Can exploit structure (like sparsity, symmetry, equations) to design more economical SDP relaxations
• Algorithm for computing the (finitely many) real roots of polynomial equations (and real radical ideals) [Lasserre-L-Rostalski 2008, 2009] [Lasserre-L-Mourrain-Rostalski-Trebuchet 2013]

Large literature, surveys, monographs
![Page 88: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/88.jpg)
Application for bounding matrix factorization ranks using the moment approach
![Page 89: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/89.jpg)
Matrix factorization ranks

• Nonnegative factorization of A ∈ R^{m×n}_+:
A = ∑_{ℓ=1}^r aℓ bℓ^T, where aℓ ∈ R^m_+, bℓ ∈ R^n_+ [atomic decomposition]
A = (⟨ui, vj⟩)_{i∈[m], j∈[n]}, where ui, vj ∈ R^r_+ [Gram factorization]
Smallest such r: rank+(A), the nonnegative rank

• CP-factorization of A ∈ S^n: symmetric nonnegative factorization: restrict to aℓ = bℓ ∀ℓ and to ui = vi ∀i
Smallest such r: rankcp(A), the cp-rank
rankcp(A) < ∞ when A is completely positive

• PSD factorization of A ∈ R^{m×n}_+:
A = (⟨Ui, Vj⟩)_{i∈[m], j∈[n]}, where Ui, Vj ∈ S^r_+ [Gram factorization]
Smallest such r: rankpsd(A), the psd-rank
Symmetric analogue: require Ui = Vi ∀i, giving the cpsd-rank

Applications: extended formulations (LP/SDP) of polytopes, (quantum) communication complexity
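A minimal sketch of a nonnegative factorization (random example of our own): writing A = ∑_ℓ aℓ bℓ^T with nonnegative factors is the same as A = B C^T with B ∈ R^{m×r}_+, C ∈ R^{n×r}_+, and one always has rank(A) ≤ rank+(A) ≤ min(m, n):

```python
import numpy as np

# Build a nonnegative matrix with an explicit rank-2 nonnegative factorization.
rng = np.random.default_rng(0)
B = rng.random((4, 2))      # entrywise nonnegative factors, r = 2
C = rng.random((5, 2))
A = B @ C.T                 # 4 x 5 nonnegative matrix with rank_+(A) <= 2
print(np.all(A >= 0), np.linalg.matrix_rank(A))  # -> True 2
```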
![Page 93: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/93.jpg)
Nonnegative/psd rank and extended formulations

Theorem [Yannakakis 1991 - Gouveia-Parrilo-Thomas 2013]
For a polytope P = conv(V) = {x ∈ R^n : ai^T x ≤ bi ∀i ∈ [m]},
its slack matrix is S = (bi − ai^T v)_{v∈V, i∈[m]} ∈ R^{|V|×m}_+
• Smallest r s.t. P is the projection of an affine section of R^r_+ is rank+(S)
• Smallest r s.t. P is the projection of an affine section of S^r_+ is rankpsd(S)
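As a small illustration (example of our own, not from the slides), the slack matrix of the unit square P = [0,1]^2, with its four facets written as ai^T x ≤ bi; S is entrywise nonnegative, with a zero exactly where a vertex lies on a facet, and its ordinary rank is dim(P) + 1 = 3 (a lower bound for rank+(S)):

```python
import numpy as np

# Slack matrix S[v, i] = b_i - a_i^T v for the square [0,1]^2.
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])     # vertices
a = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]])   # facet normals
b = np.array([0, 0, 1, 1])
S = b[None, :] - V @ a.T
print(S)
print(np.linalg.matrix_rank(S))  # -> 3 (= dim(P) + 1)
```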
![Page 97: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/97.jpg)
Bounds for cp-rank via polynomial optimization

Assume A = ∑_{ℓ=1}^r aℓ aℓ^T, where aℓ ∈ R^n_+, r = rankcp(A).

(1/r) A ∈ R := conv{xx^T : x ∈ R^n_+, A − xx^T ≥ 0, A − xx^T ⪰ 0}

τcp(A) := min{λ : (1/λ) A ∈ R} ≤ rankcp(A) [Fawzi-Parrilo 2016]

Moment approach: Define L ∈ R[x1, . . . , xn]* by L(p) = ∑_{ℓ=1}^r p(aℓ).
(0) L(1) = r (model rankcp(A))
(1) L(xi xj) = Aij (recover A)
(2) L ≥ 0 on Q(√(Aii) xi − xi^2, Aij − xi xj) (model x ≥ 0, A − xx^T ≥ 0)
(3a) L((xx^T)^{⊗k}) ⪯ A^{⊗k} for k ≥ 2 (model A − xx^T ⪰ 0)
(3b) L((A − xx^T) ⊗ [x][x]^T) ⪰ 0 (model A − xx^T ⪰ 0) [Gribling-L-Steenkamp 2021]

Theorem [Gribling-de Laat-L 2019] The bounds ξcp_t, obtained by minimizing L(1) over L ∈ R[x]*_{2t} satisfying the truncated versions of (1)-(3a), converge asymptotically to τcp(A), and in finitely many steps under flatness.
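A sketch of why the atomic functional does what constraints (0) and (1) ask (random nonnegative atoms of our own for illustration): for L(p) = ∑_ℓ p(aℓ) built from a cp-factorization A = ∑_ℓ aℓ aℓ^T, indeed L(1) = r and L(xi xj) = Aij:

```python
import numpy as np

# Check L(1) = r and L(x_i x_j) = A_ij for the atomic functional L.
rng = np.random.default_rng(1)
factors = rng.random((3, 4))                 # r = 3 atoms a_l in R^4_+
A = sum(np.outer(a, a) for a in factors)     # A is completely positive

L1 = len(factors)                            # L(1) = number of atoms = r
Lxx = np.einsum('li,lj->ij', factors, factors)  # matrix (L(x_i x_j))_{i,j}
print(L1, np.allclose(Lxx, A))  # -> 3 True
```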
![Page 108: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/108.jpg)
Extension to other factorization ranks

The moment viewpoint for polynomial optimization offers a systematic, common approach to treat many factorization ranks

I Extension to the nonnegative rank, by using two sets of variables x, y; extends also to the more general tensor setting

I Extension to the psd-rank and cpsd-rank, by taking the Gram factorization viewpoint and using noncommutative variables

Currently working (with Gribling and Steenkamp) on bounds for the separable rank of a linear operator ρ acting on C^n ⊗ C^n, asking for the smallest decomposition of the form

ρ = Σ_{ℓ=1}^r a_ℓ a_ℓ^T ⊗ b_ℓ b_ℓ^T

Understanding separable states is a fundamental question in quantum information
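As a minimal sketch of such a decomposition (a real-valued toy construction of my own; the slide's setting is C^n ⊗ C^n), a separable operator built from r = 2 product terms is automatically positive semidefinite:

```python
import numpy as np

# Build a separable operator rho = sum_l a_l a_l^T ⊗ b_l b_l^T on R^2 ⊗ R^2.
pairs = [(np.array([1.0, 0.0]), np.array([1.0, 1.0])),
         (np.array([0.0, 1.0]), np.array([1.0, -1.0]))]
rho = sum(np.kron(np.outer(a, a), np.outer(b, b)) for a, b in pairs)

# Each term is a Kronecker product of rank-1 psd matrices, hence psd,
# so rho is psd; this decomposition uses r = 2 product terms, and the
# separable rank asks for the smallest such r.
assert np.allclose(rho, rho.T)
assert np.linalg.eigvalsh(rho).min() >= -1e-9
print(rho)
```

Deciding whether a given ρ admits any such decomposition (separability), let alone finding the smallest r, is the hard part that the moment hierarchies above are designed to bound.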
![Page 110: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/110.jpg)
Concluding remarks
I The two (dual) approaches via moments and sums of squares provide interesting complementary information

I This extends to the problem of moments (optimize over measures) and to polynomial optimization in noncommutative variables (optimize over matrix-valued variables), with many applications
I What about the quality of the relaxations? (see Lecture 2)
I Approximation hierarchies for graph problems (see Lecture 3)
Thank you!
![Page 111: Sums of Squares, Moments and Applications in Polynomial ...](https://reader034.fdocuments.us/reader034/viewer/2022051114/62773740eea63631b8689250/html5/thumbnails/111.jpg)
Some references
P. Parrilo: Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization, PhD thesis, Caltech, 2000.

J.B. Lasserre: Global optimization with polynomials and the problem of moments, SIAM J. Optimization, 2001.

J.B. Lasserre, M. Laurent, P. Rostalski: Semidefinite characterization and computation of real radical ideals, Foundations Comput. Math., 2008.

J.B. Lasserre: Moments, Positive Polynomials and their Applications, Imperial College Press, 2009.

M. Laurent: Sums of squares, moment matrices and optimization over polynomials, in IMA volume 149, 2009.

M. Anjos and J.-B. Lasserre (eds): Handbook of Semidefinite, Conic and Polynomial Optimization, Springer, 2011.

G. Blekherman, P. Parrilo, R. Thomas (eds): Semidefinite Optimization and Convex Algebraic Geometry, MOS-SIAM Series Optimization, 2011.