arXiv:1712.01792v1 [math.OC] 5 Dec 2017

    SUM-OF-SQUARES OPTIMIZATION WITHOUT SEMIDEFINITE

    PROGRAMMING

    DAVID PAPP AND SERCAN YILDIZ

Abstract. We propose a homogeneous primal-dual interior-point method to solve sum-of-squares optimization problems by combining non-symmetric conic optimization techniques and polynomial interpolation. The approach optimizes directly over the sum-of-squares cone and its dual, circumventing the semidefinite programming (SDP) reformulation, which requires a large number of auxiliary variables. As a result, it has substantially lower theoretical time and space complexity than the conventional SDP-based approach. Although our approach circumvents the semidefinite programming reformulation, an optimal solution to the semidefinite program can be recovered with little additional effort. Computational results confirm that for problems involving high-degree polynomials, the proposed method is several orders of magnitude faster than semidefinite programming.

Keywords. sum-of-squares optimization, non-symmetric conic optimization, polynomial interpolation, polynomial optimization, semidefinite programming

    AMS subject classifications. 90C25, 90C51, 65D05, 90C22

    1. Introduction

We propose a homogeneous primal-dual interior-point algorithm to solve optimization problems over sum-of-squares polynomials. Sum-of-squares polynomials, which are defined precisely below, are instrumental in the theory and practice of polynomial optimization and in the closely related problem of deciding whether a polynomial is nonnegative. These problems are of fundamental importance in many areas of applied mathematics and engineering, including discrete geometry [6, 7, 67], probability theory [9], control theory [26, 5, 2, 28, 17], signal processing [19], power systems engineering [29, 21], computational algebraic geometry [35, 38, 53, 30], design of experiments [18, 49], and statistical estimation [3]. Additional applications of sum-of-squares optimization are described in [10].

In the fundamental problem of polynomial optimization, we are given n-variate polynomials g_1, …, g_m and p over the reals, and we are interested in determining the minimum value of p on the basic closed semialgebraic set

S := { t ∈ ℝⁿ | g_i(t) ≥ 0, i = 1, …, m } .   (1)

That is to say, we would like to compute

inf_{t ∈ ℝⁿ} { p(t) | t ∈ S } .   (2)

Date: Wednesday 6th December, 2017, 01:29. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1719828. Additionally, this material was based upon work partially supported by the National Science Foundation under Grant DMS-1638521 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.


    http://arxiv.org/abs/1712.01792v1


Equivalently, one may seek the largest constant c ∈ ℝ which can be subtracted from p such that p − c is nonnegative on the set S. Thus, the polynomial optimization problem (2) can be reduced to the problem of checking polynomial nonnegativity.
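In symbols, with r ≥ deg p (our restatement of the reduction just described):

```latex
\inf_{t \in \mathbb{R}^n} \{\, p(t) \mid t \in S \,\}
  \;=\; \sup_{c \in \mathbb{R}} \{\, c \mid p - c \in P^S_{n,r} \,\},
```

where P^S_{n,r} denotes the cone of polynomials of degree at most r that are nonnegative on S, defined formally below.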

In many of the applications mentioned above, the goal is not to compute the minimum value of a given polynomial, but rather to find an optimal polynomial satisfying certain shape constraints that impose bounds on certain linear functionals of the polynomial. In this setting, even the case of polynomials with only a few variables is of great interest; in fact, most of the references cited above are concerned with univariate polynomials of high degree.

These problems are most naturally formulated as conic optimization problems: Let K ⊆ ℝ^N be a closed and convex cone. A conic optimization problem is a problem of the form

minimize_{x ∈ ℝ^N}   cᵀx
subject to   Ax = b
             x ∈ K          (3)

where A is a k × N real matrix, and c and b are real vectors of appropriate dimensions. Its dual problem is

maximize_{y ∈ ℝ^k, s ∈ ℝ^N}   bᵀy
subject to   Aᵀy + s = c
             s ∈ K*.        (4)

Here K* := { s ∈ ℝ^N | sᵀx ≥ 0 ∀ x ∈ K } denotes the dual cone of K, which is also closed and convex. In the literature on conic optimization, it is usually assumed that the cone K is pointed and has nonempty interior in addition to being closed and convex; in this case, K is called a proper cone. When K is proper, its dual cone K* is also proper.
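Weak duality between (3) and (4) follows directly from the definition of K*: for any primal-feasible x and dual-feasible (y, s), cᵀx − bᵀy = (Aᵀy + s)ᵀx − yᵀAx = sᵀx ≥ 0. A minimal numerical sketch, taking K = K* to be the nonnegative orthant (all data below are our own illustration, not from the paper):

```python
import numpy as np

# K = K* = the nonnegative orthant, a self-dual proper cone,
# so (3) is a linear program in standard form.
A = np.array([[1.0, 1.0]])   # k = 1, N = 2
b = np.array([1.0])
c = np.array([1.0, 2.0])

# A primal-feasible point: Ax = b and x in K.
x = np.array([0.5, 0.5])
assert np.allclose(A @ x, b) and np.all(x >= 0)

# A dual-feasible pair: A^T y + s = c and s in K*.
y = np.array([1.0])
s = c - A.T @ y
assert np.all(s >= 0)

# Weak duality: the duality gap c^T x - b^T y equals s^T x >= 0.
gap = c @ x - b @ y
print(gap, s @ x)  # both 0.5
```

The nonnegative orthant is the simplest proper cone; the same inequality holds verbatim for the SOS cones discussed below.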

In optimization problems involving polynomials, we are typically interested in the space of n-variate polynomials of total degree at most r, which we denote by V_{n,r} in this paper, and the closed convex cone P^S_{n,r} of polynomials that are nonnegative on S:

P^S_{n,r} := { p ∈ V_{n,r} | p(t) ≥ 0 ∀ t ∈ S } .

In the case S = ℝⁿ, we use the lighter notation P_{n,r} to represent the cone of polynomials that are nonnegative everywhere. The dual cone of P^S_{n,r} is known as the moment cone corresponding to S.
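Concretely (a standard description of the moment cone, added here for orientation rather than quoted from this text): fixing an ordered basis q = (q_1, …, q_U) of V_{n,r} identifies the dual space with ℝ^U, and the moment cone becomes the closed conic hull of the basis-evaluation vectors on S:

```latex
\left(P^S_{n,r}\right)^{*}
  \;=\; \operatorname{cl}\,\operatorname{cone}
  \bigl\{\, (q_1(t), \dots, q_U(t)) \mid t \in S \,\bigr\}.
```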

Throughout the paper, all polynomials are n-variate polynomials over the real number field. The degree of a polynomial should always be understood in the sense of total degree, and all vectors should be interpreted as column vectors unless stated otherwise. We represent vectors and matrices in boldface type to distinguish them from scalars. We let 0 and 1 denote the all-zeros and all-ones vectors, and we let e_i denote the i-th standard unit vector, whose only nonzero entry is at the i-th position and equal to 1. We represent the arguments of an n-variate polynomial with t = (t_1, …, t_n) when necessary. We let C° denote the interior of a set C ⊆ ℝ^N.

1.1. Sum-of-squares polynomials: basic definitions and notation. A polynomial p ∈ V_{n,2d} is said to be sum-of-squares (SOS) if it can be expressed as a finite sum of squared polynomials. More precisely, the polynomial p ∈ V_{n,2d} is SOS if there exist q_1, …, q_M ∈ V_{n,d} such that p = Σ_{j=1}^M q_j². We let Σ_{n,2d} denote the set consisting of these SOS polynomials. This set is a proper cone in V_{n,2d} [43, Thm. 17.1]. Let g := (g_1, …, g_m) and d := (d_1, …, d_m) for some given nonzero polynomials g_1, …, g_m and nonnegative integers d_1, …, d_m. Consider the space V^g_{n,2d} of polynomials p for which there exist s_1 ∈ V_{n,2d_1}, …, s_m ∈ V_{n,2d_m} such that p = Σ_{i=1}^m g_i s_i. A polynomial p ∈ V^g_{n,2d} is said to be weighted sum-of-squares (WSOS) if there exist s_1 ∈ Σ_{n,2d_1}, …, s_m ∈ Σ_{n,2d_m} such that p = Σ_{i=1}^m g_i s_i. We let Σ^g_{n,2d} denote the set consisting of these WSOS polynomials. This set is a convex cone with nonempty interior in V^g_{n,2d}, but it is not always closed or pointed. Proposition 11 below characterizes when Σ^g_{n,2d} is a proper cone. An SOS optimization problem is a conic optimization problem where the underlying cone is a Cartesian product of SOS and WSOS cones. For simplicity, we limit our presentation in this paper for the most part to optimization problems over SOS cones and discuss the more general case of optimization over WSOS cones in Section 6.
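As a concrete illustration of the SOS definition (the polynomial, Gram matrix, and code below are our own toy sketch, not from the paper): a univariate quadratic p is SOS exactly when p(t) = v(t)ᵀ G v(t) for some positive semidefinite matrix G, where v(t) = (1, t), and a Cholesky factor of G then yields an explicit decomposition into squares.

```python
import numpy as np

# Toy example: p(t) = 2t^2 + 2t + 1 with v(t) = (1, t).
# v^T G v = G[0,0] + (G[0,1] + G[1,0]) t + G[1,1] t^2 = 1 + 2t + 2t^2.
G = np.array([[1.0, 1.0],
              [1.0, 2.0]])
L = np.linalg.cholesky(G)  # succeeds because G is positive definite

def p(t):
    return 2 * t**2 + 2 * t + 1

def sos(t):
    # Each entry of L^T v(t) is a polynomial in t; p is the sum of
    # their squares.  Here L^T v(t) = (1 + t, t), so p = (1+t)^2 + t^2.
    v = np.array([1.0, t])
    return np.sum((L.T @ v) ** 2)

ts = np.linspace(-3, 3, 25)
assert np.allclose([p(t) for t in ts], [sos(t) for t in ts])
```

The Cholesky factorization both certifies that G is positive semidefinite and produces the squared polynomials directly; this is the computational content of the semidefinite representation discussed next.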

Let L := dim V_{n,d} = (n+d choose n) and U := dim V_{n,2d} = (n+2d choose n) denote the dimensions of the spaces of n-variate polynomials of degree at most d and 2d, respectively. The space V_{n,2d} is isomorphic to ℝ^U; therefore, Σ_{n,2d} can equivalently be seen as a cone in ℝ^U. In view of this connection, given an ordered basis q = (q_1, …, q_U) of V_{n,2d}, we say that a vector s = (s_1, …, s_U) ∈ ℝ^U satisfies s ∈ Σ_{n,2d} if the polynomial Σ_{u=1}^U s_u q_u is SOS. We let S^L denote the space of L × L real symmetric matrices and let S^L_+ (resp. S^L_++) denote the cone of positive semidefinite (resp. positive definite) matrices in the same space. When the size of the matrices is clear from the context, we write X ⪰ 0 (resp. X ≻ 0) to mean that the real symmetric matrix X is positive semidefinite (resp. positive definite). For matrices S, X ∈ S^L, the notation S • X = Σ_{i=1}^L Σ_{j=1}^L S_ij X_ij represents the Frobenius inner product of S and X, and ‖X‖_F = √(X • X) represents the Frobenius norm of X.
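A quick numerical check of these two definitions (the matrices are an arbitrary illustration of ours):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 0.0]])
X = np.array([[3.0, -1.0],
              [-1.0, 5.0]])

inner = np.sum(S * X)          # S • X = sum over i, j of S_ij * X_ij
norm = np.sqrt(np.sum(X * X))  # ||X||_F = sqrt(X • X)

print(inner, norm)  # -1.0 6.0
```

`np.linalg.norm(X, 'fro')` computes the same norm directly.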

An SOS decomposition provides a simple certificate demonstrating the global nonnegativity of a polynomial. The key observation behind modern polynomial optimization approaches is that while deciding whether a polynomial is nonnegative is NP-hard (outside of a few very special cases), the cone of SOS polynomials admits a semidefinite representation [52, 43, 33, 35]. Using this representation, optimization problems over SOS cones can be reformulated as semidefinite programming (SDP) problems. The following theorem is due to Nesterov [43]; we present it here in our notation for completeness.

Proposition 1 ([43, Thm. 17.1]). Fix ordered bases p = (p_1, …, p_L) and q = (q_1, …, q_U) of V_{n,d} and V_{n,2d}, respectively. Let Λ : ℝ^U → S^L be the unique linear mapping satisfying Λ(q) = p pᵀ, and let Λ* denote its adjoint. Then s ∈ Σ_{n,2d} if and only if there exists a matrix S ⪰ 0 satisfying

s = Λ*(S).

Additionally, the dual cone of Σ_{n,2d} admits the characterization

Σ*_{n,2d} = { x ∈ ℝ^U | Λ(x) ⪰ 0 } .

We emphasize that the operator Λ in Proposition 1 depends explicitly on the specific bases p and q chosen to represent V_{n,d} and V_{n,2d}. In particular, these basis choices determine which linear slice of the positive semidefinite cone is used to characterize Σ_{n,2d}. We shall revisit this observation in more detail in Section 3.
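To make the basis dependence concrete, here is a sketch of Λ under one specific choice (our assumption: n = 1 with the monomial bases p = (1, t, …, t^d) and q = (1, t, …, t^{2d}); the function name below is ours). In that case the condition Λ(q) = p pᵀ forces Λ(x)_{ij} = x_{i+j}, a Hankel (moment) matrix, and the dual-cone membership test Λ(x) ⪰ 0 becomes a positive semidefiniteness check:

```python
import numpy as np

def hankel_moment_matrix(x):
    """Lambda(x) for n = 1 with monomial bases: Lambda(x)_{ij} = x_{i+j},
    indices starting at 0; x has length 2d + 1, the matrix is (d+1) x (d+1)."""
    d = (len(x) - 1) // 2
    return np.array([[x[i + j] for j in range(d + 1)] for i in range(d + 1)])

# A point evaluation x = (1, t0, t0^2, ..., t0^{2d}) lies in the dual cone:
# its Hankel matrix is p(t0) p(t0)^T, which is PSD of rank one.
d, t0 = 2, 1.5
x = np.array([t0**u for u in range(2 * d + 1)])
H = hankel_moment_matrix(x)
assert np.min(np.linalg.eigvalsh(H)) >= -1e-9

# A vector that is not the moment vector of any nonnegative measure fails:
# the entry in position 2 would have to be a nonnegative second moment.
bad = np.array([1.0, 0.0, -1.0, 0.0, 1.0])
assert np.min(np.linalg.eigvalsh(hankel_moment_matrix(bad))) < 0
```

A different basis q (e.g. Lagrange polynomials at interpolation points, as used later in the paper) gives a different Λ and hence a different slice of the PSD cone representing the same dual cone.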

Nesterov [43] also gave an analogous semidefinite representation for WSOS cones. We postpone the precise statement of this result to Proposition 12 below and mention here only that with m polynomial weights, the semidefinite representation of Σ^g_{n,2d} requires m positive
