SOLUTION OF MONOTONE COMPLEMENTARITY AND GENERAL CONVEX PROGRAMMING PROBLEMS USING A MODIFIED POTENTIAL REDUCTION INTERIOR POINT METHOD∗
KUO-LING HUANG AND SANJAY MEHROTRA†
Abstract. We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear polynomial-time convergence properties while achieving practical performance. When compared with a mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. We also find that iOptimize seems to detect infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
Key words. quadratic programs, quadratically constrained quadratic programs, convex programs, homogeneous algorithms, interior point methods
AMS subject classifications. 90C06, 90C20, 90C25, 90C30, 90C33, 90C51, 49M15
1. Introduction. In this paper we consider a monotone complementarity problem (MCP) of the following form:

s = f(x), (1.1)
0 ≤ s ⊥ x ≥ 0, (1.2)

where x, s ∈ R^n, f(x) is a continuous monotone mapping from R^n_+ := {x ∈ R^n | x ≥ 0} to R^n, and the notation 0 ≤ s ⊥ x ≥ 0 means that (x, s) ≥ 0 and x^T s = 0. The requirement x^T s = 0 is called the complementarity condition. Monotonicity means that for every x_1, x_2 ∈ R^n_+, we have

(x_1 − x_2)^T (f(x_1) − f(x_2)) ≥ 0. (1.3)
Let ∇f(x) denote the Jacobian matrix of f(x). If ∇f(x) is positive semidefinite for all x > 0, that is,

h^T ∇f(x) h ≥ 0, ∀x > 0, h ∈ R^n, (1.4)

then f(x) is a continuous monotone mapping. A pair (x, s) ∈ R^{2n}_+ is called a feasible solution of MCP if (1.1) is satisfied. It is called an optimal or complementary solution if both (1.1) and (1.2) are satisfied. If MCP has a complementary solution (x^*, s^*), then it has a maximal complementary solution, in which the number of positive components of (x^*, s^*) is maximal.
Given an initial point (x^0, s^0) ∈ R^{2n}_{++} := {(x, s) ∈ R^{2n} | (x, s) > 0}, we would like to compute a bounded sequence ⟨x^k, s^k⟩ ⊂ R^{2n}_{++}, k = 0, 1, . . . , such that

lim_{k→∞} (s^k − f(x^k)) = 0 and lim_{k→∞} (x^k)^T s^k = 0. (1.5)
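As a concrete illustration of (1.1)–(1.3), the sketch below builds a small affine monotone map f(x) = Qx + c with a positive semidefinite Q (this particular Q and c are our own toy choices, not from the paper) and numerically checks monotonicity and a complementary solution.

```python
import numpy as np

# Toy affine monotone map f(x) = Q x + c; Q is symmetric positive semidefinite,
# so (x1 - x2)^T (f(x1) - f(x2)) = (x1 - x2)^T Q (x1 - x2) >= 0, i.e. (1.3) holds.
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])
c = np.array([-1.0, -1.0])

def f(x):
    return Q @ x + c

# Spot-check the monotonicity condition (1.3) at random nonnegative points.
rng = np.random.default_rng(0)
for _ in range(100):
    x1, x2 = rng.random(2), rng.random(2)
    assert (x1 - x2) @ (f(x1) - f(x2)) >= -1e-12

# Here x* = Q^{-1}(-c) = (1/3, 1/3) satisfies x* >= 0 and s* = f(x*) = 0,
# so (x*, s*) is a complementary solution: 0 <= s* is orthogonal to x* >= 0.
x_star = np.linalg.solve(Q, -c)
s_star = f(x_star)
assert np.all(x_star >= 0) and np.allclose(s_star, 0.0)
assert abs(x_star @ s_star) < 1e-12
```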
∗The research of both authors was partially supported by grants DOE-SP0011568 and ONR N00014-09-10518, N00014-210051.
†Dept. of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL, 60208. Email: [email protected]
Let X := diag(x_1, . . . , x_n), and let

ν_f(·) : (0, 1) → (1, ∞)

be a monotone increasing function. We say that f satisfies a scaled Lipschitz condition (SLC) if

‖X(f(x + h) − f(x) − ∇f(x)h)‖_1 ≤ ν_f(β) h^T ∇f(x) h, (1.6)

whenever

h ∈ R^n, x ∈ R^n_{++}, ‖X^{-1}h‖_∞ ≤ β < 1,
where β ∈ (0, 1) is a constant. The analysis in this paper assumes that f satisfies SLC. Such a condition for MCP was used by Potra and Ye [31] to develop an interior point method (IPM) for MCP. In their analysis Potra and Ye assumed that the set of strictly feasible points {x ∈ R^n | x > 0, f(x) > 0} is nonempty, and that a feasible starting point x^0 > 0 with s^0 = f(x^0) > 0 is available. Under these assumptions Potra and Ye [31] developed an IPM that achieves a constant reduction of the primal-dual potential function
φ_ρ(x, s) = ρ log x^T s − Σ_{j=1}^n log x_j s_j, (1.7)

for n + √n ≤ ρ ≤ 2n.
This type of potential function was introduced by Tanabe [35] and Todd and Ye [36] for LP, and was further extended to linear complementarity problems by Kojima et al. [22]. It satisfies the following inequality:
(ρ − n) log x^T s ≤ φ_ρ(x, s) − n log n,

and hence it follows that

x^T s ≤ ε whenever φ_ρ(x, s) ≤ n log n − (ρ − n)|log ε|.
Therefore, starting from a feasible point, if this potential function is reduced by at least a positive constant at each interior point iteration, then we will have x^T s ≤ ε after at most O(φ_ρ(x^0, s^0) − n log n + (ρ − n)|log ε|) iterations, where ε is the complementarity error.
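The relationship between φ_ρ and the complementarity gap can be checked numerically. The sketch below (with arbitrary random points of our own choosing) evaluates (1.7) and the inequality (ρ − n) log x^T s ≤ φ_ρ(x, s) − n log n.

```python
import numpy as np

def phi(rho, x, s):
    # Tanabe--Todd--Ye potential function (1.7).
    return rho * np.log(x @ s) - np.sum(np.log(x * s))

rng = np.random.default_rng(1)
n = 5
rho = n + np.sqrt(n)
for _ in range(200):
    x = rng.random(n) + 0.1
    s = rng.random(n) + 0.1
    # (rho - n) log x^T s <= phi_rho(x, s) - n log n, so driving the potential
    # below n log n - (rho - n)|log eps| forces x^T s <= eps.
    assert (rho - n) * np.log(x @ s) <= phi(rho, x, s) - n * np.log(n) + 1e-9
```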
However, in practice an initial feasible point x^0 for optimization problems may not be available. To handle this issue, Ye et al. [39] developed a homogeneous IPM for linear programming (LP) that can start from any (infeasible) interior point and achieves the best known convergence complexity without introducing a big-M constant. Moreover, it detects the infeasibility of problems unambiguously. Xu et al. [38] developed a simplified (and equivalent) version of the homogeneous IPM, and showed that with similar computational effort, their method can be more reliable than a conventional primal-dual IPM when solving infeasible problems in the Netlib [8] LP benchmark library.
Andersen et al. [10] implemented a homogeneous IPM for solving second order conic programming problems based on the work of Nesterov and Todd [28]. Luo [23], Sturm [34] and Zhang [40] extended the homogeneous IPM to general conic programming, including semidefinite programming. Recently, Skajaa et al. [33] implemented a homogeneous IPM for solving nonsymmetric conic programming problems.
Andersen and Ye [11, 12] extended the homogeneous IPM to the general MCP. They developed a homogeneous IPM that converges to a maximal complementary solution in at most O(√n log(1/ε)) iterations if f satisfies SLC. Although global linear polynomial-time convergence is provided in their analysis, exact search directions and restrictive step size computations are required, and hence their algorithm is not practical.
Motivated by the apparent gap between theoretical complexity results and long-step practical implementations of IPMs, Mehrotra and Huang [26] recently introduced a homogeneous IPM equipped with a continuously differentiable potential function for LP. Their method ensures global linear polynomial-time convergence while providing the flexibility to integrate heuristics for generating the search directions and the step size computations. More importantly, they showed that it is possible to implement a potential reduction IPM that uses a similar number of iterations as an algorithm that ignores global linear polynomial-time convergence.
In this paper the Mehrotra-Huang potential function is extended to the context of MCP. Our goal is to develop a stable interior point solver that ensures global linear polynomial-time convergence without compromising computational efficiency. Two guiding principles are used in our implementation: (1) ensure a reduction in the potential function; (2) use the Mehrotra "corrector" direction to improve the reduction in the potential function. Our algorithm is implemented in a software package named iOptimize. iOptimize is developed based on a variant of Mehrotra's predictor-corrector framework [25]. Computational results on the standard test problems [2, 5, 6, 24] show that iOptimize appears to perform more robustly and solves problems with fewer average iterations than the mature commercial solver MOSEK (barrier solver version 6.0.0.106; MOSEK for short) [7]. In particular, all problems in the test sets [2, 5, 6, 24] were solved by iOptimize using default settings, while MOSEK failed on several problems. Moreover, the average number of iterations used by iOptimize is smaller than that taken by MOSEK for convex quadratic programming (QP) problems, convex quadratically constrained quadratic programming (QCQP) problems, and general convex programming (CP) problems in the test sets. We further created some infeasible problems to check the robustness of our approach. While iOptimize solved all these problems (except one, where the failure was due to an AMPL error message), MOSEK failed on several problems. We also found that iOptimize seems to detect infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) [3] and Knitro (version 8.0) [4].
The rest of this paper is organized as follows. In Section 2 we outline the known results for the homogeneous formulation of MCP. In Section 3 we describe our potential function and show that our potential reduction algorithm converges to a maximal complementary solution in polynomial time. In Section 4 we provide additional implementation details of iOptimize. Computational results are presented and discussed in Section 5. Concluding remarks are made in Section 6.
2. Homogeneous formulation for the monotone complementarity problem. In this section we summarize the main results of Andersen and Ye's [11, 12, Theorem 1] homogeneous IPM for MCP. Andersen and Ye consider an augmented homogeneous model related to MCP (HMCP):
s = τ f(x/τ), (2.1)
κ = −x^T f(x/τ), (2.2)
0 ≤ (s, κ) ⊥ (x, τ) ≥ 0. (2.3)
The right-hand side in (2.1) is closely related to the recession function in the convex analysis of Rockafellar [32, Section 8]. Given a maximal complementary solution (x^*, τ^*, s^*, κ^*) that satisfies (2.1) and (2.2), suppose τ^* > 0 and hence κ^* = 0. Then from (2.1) and (2.2) we have
s^*/τ^* = f(x^*/τ^*) and κ^*/τ^* = −(x^*/τ^*)^T (s^*/τ^*) = 0.
This implies that (x^*/τ^*, s^*/τ^*) is a complementary solution for MCP. On the other hand, if κ^* > 0 and hence τ^* = 0, then the complementarity of MCP at (x^*, s^*) is not zero. In this case MCP is said to be infeasible, and x^* is a direction of a separating hyperplane that separates 0 and the set {s − f(x) ∈ R^n | (x, s) > 0}.
Let

F(x, τ) := ( τ f(x/τ) ; −x^T f(x/τ) ) : R^{n+1}_{++} → R^{n+1}.
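To make the embedding concrete, the following sketch implements F for a stand-in monotone map f (our own toy example, not from the paper) and spot-checks two properties that appear in Theorem 2.1: degree-1 homogeneity and the identity (x; τ)^T F(x, τ) = 0.

```python
import numpy as np

# Homogeneous embedding F(x, tau) = (tau f(x/tau); -x^T f(x/tau)).
# The map f below is a hypothetical monotone stand-in (PSD Jacobian).
def f(x):
    Q = np.array([[2.0, 1.0], [1.0, 2.0]])
    return Q @ x + np.array([-1.0, -1.0])

def F(x, tau):
    fx = f(x / tau)
    return np.concatenate([tau * fx, [-x @ fx]])

x, tau = np.array([0.5, 1.5]), 0.7
# Degree-1 homogeneity: F(lam x, lam tau) = lam F(x, tau).
for lam in (2.0, 0.3):
    assert np.allclose(F(lam * x, lam * tau), lam * F(x, tau))
# (x; tau)^T F(x, tau) = 0 holds by construction of the embedding.
assert abs(np.concatenate([x, [tau]]) @ F(x, tau)) < 1e-12
```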
Andersen and Ye [12] showed the following theorem.

Theorem 2.1. Consider the MCP and the associated HMCP.
1. If ∇f is positive semidefinite in R^n_+, then ∇F is also positive semidefinite in R^{n+1}_{++}.
2. F is a continuous homogeneous function in R^{n+1}_{++} with degree 1, and for any (x, τ) ∈ R^{n+1}_{++}, (x; τ)^T F(x, τ) = 0 and (x, τ)^T ∇F(x, τ) = −F(x, τ)^T.
3. If f is a continuous monotone mapping from R^n_+ to R^n, then F is a continuous monotone mapping from R^{n+1}_{++} to R^{n+1}.
4. If f is scaled Lipschitz with ν_f, then F is also scaled Lipschitz; that is, it satisfies condition (1.6) with

ν_F(β) = (1 + 2ν_f(2β/(1 − β))/(1 − β)) (1/(1 − β)),

where β < 1/3.
5. There exists a bounded sequence ⟨x^k, τ^k, s^k, κ^k⟩ ⊂ R^{2n+2}_{++}, k = 0, 1, . . . , that solves HMCP in the limit.
6. Let (x^*, τ^*, s^*, κ^*) be a maximal complementary solution or ray for HMCP. MCP has a solution if and only if τ^* > 0. In this case, (x^*/τ^*, s^*/τ^*) is a complementary solution for MCP.
7. MCP is (strongly) infeasible if and only if κ^* > 0. In this case, (x^*/κ^*, s^*/κ^*) is a certificate (ray) proving (strong) infeasibility.
We note that if f is an affine function, then there is a strictly complementary solution for HMCP. On the other hand, if f is a general monotone function, then the maximal complementary solution (x^*, τ^*, s^*, κ^*) may not be a strictly complementary solution. However, in this case Andersen and Ye [12, proof of Theorem 1] showed that τ^* + κ^* > 0. This implies that (x^*, s^*) must either generate a finite solution for HMCP, or give a certificate that proves the infeasibility of MCP.
2.1. The path following homogeneous algorithm. For simplicity, in the following we let n := n + 1, x := (x; τ) ∈ R^n_+ and s := (s; κ) ∈ R^n_+. Let

r^k := s^k − F(x^k),
and let (x^0, s^0) = e be the starting point. Let C(ϑ) denote a continuous trajectory such that

C(ϑ) := {(x^k, s^k) | s^k − F(x^k) = ϑ r^0, X^k s^k = ϑ e}, 0 < ϑ ≤ 1.
Note that such a trajectory always exists (Güler [19]). Moreover, Andersen and Ye [12, Theorem 2] showed that this continuous trajectory is bounded and any limit point is a maximal complementary solution for HMCP. At iteration k with iterate (x^k, s^k) > 0, the algorithm computes the search direction (d_x, d_s) by solving the following system of linear equations:
d_s − ∇F(x^k) d_x = −η r^k, (2.4)
X^k d_s + S^k d_x = γ μ^k e − X^k s^k. (2.5)

Here η and γ are parameters between 0 and 1, and μ^k := (x^k)^T s^k / n. For a step size α > 0, let the new iterate be

x_+ := x^k + α d_x > 0 and
s_+ := s^k + α d_s + (F(x_+) − F(x^k) − α ∇F(x^k) d_x) = F(x_+) + (1 − αη) r^k > 0. (2.6)
Andersen and Ye [12] showed the following lemmas.

Lemma 2.2. The direction (d_x, d_s) satisfies

d_x^T d_s = d_x^T ∇F(x^k) d_x + η(1 − γ − η) n μ^k.

Lemma 2.3. Let r_+ = s_+ − F(x_+), and consider the new iterate (x_+, s_+) given by (2.4)–(2.6). Then,
1. r_+ = (1 − αη) r^k.
2. (x_+)^T s_+ = (x^k)^T s^k (1 − α(1 − γ)) + α^2 η(1 − η − γ) n μ^k.
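For the affine homogeneous case F(x) = Mx with a skew-symmetric M (as in the homogeneous LP embedding, where x^T F(x) = 0 and ∇F = M), the system (2.4)-(2.5) can be solved by eliminating d_s, and both parts of Lemma 2.3 can be verified numerically. The matrix and parameter values below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.random((n, n))
M = B - B.T                      # skew-symmetric: x^T M x = 0, and M is monotone
F = lambda v: M @ v              # affine homogeneous map, grad F = M

x = np.ones(n)
s = np.ones(n) + rng.random(n)
r = s - F(x)                     # infeasibility residual
mu = x @ s / n
eta, gamma = 0.7, 0.5            # illustrative parameter choices in (0, 1)

# (2.4): ds - M dx = -eta r; (2.5): X ds + S dx = gamma mu e - X s.
# Eliminating ds gives (X M + S) dx = gamma mu e - X s + eta X r.
dx = np.linalg.solve(np.diag(x) @ M + np.diag(s),
                     gamma * mu * np.ones(n) - x * s + eta * x * r)
ds = M @ dx - eta * r

alpha = 0.1
x_new = x + alpha * dx
s_new = s + alpha * ds           # for affine F the corrector term in (2.6) vanishes

# Lemma 2.3, part 1: r_+ = (1 - alpha eta) r.
assert np.allclose(s_new - F(x_new), (1 - alpha * eta) * r)
# Lemma 2.3, part 2 (here dx^T M dx = 0 by skew-symmetry):
target = (x @ s) * (1 - alpha * (1 - gamma)) + alpha**2 * eta * (1 - eta - gamma) * n * mu
assert np.isclose(x_new @ s_new, target)
```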
Note that from Lemma 2.3, for η = 1 − γ the infeasibility residual and the complementarity are reduced at the same rate. Based on these lemmas, Andersen and Ye [12] showed that with suitably chosen parameters α and β, their homogeneous algorithm generates a sequence ⟨x^k, s^k⟩ > 0 such that ‖X^k s^k − μ^k e‖ ≤ ϑ μ^k, where ϑ ∈ (0, 1] is a constant parameter related to ν_F(β). Moreover, they showed that both ‖r^k‖ and (x^k)^T s^k converge to zero at a global rate γ = 1 − ϑ/√n. As a result, if f satisfies the SLC, then the homogeneous algorithm converges in O(√n log(1/ε)) iterations. However, it requires exact search directions and uses short step computations, and thus is not a practical algorithm.
3. A practical potential reduction homogeneous algorithm. We consider the following potential function for the HMCP:

Φ_ρ(x, s) := (ρ/2) log((x^T s)^2 + θ‖r‖^2) − Σ_{j=1}^n log x_j s_j, (3.1)

where θ is a positive constant parameter and ρ ≥ n + √n. A potential function of this form was first introduced by Goldfarb and Mehrotra [17], and was extended to the homogeneous LP model by Mehrotra and Huang [26]. It is similar to (1.7) except that the residual vector r appears in the function. Without loss of generality we choose θ = 1 in (3.1).
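A direct transcription of (3.1) is shown below, with θ = 1 and arbitrary test data of our own choosing; with a zero residual it collapses to (1.7).

```python
import numpy as np

def Phi(rho, x, s, r, theta=1.0):
    # Modified potential (3.1): (rho/2) log((x^T s)^2 + theta ||r||^2) - sum_j log x_j s_j.
    return 0.5 * rho * np.log((x @ s) ** 2 + theta * np.linalg.norm(r) ** 2) \
        - np.sum(np.log(x * s))

rng = np.random.default_rng(3)
n = 6
rho = n + np.sqrt(n)
x = rng.random(n) + 0.1
s = rng.random(n) + 0.1

# With r = 0 and theta = 1, (3.1) equals the potential (1.7).
phi_17 = rho * np.log(x @ s) - np.sum(np.log(x * s))
assert np.isclose(Phi(rho, x, s, np.zeros(n)), phi_17)
# A nonzero infeasibility residual strictly increases the potential.
assert Phi(rho, x, s, np.ones(n)) > Phi(rho, x, s, np.zeros(n))
```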
We need to analyze the worst-case decrease in (3.1) at each iteration of our potential reduction IPM. Let

S^k := diag(s^k_1, . . . , s^k_n), D := (X^k S^k)^{0.5}, D_min := min_{j=1,...,n} (x^k_j s^k_j)^{0.5},
p := (n/ρ) μ e − D^2 e = ((x^k)^T s^k / ρ) e − D^2 e, and
α := β D_min / ‖D^{-1} p‖, (3.2)
where β ∈ (0, 1) is a positive constant that will be determined later. Suppose (d_x, d_s) is computed by solving the linear system of equations (2.4) and (2.5) with η = 1 − γ and γ = n/ρ. Let

p_x := α d_x,
p_s := α d_s + (F(x_+) − F(x^k) − α ∇F(x^k) d_x), (3.3)
z := X^k (F(x_+) − F(x^k) − ∇F(x^k) p_x), and
q := z/α,

and let x_+ := x^k + p_x > 0 and s_+ := s^k + p_s > 0. We have

p = γ μ e − D^2 e = X^k d_s + S^k d_x, (3.4)

and

X^k p_s + S^k p_x − α p = z = α q. (3.5)
In Section 3.1 we show that if f satisfies SLC (and consequently so does F), then at iteration k the theoretical direction (d_x, d_s) with an appropriate step size α results in

Φ_ρ(x_+, s_+) − Φ_ρ(x^k, s^k) ≤ ζ,

where ζ is a negative constant. Let (x̄_+, s̄_+) be an iterate computed by using any favored search direction with inexact step sizes in the primal and dual spaces. Starting from an initial solution (x^0, s^0) > 0, our potential reduction IPM is given in Algorithm 1. This algorithm is designed to have the freedom to choose any favored search direction with inexact and different step sizes in the primal and dual spaces.
In Section 3.2 we show that if Φ_ρ(x^k, s^k) is reduced by at least a constant amount at each iteration, Φ_n(x^k, s^k) (potential function (3.1) with parameter ρ = n) is upper bounded by a constant amount ι n log n (ι > 0 is a constant that will be given later), and the iterates are maintained while satisfying Assumption 3.1 given below, then Algorithm 1 generates a maximal complementary solution of the desired precision in polynomial time.

Assumption 3.1.
(i) Ω(1) ≤ ((x^0)^T s^0 |r^k_j|) / ((x^k)^T s^k |r^0_j|) ≤ O(1), j = 1, . . . , n.
(ii) Ω(1) ≤ ((x̄_+)^T s̄_+) / ((x_+)^T s_+) ≤ O(1).
Assumption 3.1 (i) requires that the overall decrease in the infeasibility-to-complementarity ratio is maintained in Algorithm 1. Assumption 3.1 (ii) requires that the complementarity at the iterate (x̄_+, s̄_+) generated using a favored direction and
Algorithm 1 Practical potential reduction IPM
Input: An initial solution (x^0, s^0) > 0, constants ζ < 0 and ι > 0.
Output: An (asymptotically) maximal complementary solution (x^k, s^k).
1: Let k = 0.
2: while termination criteria are not met do
3:   Compute (x̄_+, s̄_+) using any favored search direction and step size for x^k and s^k.
4:   if Φ_ρ(x̄_+, s̄_+) − Φ_ρ(x^k, s^k) ≤ ζ, Φ_n(x^k, s^k) ≤ ι n log n, and Assumption 3.1 is satisfied then
5:     Set (x^{k+1}, s^{k+1}) := (x̄_+, s̄_+).
6:   else
7:     Compute (x_+, s_+) using the theoretical direction (d_x, d_s) with an appropriate step size α.
8:     Set (x^{k+1}, s^{k+1}) := (x_+, s_+).
9:   end if
10:  Set k := k + 1.
11: end while
the theoretical iterate (x_+, s_+) are of the same order. Note that if η = 1 − γ, then from Lemma 2.3 we have (x_+)^T s_+ = (1 − αη)(x^k)^T s^k. As a result, if only the theoretical direction is chosen in the algorithm, Assumption 3.1 is satisfied with constants equal to 1, and the complementarity decreases monotonically. Convergence properties for an algorithm that has the additional flexibility of choosing more aggressive directions and step sizes are still ensured under Assumption 3.1.
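The control flow of Algorithm 1 can be sketched as follows. All callback names (favored_step, theoretical_step, assumption_holds) are placeholders for the solver components described above; this is a schematic, not the iOptimize implementation.

```python
import math

def potential_reduction_ipm(x, s, Phi_rho, Phi_n, favored_step, theoretical_step,
                            assumption_holds, zeta, iota, n, max_iter=100):
    """Schematic of Algorithm 1: accept an aggressive iterate only when it
    reduces the potential by at least |zeta| and the safeguards hold;
    otherwise fall back to the theoretical direction."""
    for _ in range(max_iter):
        x_try, s_try = favored_step(x, s)
        if (Phi_rho(x_try, s_try) - Phi_rho(x, s) <= zeta
                and Phi_n(x, s) <= iota * n * math.log(n)
                and assumption_holds(x, s, x_try, s_try)):
            x, s = x_try, s_try            # step 5: keep the favored iterate
        else:
            x, s = theoretical_step(x, s)  # steps 7-8: guaranteed decrease
    return x, s
```

For instance, with toy scalar iterates, Phi_rho = Phi_n = log(x·s), and steps that halve both variables, the loop drives the product x·s toward zero.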
3.1. Potential function reduction using the theoretical direction. The following analysis follows some of the steps in Kojima et al. [22] and Potra and Ye [31, Sections 2 and 3]. We drop the index k of an iteration to simplify the presentation.
Proposition 3.2 (Kojima et al. [22, Lemma 2.3]). Let u, v, and w ∈ R^n satisfy u + v = w and u^T v ≥ 0. Then

‖u‖ ≤ ‖w‖, ‖v‖ ≤ ‖w‖, and u^T v ≤ (1/2)‖w‖^2.
Lemma 3.3. Let p_x be defined as in (3.3). Then ‖X^{-1} p_x‖_∞ ≤ β.

Proof. We use the technique in Kojima et al. [22] for the proof. We first rewrite equation (3.4) as

D^{-1} X d_s / ‖D^{-1} p‖ + D^{-1} S d_x / ‖D^{-1} p‖ = D^{-1} p / ‖D^{-1} p‖. (3.6)

From Lemma 2.2, we have

(D^{-1} X d_s)^T (D^{-1} S d_x) / ‖D^{-1} p‖^2 = d_x^T d_s / ‖D^{-1} p‖^2 ≥ 0. (3.7)

Now using Proposition 3.2 in (3.6) and (3.7), we have

‖D^{-1} S d_x‖ / ‖D^{-1} p‖ ≤ ‖D^{-1} p‖ / ‖D^{-1} p‖ = 1. (3.8)

Using (3.3) and (3.8), we deduce the following relation:

‖X^{-1} d_x‖ / ‖D^{-1} p‖ = ‖D^{-2} S d_x‖ / ‖D^{-1} p‖ ≤ D_min^{-1}. (3.9)

Finally, using (3.2) and (3.9), we have

‖X^{-1} p_x‖_∞ = α ‖X^{-1} d_x‖_∞ ≤ β D_min ‖X^{-1} d_x‖ / ‖D^{-1} p‖ ≤ β.
Potra and Ye [31] showed the following lemma.

Lemma 3.4 (Potra and Ye [31, Lemma 3.2]). If d_x and d_s are the solution of the linear system (3.5), and if condition (1.6) and Lemma 3.3 are satisfied, then

‖z‖ ≤ ξ β^2 D_min^2,

where ξ = 0.25 ν_F(β) and z is defined in (3.3).

From the above lemma and (3.2) we have

‖D^{-1} q‖ ≤ ‖q‖ D_min^{-1} = (‖z‖/α) D_min^{-1} ≤ (ξβ^2/α) D_min = ξβ ‖D^{-1} p‖. (3.10)
Kojima et al. showed the following lemma.

Lemma 3.5 (Kojima et al. [22, Lemma 2.5]). Suppose the search direction (d_x, d_s) is computed by solving the linear system of equations (2.4) and (2.5) with η = 1 − γ and γ = n/ρ (ρ ≥ n + √n). Then

(ρ/(x^T s)) ‖D^{-1} p‖ = ‖D^{-1} e − (ρ/(x^T s)) D e‖ ≥ (√3/2) D_min^{-1}.
In the following we define two real-valued functions g_1 and g_2, and establish upper bounds for them. These two functions and the related upper bounds will be used later in our analysis. Let

g_1 := −αρη − e^T(X^{-1} p_x + S^{-1} p_s), and (3.11)
g_2 := (‖X^{-1} p_x‖^2 + ‖S^{-1} p_s‖^2) / (2(1 − β)). (3.12)
Lemma 3.6. Let g_1 be defined as in (3.11). With a suitably chosen parameter β ∈ (0, 1), suppose the search direction (d_x, d_s) is computed by solving the linear system of equations (2.4) and (2.5) with η = 1 − γ and γ = n/ρ (ρ ≥ n + √n). Let the step size α be defined as in (3.2). If condition (1.6) is satisfied, then

g_1 ≤ −(√3/2)(1 − ξβ)β + ξβ^2 (ρ/√n).
Proof. From (2.5) and γ = 1 − η, we have

−αρη = αρ (x^T d_s + s^T d_x)/(nμ). (3.13)

An upper bound on g_1 is obtained in the following set of inequalities:

g_1 = αρ (x^T d_s + s^T d_x)/(x^T s) − e^T(X^{-1} p_x + S^{-1} p_s)
= α (ρ/(x^T s)) e^T(X d_s + S d_x) − e^T D^{-2}(X p_s + S p_x)
= α (ρ/(x^T s)) e^T(p − (x^T s/ρ) D^{-2}(p + q)) (using (3.4) and (3.5))
= α (ρ/(x^T s)) e^T((D − (x^T s/ρ) D^{-1}) D^{-1} p − (x^T s/ρ) D^{-2} q)
= −α (ρ/(x^T s)) (p^T D^{-1} D^{-1} p) − α (ρ/(x^T s)) (D^{-1} p + D e)^T D^{-1} q (using (3.2))
= −α (ρ/(x^T s)) (p^T D^{-1} D^{-1} (p + q)) − α (ρ/(x^T s)) e^T q
= −α (ρ/(x^T s)) (‖D^{-1} p‖^2 + p^T D^{-1} D^{-1} q) − (ρ/(x^T s)) e^T z (using (3.3))
≤ −α (ρ/(x^T s)) (‖D^{-1} p‖^2 − ‖D^{-1} p‖ ‖D^{-1} q‖) + (ρ/(x^T s)) ‖z‖_1
≤ −α (ρ/(x^T s)) (‖D^{-1} p‖^2 − ξβ ‖D^{-1} p‖^2) + (ρ/(x^T s)) √n ‖z‖ (using (3.10))
≤ −α (ρ/(x^T s)) (1 − ξβ) ‖D^{-1} p‖^2 + (ρ/(x^T s)) √n ξβ^2 D_min^2 (using Lemma 3.4)
≤ −(√3/2)(1 − ξβ) α D_min^{-1} ‖D^{-1} p‖ + (ρ/√n) ξβ^2 (using Lemma 3.5)
= −(√3/2)(1 − ξβ) β + (ρ/√n) ξβ^2. (using (3.2))
Lemma 3.7. Let g_2 be defined as in (3.12). Under the hypothesis of Lemma 3.6, we have

g_2 ≤ β^2(1 + ξβ)^2 / (2(1 − β)).
Proof. The following proof is based on techniques in Potra and Ye [31]. We have

2(1 − β) g_2 = ‖X^{-1} p_x‖^2 + ‖S^{-1} p_s‖^2 (3.14)
= ‖D^{-2} X p_s‖^2 + ‖D^{-2} S p_x‖^2
≤ D_min^{-2} (‖D^{-1} X p_s‖^2 + ‖D^{-1} S p_x‖^2)
= D_min^{-2} (‖D^{-1}(X p_s + S p_x)‖^2 − 2 p_x^T p_s)
= D_min^{-2} (α^2 ‖D^{-1}(p + q)‖^2 − 2 p_x^T p_s) (using (3.5))
≤ D_min^{-2} (α^2 (‖D^{-1} p‖ + ‖D^{-1} q‖)^2 − 2 p_x^T p_s)
≤ D_min^{-2} (α^2 (1 + ξβ)^2 ‖D^{-1} p‖^2 − 2 p_x^T p_s) (using (3.10))
= (α D_min^{-1} ‖D^{-1} p‖)^2 (1 + ξβ)^2 − 2 D_min^{-2} p_x^T p_s
= β^2 (1 + ξβ)^2 − 2 D_min^{-2} p_x^T p_s. (using (3.2))

A lower bound on p_x^T p_s is obtained as follows:

p_x^T p_s = p_x^T (α d_s + (F(x_+) − F(x) − ∇F(x) p_x)) (using (3.3)) (3.15)
= (x_+ − x)^T (F(x_+) − F(x)) + α^2 (d_x^T d_s − d_x^T ∇F(x) d_x) (using (3.3))
= (x_+ − x)^T (F(x_+) − F(x)) + α^2 η(1 − η − γ) n μ (using Lemma 2.2)
≥ 0 (from the monotonicity of F).

The result follows from (3.14) and (3.15).

The following proposition is frequently used in IPM analysis.
Proposition 3.8 (Karmarkar [21]).
1. If 1 − δ > 0, then log(1 − δ) ≤ −δ.
2. If δ ∈ R^n satisfies ‖δ‖_∞ ≤ β < 1, then Σ_{j=1}^n log(1 − δ_j) ≥ −e^T δ − ‖δ‖^2/(2(1 − β)).
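Both parts of Proposition 3.8 are elementary logarithm bounds and can be spot-checked numerically (the sampling ranges below are arbitrary choices of ours):

```python
import math
import random

random.seed(4)
# Part 1: log(1 - d) <= -d whenever 1 - d > 0.
for _ in range(1000):
    d = random.uniform(-5.0, 0.999)
    assert math.log(1 - d) <= -d + 1e-12

# Part 2: for ||d||_inf <= beta < 1,
# sum_j log(1 - d_j) >= -e^T d - ||d||^2 / (2 (1 - beta)).
beta = 0.8
for _ in range(1000):
    d = [random.uniform(-beta, beta) for _ in range(5)]
    lhs = sum(math.log(1 - dj) for dj in d)
    rhs = -sum(d) - sum(dj * dj for dj in d) / (2 * (1 - beta))
    assert lhs >= rhs - 1e-12
```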
Now we are ready to show a bound on the difference between the potential function values Φ_ρ(x_+, s_+) and Φ_ρ(x, s). The next theorem bounds the improvement in the potential function from taking the theoretical direction with a controllable step size.

Theorem 3.9. Let ξ = 0.25 ν_F(β) ≥ 0.25. Suppose the search direction (d_x, d_s) is computed by solving the linear system of equations (2.4) and (2.5) with η = 1 − γ and γ = n/ρ (ρ ≥ n + √n). Let the step size α be defined as in (3.2). Suppose condition (1.6) is satisfied, and β ∈ (0, 0.303) is chosen such that

ξβ ≤ √3 / (4(√3 + 2ρ/√n)), and (3.16)
β √ξ (1 + ξβ) ≤ √3 / (2(√3 + 2ρ/√n)^{1/2}). (3.17)

Then (x_+, s_+) is always strictly positive. Moreover,

Φ_ρ(x_+, s_+) − Φ_ρ(x, s) ≤ −0.278 β/(1 − β).
Proof. We first show that (x_+, s_+) is strictly positive. From Lemma 3.7 and (3.17), we have

max{‖X^{-1} p_x‖_∞, ‖S^{-1} p_s‖_∞} ≤ β(1 + ξβ) ≤ √3 / (2(ξ(√3 + 2ρ/√n))^{1/2}) < 1.

Hence, (p_x, p_s) is a valid direction. We note that from Lemma 3.3, we have

‖X^{-1} p_x‖_∞ ≤ β ≤ √3 / (4ξ(√3 + 2ρ/√n)) ≤ √3 / (2(ξ(√3 + 2ρ/√n))^{1/2}).

Therefore, the upper bound from Lemma 3.3 is still valid (and tighter). Moreover, β < 0.303 implies that the condition β < 1/3 in Theorem 2.1(4) is satisfied. Now,
Φ_ρ(x_+, s_+) − Φ_ρ(x, s)
= (ρ/2) log((1 − αη)^2 ((nμ)^2 + ‖r‖^2)) − Σ_{j=1}^n log(x_j + (p_x)_j)(s_j + (p_s)_j)
  − (ρ/2) log((nμ)^2 + ‖r‖^2) + Σ_{j=1}^n log x_j s_j (using Lemma 2.3)
= ρ log(1 − αη) − Σ_{j=1}^n log(1 + (p_x)_j/x_j)(1 + (p_s)_j/s_j)
≤ −αρη − e^T(X^{-1} p_x + S^{-1} p_s) + (‖X^{-1} p_x‖^2 + ‖S^{-1} p_s‖^2)/(2(1 − β)) (using Proposition 3.8)
= g_1 + g_2.
From Lemmas 3.6 and 3.7, we write

Φ_ρ(x_+, s_+) − Φ_ρ(x, s) ≤ −(√3/2)(1 − ξβ)β + ξβ^2 ρ/√n + β^2(1 + ξβ)^2/(2(1 − β))
= β^2(1 + ξβ)^2/(2(1 − β)) + β(−√3/2 + ξβ(√3/2 + ρ/√n))
= β^2(1 + ξβ)^2/(2(1 − β)) − (√3/2)β(1 − ξβ(√3 + 2ρ/√n)/√3) =: ζ.

It follows that

−(2(1 − β)/β) ζ = −β(1 + ξβ)^2 + √3(1 − β)(1 − ξβ(√3 + 2ρ/√n)/√3).

Numerically, it is easy to verify that

ζ ≤ −0.278 β/(1 − β).
Since ν_F(β) : (0, 1) → (1, ∞) is a monotone increasing function of β ∈ (0, 0.303), we can always choose a sufficiently small β that satisfies (3.16) and (3.17). As a result, a reduction in the potential function by at least 0.278 β/(1 − β) is guaranteed. However, since the choice of β depends on ν_F(β), finding a maximal β that satisfies (3.16) and (3.17) requires knowledge of the scaled Lipschitz constant ν_F(β), which may not be available in practice for an arbitrary convex function. Problems satisfying the scaled Lipschitz condition are discussed in [27, 30, 41]. Nevertheless, the analysis suggests that for an appropriate step size, the potential function value will decrease as long as the scaled Lipschitz constant is bounded. Consequently, the potential function can be used as a guide when solving monotone complementarity and convex optimization problems satisfying this property. If the condition is not satisfied, then a lack of sufficient reduction in the potential function indicates that the algorithm proposed here is not appropriate for the problem being solved. Finally, we note that Theorem 3.9 is similar to the result of Potra and Ye [31], with the difference that it allows us to choose a larger ρ when β is sufficiently small.
Remark. Note that by performing simple block elimination on the system of linear equations (2.4) and (2.5), we obtain the following alternative formulation:

X(α ∇F(x) d_x − αη r) + α S d_x − α p = 0. (3.18)

Hence, d_x is the direction obtained by applying a single Newton step, with starting point h = 0, to the equation

z(h) := X(F(x + αh) − F(x) − αη r) + α S h − α p = 0. (3.19)

If F is linear (for example, LP formulated as an HMCP model), then equation (3.19) can be solved exactly. In this case Theorem 3.9 with ξ = 0 reduces to the result of Mehrotra and Huang [26].
Suppose F is nonlinear and does not satisfy SLC. Given positive constants ξ and β that satisfy conditions (3.16) and (3.17), if one can compute an approximate solution δ_x such that

‖z(δ_x)‖ = ‖X(F(x + α δ_x) − F(x) − αη r) + α S δ_x − α p‖ ≤ ξβ^2 min_{j=1,...,n}(x_j s_j), (3.20)

then the result of Theorem 3.9 still holds. However, finding such a direction δ_x may require many Newton steps, and an explicit bound on the number of Newton steps is unknown.
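The inexact-Newton computation suggested by (3.19)-(3.20) can be sketched as follows: iterate Newton steps on z(h) = 0 until the residual tolerance of (3.20) is met. The map F, the current iterate, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3

def F(v):
    return v + 0.1 * np.tanh(v)          # smooth monotone map (diagonal Jacobian > 0)

def JF(v):
    return np.diag(1.0 + 0.1 / np.cosh(v) ** 2)

x = rng.random(n) + 0.5
s = rng.random(n) + 0.5
r = s - F(x)
alpha, eta = 0.2, 0.5
gamma = 1.0 - eta
mu = x @ s / n
p = gamma * mu * np.ones(n) - x * s
xi, beta = 0.25, 0.2
tol = xi * beta ** 2 * np.min(x * s)     # right-hand side of (3.20)

def z(h):
    return x * (F(x + alpha * h) - F(x) - alpha * eta * r) + alpha * s * h - alpha * p

h = np.zeros(n)
for _ in range(50):                      # Newton iterations on z(h) = 0
    if np.linalg.norm(z(h)) <= tol:
        break
    J = np.diag(x) @ (alpha * JF(x + alpha * h)) + alpha * np.diag(s)  # z'(h)
    h = h - np.linalg.solve(J, z(h))

assert np.linalg.norm(z(h)) <= tol       # stopping test (3.20) is satisfied
```

Because this toy F is nearly affine, a couple of Newton steps suffice; for a general nonlinear F no a priori bound on the number of steps is available, as noted above.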
3.2. Convergence of the practical potential reduction homogeneous algorithm. The following theorem shows that if potential function (3.1) can be reduced by at least a constant amount at each iteration, then Algorithm 1 generates an ε-complementary solution in polynomial time. Our analysis follows some of the steps in Kojima et al. [22] and Potra and Ye [29].
Theorem 3.10. Let ε > 0. Starting from a solution (x^0, s^0) > 0, if the value of potential function (3.1) can be reduced by at least a constant amount and if Assumption 3.1 is satisfied at each iteration of Algorithm 1, then after at most O(Φ_ρ(x^0, s^0) − n log n + (ρ − n)|log ε|) iterations, we will have a solution such that

x^T s ≤ ε and |r_j|/|r^0_j| ≤ O(1) x^T s/((x^0)^T s^0), j = 1, . . . , n. (3.21)
Proof. Using Assumption 3.1, we have

(ρ/2) log((x^T s)^2 (1 + (Ω(1)‖r^0‖/((x^0)^T s^0))^2)) − Σ_{j=1}^n log x_j s_j ≤ Φ_ρ(x, s),

which implies

(ρ − n) log x^T s − Σ_{j=1}^n log(n x_j s_j/(x^T s)) + (ρ/2) log(1 + (Ω(1)‖r^0‖/((x^0)^T s^0))^2) (3.22)
≤ Φ_ρ(x, s) − n log n.

Since the second and the third terms of (3.22) are always nonnegative, it follows that

(ρ − n) log x^T s ≤ Φ_ρ(x, s) − n log n.

Therefore, if Φ_ρ(x, s) ≤ n log n − (ρ − n)|log ε|, then we have

x^T s ≤ ε.

Moreover, Assumption 3.1 implies that

|r_j|/|r^0_j| ≤ O(1) x^T s/((x^0)^T s^0), j = 1, . . . , n.
We now show that any limit point of a sequence ⟨x^k, s^k⟩ generated by the potential reduction algorithm of Section 3.2 is a maximal complementary solution. Our analysis here follows some of the steps in Güler and Ye [20, Sections 2 and 4] and Potra and Ye [31, Section 4].
Lemma 3.11. Given an initial positive point (x^0, s^0), recall that (x̄_+, s̄_+) is a tentative point computed using any favored search direction with inexact and different step sizes in the primal and dual spaces, satisfying Assumption 3.1, whereas (x_+, s_+) is computed using the theoretical direction (d_x, d_s) and a fixed step size α. Let

C_1 := ((ρ − n)/2) log( (((x^0)^T s^0)^2 + O(1)^2 ‖r^0‖^2) / (((x^0)^T s^0)^2 + Ω(1)^2 ‖r^0‖^2) ) ≥ 0, and
C_2 := C_1 + max{0, −(ρ − n) log Ω(1)} ≥ 0.

Then,

Φ_n(x_+, s_+) − Φ_n(x^k, s^k) ≤ ζ − (ρ − n) log(1 − D_min^2 √n/((x^k)^T s^k)), and (3.23)
Φ_n(x̄_+, s̄_+) − Φ_n(x^k, s^k) ≤ ζ − (ρ − n) log(1 − D_min^2 √n/((x^k)^T s^k)) + C_2, (3.24)
where Φ_n(x^k, s^k) is defined as in (3.1) with ρ = n.

Proof. To simplify the presentation we drop the index k. Using Assumption 3.1, we have

‖r̄_+‖^2 ≥ (Ω(1)‖r^0‖ (x̄_+)^T s̄_+/((x^0)^T s^0))^2, ‖r‖^2 ≤ (O(1)‖r^0‖ x^T s/((x^0)^T s^0))^2, and (3.25)
(x̄_+)^T s̄_+ ≥ Ω(1)(x_+)^T s_+. (3.26)
First we bound the difference between the values of Φ_n(x̄_+, s̄_+) and Φ_n(x, s). Let r̄_+ = s̄_+ − F(x̄_+).

Φ_n(x̄_+, s̄_+) − Φ_n(x, s)
= [Φ_ρ(x̄_+, s̄_+) − Φ_ρ(x, s)] − [Φ_ρ(x̄_+, s̄_+) − Φ_n(x̄_+, s̄_+)] + [Φ_ρ(x, s) − Φ_n(x, s)]
≤ ζ − ((ρ − n)/2) log(((x̄_+)^T s̄_+)^2 + ‖r̄_+‖^2) + ((ρ − n)/2) log((x^T s)^2 + ‖r‖^2) (using Theorem 3.9)
≤ ζ − ((ρ − n)/2) log(((x̄_+)^T s̄_+)^2 + (Ω(1)‖r^0‖ (x̄_+)^T s̄_+/((x^0)^T s^0))^2)
  + ((ρ − n)/2) log((x^T s)^2 + (O(1)‖r^0‖ x^T s/((x^0)^T s^0))^2) (using (3.25))
= ζ − (ρ − n) log((x̄_+)^T s̄_+/(x^T s)) + C_1
≤ ζ − (ρ − n) log((x_+)^T s_+/(x^T s)) − (ρ − n) log Ω(1) + C_1 (using (3.26))
≤ ζ − (ρ − n) log((x_+)^T s_+/(x^T s)) + C_2
≤ ζ − (ρ − n) log(1 − αη) + C_2 (using Lemma 2.3)
= ζ − (ρ − n) log(1 + α(x^T d_s + s^T d_x)/(x^T s)) + C_2 (using (3.13))
= ζ − (ρ − n) log(1 + α p^T e/(x^T s)) + C_2 (using (3.4)).

Now,

α p^T e/(x^T s) = α((x^T s/ρ) e^T e − e^T D^2 e)/(x^T s) (using (3.2))
= α(n/ρ − 1) = −β (D_min/‖D^{-1} p‖)((ρ − n)/ρ) (using (3.2))
≥ −β D_min^2 (2(ρ − n))/(√3 x^T s) (using Lemma 3.5)
≥ −D_min^2 √n (ρ − n)/(2 x^T s ξ(√(3n) + 2ρ)) (using (3.16))
≥ −D_min^2 (2ρ√n)/(x^T s (√(3n) + 2ρ)) (because ξ = 0.25 ν_F(β) ≥ 0.25)
≥ −D_min^2 √n/(x^T s).

Therefore,

Φ_n(x̄_+, s̄_+) − Φ_n(x, s) ≤ ζ − (ρ − n) log(1 − D_min^2 √n/(x^T s)) + C_2.

The bound (3.23) on Φ_n(x_+, s_+) − Φ_n(x, s) can be constructed in a similar way.
Lemma 3.12. Given an initial positive point (x^0, s^0), suppose a sequence ⟨x^k, s^k⟩ is generated by Algorithm 1. Then there is a constant M_1, independent of k, such that

Φ_n(x^k, s^k) ≤ M_1 for all k.
Proof. Let C_3 := (n/2) log(1 + (O(1)‖r^0‖/((x^0)^T s^0))^2) > 0. Using (3.25), we have

Φ_n(x, s) = (n/2) log((x^T s)^2 + ‖r‖^2) − Σ_{j=1}^n log x_j s_j
≤ (n/2) log((x^T s)^2 + (O(1)‖r^0‖ x^T s/((x^0)^T s^0))^2) − Σ_{j=1}^n log x_j s_j (using (3.25))
= n log x^T s − Σ_{j=1}^n log x_j s_j + C_3. (3.27)
In the following we analyze the behavior of n log x^T s − Σ_{j=1}^n log x_j s_j. Let ι > 0 be sufficiently large so that

ζ − (ρ − n) log(1 − 1/n^{ι−0.5}) < 0. (3.28)

Now consider the case where Φ_n(x, s) ≥ n log x^T s − Σ_{j=1}^n log x_j s_j ≥ ι n log n. In this case we have

n log x^T s + n log D_min^{-2} ≥ ι n log n,
⇒ log(x^T s D_min^{-2}) ≥ ι log n,
⇒ Φ_n(x_+, s_+) − Φ_n(x, s) ≤ ζ − (ρ − n) log(1 − D_min^2 √n/(x^T s)) (using (3.23))
≤ ζ − (ρ − n) log(1 − 1/n^{ι−0.5}) < 0.
The above relations suggest that using the theoretical direction with a fixed step size decreases the value of Φ_n(x, s) whenever this value is larger than ι n log n. Next we consider the case where n log x^T s − Σ_{j=1}^n log x_j s_j ≤ ι n log n, and the iterate is computed using any favored search direction with inexact and different step sizes in the primal and dual spaces. We show that the value of Φ_n(x̄_+, s̄_+) in this case is upper bounded by a constant amount independent of k. Using (3.24) and (3.27), we have

Φ_n(x̄_+, s̄_+) ≤ Φ_n(x, s) + ζ − (ρ − n) log(1 − D_min^2 √n/(x^T s)) + C_2 (using (3.24))
≤ ζ − (ρ − n) log(1 − D_min^2 √n/(x^T s)) + ι n log n + C_2 + C_3 (using (3.27))
≤ ζ − (ρ − n) log(1 − 1/√n) + ι n log n + C_2 + C_3.

Consequently, it follows that Lemma 3.12 holds with

M_1 := max{ζ − (ρ − n) log(1 − 1/√n) + ι n log n + C_2 + C_3, Φ_n(x^0, s^0)}.
Lemma 3.13. Under the hypothesis of Lemma 3.12, let M_1 be a fixed constant such that Φ_n(x^k, s^k) ≤ M_1 for all k. Then there exists a fixed M_2 (independent of k) such that

min_j x^k_j s^k_j / ((x^k)^T s^k) ≥ M_2.
Proof. As usual, the index k is dropped for simplification. Using the result in Lemma 3.12, we have

M_1 ≥ Φ_n(x, s) = (n/2) log((x^T s)^2 + ‖r‖^2) − Σ_{j=1}^n log x_j s_j ≥ n log x^T s − Σ_{j=1}^n log x_j s_j.

Hence, it follows that

log x^T s − log x_i s_i ≤ M_1 − (n − 1) log x^T s + Σ_{j≠i} log x_j s_j
≤ M_1 − (n − 1) log x^T s + (n − 1) log((x^T s − x_i s_i)/(n − 1))
≤ M_1 − (n − 1) log(n − 1) =: −log M_2,

so that x_i s_i/(x^T s) ≥ M_2 for every i.
Now let

σ(x) := {j ∈ {1, 2, . . . , n} : x_j ≥ s_j} and
σ(s) := {j ∈ {1, 2, . . . , n} : s_j ≥ x_j}.
Combining Lemmas 3.12 and 3.13, the following theorem shows that the sequence〈xk, sk〉 generates a maximal complementary solution for HMCP in the limit.
Theorem 3.14. Under the hypothesis of Lemma 3.12, the limit point of thesequence 〈xk, sk〉 is a maximal complementary solution for HMCP.
Proof. Let (x∗, s∗) be any maximal complementary solution for HMCP such that

s∗ = F(x∗) and (x∗)^T s∗ = 0.

Let (x∞, s∞) be a limit point of the sequence 〈xk, sk〉. In the following we show that (x∞, s∞) is a maximal complementary solution, that is, σ(x∞) = σ(x∗) and σ(s∞) = σ(s∗). Let |r|^T := (|r_1|, . . . , |r_n|) and C4 := O(1) (|r^0|^T x∗)/((x^0)^T s^0) ≥ 0. Using Assumption 3.1, we have

r^T x∗ ≤ |r|^T x∗ ≤ O(1) (x^T s) (|r^0|^T x∗)/((x^0)^T s^0) = C4 x^T s.    (3.29)
From the monotonicity of F(x), for any positive solution (x, s) > 0, we have

(x − x∗)^T (F(x) − F(x∗)) ≥ 0
⇒ −x^T s∗ − (s − r)^T x∗ ≥ 0
⇒ x^T s∗ + s^T x∗ ≤ r^T x∗ ≤ C4 x^T s    (using (3.29))
⇒ Σ_{j∈σ(x∗)} x∗_j s_j + Σ_{j∈σ(s∗)} s∗_j x_j ≤ C4 x^T s.
As a result, if j ∈ σ(x∗), then

C4 x^T s ≥ s_j x∗_j ≥ (s_j x_j)(x∗_j / x_j) ≥ (min_j x_j s_j)(x∗_j / x_j).

Using Lemma 3.13, the above relation implies

x_j ≥ ((min_j x_j s_j)/(C4 x^T s)) x∗_j ≥ (M2/C4) x∗_j.

Hence, we have x∞_j ≥ (M2/C4) x∗_j > 0, and consequently σ(x∗) = σ(x∞). The proof of σ(s∗) = σ(s∞) can be constructed in a similar way.
4. Implementation details. In this section we present the implementation details of a potential reduction homogeneous algorithm for the MCP using the potential function given in Section 3. We describe the details in the context of solving a general convex optimization problem (CP).
4.1. Homogeneous formulation for convex programming. Consider a primal CP

min c(x)    (4.1)
s.t. a_i(x) ≥ 0, i = 1, . . . , m̄,
     a_i(x) = 0, i = m̄ + 1, . . . , m,
     x̄ ≥ 0, and x̂ is free,

where x ∈ R^n represents the primal variables. We let n̄ + n̂ = n and let x = (x̄; x̂) ∈ R^n̄ × R^n̂, where x̄ and x̂ respectively represent the "normal" and "free" primal variables. Here, the function c(·) : R^n → R is convex, the component functions a_i(·) : R^n → R, i = 1, . . . , m̄, are concave, and the component functions a_i(·) : R^n → R, i = m̄ + 1, . . . , m, are affine.
We let m̄ + m̂ = m and y = (ȳ; ŷ) ∈ R^m̄ × R^m̂, where ȳ and ŷ respectively represent the "normal" and "free" dual variables. Let s ∈ R^n̄ represent the vector of dual slacks. Let ω := (x; y; τ; z; s; κ). In the following we use the notation

Ωx := { x = (x̄; x̂) : x̄ ∈ R^n̄_{++}, x̂ ∈ R^n̂ },
Ωy := { y = (ȳ; ŷ) : ȳ ∈ R^m̄_{++}, ŷ ∈ R^m̂ },
Ω := { ω : ω ∈ Ωx × Ωy × R^{m̄+n̄+2}_{++} }.
Moreover, we assume that the functions c(·) and a_i(·), i = 1, . . . , m̄, are at least twice differentiable on the set Ωx, and that the matrix

[ [Dz 0; 0 0]      ∇a(x)
  ∇a(x)^T   −∇²_x L(x, y) − [Dx 0; 0 0] ]

is nonsingular for any positive semidefinite matrices Dx ∈ R^{n̄×n̄} and Dz ∈ R^{m̄×m̄}. Here

L(x, y) := c(x) − y^T a(x)
denotes the Lagrangian function, and we use the notation

∇x L(x, y) := ( ∂L(x, y)/∂x_1, . . . , ∂L(x, y)/∂x_n )

to denote the gradient vector of L(x, y) associated with x, and use

∇²x L(x, y) := [ ∂²L(x, y)/∂x_1²       · · ·   ∂²L(x, y)/∂x_1∂x_n
                 ...                    . . .   ...
                 ∂²L(x, y)/∂x_n∂x_1    · · ·   ∂²L(x, y)/∂x_n² ]
to denote the Hessian matrix of L(x, y) associated with x.

A dual problem associated with (4.1) is

min L(x, y) − x^T s = L(x, y) − ∇x L(x, y) x    (4.2)
s.t. ∇x L(x, y)^T = (s; 0),
     x̄, ȳ, s ≥ 0, and x̂, ŷ are free.
Using (4.1) and (4.2), one can construct the optimality conditions for (4.1) as

∇x L(x, y)^T = (s; 0),    (4.3)
a(x) = (z; 0),
0 ≤ (s, ȳ) ⊥ (x̄, z) ≥ 0,
x̂, ŷ are free.

Here z ∈ R^m̄ represents the vector of surplus variables corresponding to the concave functions a_i(·), i = 1, . . . , m̄.
Problem (4.3) is a monotone complementarity problem with equality constraints and free variables. Following Andersen and Ye [11], one can formulate a homogenized monotone complementarity problem for (4.3) as follows:
τ ∇x L(x/τ, y/τ)^T = (s; 0),    (4.4)
τ a(x/τ) = (z; 0),
−x^T ∇x L(x/τ, y/τ)^T − y^T a(x/τ) = κ,
0 ≤ (s, ȳ, κ) ⊥ (x̄, z, τ) ≥ 0,
x̂, ŷ are free,
where τ and κ are introduced homogeneous variables.Given a point ω ∈ Ω, let us define the residual vectors as
rvd := τg − (s; 0),
rvp := τa(x/τ)− (z; 0),
rvg := −κ− xT g − yTa(x/τ),
and
J := ∇a(x/τ),
g := ∇xL(x/τ, y/τ)T = ∇c(x/τ)T − JT (y/τ).
Let
(rp; rd; rg) := ηr := −η(rvp; rvd; rvg).
Let X̄ := diag(x̄) and Ȳ := diag(ȳ). Starting from ω, the (perturbed-)Newton method solves the following system of linear equations to compute the direction dω := (dx; dz; dτ; dy; ds; dκ):

[ H      v2    −J^T  ] [ dx ]   [ (ds; 0) ]   [ r_d ]
[ v1^T   Hg    −v3^T ] [ dτ ] − [ dκ      ] = [ r_g ]    (4.5)
[ J      v3     0    ] [ dy ]   [ (dz; 0) ]   [ r_p ]
SOLVING CONVEX PROGRAMMING USING INTERIOR POINT METHOD 19
and

X̄ ds + S̄ dx̄ = r_xs := γ μ(ω) e − X̄ s,
Z dȳ + Ȳ dz = r_zy := γ μ(ω) e − Z ȳ,    (4.6)
τ dκ + κ dτ = r_τκ := γ μ(ω) − τ κ,

where

μ(ω) := ((x̄; z; τ)^T (s; ȳ; κ))/(n̄ + m̄ + 1),
H := ∇²x L(x/τ, y/τ),
v1 := −∇c(x/τ)^T − H (x/τ),
v2 := ∇c(x/τ)^T − H (x/τ),
v3 := a(x/τ) − J (x/τ),
Hg := (x/τ)^T H (x/τ).
We can verify that

[ H      v2    −J^T
  v1^T   Hg    −v3^T
  J      v3     0    ]

is a positive semidefinite matrix. Hence, the complementarity problem (4.4) is monotone.
The coefficient matrix in the system of linear equations (4.5) and (4.6) is in general sparse. By performing simple block elimination on the linear system, we obtain the following alternative formulation, which is an augmented system form [14] with an additional row and column:

[ Dz      J      v3 ] [ dy ]   [ r_p + (Ȳ⁻¹ r_zy; 0) ]    [ r̃_p ]
[ −J^T    Dx     v2 ] [ dx ] = [ r_d + (X̄⁻¹ r_xs; 0) ] =: [ r̃_d ]    (4.7)
[ −v3^T   v1^T   Dg ] [ dτ ]   [ r_g + τ⁻¹ r_τκ      ]    [ r̃_g ]
where

Dx := H + [ X̄⁻¹S̄ 0; 0 0 ],   Dz := [ Z Ȳ⁻¹ 0; 0 0 ],   and   Dg := Hg + τ⁻¹ κ.
Now define two vectors u := (v3; −v2) and v := (−v3; v1), and a matrix

M := [ Dz    J
       J^T  −Dx ].

Solving (4.7) is then equivalent to solving the following two augmented systems:

M q = (r̃_p; r̃_d)  and    (4.8)
M p = u.

Therefore, we have obtained the following components of the step direction:

dτ = (r̃_g − v^T q)/(Dg − v^T p)   and   (dy; dx) = q − p dτ.

The remaining components ds, dz, dκ of the step direction can be easily recovered.
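The two solves with M and the recovery of dτ can be sketched numerically as follows. This is our illustration, not the iOptimize implementation: a dense `numpy.linalg.solve` stands in for the sparse MA57 factorization, and M, u, v, the right-hand sides r̃p, r̃d, r̃g, and Dg are assumed already formed.

```python
import numpy as np

def solve_reduced_system(M, u, v, r_p, r_d, r_g, D_g):
    # In practice one factorization of M serves both solves; with scipy
    # one would call lu_factor once and lu_solve twice.
    q = np.linalg.solve(M, np.concatenate([r_p, r_d]))   # M q = (r_p; r_d)
    p = np.linalg.solve(M, u)                            # M p = u
    d_tau = (r_g - v @ q) / (D_g - v @ p)                # scalar component
    d_yx = q - p * d_tau                                 # stacked (d_y; d_x)
    return d_yx, d_tau
```

By construction the returned quantities satisfy M (dy; dx) + u dτ = (r̃p; r̃d) and v^T (dy; dx) + Dg dτ = r̃g, i.e., the full system (4.7) in its eliminated form.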
The matrix M in (4.8) is symmetric and indefinite. To solve this system we can use the Bunch-Parlett symmetric matrix factorization method [14] or the quasi-definite method [37]. The Bunch-Parlett method is known to avoid the numerical instability associated with free variables. In the current test results iOptimize uses the sparse symmetric indefinite matrix factorization library MA57 [16].
4.2. Primal dual updates. Andersen and Ye [11] consider two types of updates: a linear update and a nonlinear update. Let ω = (x; y; τ; z; s; κ) ∈ Ω be a feasible solution and dω = (dx; dy; dτ; dz; ds; dκ) be a search direction. The linear update computes a new solution ω+ as follows:

ω+ := ω + α dω,

where α ≥ 0 is a given step size. The nonlinear update is a direct result of (2.6). As shown in Lemma 2.3, with γ = 1 − η the nonlinear update of the dual variables tends to reduce the infeasibilities and the complementarity gap at the same rate, which is theoretically desirable.
Andersen and Ye [11] propose a merit function to measure the progress of their algorithm. If the merit function is reduced to zero, then a complementary solution is obtained. They show that for a suitable choice of γ, the (perturbed-)Newton direction is a descent direction for the merit function.
In our implementation we use the potential function (3.1) to evaluate the progress of the algorithm. Given a search direction and a step size, we compute two trial solutions using the linear and nonlinear updates. We accept the trial solution that reduces the potential function more. We give pseudo code for updating a solution, and discuss the step size computations in iOptimize in the next section.
4.3. Step size computations. To find a controllable step size, Andersen and Ye employ a simple backtracking line search method equipped with safeguards. They require their merit function to be reduced by checking the Goldstein-Armijo rule.
To avoid line search computations, some standard implementations have used a certain fixed distance (step factor) to the boundary. However, a fixed factor > 0.99 may be overly aggressive in the early phase of an IPM. Mehrotra [25] proposed a heuristic to compute the step factor adaptively. His method adaptively allows for larger (and smaller) step factor values. In our implementation we employ this heuristic to compute step sizes in the primal and dual spaces, and use them to update the point with a linear or nonlinear update. This heuristic does not guarantee the reduction of the potential function value (assuming ∇xL(x, y) satisfies a SLC). In such a case, we choose the theoretical descent direction from solving (4.5) and (4.6) with suitable parameters η and γ that guarantee the reduction of the potential function value, and employ a golden-section line search to find a controllable step size (see Section 4.5 for a detailed strategy).
Procedure computeStepFactor gives pseudo code for computing the primal (σp) and dual (σd) step factors, Procedure computeStepSize for computing the step size (α) in the primal and dual spaces, and Procedure updateSolution for updating a given point ω along a search direction dω using the linear and nonlinear updates. In Procedure computeStepFactor, two parameters, θl (lower bound) and θu (upper bound), are used to safeguard against very tiny or overly aggressive step factors.
4.4. Predictor-corrector technique. The computation of a search directionfrom (4.5) and (4.6) involves a matrix factorization step and subsequent solves withthe factored matrix. The cost of the solves in general is much cheaper, and thus
Procedure 2 (σp, σd) = computeStepFactor(ω, dω)

Input: Current point ω = (x; y; τ; z; s; κ) ∈ Ω, search direction dω = (dx; dy; dτ; dz; ds; dκ), parameters θl and θu.
Output: Primal step factor σp and dual step factor σd.
1: Find the blocking variables:

l_x := argmin_{j=1,...,n̄} { −x̄_j/(dx̄)_j : (dx̄)_j < 0 },
l_s := argmin_{j=1,...,n̄} { −s_j/(ds)_j : (ds)_j < 0 },
l_z := argmin_{i=1,...,m̄} { −z_i/(dz)_i : (dz)_i < 0 },
l_y := argmin_{i=1,...,m̄} { −ȳ_i/(dȳ)_i : (dȳ)_i < 0 },
l_τ := −τ/dτ if dτ < 0,
l_κ := −κ/dκ if dκ < 0.

2: Compute an estimate of the duality gap:

μx := Σ_{j=1}^{n̄} (x̄_j + (dx̄)_j)(s_j + (ds)_j),
μz := Σ_{i=1}^{m̄} (z_i + (dz)_i)(ȳ_i + (dȳ)_i),
μτ := (τ + dτ)(κ + dκ),
μ := ((μx + μz + μτ)(1 − θl)) / (n̄ + m̄ + 1).

3: Compute the step factors for all the variables:

σx := ( μ/(s_{l_x} + (ds)_{l_x}) − x̄_{l_x} ) ( 1/(dx̄)_{l_x} ),
σs := ( μ/(x̄_{l_s} + (dx̄)_{l_s}) − s_{l_s} ) ( 1/(ds)_{l_s} ),
σz := ( μ/(ȳ_{l_z} + (dȳ)_{l_z}) − z_{l_z} ) ( 1/(dz)_{l_z} ),
σy := ( μ/(z_{l_y} + (dz)_{l_y}) − ȳ_{l_y} ) ( 1/(dȳ)_{l_y} ),
στ := ( μ/(κ + dκ) − τ ) ( 1/dτ ),
σκ := ( μ/(τ + dτ) − κ ) ( 1/dκ ).

4: Compute the primal and dual step factors:

σp := max{ θl, min{ σx, σz, στ, θu } },
σd := max{ θl, min{ σs, σy, σκ, θu } }.
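For concreteness, steps 1–4 of the procedure above can be sketched for a single complementarity block. This is our illustrative sketch, not the iOptimize code: only the (x̄, s) pair is shown, the gap estimate is scaled by the block size rather than n̄ + m̄ + 1, and the (z, ȳ) and (τ, κ) blocks would enter the same min/max in step 4.

```python
import numpy as np

def compute_step_factor(x, s, dx, ds, theta_l=0.999, theta_u=1.0):
    """Adaptive step factors for one complementarity block (x, s)."""
    def blocking(v, dv):
        # Index of the variable that first hits the boundary (step 1).
        neg = dv < 0
        if not np.any(neg):
            return None
        ratios = np.full_like(v, np.inf)
        ratios[neg] = -v[neg] / dv[neg]
        return int(np.argmin(ratios))

    lx, ls = blocking(x, dx), blocking(s, ds)

    # Step 2: estimated complementarity gap after a full step.
    mu = (1.0 - theta_l) * np.dot(x + dx, s + ds) / x.size

    # Step 3: step factor so that the blocking product lands near mu.
    sigma_x = theta_u if lx is None else \
        (mu / (s[lx] + ds[lx]) - x[lx]) / dx[lx]
    sigma_s = theta_u if ls is None else \
        (mu / (x[ls] + dx[ls]) - s[ls]) / ds[ls]

    # Step 4: safeguard against tiny or overly aggressive factors.
    sigma_p = max(theta_l, min(sigma_x, theta_u))
    sigma_d = max(theta_l, min(sigma_s, theta_u))
    return sigma_p, sigma_d
```

The safeguards in step 4 guarantee that both factors always lie in [θl, θu], whatever the blocking ratios are.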
Procedure 3 α = computeStepSize(ω, dω)

Input: Current point ω = (x; z; τ; y; s; κ) ∈ Ω and search direction dω = (dx; dz; dτ; dy; ds; dκ).
Output: Primal and dual step size α.
1: Compute the step sizes to the boundary:

αx := min_{j=1,...,n̄} { −x̄_j/(dx̄)_j : (dx̄)_j < 0 },
αs := min_{j=1,...,n̄} { −s_j/(ds)_j : (ds)_j < 0 },
αz := min_{i=1,...,m̄} { −z_i/(dz)_i : (dz)_i < 0 },
αy := min_{i=1,...,m̄} { −ȳ_i/(dȳ)_i : (dȳ)_i < 0 },
ατ := −τ/dτ if dτ < 0,
ακ := −κ/dκ if dκ < 0.

2: Call Procedure (σp, σd) = computeStepFactor(ω, dω) to compute the primal step factor (σp) and the dual step factor (σd).
3: Compute the primal and dual step sizes:

αp := σp × min{ αx, αz, ατ },
αd := σd × min{ αs, αy, ακ }.

4: Set α := min{ αp, αd }.
Procedure 4 ω+ = updateSolution(ω, dω, α)

Input: Current point ω = (x; y; τ; z; s; κ) ∈ Ω, search direction dω = (dx; dy; dτ; dz; ds; dκ), primal and dual step size α.
Output: Updated point ω+ ∈ Ω.
1: Compute the linear update:

ωl := ω + α (dx; dy; dτ; dz; ds; dκ).

2: Compute the nonlinear update:

xn := x + α dx,
yn := y + α dy,
τn := τ + α dτ,
zn_i := (1 − αη)(−r^v_p)_i + τn a_i(xn/τn),  i = 1, . . . , m̄,
sn_j := (1 − αη)(−r^v_d)_j + τn ∇x L_j(xn/τn, yn/τn)^T,  j = 1, . . . , n̄,
κn := (1 − αη)(−r^v_g) − (xn)^T ∇x L(xn/τn, yn/τn)^T − (yn)^T a(xn/τn).

3: Check if it is beneficial to take the linear update:
4: if Φρ(ωn) ≥ Φρ(ωl) then
5:   Set ω+ := ωl.
6: else
7:   Set ω+ := ωn.
8: end if
it leads to the idea of reusing the factorization of the linear system to compute a better search direction. Several approaches have been developed based on this idea; for example, the Mehrotra corrector [25] and Gondzio's higher-order correctors [18]. The presented empirical results are based on using the Mehrotra corrector only. We now describe an adaptation of this technique to our context.
Let ωk ∈ Ω be the current point. We first compute an affine (pure Newton) direction by solving (4.5) and (4.6) with η = 1 and γ = 0, that is,

r_p := −r^{vk}_p,   r_xs := −X̄^k s^k,
r_d := −r^{vk}_d,   r_zy := −Z^k ȳ^k,
r_g := −r^{vk}_g,   r_τκ := −τ^k κ^k.
The resulting direction is denoted by daω = (dax; daz; daτ; day; das; daκ). Next we compute a step size αa by calling Procedure αa = computeStepSize(ωk, daω), where ωk and daω are the input and αa is the output, and then compute a tentative point ωa by calling Procedure ωa = updateSolution(ωk, daω, αa), where ωk, daω, and αa are the input and ωa is the output.
Mehrotra suggested determining the barrier parameters γa and ηa by using the predicted reduction in the complementarity gap μ(ωa) along the affine direction. The idea behind this heuristic is to reduce the centrality parameter γa more when the IPM can make good progress towards optimality in the affine direction. In our implementation we compute the barrier parameters γa and ηa as follows:

γa = min{ 0.99, max{ (μ(ωa)/μ(ωk))^3, 0.01 } },
ηa = 1 − γa.
After a full step along the affine direction the complementarity products become

(X̄^k + D^a_x)(s^k + d^a_s) = D^a_x d^a_s,
(Z^k + D^a_z)(ȳ^k + d^a_y) = D^a_z d^a_y,
(τ^k + d^a_τ)(κ^k + d^a_κ) = d^a_τ d^a_κ,

where D^a_x := diag(d^a_x̄) and D^a_z := diag(d^a_z). Since the new complementarity products ideally should be equal to μ(ωa) e, the new search direction compensates for this second-order error and thus is computed with the following right-hand side:

r_p := −ηa r^v_p,   r_xs := γa μ(ωa) e − X̄^k s^k − D^a_x d^a_s,
r_d := −ηa r^v_d,   r_zy := γa μ(ωa) e − Z^k ȳ^k − D^a_z d^a_y,
r_g := −ηa r^v_g,   r_τκ := γa μ(ωa) − τ^k κ^k − d^a_τ d^a_κ.
The resulting direction here is called “Mehrotra direction” and is denoted by dmω .
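The barrier-parameter choice and the second-order correction for the scalar (τ, κ) pair can be sketched as follows. This is an illustration of ours; the vector blocks r_xs and r_zy are built the same way, with diagonal matrices in place of scalars.

```python
def corrector_tau_kappa(tau, kappa, d_tau_a, d_kappa_a, mu_a, mu_k):
    # gamma^a shrinks (more aggressive centering reduction) when the
    # affine step predicts a large drop in the gap mu_a / mu_k.
    gamma_a = min(0.99, max((mu_a / mu_k) ** 3, 0.01))
    eta_a = 1.0 - gamma_a
    # Target gamma^a mu(omega^a); remove the current product and the
    # second-order error d_tau^a d_kappa^a left by the affine step.
    r_tau_kappa = gamma_a * mu_a - tau * kappa - d_tau_a * d_kappa_a
    return gamma_a, eta_a, r_tau_kappa
```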
4.5. Search guided by the potential function. We now give the implementation logic in iOptimize that combines the predictor and corrector search directions at each iteration. At the core of our implementation is ensuring a sufficient reduction in the potential function at each iteration, while using the Mehrotra direction to further improve the reduction in the potential function.
At each iteration k we first check if Φn+m̄+1(ωk) ≤ ι(n + m̄ + 1) log(n + m̄ + 1). This safeguard is used to make sure that our algorithm generates a maximal complementary solution. If it is not satisfied, then we use the theoretical direction computation phase; otherwise we compute a Mehrotra direction dmω and use it to generate a tentative step
size αm by calling Procedure αm = computeStepSize(ωk, dmω), where ωk and dmω are the input and αm is the output. Then we compute a tentative point ωm by calling Procedure ωm = updateSolution(ωk, dmω, αm), where ωk, dmω, and αm are the input and ωm is the output. Though dmω generally enhances convergence, it does not guarantee a reduction of the potential function value, or maintain the overall decrease in the infeasibility to complementarity ratio (Assumption 3.1). Hence we determine the quality of the Mehrotra direction by evaluating the potential function value at ωm and checking whether ωm satisfies Assumption 3.1. We note that checking 3.1(ii) requires knowledge of the scaled Lipschitz constant, which may not be available in practice; hence, we simply check whether μ(ωm) < μ(ωk). If Φρ(ωm) < Φρ(ωk) and Assumption 3.1 is satisfied, we set ωk+1 := ωm; otherwise we compute a theoretical direction dtω from (4.5) and (4.6), iteratively employ a golden-section line search method [13] to find a controllable step size αt, and use it to compute a new tentative point ωt by calling Procedure ωt = updateSolution(ωk, dtω, αt), where ωk, dtω, and αt are the input and ωt is the output. We terminate the line search once the potential function achieves a sufficient decrease.
Note that if the scaled Lipschitz constant is large, then many function evaluations may be needed in the golden-section line search. To avoid this we simply take the step after 4 iterations of the golden-section method, and monitor the progress of the potential function over multiple interior point iterates. We ensure that the potential function is eventually reduced over these multiple iterates.
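A minimal sketch of this safeguarded golden-section search (ours, under the assumption that `potential` maps a step size to the potential function value at the corresponding trial point):

```python
PHI_INV = (5 ** 0.5 - 1) / 2   # 1/phi, the golden-section ratio ~0.618

def golden_section_step(potential, alpha_max, max_iters=4):
    """Golden-section search for a step size reducing a potential function.

    Stops after `max_iters` shrinking iterations (the safeguard in the
    text) and returns the best interior point seen so far.
    """
    lo, hi = 0.0, alpha_max
    a = hi - PHI_INV * (hi - lo)
    b = lo + PHI_INV * (hi - lo)
    fa, fb = potential(a), potential(b)
    for _ in range(max_iters):
        if fa <= fb:                       # minimum lies in [lo, b]
            hi, b, fb = b, a, fa
            a = hi - PHI_INV * (hi - lo)
            fa = potential(a)
        else:                              # minimum lies in [a, hi]
            lo, a, fa = a, b, fb
            b = lo + PHI_INV * (hi - lo)
            fb = potential(b)
    return a if fa <= fb else b
```

Each iteration reuses one of the two interior evaluations, so the safeguarded search costs at most max_iters + 2 potential function evaluations.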
A flowchart of the implementation of search direction computations is given inFigure 4.1.
4.6. Starting point and termination criteria. Andersen and Ye [11] show that the HMCP IPM achieves consistently good performance even for the trivial starting point

(x̄0; x̂0; ȳ0; ŷ0; τ0; z0; s0; κ0) = (e; 0; e; 0; 1; e; e; 1).
This trivial starting point is used for the results reported in this paper. However, it might be possible to further improve the computational results by implementing a (better) nontrivial starting point.
Another important issue is the termination criteria. We choose the termination criteria used by Andersen and Ye [11] with slight modifications. Define the primal and dual objective values by

PriObj := c(x/τ)   and   DualObj := L(x/τ, y/τ) − ∇x L(x/τ, y/τ)(x/τ).
Let r^{vk}_p, r^{vk}_d, and r^{vk}_g be the residual vectors corresponding to the kth iterate. We terminate the algorithm whenever one of the following criteria is satisfied:
• For QP, an (approximate) optimal solution is obtained if

‖r^{vk}_p‖ / (τ(1 + ‖b‖)) < 10⁻⁸,
‖r^{vk}_d‖ / (τ(1 + ‖c‖)) < 10⁻⁸,
min{ |DualObj − PriObj| / (1 + |DualObj|),  |r^{vk}_g| / (1 + |r^{v0}_g|) } < 10⁻⁸.
[Fig. 4.1: Implementation of search direction computations. Flowchart summary: given ωk, test whether Φn+m̄+1(ωk) ≤ ι(n + m̄ + 1) log(n + m̄ + 1). If yes, compute the Mehrotra direction dm from (4.5) and (4.6) with RHS (4.10), call computeStepSize and updateSolution to obtain ωm, and set ωk+1 := ωm when Φρ(ωm) decreases and Assumption 3.1 is satisfied. Otherwise compute the centering (theoretical) direction dt with γ = η = 0.5, find the step size αt by the golden-section line search (stopping once the maximum number of line search iterations is reached), call updateSolution to obtain ωt, and set ωk+1 := ωt once its potential function value decreases.]
• For QCQP and CP, an (approximate) optimal solution is obtained if

‖r^{vk}_p‖ / max{1, ‖r^{v0}_p‖} < 10⁻⁸,
‖r^{vk}_d‖ / max{1, ‖r^{v0}_d‖} < 10⁻⁸,
|r^{vk}_g| / max{1, |r^{v0}_g|} < 10⁻⁶.
• For QP, QCQP, and CP, the problem is (near) infeasible if

μ(ωk) < 10⁻⁸ μ(ω0)   and   τk / min{1, κk} < 10⁻¹² τ0/κ0.
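The checks above can be sketched as a single routine. This is our sketch, not the iOptimize code: the residual norms and initial quantities are assumed precomputed, the QCQP/CP scalings are shown, and the QP variant with the ‖b‖ and ‖c‖ scalings is analogous.

```python
def check_termination(res_p, res_d, res_g, res_p0, res_d0, res_g0,
                      mu_k, mu_0, tau_k, kappa_k, tau_0, kappa_0):
    """Return "optimal", "infeasible", or "continue" for the kth iterate."""
    # (Approximate) optimality: scaled primal, dual, and gap residuals.
    optimal = (res_p / max(1.0, res_p0) < 1e-8 and
               res_d / max(1.0, res_d0) < 1e-8 and
               res_g / max(1.0, res_g0) < 1e-6)
    # (Near) infeasibility: gap collapsed and tau vanishing relative to kappa.
    infeasible = (mu_k < mu_0 * 1e-8 and
                  tau_k / min(1.0, kappa_k) < (tau_0 / kappa_0) * 1e-12)
    if optimal:
        return "optimal"
    if infeasible:
        return "infeasible"
    return "continue"
```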
We note that Andersen [9] developed an infeasibility certificate for homogeneous monotone complementarity problems that provides further information about whether the primal or the dual problem is infeasible. This feature, however, has not been implemented in the current version of iOptimize.
5. Computational results. iOptimize was compiled with the Visual Studio2010 compiler. All computations were performed on a 3.2 GHz Intel Dual-Core CPUmachine with 4GB RAM. The program is run on one CPU only. An AMPL [1] interface
Parameter     Value             Explanation of the parameter
θ             n + m̄ + 1         Parameter in the potential function (3.1)
ρ             100               Parameter in the potential function (3.1)
(θl, θu)      (0.999, 1.0)      Parameters for QP in Procedure computeStepFactor
(θl, θu)      (0.9, 1.0)        Parameters for QCQP and CP in Procedure computeStepFactor
ι             (n + m̄ + 1)²      Parameter in (3.28)
(Θ(1), Ω(1))  (10⁻², 10²)       Parameters in Assumption 3.1

Table 5.1: The default parameter settings in iOptimize
is implemented in iOptimize to read the AMPL nonlinear models and the corresponding first and second derivatives. All problems are solved using the same default parameter settings, which are shown in Table 5.1.
To provide a clearer summary of the results, we use performance profiles [15]. Consider a set A of na algorithms, a set P of np problems, and a performance measure mp,a, e.g., iteration count or computation time. We compare the performance on problem p by algorithm a with the best performance by any algorithm on this problem using the performance ratio

rp,a = mp,a / min{ mp,a : a ∈ A }.

We then obtain an overall assessment of the performance of an algorithm by defining the value

ρa(τ) = (1/np) size{ p ∈ P : rp,a ≤ τ }.
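The profile computation itself is a few lines (our sketch; rows of the measure matrix index problems, columns index algorithms):

```python
import numpy as np

def performance_profile(m, taus):
    """Compute rho_a(tau) from a measure matrix m
    (rows: problems p, columns: algorithms a)."""
    m = np.asarray(m, dtype=float)
    best = m.min(axis=1, keepdims=True)   # best measure per problem
    ratios = m / best                     # r_{p,a}
    n_p = m.shape[0]
    # Fraction of problems with ratio within a factor tau of the best.
    return {tau: (ratios <= tau).sum(axis=0) / n_p for tau in taus}
```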
5.1. Computational results on feasible problems. We solve the convex QP problems from Maros and Mezaros's QP test set [24], the convex QCQP problems from Mittelmann's QCQP test set [5] and Vanderbei's AMPL nonlinear Cute and Non-Cute set [2], and the general convex CP problems from Vanderbei's AMPL nonlinear test set [2] and from Leyffer's mixed-integer nonlinear test set [6], ignoring the integrality requirements.
Tables A.1–A.4 give the computational results that are obtained with the default parameter settings. Each row in these tables contains the profiles of the problems before and after presolving, the primal and the dual objectives, the number of iterations (Avg Iters), the computation times (Presolve Time and Solver Time), the proportion of the effort spent evaluating the potential function (Prop Φ (%)), and the average number of search directions used (Avg Dirs); here two means iOptimize uses the pure Newton direction and the Mehrotra corrector, whereas one means iOptimize uses the theoretical direction only. We note that the QCQP problems in Mittelmann's test set are equivalent to those of Maros and Mezaros's QP test set, as each QP problem is reformulated as a QCQP problem with an additional quadratic constraint and a free variable.
Tables A.5–A.8 provide a comparison between the MOSEK homogeneous and self-dual interior point optimizer and iOptimize under their default settings. We focus on the number of iterations needed to achieve the desired accuracy by both solvers.
Table A.5 gives the computational results for solving 127 QP problems. The average numbers of iterations used by iOptimize (15.92 iters) and MOSEK (16.22 iters) are comparable. All problems can be solved within 50 iterations by both solvers. Note that for three problems (liswet4, powell20, and qpilotno) MOSEK terminates with NEAR OPTIMAL status. This implies that MOSEK cannot compute a solution with the prescribed accuracy.
Table A.6 gives the computational results for solving Mittelmann's QCQP test set problems. Here two different average performance results are provided. "Avg1" provides the average performance on problems for which MOSEK terminates with OPTIMAL, NEAR OPTIMAL, or UNKNOWN status. For problems QQ-aug2dqp and QQ-powell20, MOSEK is not able to converge to the optimal solution within 400 iterations; we exclude both problems from "Avg1". For problems QQ-laser, QQ-qforplan, and QQ-qseba, MOSEK requires more than 50 interior point iterations to solve successfully. We recompute the average performance statistics in "Avg2" by further excluding these three examples. Considering the average performances in these experiments, iOptimize (Avg1=16.43 iters, Avg2=16.26 iters) is significantly better than MOSEK (Avg1=20.91 iters, Avg2=18.34 iters). Note that for QQ-liswet1 and QQ-liswet7–QQ-liswet12, MOSEK terminates with UNKNOWN status. This may happen when MOSEK converges slowly, so that the primal, dual, or gap residuals do not attain the prescribed accuracy. On the other hand, iOptimize is able to terminate with OPTIMAL status for all the test problems within 40 iterations.
Table A.7 contains the computational results for solving 11 QCQP problems in Vanderbei's AMPL nonlinear test set. Compared with those in Mittelmann's QCQP test set, the problems here are relatively small, but may include a quadratic objective term and more than one quadratic constraint. Considering the average iterations required for solving the problems, both solvers generate similar computational results (MOSEK Avg=11.36 iters versus iOptimize Avg=11.64 iters).
Table A.8 contains the computational results for solving 46 general CP problems. Thirty-five of the CP problems are selected from Vanderbei's AMPL nonlinear Cute and Non-Cute set, and 11 are selected from Leyffer's mixed-integer nonlinear test set. Two different average performance statistics are provided. "Avg1" provides the average performance on all the instances except problem dallass, for which MOSEK fails to converge to an optimal solution within 400 iterations. For problems dallasm, dallasl, and elena-s383, MOSEK respectively requires 128, 74, and 85 iterations before terminating with OPTIMAL status. Thus we recompute the average statistics in "Avg2" by additionally excluding these three problems. Comparing "Avg1", the average number of iterations used by iOptimize (14.00 iters) is better than that used by MOSEK (18.46 iters). Excluding the problems for which MOSEK requires more than 50 iterations ("Avg2"), the performance of the two solvers becomes comparable, though iOptimize (12.69 iters) still performs slightly better than MOSEK (13.07 iters). Note that for problems antenna-antenna2, firfilter-fir convex, and wbv-lowpass2, MOSEK terminates with NEAR OPTIMAL status.
The average number of directions used by iOptimize is ≈ 1.93, i.e., the Mehrotra direction was not used in about 7% of the iterations. This implies that the average proportion of the computations spent on the theoretical direction with a fixed step size is not significant. In these experiments we observed that Assumption 3.1 and the requirement Φn+m̄+1(ωk) ≤ ι(n + m̄ + 1) log(n + m̄ + 1) are always satisfied over all the iterates, implying that iOptimize uses the theoretical direction only in the case
where the Mehrotra direction fails to reduce the potential function value. We also note that the average proportion of the effort used for the potential function evaluations is 3.1%, suggesting that the computational demand for evaluating the potential function is also not significant, while the implemented potential reduction algorithm guarantees global linear polynomial-time convergence.
Figure 5.1 shows the performance profiles comparing the iteration performance of iOptimize and MOSEK on feasible problems.
[Fig. 5.1: Performance profiles (iterations) of iOptimize and MOSEK on feasible problems: (a) Maros and Mezaros's QP feasible problems; (b) Mittelmann's QCQP feasible problems; (c) Vanderbei's QCQP feasible problems; (d) general CP feasible problems.]
5.2. Computational results on infeasible problems. In this section we test the performance of iOptimize for solving infeasible QCQP and CP problems. Given the lack of an infeasible test set in the public domain, we create infeasible test problems by adding invalid objective constraints to the feasible problems.
Let x∗ be an optimal solution of problem (4.1). We create a corresponding infeasible problem of the form

min c(x)    (5.1)
s.t. a_i(x) ≥ 0, i = 1, . . . , m̄,
     a_i(x) = 0, i = m̄ + 1, . . . , m,
     c(x) ≤ c(x∗) − λ,
     x̄ ≥ 0, and x̂ is free,

where λ is a positive constant. We choose

λ := { 2|c(x∗)|, if |c(x∗)| ≥ 100,
       100,      if |c(x∗)| < 100.
Hence, the problems are made significantly infeasible. Using this construction we create 127 infeasible QCQP problems from Maros and Mezaros's QP test set [24], 11 infeasible QCQP problems from Vanderbei's AMPL nonlinear Cute and Non-Cute set [2], 35 infeasible CP problems from Vanderbei's AMPL nonlinear Cute and Non-Cute set [2], and 11 infeasible CP problems from Leyffer's mixed-integer nonlinear test set [6] without integrality requirements.
Tables A.9–A.11 give the computational results that are obtained with the defaultparameter settings. The number of iterations (Avg Iters), the computation times (Pre-solve Time and Solver Time), the average number of search directions used (Avg Dirs),and the proportion of the effort in evaluating the potential function (Prop Φ (%)) aregiven. Tables A.12–A.14 provide a comparison between MOSEK and iOptimize undertheir default settings.
We focus on the number of iterations needed to detect infeasibility by both solvers. Table A.12 contains the computational results for solving 127 infeasible QCQP problems created from Maros and Mezaros's QP test set. For 38 problems MOSEK fails to detect infeasibility within 400 iterations. For problems iQQ-qscagr25
and iQQ-qstandat, MOSEK terminates with UNKNOWN status. We compute the average iterations in "Avg1" by excluding these 38 examples. Comparing "Avg1", iOptimize (28.15 iters) requires approximately 30% fewer iterations than MOSEK (42.20 iters). This is because MOSEK requires more than 60 iterations to detect infeasibility in 12 problems. We further exclude these 12 problems and recompute the average iterations in "Avg2". Comparing "Avg2", the performance of iOptimize (28.15 iters) is comparable to that of MOSEK (27.39 iters). However, iOptimize is able to correctly detect infeasibility for each problem within 50 interior point iterations.
Table A.13 reports the computational results for solving the 11 infeasible QCQP problems created from Vanderbei's AMPL nonlinear test set. We provide the average statistics by excluding problem ipolak4, which MOSEK fails to solve within 400 iterations. Though the comparison shows that iOptimize (24.60 iters) requires approximately 20% more iterations than MOSEK (20.30 iters), iOptimize correctly solves all these infeasible problems.
Table A.14 reports the computational results for solving the 46 infeasible CP problems created from Vanderbei's AMPL nonlinear Cute and Non-Cute set and Leyffer's mixed-integer nonlinear test set. For problems idallasl, idallasm, and idallass, MOSEK receives a function evaluation error from AMPL; iOptimize receives the same error while solving problem idallasl. In seven problems MOSEK fails to declare the infeasible status within 400 iterations. We compute the average statistics in "Avg1" by excluding these 10 problems. Comparing "Avg1", MOSEK (66.33 iters) requires significantly more iterations than iOptimize (19.44 iters). This happens because MOSEK requires more than 60 iterations to detect infeasibility in 13 problems, whereas iOptimize needs this in only one problem. We further exclude these problems and recompute the average statistics in "Avg2". Comparing "Avg2", iOptimize (16.04 iters) requires 60% more iterations than MOSEK (11.74 iters). This is because MOSEK is
able to detect infeasibility during the preprocessing phase in 11 cases; on the other hand, the iOptimize preprocessor does not detect any infeasibility. This highlights the importance of advanced preprocessing implementations, which are not the focus of the research described in the current paper.
Note that for problem antenna-iantenna2, iOptimize requires 186 iterations, which is significantly larger than the average (28.07 iters). With fixed step factor parameter settings (θl, θu) = (0.9, 0.9), iOptimize can solve this problem in 46 iterations.
Observe that the average number of directions used for these problems is around 1.39, which is significantly smaller than that used for solving the feasible problems. This suggests the importance of using the theoretical direction and the potential function to decide the usefulness of a direction.
Figure 5.2 shows the performance profiles comparing the iteration performance of iOptimize and MOSEK on the infeasible problems. Note that problems which iOptimize or MOSEK fails to solve, or solves during the preprocessing phase, are not considered.
1 1.2 1.4 1.6 1.8 2 2.2 2.4 2.6 2.8 30.5
0.55
0.6
0.65
0.7
0.75
0.8
0.85
0.9
0.95
1Performance Profile − Iterations
τ
Pro
babi
lity
MosekiOptimize
(a) Mittelmann’s QCQP infeasible problems
1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 20.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1Performance Profile − Iterations
τ
Pro
babi
lity
MosekiOptimize
(b) Vanderbei’s QCQP infeasible problems
1 2 3 4 5 6 7 8 9
0.4
0.5
0.6
0.7
0.8
0.9
1Performance Profile − Iterations
τ
Pro
babi
lity
MosekiOptimize
(c) General CP infeasible problems
Fig. 5.2: Performance profiles of iOptimize and MOSEK on infeasible problems
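Performance profiles of the kind shown in Figure 5.2 follow Dolan and Moré [15]: for each solver s, ρ_s(τ) is the fraction of problems on which s needs at most τ times the iterations of the best solver on that problem. A minimal sketch of this computation, using hypothetical iteration counts (not the paper's data):

```python
import numpy as np

def performance_profile(iters, tau_grid):
    """iters: (num_problems, num_solvers) iteration counts (np.inf = failure).
    Returns rho[k, s] = fraction of problems on which solver s is within a
    factor tau_grid[k] of the best solver's iteration count."""
    iters = np.asarray(iters, dtype=float)
    best = iters.min(axis=1, keepdims=True)   # best solver per problem
    ratios = iters / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in tau_grid])

# Hypothetical iteration counts: 4 problems, 2 solvers.
iters = [[10, 15],
         [20, 18],
         [12, np.inf],   # second solver failed on this problem
         [30, 30]]
rho = performance_profile(iters, tau_grid=[1.0, 1.5, 3.0])
print(rho)
```

A solver whose curve sits higher at τ = 1 wins on more problems outright; a curve that rises to 1 quickly indicates robustness across the test set.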
SOLVING CONVEX PROGRAMMING USING INTERIOR POINT METHOD 31
5.3. Computational comparison with general nonlinear solvers. In this section we compare iOptimize with the general nonlinear solvers Knitro and Ipopt on CP feasible and infeasible problems. Computational results on QP and QCQP problems are not provided because these problems are stored in MOSEK QPS format, which Ipopt and Knitro do not support. Note that Knitro provides three different algorithms for solving nonlinear problems; in our comparison we choose the direct primal-dual IPM. For Ipopt, we use the MA57 library for the sparse symmetric indefinite matrix factorization, the same library used in iOptimize. Default parameter settings are used for both solvers except that the maximum number of iterations allowed is set at 400.
We first focus on the computational results for solving the CP feasible problems. The last two columns of Table A.8 contain the number of iterations used by Ipopt and Knitro. Considering the average number of iterations used for solving these problems, iOptimize (12.77 iters) ranks first, Knitro (18.95 iters) second, and Ipopt (27.84 iters) last. For problems dallasm and polak3, both Knitro and Ipopt receive a function evaluation error from AMPL. Ipopt fails to solve dallass
to optimality within 400 iterations. Observe that Ipopt requires more than 50 iterations on three problems and Knitro on five problems, whereas iOptimize solves all feasible CP problems within 50 iterations.
Now we focus on the CP infeasible problems. Computational results from Ipopt
and Knitro are reported in the last two columns of Table A.14. Knitro fails to solve 13 problems within 400 iterations, whereas Ipopt fails on three problems. Knitro receives an AMPL function evaluation error in three problems and Ipopt in four. Note that both Knitro and Ipopt terminate with a near-optimal status in one problem. In three problems Ipopt reports an error in the feasibility restoration phase. Overall, Ipopt successfully detects infeasibility in 35 problems and Knitro in 29 problems.
6. Concluding remarks. In this paper we have extended the Mehrotra–Huang potential function [26] to the context of MCP, and shown that our potential reduction interior point method generates a maximal complementary solution with the desired accuracy if a scaled Lipschitz condition is satisfied. The main emphasis of the analysis, however, is not to improve the best known complexity, but to ensure that the interior point method maintains its global linear polynomial-time convergence properties while achieving practical performance. We have implemented an interior point solver in a software package named iOptimize that does not ignore the theory of the potential function in the context of interior point methods, while adding only a small additional computational burden. Computational comparisons on feasible and infeasible test problems show that, in terms of iteration counts and the number of problems solved, iOptimize appears to perform more efficiently and robustly than the mature commercial solver MOSEK. We also find that iOptimize seems to detect infeasibility more reliably than the general nonlinear solvers Ipopt and Knitro. It might be possible to further improve the computational results by implementing a better preprocessor, a nontrivial starting point, or better step size computations; however, that was not the focus of the research reported here.
At the core of the solver is the matrix factorization method, for which iOptimize
uses the Bunch–Parlett factorization [14]. The method used in MOSEK is not known to us. However, we point out that numerical instability may result if the quasi-definite system approach is used [11, 37]. The Bunch–Parlett factorization is in general numerically more stable, but requires additional computational effort.
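The contrast can be seen in miniature: the quasi-definite approach factors the indefinite KKT matrix with 1×1 pivots in a fixed order, while Bunch–Parlett-style methods choose symmetric 1×1 or 2×2 pivots for stability. A sketch using SciPy's `scipy.linalg.ldl` (a Bunch–Kaufman routine, a pivoting relative of Bunch–Parlett, used here purely for illustration; the paper's solver uses its own Bunch–Parlett implementation):

```python
import numpy as np
from scipy.linalg import ldl

# A small symmetric indefinite KKT-style matrix [[H, A^T], [A, 0]].
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
A = np.array([[1.0, 1.0]])
K = np.block([[H, A.T],
              [A, np.zeros((1, 1))]])

# Symmetric indefinite factorization K = L D L^T with 1x1/2x2 pivots;
# the zero (2,2) block would defeat a fixed-order 1x1-pivot scheme.
L, D, perm = ldl(K)
err = np.linalg.norm(L @ D @ L.T - K)
print(err)  # residual of the reconstructed matrix
```

The pivoting freedom is what buys stability on indefinite systems; the price is the extra search work at each elimination step noted above.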
We also implemented a version of Gondzio’s higher-order correction strategy [18]
in iOptimize. Our implementation of this strategy is a direct extension of the approach in Mehrotra and Huang [26]. Computational results show that iOptimize, using the Mehrotra direction and up to two additional Gondzio correctors, saves only approximately 9% of the iterations on the QP problems. Moreover, we did not see any improvement on the QCQP and CP problems. The practical value of using other correction strategies is a topic of future research.
We note that the computational performance for a problem can possibly be improved with different parameter settings. For example, with parameters (θl, θu) = (0.99, 1), the average number of iterations for solving the QCQP and CP feasible problems is reduced further by about 6%. However, this number increases by about 9% when solving the QCQP and CP infeasible problems. In fact, for antenna-iantenna vareps, iOptimize under the default setting ((θl, θu) = (0.9, 1)) needs only 47 iterations to correctly detect infeasibility, whereas with parameters (θl, θu) = (0.99, 1) it requires 454 iterations. A very aggressive step to the boundary is not desirable for infeasible and general convex programming problems.
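The step-factor parameters (θl, θu) cap how far an iterate may move toward the boundary x ≥ 0 via the standard ratio test. A sketch with hypothetical iterates (the formula is the textbook rule, not iOptimize's full stepping logic):

```python
import numpy as np

def max_step(x, dx, theta):
    """Ratio test: alpha = min(1, theta * min over {i : dx_i < 0} of -x_i/dx_i),
    so that x + alpha*dx stays safely positive. theta near 1 is aggressive."""
    neg = dx < 0
    if not neg.any():
        return 1.0                      # no component moves toward the boundary
    return min(1.0, theta * np.min(-x[neg] / dx[neg]))

x = np.array([1.0, 2.0, 0.5])
dx = np.array([-2.0, 1.0, -0.25])
print(max_step(x, dx, 0.9))    # conservative step factor
print(max_step(x, dx, 0.99))   # aggressive step factor, closer to the boundary
```

With θ = 0.99 almost the entire distance to the boundary is taken in one step, which explains why such a setting can oscillate on infeasible instances while helping on well-behaved feasible ones.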
Finally, we note that a small but difficult two-variable example can be found at http://mosek.com/fileadmin/talks/func versus conic.pdf (page 29). With the default setting, iOptimize uses 24 iterations to solve this example to optimality, whereas MOSEK with its default setting uses 124 iterations.
REFERENCES
[1] AMPL: A modeling language for mathematical programming. www.ampl.com.
[2] AMPL nonlinear test problems. http://www.orfe.princeton.edu/~rvdb/ampl/nlmodels/.
[3] COIN-OR Ipopt interior point optimizer. http://www.coin-or.org/ipopt/.
[4] Knitro: A solver for nonlinear optimization. http://www.ziena.com/knitro.html.
[5] Mittelmann's convex QCQP test problems. http://plato.asu.edu/ftp/ampl files/qpdata ampl/.
[6] Mixed integer nonlinear programming (MINLP) test problems. http://wiki.mcs.anl.gov/leyffer/index.php/MacMINLP.
[7] MOSEK: A high performance software for large-scale optimization problems. http://www.mosek.com/.
[8] Netlib. http://www.netlib.org/lp/index.html.
[9] E.D. Andersen, On primal and dual infeasibility certificates in a homogeneous model for convex optimization, SIAM Journal on Optimization, 11 (2000), pp. 380–388.
[10] E.D. Andersen, C. Roos, and T. Terlaky, On implementing a primal-dual interior-point method for conic quadratic optimization, Mathematical Programming, 95 (2003), pp. 249–277.
[11] E.D. Andersen and Y. Ye, A computational study of the homogeneous algorithm for large-scale convex optimization, Computational Optimization and Applications, 10 (1998), pp. 243–269.
[12] E.D. Andersen and Y. Ye, On a homogeneous algorithm for the monotone complementarity problem, Mathematical Programming, 84 (1999), pp. 375–399.
[13] M.S. Bazaraa, H.D. Sherali, and C.M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New York, 2nd ed., 1993.
[14] J.R. Bunch and B.N. Parlett, Direct methods for solving symmetric indefinite systems of linear equations, SIAM Journal on Numerical Analysis, 8 (1971), pp. 639–655.
[15] E. Dolan and J. More, Benchmarking optimization software with performance profiles, Mathematical Programming, 91 (2002), pp. 201–213.
[16] I.S. Duff, MA57–a code for the solution of sparse symmetric definite and indefinite systems, ACM Transactions on Mathematical Software, 30 (2004), pp. 118–144.
[17] D. Goldfarb and S. Mehrotra, A relaxed variant of Karmarkar's algorithm, Mathematical Programming, 40 (1988), pp. 285–315.
[18] J. Gondzio, Multiple centrality corrections in a primal-dual method for linear programming, Computational Optimization and Applications, 6 (1996), pp. 137–156.
[19] O. Guler, Existence of interior points and interior paths in nonlinear monotone complementarity problems, Mathematics of Operations Research, 18 (1993), pp. 148–162.
[20] O. Guler and Y. Ye, Convergence behavior of interior-point algorithms, Mathematical Programming, 60 (1993), pp. 215–228.
[21] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), pp. 373–395.
[22] M. Kojima, S. Mizuno, and A. Yoshise, An O(√nL) iteration potential reduction algorithm for linear complementarity problems, Mathematical Programming, 50 (1991), pp. 331–342.
[23] Z.-Q. Luo, J.F. Sturm, and S. Zhang, Conic convex programming and self-dual embedding, Optimization Methods and Software, 14 (2000), pp. 169–218.
[24] I. Maros and C. Meszaros, A repository of convex quadratic programming problems, Optimization Methods and Software, 11&12 (1999), pp. 671–681.
[25] S. Mehrotra, On the implementation of a primal-dual interior point method, SIAM Journal on Optimization, 2 (1992), pp. 575–601.
[26] S. Mehrotra and K.-L. Huang, Computational experience with a modified potential reduction algorithm for linear programming, Optimization Methods and Software, 27 (2012), pp. 865–891.
[27] R.D.C. Monteiro and I. Adler, An extension of Karmarkar type algorithm to a class of convex separable programming problems with global rate of convergence, Mathematics of Operations Research, 15 (1990), pp. 408–422.
[28] Y.E. Nesterov and M.J. Todd, Self-scaled barriers and interior-point methods for convex programming, Mathematics of Operations Research, 22 (1997), pp. 1–42.
[29] F.A. Potra and Y. Ye, Interior-point methods for nonlinear complementarity problems, Journal of Optimization Theory and Applications, 68 (1996), pp. 617–642.
[30] F.A. Potra and Y. Ye, A quadratically convergent polynomial algorithm for solving entropy optimization problems, SIAM Journal on Optimization, 3 (1993), pp. 843–860.
[31] F.A. Potra and Y. Ye, Interior point methods for nonlinear complementarity problems, Journal of Optimization Theory and Applications, 88 (1996), pp. 617–647.
[32] R.T. Rockafellar, Convex Analysis, Princeton University Press, 1996.
[33] A. Skajaa, J.B. Jørgensen, and P.C. Hansen, On implementing a homogeneous interior-point algorithm for nonsymmetric conic optimization, http://www.imm.dtu.dk/English/Service/Phonebook.aspx?lg=showcommon&id=f30a529b-4c6b-4642-a67b-cca4f74d0a3a, (2011).
[34] J.F. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optimization Methods and Software, 11–12 (1999), pp. 625–653.
[35] K. Tanabe, Centered Newton method for mathematical programming, System Modelling and Optimization, 113 (1988), pp. 197–206.
[36] M.J. Todd and Y. Ye, A centered projective algorithm for linear programming, Mathematics of Operations Research, 15 (1990), pp. 508–529.
[37] R.J. Vanderbei, Symmetric quasi-definite matrices, SIAM Journal on Optimization, 5 (1995), pp. 100–113.
[38] X. Xu, P. Hung, and Y. Ye, A simplified homogeneous and self-dual linear programming algorithm and its implementation, Annals of Operations Research, 62 (1996), pp. 151–171.
[39] Y. Ye, M.J. Todd, and S. Mizuno, An O(√nL)-iteration homogeneous and self-dual linear programming algorithm, Mathematics of Operations Research, 19 (1994), pp. 53–67.
[40] S. Zhang, A new self-dual embedding method for convex programming, Journal of Global Optimization, 29 (2004), pp. 479–496.
[41] J. Zhu, A path following algorithm for a class of convex programming problems, Methods and Models of Operations Research, 36 (1992), pp. 359–377.
Appendix A.
[Table A.1: iOptimize performance on Maros and Meszaros's QP test problems. Columns: Problem; Rows and Cols before and after presolving; Primal and Dual Objective; Presolve and Solver Time; Iters; Avg Dirs; Avg Prop Φ (%). The per-problem numeric entries are garbled beyond recovery in this transcript.]
[Table A.2: iOptimize performance on Mittelmann's QCQP test problems. Same column layout as Table A.1 (Problem; Rows/Cols before and after presolving; Primal/Dual Objective; Presolve/Solver Time; Iters; Avg Dirs; Avg Prop Φ (%)). The per-problem numeric entries are garbled beyond recovery in this transcript.]
40 K.-L. HUANG AND S. MEHROTRA
Before Presolving After Presolving Primal Dual Presolve Solver Avg Avg Prop Φ
Problem Rows Cols Rows Cols Objective Objective Time Time Iters Dirs (%)
airport 42 84 42 84 4.79527017E+04 4.79526915E+04 0 0.17 28 1.29 1.86
polak4 3 3 3 3 2.43646932E-08 1.65838221E-07 0 0.04 13 2 0
makela1 2 4 2 4 -1.41421356E+00 -1.41421356E+00 0 0.03 8 2 4.35
makela2 3 3 3 3 7.20000270E+00 7.20000353E+00 0 0.02 7 2 5.26
makela3 20 21 20 21 2.31888348E-11 0.00000000E+00 0 0.03 8 2 0
gigomez1 3 5 3 5 -2.99999998E+00 -2.99999992E+00 0 0.03 9 1.67 0
hanging 180 288 180 288 -6.20176047E+02 -6.20176047E+02 0 0.16 10 2 3.45
madsschj 158 81 158 81 -7.97283702E+02 -7.97283702E+02 0 0.14 11 1.64 2.27
rosenmmx 4 5 4 5 -4.39999956E+01 -4.39999945E+01 0 0.03 8 2 0
optreward 2 8 2 8 -1.09999913E+00 -1.09999913E+00 0 0.03 10 2 0
optprloc 30 35 30 35 -1.64197690E+01 -1.64197857E+01 0 0.06 16 1.81 0
Avg 0.00 0.07 11.64 1.86 1.56
Table A.3: iOptimize performance on Cute QCQP test problems
SOLVING CONVEX PROGRAMMING USING INTERIOR POINT METHOD 41
[Table A.4: iOptimize performance on Cute CP test problems. Columns: Problem; Rows and Cols before and after presolving; Primal and Dual Objective; Presolve and Solver Time; Avg Iters; Avg Dirs; Prop Φ (%). Averages over the set: presolve time 0.02, solver time 6.55, 14.09 iterations, 1.94 directions, Φ 2.94%.]
[Table A.5: Comparison with MOSEK on Maros and Meszaros's QP test problems. Columns: Problem; MOSEK iterations; iOptimize iterations. Avg: MOSEK 16.22, iOptimize 15.92. 1 §: Near optimal solution found.]
[Table A.6: Comparison with MOSEK on Mittelmann's QCQP test problems. Columns: Problem; MOSEK iterations; iOptimize iterations. Avg1: MOSEK 20.91, iOptimize 16.43; Avg2: MOSEK 18.34, iOptimize 16.26. 1: Max number of iterations reached. 2 §: Near optimal solution found. 3 ?: Terminated with unknown status.]
Problem MOSEK iOptimize
airport 26 28
polak4 10 13
makela1 8 8
makela2 7 7
makela3 6 8
gigomez1 8 9
hanging 13 10
madsschj 10 11
rosenmmx 9 8
optreward 10 10
optprloc 18 16
Avg 11.36 11.64
Table A.7: Comparison with MOSEK on Cute QCQP test problems
Problem MOSEK iOptimize Ipopt Knitro
bigbank 20 21 24 19
gridnete 9 7 5 4
gridnetf 16 13 27 17
gridneth 8 6 6 3
gridneti 10 8 7 6
dallasl 74 33 249 245
dallasm 128 31
dallass 31 13 5
polak1 6 5 4 2
polak2 6 5 4 1
polak3 10 8
polak5 6 5 24 12
smbank 14 27 12 11
cantilvr 12 12 5 3
cb2 8 8 4 3
cb3 7 7 4 4
chaconn1 8 8 4 3
chaconn2 7 7 4 4
dipigri 14 10 6 2
gpp 12 10 22 11
hong 13 15 11 7
loadbal 22 21 13 7
svanberg 14 14 30 16
antenna\antenna2 21§ 16 178 58
antenna\antenna vareps 19 18 106 61
braess\trafequil 15 16 18 22
braess\trafequil2 11 13 26 13
braess\trafequilsf 14 15 18 24
braess\trafequil2sf 12 12 25 12
elena\chemeq 23 20 37 16
elena\s383 85 21 18 8
firfilter\fir convex 33§ 13 27 11
markowitz\growthopt 8 7 8 7
wbv\antenna2 15 13 31 16
wbv\lowpass2 30§ 22 31 66
batch 13 12 13 16
stockcycle 11 10 23 12
synthes1 6 6 6 6
synthes2 8 8 10 7
synthes3 8 8 9 8
trimloss2 10 10 13 7
trimloss4 11 12 16 8
trimloss5 13 13 20 10
trimloss6 12 12 23 11
trimloss7 13 13 35 14
trimloss12 19 17 41 22
Avg1 18.46 14.00 – –
Avg2 13.07 12.69 – –
1: Max number of iterations reached.
2 §: Near optimal solution found.
3: Function evaluate error from AMPL.
Table A.8: Comparison with MOSEK, Ipopt, and Knitro on CP test problems
Presolve Solver Avg Avg Prop Φ
Problem Time Time Iters Dirs (%)
iQQ-aug2d 0.01 7.14 16 1.69 4.78
iQQ-aug2dc 0.01 6.8 16 1.69 4.34
iQQ-aug2dcqp 0.78 13.44 29 1.34 4.5
iQQ-aug2dqp 0.73 13.4 29 1.34 4.44
iQQ-aug3d 0 4.72 35 1.09 2.43
iQQ-aug3dc 0 1.06 16 1.5 5.13
iQQ-aug3dcqp 0.04 2.59 33 1.09 4.1
iQQ-aug3dqp 0.03 2.68 37 1.05 4.4
iQQ-cont-050 0 0.76 9 2 4.72
iQQ-cont-100 0.01 6.7 14 1.93 2.17
iQQ-cont-101 0 3.33 9 2 2.35
iQQ-cont-200 0.03 47.74 14 2 1.71
iQQ-cont-201 0.03 26.08 9 2 2.09
iQQ-cont-300 0.05 104.79 9 2 1.57
iQQ-cvxqp1 m 0 2.19 26 1.38 1.38
iQQ-cvxqp1 s 0 2.24 35 1.2 0.45
iQQ-cvxqp2 m 0 2.14 29 1.34 0.9
iQQ-cvxqp2 s 0 2.41 37 1.19 0.92
iQQ-cvxqp3 m 0 2.85 30 1.23 1.8
iQQ-cvxqp3 s 0 1.22 35 1.17 0.98
iQQ-dpklo1 0 0.36 13 2 0.83
iQQ-dtoc3 0.32 4.21 15 1.87 5.9
iQQ-dual1 0 1.14 31 1.32 0.88
iQQ-dual2 0 0.52 16 1.69 0
iQQ-dual3 0 0.63 18 1.67 1.61
iQQ-dual4 0 1.23 33 1.33 0
iQQ-dualc1 0 1.06 35 1.66 0
iQQ-dualc2 0 1.19 35 1.57 0.59
iQQ-dualc5 0 1.65 44 1.32 1.04
iQQ-dualc8 0 1.06 34 1.68 1.9
iQQ-genhs28 0 0.37 13 2 0
iQQ-gouldqp2 0 0.27 10 2 0
iQQ-gouldqp3 0 0.56 16 1.63 1.82
iQQ-hs118 0 0.27 11 2 0
iQQ-hs21 0.02 1.2 35 1.11 0.08
iQQ-hs268 0 1.52 45 1.11 0
iQQ-hs35 0 0.52 16 1.75 0
iQQ-hs35mod 0 0.54 16 1.75 0.19
iQQ-hs51 0 1.07 27 1.19 0
iQQ-hs52 0 0.73 26 1.31 0
iQQ-hs53 0 1.41 39 1.13 0
iQQ-hs76 0 0.36 13 2 0
iQQ-ksip 0 1.73 42 1.05 3.3
iQQ-laser 0 0.4 13 2 5.22
iQQ-liswet1 0.01 16.43 42 1.14 4.19
iQQ-liswet2 0.02 17.23 43 1.12 4.9
iQQ-liswet3 0.01 15 38 1.16 5.15
iQQ-liswet4 0.01 15.72 40 1.13 5.16
iQQ-liswet5 0.01 18.09 47 1.17 4.49
iQQ-liswet6 0.02 16.79 43 1.14 4.64
iQQ-liswet7 0.01 15.81 40 1.13 5.12
iQQ-liswet8 0.01 14.03 33 1.15 4.37
iQQ-liswet9 0.01 15.69 37 1.14 4.69
iQQ-liswet10 0.02 16.5 42 1.14 4.97
iQQ-liswet11 0.01 16.33 42 1.14 4.9
iQQ-liswet12 0.01 15.73 38 1.13 4.62
iQQ-lotschd 0 0.3 12 2 0
iQQ-mosarqp1 0 2.04 33 1.09 4.55
iQQ-mosarqp2 0 1.41 38 1.08 3.57
iQQ-powell20 0.02 6.5 16 1.63 3.5
iQQ-primal1 0.01 1.28 43 1.09 1.41
iQQ-primal2 0 1.31 37 1.14 2.47
iQQ-primal3 0 1.58 41 1.1 2.24
iQQ-primal4 0.01 1.56 36 1.17 4.44
iQQ-primalc1 0 1.37 42 1.26 1.03
iQQ-primalc2 0 1.32 36 1.33 1.52
iQQ-primalc5 0 0.97 23 1.39 0.93
iQQ-primalc8 0 1.1 32 1.28 1.01
iQQ-q25fv47 0.01 1.4 15 2 6.42
iQQ-qadlittle 0 0.34 14 2 0.59
iQQ-qafiro 0 1.37 39 1.15 0.07
iQQ-qbandm 0 0.34 14 1.86 3.02
iQQ-qbeaconf 0 0.41 14 1.93 0.5
iQQ-qbore3d 0 0.51 18 1.89 0
iQQ-qbrandy 0 0.48 16 1.88 0.41
iQQ-qcapri 0 0.62 22 1.91 1.63
iQQ-qe226 0 1.41 36 1.22 1.85
iQQ-qetamacr 0 1.09 34 1.21 3.17
iQQ-qfffff80 0 0.57 20 1.8 3.57
iQQ-qforplan 0.01 0.25 11 2 1.63
iQQ-qgfrdxpn 0.01 0.47 19 2 2.42
iQQ-qgrow15 0 0.44 16 1.94 1.15
iQQ-qgrow22 0.01 0.54 18 1.89 0
iQQ-qgrow7 0 0.56 18 1.89 0.54
iQQ-qisrael 0 0.47 19 2 1.3
iQQ-qpcblend 0 1.5 38 1.16 0.27
iQQ-qpcboei1 0.01 0.31 12 2 6.06
iQQ-qpcboei2 0 0.44 17 1.94 4.65
iQQ-qpcstair 0.01 0.64 20 1.75 1.27
iQQ-qpilotno 0.04 0.65 16 2 3.84
iQQ-qrecipe 0 0.75 20 1.6 0.13
iQQ-qptest 0 0.56 16 1.81 0.18
iQQ-qsc205 0 1.51 41 1.24 1.35
iQQ-qscagr25 0 0.32 14 2 0.32
iQQ-qscagr7 0 0.33 14 2 1.22
iQQ-qscfxm1 0 0.4 15 1.93 1.79
iQQ-qscfxm2 0 0.42 16 1.94 2.44
iQQ-qscfxm3 0 0.54 16 1.94 4.71
iQQ-qscorpio 0 0.95 25 1.24 1.91
iQQ-qscrs8 0 0.63 22 1.73 6.45
iQQ-qscsd1 0 0.84 20 1.6 1.31
iQQ-qscsd6 0 0.89 22 1.64 2.07
iQQ-qscsd8 0 2.02 37 1.11 5.11
iQQ-qsctap1 0 1.07 36 1.19 1.14
iQQ-qsctap2 0 2.26 40 1.13 3.58
iQQ-qsctap3 0 2.26 32 1.22 3.65
iQQ-qseba 0.02 0.57 20 1.85 5.36
iQQ-qshare1b 0 0.44 17 1.94 2.33
iQQ-qshare2b 0 0.47 13 1.69 2.13
iQQ-qshell 0.02 0.82 17 2 6.4
iQQ-qship04l 0 0.31 11 2 0
iQQ-qship04s 0.01 0.25 11 2 6.88
iQQ-qship08l 0.01 1.63 16 1.94 4.56
iQQ-qship08s 0 0.72 17 1.88 3.73
iQQ-qship12l 0.01 2.96 17 1.71 4.74
iQQ-qship12s 0.01 0.99 18 1.83 2.98
iQQ-qsierra 0 0.64 17 2 5
iQQ-qstair 0 0.39 17 2 1.05
iQQ-qstandat 0.01 1.37 39 1.18 2.05
iQQ-s268 0 1.56 45 1.11 0.06
iQQ-stadat1 0.01 3.13 26 1.23 4.19
iQQ-stadat2 0 4.39 35 1.14 4.42
iQQ-stadat3 0.01 6.18 25 1.2 4.25
iQQ-tame 0 0.23 10 2 0
iQQ-ubh1 0.46 5.73 26 1.96 6.21
iQQ-yao 0.01 3.17 38 1.05 4.85
iQQ-zecevic2 0 0.23 9 2 0
Avg 0.02 4.40 25.30 1.56 2.48
Table A.9: iOptimize performance on Maros and Meszaros's QCQP infeasible test problems
Presolve Solver Avg Avg Prop Φ
Problem Time Time Iters Dirs (%)
iairport 0.01 0.6 19 1.63 0.67
ipolak4 0 0.81 25 1.28 0.12
imakela1 0 1.11 32 1.13 0
imakela2 0 1.13 32 1.13 0
imakela3 0 0.32 14 2 0
igigomez1 0 1.02 26 1.19 0
ihanging 0 1.17 30 1.13 4.27
imadsschj 0.06 0.53 17 1.59 0.76
irosenmmx 0 1 35 1.14 0
ioptreward 0 0.64 20 1.55 0
ioptprloc 0 0.92 21 1.52 0.11
Avg 0.01 0.84 24.64 1.39 0.54
Table A.10: iOptimize performance on Cute QCQP infeasible test problems
Presolve Solver Avg Avg Prop Φ
Problem Time Time Iters Dirs (%)
ibigbank 0 17.12 27 1.33 6.06
igridnete 0 81.51 14 1.71 6.79
igridnetf 0 377.58 36 1.08 5.67
igridneth 0.02 0.78 23 1.39 0.26
igridneti 0 1.23 40 1.07 1.15
idallasl
idallasm 0.01 1.96 47 1.4 2.26
idallass 0 1.58 45 1.22 0.51
ipolak1 0 0.41 16 1.81 0
ipolak2 0 0.32 13 2 0
ipolak3 0 0.77 27 1.3 0.52
ipolak5 0 0.41 13 2 0.25
ismbank 0 1.13 31 1.13 1.7
icantilvr 0 0.29 12 2 0.35
icb2 0 0.94 32 1.19 0
icb3 0.01 1.25 31 1.16 0.24
ichaconn1 0 0.95 32 1.19 0
ichaconn2 0 1.26 31 1.16 0.08
idipigri 0 1.29 32 1.19 0.08
igpp 0 34.24 38 1.11 5.34
ihong 0 0.92 22 1.45 0.11
iloadbal 0 0.37 14 2 0.27
isvanberg 0.08 632.99 27 1.22 6.65
antenna-iantenna2 0 68.79 186 1.65 13.31
antenna-iantenna vareps 0 48.02 47 1.13 5.83
braess-itrafequil 0 4.77 38 1.37 3.1
braess-itrafequil2 0 1.87 9 2 7.34
braess-itrafequilsf 0 9.23 46 1.26 2.71
braess-itrafequil2sf 0 2.56 9 2 7.16
elena-ichemeq 0 0.06 14 2 4.92
elena-is383 0 0.06 14 1.93 1.64
firfilter-ifir convex 0 0.75 13 1.92 6.03
markowitz-igrowthopt 0 0.08 16 1.69 0
wbv-iantenna2 0.06 119.88 17 2 7.97
wbv-ilowpass2 0 4.54 13 1.92 7.35
ibatch 0 0.15 22 1.36 3.47
istockcycle 0 0.88 35 1.14 6.29
isynthes1 0 0.09 25 1.36 0
isynthes2 0 0.09 23 1.35 1.15
isynthes3 0 0.18 40 1.1 3.91
itrimloss2 0 0.1 15 1.93 0
itrimloss4 0 0.15 16 1.94 7.43
itrimloss5 0 0.22 15 2 6.07
itrimloss6 0 0.37 15 2 7.32
itrimloss7 0 0.84 16 2 6.58
itrimloss12 0.01 7.79 16 2 7.41
Avg 0.00 31.79 28.07 1.56 3.45
1: Function evaluate error from AMPL.
Table A.11: iOptimize performance on Cute CP infeasible test problems
[Table A.12: Comparison with MOSEK on Maros and Meszaros's QCQP infeasible test problems. Columns: Problem; MOSEK iterations; iOptimize iterations. Avg1: MOSEK 42.20, iOptimize 28.15; Avg2: MOSEK 27.39, iOptimize 28.15. 1: Max number of iterations reached. 2 ?: Terminated with unknown status.]
Problem MOSEK iOptimize
iairport 10 19
ipolak4 25
imakela1 28 32
imakela2 29 32
imakela3 8 14
igigomez1 26 26
ihanging 32 30
imadsschj 24 17
irosenmmx 27 35
ioptreward 11 20
ioptprloc 8 21
Avg 20.30 24.60
1: Max number of iterations reached.
Table A.13: Comparison with MOSEK on Cute QCQP infeasible test problems
Problem MOSEK iOptimize Ipopt Knitro
ibigbank 196 27 48 42
igridnete 134 14 21
igridnetf 36 42
igridneth 202 23 25 18
igridneti 40 40 133§
idallasl 50§
idallasm 47 0
idallass 45 0
ipolak1 50 16
ipolak2 7 13
ipolak3 178 27
ipolak5 7 13 37
ismbank 165 31 34 37
icantilvr 0 12 25 0
icb2 196 32 18
icb3 196 31 8
ichaconn1 196 32 6
ichaconn2 196 31 10 17
idipigri 157 32 188
igpp 38 65
ihong 11 22 16
iloadbal 123 14 22
isvanberg 27 48 53
antenna\iantenna2 186 156
antenna\iantenna vareps 47
braess\itrafequil 43 38 36 126
braess\itrafequil2 7 9 82 50
braess\itrafequilsf 47 46 34 103
braess\itrafequil2sf 9 54 91
elena\ichemeq 14 14 102 16
elena\is383 7 14 67
firfilter\ifir convex 0 13 99 0
markowitz\igrowthopt 15 16 22
wbv\iantenna2 0 17 32 0
wbv\ilowpass2 0 13 32 0
ibatch 0 22 32 158
istockcycle 117 35 72
isynthes1 9 25 35 35
isynthes2 62 23 48 46
isynthes3 53 40 94
itrimloss2 0 15 52 0
itrimloss4 0 16 40 0
itrimloss5 0 15 60 0
itrimloss6 0 15 55 0
itrimloss7 0 16 61 0
itrimloss12 0 16 73 0
Avg1 66.33 19.44 – –
Avg2 11.74 16.04 – –
1: Max number of iterations reached.
2: Function evaluate error from AMPL.
3 §: Solver terminates with a near optimal status.
4: Ipopt fails in restoration phase.
Table A.14: Comparison with MOSEK, Ipopt, and Knitro on CP infeasible test problems