A new combinatorial algorithm for separable convex resource allocation with nested bound constraints · 2019-09-17

A new combinatorial algorithm for separable convex resource allocation with nested bound constraints

Zeyang Wu∗, Kameng Nip†, Qie He‡

Abstract

The separable convex resource allocation problem with nested bound constraints aims to allocate B units of resources to n activities to minimize a separable convex cost function, with lower and upper bounds on the total amount of resources that can be consumed by nested subsets of activities. We develop a new combinatorial algorithm to solve this model exactly. Our algorithm is capable of solving instances with millions of activities in several minutes. The running time of our algorithm is at most 65% of the running time of the current best algorithm for benchmark instances with three classes of convex objectives. The efficiency of our algorithm derives from the combination of constraint relaxation and the use of infeasibility information. In particular, the nested bound constraints are relaxed first; if the obtained solution violates some bound constraint, we show that the problem can be divided into two subproblems of the same structure and smaller size, according to the bound constraint with the largest violation.

1 Introduction

The resource allocation problem is a collection of optimization models with a wide range of applications in production planning, logistics, portfolio management, telecommunication, statistical surveys, and machine learning [19, 20, 26, 27]. In its simplest case, the goal is to allocate certain units of resources to a set of activities to minimize the total allocation cost. In this paper, we study the model with a separable cost function and prescribed lower and upper bounds on the total amount of resources that can be consumed by some nested subsets of activities. These bounds could represent storage limits, time-window requirements, or budget constraints, depending on the application. We call this model the resource allocation problem with nested lower and upper bound constraints. This model also appears as a subproblem in vehicle routing models in green logistics [22, 9] and in the support vector ordinal regression problem [4, 33], and has to be solved repeatedly by iterative algorithms for these models. All these applications motivate us to develop an efficient algorithm for this model.

We now introduce the mathematical programming formulation of this model. Let [n] denote the set {1, 2, . . . , n} for any positive integer n, and ∅ otherwise. The resource allocation problem with nested lower and upper bound constraints can be formulated as the following mixed-integer

∗ Department of Industrial and Systems Engineering, University of Minnesota. Email: [email protected].
† School of Mathematics (Zhuhai), Sun Yat-sen University, China. Email: [email protected].
‡ Department of Industrial and Systems Engineering, University of Minnesota. Email: [email protected]. Corresponding author.


convex program:

    min_x   f(x) := ∑_{i=1}^{n} f_i(x_i)                          (1a)
    s.t.    ∑_{i=1}^{n} x_i = B,                                  (1b)
            a_i ≤ ∑_{k=1}^{i} x_k ≤ b_i,      ∀i ∈ [n − 1],       (1c)
            0 ≤ x_i ≤ d_i,                    ∀i ∈ [n],           (1d)
            x_i ∈ Z,                          ∀i ∈ [n],           (1e)

where n is the number of activities under consideration, f is a separable cost function with f_i being the univariate cost function for activity i ∈ [n], B is the total units of resources to allocate, a_i, b_i ≥ 0 for i ∈ [n − 1], d_i > 0 for i ∈ [n], and Z is the set of integers. For ease of presentation, we also introduce parameters a_n and b_n and set a_n = b_n = B. Formulation (1) stipulates that we want to determine the amount of resources x_i allocated to activity i to minimize the total allocation cost, while satisfying the constraint (1b) on the total amount of resources consumed, the bound constraints (1c) on the amount of resources that can be consumed by the first i activities, the simple bound constraints (1d), and the integrality constraints (1e). We use DRAP-NC to denote the discrete model described by Formulation (1), and RAP-NC to denote its continuous relaxation, i.e., Formulation (1) without constraints (1e). In this paper, we focus on solving DRAP-NC exactly.

Before introducing our algorithm for DRAP-NC, we mention several assumptions used throughout the paper.

1. We assume that f_i is convex over the interval [0, d_i] and is encoded by a value oracle (which returns the value of f_i(x) for any given input x in one operation) for each i ∈ [n]. We do not impose any additional assumption on f_i such as differentiability or Lipschitz continuity.

2. Without loss of generality (wlog), we assume that all data involved in (1) are integral, and that a_1 ≤ a_2 ≤ . . . ≤ a_{n−1} and b_1 ≤ b_2 ≤ . . . ≤ b_{n−1}.

3. Wlog, we assume that a_i < b_i for i ∈ [n − 1]. For if a_i = b_i for some i, we can break the original problem into one DRAP-NC with variables x_1, . . . , x_i and one DRAP-NC with variables x_{i+1}, . . . , x_n and solve them separately.

4. We assume the problem is feasible. Feasibility can be checked easily by verifying that [a_i, b_i] ∩ [0, ∑_{k=1}^{i} d_k] ≠ ∅ for each i ∈ [n].
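The feasibility test in assumption 4 amounts to one pass over prefix sums of the d_i. A minimal Python sketch (hypothetical helper name, 0-indexed, with a[n−1] = b[n−1] = B encoding the equality constraint):

```python
def is_feasible(a, b, d):
    """Feasibility check for DRAP-NC (assumption 4), 0-indexed.
    The instance is feasible iff [a_i, b_i] intersects [0, d_1 + ... + d_i]
    for every i; the last pair a[n-1] = b[n-1] = B covers constraint (1b)."""
    prefix = 0
    for ai, bi, di in zip(a, b, d):
        prefix += di
        # [ai, bi] and [0, prefix] intersect iff ai <= prefix and bi >= 0
        if ai > prefix or bi < 0:
            return False
    return True
```

For example, a = [3, 4], b = [3, 4], d = [1, 1] is infeasible because the first prefix capacity is only 1 < a_1 = 3.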

Vidal, Jaillet, and Maculan [33] first studied DRAP-NC and RAP-NC, and called constraints (1c) the nested lower and upper bound constraints, since these constraints are imposed upon a sequence of nested subsets of activities. As illustrated in [33], these nested bound constraints are motivated by a variety of applications: in production planning, where x_i denotes the amount of products to make in period i, constraints (1c) stipulate that the production needs to cover the cumulative demand but must not exceed the inventory bounds [23]; in vessel fuel optimization, where x_i denotes the lapse of time between port i and port i + 1, constraints (1c) stipulate that customer i has to be visited within the time window [a_i, b_i] [25, 16]; in statistical surveys, where x_i denotes the sampling size in stratum i, constraints (1c) stipulate the sampling budgets on different levels of subpopulations [24, 15, 18]. RAP-NC also appears as a subproblem in iterative algorithms for other more complex optimization and machine learning models, such as the pollution-routing model [2] and the joint vehicle routing and speed optimization model [9] in green logistics, and the support vector ordinal regression problem [4, 33] in supervised learning. Vidal et al. developed an efficient exact algorithm called the Monotonic Decomposition Algorithm (MDA) to solve DRAP-NC. Its running time is O(n log n log B), currently the best in the literature. MDA can also be used to find an ε-optimal solution of RAP-NC (a solution x satisfying ‖x − x*‖_∞ ≤ ε with x* being some optimal solution) in O(n log n log(B/ε)) time through parameter scaling.

In this paper, we introduce an infeasibility-guided divide-and-conquer algorithm to solve DRAP-NC exactly. We call this algorithm DCA for short in the rest of this paper. DCA is conceptually simple and easy to implement. Its main idea can be described as follows: relax the nested bound constraints (1c) first and solve the problem to optimality; if the obtained optimal solution satisfies the nested bound constraints, then this solution is optimal for DRAP-NC and we stop; otherwise, find the bound constraint that has the maximum violation, fix it at equality, divide the original problem into two DRAP-NCs of smaller size, and solve each recursively. The running time of DCA is Θ(n² log(B/n)). Although this running time is worse than that of MDA by a factor of n/log n, the computational performance of DCA is surprisingly good on a variety of instances from different applications. In particular, the running time of DCA is between 35% and 64% of the running time of MDA on all the test instances (the largest instance has more than 6.5 million variables). The running time of DCA also grows much more slowly than n² on all the test instances.

We hypothesize that this mismatch between the theoretical running time and the practical performance of DCA is largely due to the conservative worst-case analysis of the complexity of an algorithm. In particular, the asymptotic complexity result indicates that there exist "bad" instances for which the running time of DCA is proportional to n² log(B/n) for some large n. But it seems that all the benchmark instances in the literature are not bad enough for the running time to reach the asymptotic bound. In our computational experiment in Section 6, we report another metric, the number of resource allocation subproblems solved by DCA, to provide some evidence for our hypothesis. In addition, we introduce in Section 5 a family of DRAP-NC instances for which DCA only needs two recursions to terminate, illustrating the advantage of DCA in solving some instances. Finally, our algorithm can be easily extended to find an ε-approximate solution of RAP-NC by scaling all parameters by 1/ε.

Our algorithm combines three ingredients: (1) a recursive divide-and-conquer framework; (2) a scaling-based greedy algorithm for the simple resource allocation problem without nested bound constraints, derived by Hochbaum in [17]; (3) constraint relaxation and the use of constraint violation information to guide how to divide the problem. The third ingredient is somewhat similar to how a typical branch-and-bound algorithm solves an integer program: find the optimal solution of the linear programming relaxation and branch on a variable whose integrality constraint is violated. We summarize the contributions of this paper below.

1. We develop a new combinatorial algorithm, DCA, to solve the discrete resource allocation problem with nested lower and upper bound constraints. Its asymptotic running time is Θ(n² log(B/n)). DCA can also be extended to find an ε-optimal solution of RAP-NC in Θ(n² log(B/(nε))) time.

2. We apply DCA to a variety of benchmark instances of DRAP-NC with different convex objectives. DCA is efficient in solving all these instances, with its running time being between 35% and 64% of the running time of the current best algorithm, MDA.

3. We offer some explanation of the practical efficiency of DCA, and introduce specially designed instances for which the running time of DCA varies significantly.

The remainder of this paper is organized as follows. Section 2 reviews existing algorithms for DRAP-NC and similar models that are most closely related to our algorithm. Section 3 introduces the details of DCA, and Section 4 presents the proof of its correctness. In Section 5, we analyze the time complexity of DCA and present some specially designed instances for which DCA only requires a few recursions to terminate. We present extensive numerical results on DRAP-NC instances in Section 6 and conclude in Section 7.

2 Literature review

Resource allocation is a fundamental problem in operations research with a wide variety of applications, from early examples in the distribution of search effort [21] and sample allocation in stratified sampling [24] to recent ones including vessel speed optimization [16], energy management [31], and machine learning [33]. In its simplest form, resource allocation aims to allocate a fixed amount of resources among n activities to minimize the total allocation cost. Depending on the context of the application, the resource could be divisible or indivisible, leading to continuous or discrete resource allocation models. The methodologies for these two classes of models can be significantly different. In theory, algorithms for discrete models can easily be applied to find an ε-optimal solution of the continuous models by scaling all parameters by 1/ε. On the other hand, a large class of algorithms for the continuous models rely on optimality conditions such as the Karush-Kuhn-Tucker conditions, which do not have a counterpart in the discrete case. In this paper, we focus on surveying results for discrete resource allocation models. We only mention one paper on the continuous model that is closely connected to our algorithm: van der Klauw et al. [31] used the same divide-and-conquer framework to solve RAP-NC, which is a convex optimization problem. Since they were handling continuous problems, the proof of correctness of their algorithm used the Karush-Kuhn-Tucker conditions of the original and relaxed problems. For a comprehensive review of algorithms for continuous resource allocation models, we refer interested readers to the excellent surveys [26, 20, 27].

Discrete resource allocation problems are NP-hard in general, but efficient algorithms exist when the objective function and the resource constraints have certain properties. One such case is when the objective function is separable and convex and there are no additional resource constraints other than the non-negativity constraints, i.e., model (1) without constraints (1c) and without upper bounds on variables in constraints (1d). This model (or its continuous relaxation) is usually called the simple resource allocation problem in the literature. We use DRAP to denote this model. The first greedy algorithm for DRAP was developed in [13], with the greedy step of allocating one more unit of resource to the activity with the smallest marginal cost. The running time of this algorithm is O(B log n + n), which is not polynomial in the problem input size. With more complex greedy steps than that used in [13], polynomial-time algorithms have been developed [10, 8]. The current best algorithm for DRAP was developed by Frederickson and Johnson [8] and runs in O(n log(B/n)) time when B ≥ 2n. The algorithm relies on a complicated procedure to find the B-th smallest element in a matrix with sorted columns. Hochbaum developed a scaling-based algorithm with a greedy subroutine for a more general resource allocation problem [17]. Her algorithm, when specialized to DRAP, achieves the best running time O(n log(B/n)) but avoids many complex procedures of Frederickson and Johnson's algorithm in [8]. This is also the algorithm we use in DCA to solve the subproblem DRAP. Faster algorithms exist for DRAP with special objective functions: DRAP with linear or quadratic separable objective functions can be solved in O(n) time [3]. It should be noted that all the aforementioned algorithms for DRAP can be easily extended to DRAP with additional upper bounds on variables [19].
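The one-unit greedy step of [13] can be sketched in Python with a heap of marginal costs (the heap is our implementation choice; it yields the O(B log n + n) bound). This is an illustrative sketch with upper bounds on variables added, not a transcription of the original algorithm:

```python
import heapq

def drap_greedy(f, d, B):
    """One-unit greedy for DRAP: repeatedly give one unit of resource to
    the activity with the smallest marginal cost f_i(x_i + 1) - f_i(x_i).
    f: list of convex univariate cost callables; d: upper bounds;
    B: total resources.  Assumes sum(d) >= B (feasibility)."""
    n = len(f)
    x = [0] * n
    # heap of (current marginal cost, activity index)
    heap = [(f[i](1) - f[i](0), i) for i in range(n) if d[i] > 0]
    heapq.heapify(heap)
    for _ in range(B):
        _, i = heapq.heappop(heap)
        x[i] += 1
        if x[i] < d[i]:  # re-insert with the next marginal cost
            heapq.heappush(heap, (f[i](x[i] + 1) - f[i](x[i]), i))
    return x
```

By convexity, marginal costs are nondecreasing in x_i, so the heap always surfaces a globally smallest available increment.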

A slightly more complicated model is DRAP with additional nested upper bound constraints, i.e., model (1) with a_i = −∞ in constraints (1c) for each i ∈ [n]. This type of nested upper bound constraint is also called an ascending constraint or cumulative constraint in the literature. We use DRAP-NUB to denote this model. Galperin and Waksman showed that DRAP-NUB is equivalent to a continuous relaxation obtained by piecewise linear continuation of the cost functions, and they developed an O(B log n)-time algorithm to solve the continuous relaxation [11]. Tamir developed an O(n² log² B)-time algorithm based on a technique for the unrestricted selection problem [30]. Dyer and Walker proposed an O(n log n log²(B/n))-time divide-and-conquer algorithm [5]. They first solved a subproblem with only the first half of the variables, fixed those variables permanently afterwards, and then solved another subproblem with the remaining half of the variables. The time complexity for DRAP-NUB was improved to O(n log n log(B/n)) by Hochbaum's scaling-based algorithm [17]. Vidal, Jaillet, and Maculan developed an O(n log m log(B/n))-time divide-and-conquer algorithm for DRAP-NUB, where m is the number of ascending constraints. Their algorithm recursively divides a problem into two subproblems of equal size, and uses optimal solutions of the subproblems to tighten variable bounds so that each DRAP-NUB subproblem only needs to be solved as a DRAP. The same framework was later extended to MDA to deal with DRAP-NC.

The nested upper bound constraints are a special case of the more general submodular constraints. A submodular constraint has the form ∑_{i∈S} x_i ≤ r(S), where the set S ⊆ [n] is an element of a distributive lattice and r is a submodular function defined over this lattice. Submodular constraints generalize a large class of constraints such as generalized upper bound constraints, tree constraints, and network constraints [20]. We use DRAP-SC to denote the discrete resource allocation problem with submodular constraints. Federgruen and Groenevelt showed that the greedy approach for DRAP in [13] can be extended to DRAP-SC [6], by allocating one more unit of resource to the activity that has the smallest marginal cost and allows for such an increase. The running time of this greedy approach is O(B(log n + F)), where F is the number of operations required to check the feasibility of a given increment in the solution. Groenevelt developed two polynomial-time algorithms for DRAP-SC under some conditions: a decomposition algorithm, which runs in polynomial time for network and generalized symmetric polymatroids, and a bottom-up algorithm, which runs in polynomial time when the polymatroid is given as an explicit list of constraints [12]. The decomposition algorithm uses optimal solutions of DRAP to tighten the variable bounds of the original problem. Similar ideas have been used in the decomposition algorithms for DRAP-NUB [34] and DRAP-NC [33]. The two algorithms developed by Groenevelt can be applied to the continuous relaxation of the problem by calling any subroutine that solves the continuous relaxation of DRAP. Hochbaum developed an O(n(log n + F) log(B/n))-time scaling-based algorithm that uses a greedy algorithm as a subroutine for DRAP-SC [17]. This improvement in the time complexity is based on a general proximity theorem between DRAP-SC and its scaled version. The proximity theorem shows that the greedy algorithm can be applied to DRAP-SC with arbitrary increments and that only the last increment of each variable can be potentially erroneous; these increments can be removed to produce a valid lower bound on the integer optimal solution. When applied to DRAP-NUB, this general algorithm runs in O(n log n log(B/n)) time. Hochbaum also showed that the algorithm runs in O(n(log n + F) log(B/(nε))) time for the continuous relaxation of DRAP-SC, and that this cannot be improved for DRAP and for DRAP with generalized upper bounds.

There has been limited work on problems with nested lower and upper bound constraints on the resources. Ahuja and Hochbaum studied a dynamic lot-sizing model with linear costs and nested lower and upper bound constraints on the cumulative production [1]. They treated the problem as a min-cost flow problem over a network with a special structure, and developed an O(n log n)-time algorithm using efficient data structures for the successive shortest path algorithm. In the recent paper [33], Vidal et al. showed that their decomposition algorithm for DRAP-NUB can be extended to DRAP-NC with nontrivial bound tightening techniques. They called the resulting algorithm the Monotonic Decomposition Algorithm. Their algorithm runs in O(n log m log B) time, where m is the number of pairs of nested lower and upper bound constraints in (1c), and it can be applied to the continuous relaxation in O(n log m log(nB/ε)) time. Although MDA also uses a divide-and-conquer framework, it always divides the problem evenly at each recursion instead of using infeasible intermediate solutions to decide where to divide. Thus the number of recursions of MDA is less dependent on the instance than that of DCA.

3 An infeasibility-guided divide-and-conquer algorithm

In this section, we present the details of DCA. For ease of exposition, we introduce two dummy parameters a_0 and b_0, and set a_0 = b_0 = 0. For integers i and j with i ≤ j, we use [i : j] to denote the set of integers {i, i + 1, . . . , j}. We define the problem DRAP-NC(s, e) below to be DRAP-NC with respect to activities s, s + 1, . . . , e, for s, e ∈ [n].

    min_x   ∑_{i=s}^{e} f_i(x_i)                                              (2a)
    s.t.    ∑_{k=s}^{e} x_k = b_e − a_{s−1},                                  (2b)
            a_i − a_{s−1} ≤ ∑_{k=s}^{i} x_k ≤ b_i − b_{s−1},   ∀i ∈ [s : e],  (2c)
            0 ≤ x_i ≤ d_i,  x_i ∈ Z,                           ∀i ∈ [s : e],  (2d)

where we assume that a_{s−1} = b_{s−1} and a_e = b_e. Moreover, we define the problem DRAP(s, e) below to be a relaxation of DRAP-NC(s, e) obtained by dropping the nested constraints (2c).

    min_x   ∑_{i=s}^{e} f_i(x_i)                              (3a)
    s.t.    ∑_{k=s}^{e} x_k = b_e − a_{s−1},                  (3b)
            0 ≤ x_i ≤ d_i,  x_i ∈ Z,   ∀i ∈ [s : e].          (3c)

Our goal is to solve DRAP-NC(1, n). To solve DRAP-NC(s, e) for any s, e ∈ [n], we first solve the relaxation DRAP(s, e) to optimality. If the resulting optimal solution satisfies the nested constraints (2c), then we stop and output it as the optimal solution of DRAP-NC(s, e). Otherwise, we find the index K of the most violated nested constraint and fix the total resources consumed by the first K activities to be a_K or b_K, depending on which side of the nested constraint is violated. In particular, let x be an optimal solution of DRAP(s, e). If ∑_{k=s}^{K} x_k > b_K − b_{s−1}, then we set the constraint ∑_{k=s}^{K} x_k ≤ b_K − b_{s−1} to equality until the end of the algorithm; otherwise we have ∑_{k=s}^{K} x_k < a_K − a_{s−1} and set the constraint ∑_{k=s}^{K} x_k ≥ a_K − a_{s−1} to equality for the rest of the algorithm. With the above adjustment, we are able to divide DRAP-NC(s, e) into two subproblems, DRAP-NC(s, K) and DRAP-NC(K + 1, e), and solve each separately. Finally, we combine the optimal solutions of DRAP-NC(s, K) and DRAP-NC(K + 1, e) to obtain an optimal solution of DRAP-NC(s, e). The subproblems DRAP-NC(s, K) and DRAP-NC(K + 1, e) are solved recursively following the above procedure. The details of the algorithm are described in Algorithm 1.

Algorithm 1 DCA: A recursive algorithm for DRAP-NC(s, e)

 1: Input: Nested bounds a_i and b_i for i ∈ [s − 1 : e] with a_{s−1} = b_{s−1} and a_e = b_e; variable upper bound d_i and function oracle f_i for i ∈ [s : e].
 2: Output: An optimal solution (x_s, . . . , x_e) of DRAP-NC(s, e).
 3: function DRAP-NC(s, e)
 4:     if e = s then return x_s = a_s − a_{s−1}
 5:     end if
 6:     Solve DRAP(s, e) and obtain an optimal solution (x_s, . . . , x_e)
 7:     if (x_s, . . . , x_e) satisfies the nested lower and upper bound constraints then return (x_s, . . . , x_e)
 8:     end if
 9:     Find index K of the most violated nested constraint, with the tie-breaking rule of choosing the maximum index.
10:     if ∑_{k=s}^{K} x_k > b_K − a_{s−1} then a_K ← b_K    ▷ Update bounds for subproblems
11:     else b_K ← a_K
12:     end if
13:     (x_s, . . . , x_K) ← DRAP-NC(s, K)
14:     (x_{K+1}, . . . , x_e) ← DRAP-NC(K + 1, e)
15:     return (x_s, x_{s+1}, . . . , x_{e−1}, x_e)
16: end function
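As a concrete illustration of the recursion, the following is a compact Python sketch of Algorithm 1 (0-indexed, with our own variable names). For simplicity, the DRAP relaxation in Step 6 is solved by the basic one-unit greedy rather than the scaling algorithm used in the paper; it is a sketch, not the authors' implementation.

```python
import heapq

def dca(f, a, b, d):
    """Sketch of DCA (Algorithm 1), 0-indexed.  Solves DRAP-NC:
    min sum f[i](x[i]) s.t. a[i] <= x[0] + ... + x[i] <= b[i] for i < n-1,
    a[n-1] = b[n-1] = B encoding the total-resource equality, and
    0 <= x[i] <= d[i], x[i] integer.  Assumes a feasible instance."""
    a, b = list(a), list(b)       # copies: bounds are tightened in recursion

    def drap(s, e, R):
        # One-unit greedy for DRAP(s, e): spread R units by smallest marginal cost.
        x = [0] * (e - s + 1)
        heap = [(f[i](1) - f[i](0), i) for i in range(s, e + 1) if d[i] > 0]
        heapq.heapify(heap)
        for _ in range(R):
            _, i = heapq.heappop(heap)
            x[i - s] += 1
            if x[i - s] < d[i]:
                heapq.heappush(heap, (f[i](x[i - s] + 1) - f[i](x[i - s]), i))
        return x

    def rec(s, e):
        a_prev = a[s - 1] if s > 0 else 0   # a_{s-1} = b_{s-1}
        if s == e:
            return [a[e] - a_prev]
        x = drap(s, e, b[e] - a_prev)
        # Locate the most violated nested constraint (largest index on ties).
        K, worst, pref_K, pre = -1, 0, 0, 0
        for i in range(s, e):
            pre += x[i - s]
            v = max((a[i] - a_prev) - pre, pre - (b[i] - a_prev))
            if v > 0 and v >= worst:
                K, worst, pref_K = i, v, pre
        if K == -1:
            return x                        # relaxed optimum is feasible, hence optimal
        if pref_K > b[K] - a_prev:          # upper side violated: pin prefix at b_K
            a[K] = b[K]
        else:                               # lower side violated: pin prefix at a_K
            b[K] = a[K]
        return rec(s, K) + rec(K + 1, e)

    return rec(0, len(f) - 1)
```

Pinning a[K] = b[K] (or b[K] = a[K]) makes index K a valid "split point": each subproblem then has equal end bounds, exactly the a_{s−1} = b_{s−1}, a_e = b_e convention of DRAP-NC(s, e).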

We state our main result below.

Theorem 1. Algorithm 1 solves DRAP-NC correctly.

The correctness of Algorithm 1 follows directly from Proposition 1 below.

Proposition 1. Let (x_s, x_{s+1}, . . . , x_e) be an optimal solution of DRAP(s, e). Suppose (x_s, x_{s+1}, . . . , x_e) violates at least one nested bound constraint of DRAP-NC(s, e), and let K be the index of the most violated nested constraint. Then there exists an optimal solution (x̄_s, x̄_{s+1}, . . . , x̄_e) of DRAP-NC(s, e) such that ∑_{k=s}^{K} x̄_k = a_K − a_{s−1} if ∑_{i=s}^{K} x_i < a_K − a_{s−1}, or ∑_{k=s}^{K} x̄_k = b_K − a_{s−1} if ∑_{i=s}^{K} x_i > b_K − a_{s−1}.

We postpone the proof of Proposition 1 to Section 4, and first elaborate on Step 6 of Algorithm 1, i.e., how to solve the relaxation problem DRAP(s, e) to optimality.

3.1 Step 6 of Algorithm 1

At Step 6 of Algorithm 1, we need to solve a problem DRAP(s, e), which is an instance of DRAP with e − s + 1 variables. As mentioned in Section 2, the current fastest algorithm for DRAP runs in O(n log(B/n)) time [8, 17]. Here we use Hochbaum's algorithm [17] to solve DRAP(s, e), since it is much easier to implement than that of [8]. Hochbaum's algorithm, which was called Algorithm GAP in [17], was developed for a more general resource allocation problem with submodular constraints. It makes multiple calls to a greedy subroutine (called greedy(s) in [17]) that returns a solution that uses as many resources as possible. To give a complete description of our algorithm, we present Hochbaum's algorithm specialized to DRAP in our own notation. The details of solving DRAP(s, e) are described in Algorithm 2 (corresponding to Algorithm GAP in [17]) and Algorithm 3 (corresponding to greedy(s) in [17]). We use e, 0, and e_i to denote the all-ones column vector, the all-zeros column vector, and the unit vector whose i-th entry is one, respectively.

Algorithm 2 An algorithm for DRAP(s, e) [17]

 1: Input: An instance of DRAP(s, e) with variable upper bound d_i and value oracle f_i for i ∈ [s : e], and the total amount of resources B = b_e − a_{s−1}.
 2: Output: An optimal solution x* of DRAP(s, e).
 3: function DRAP(s, e)
 4:     Initialization: δ ← ⌈B/(2n)⌉, x ← 0.
 5:     while δ > 1 do
 6:         x ← Greedy(δ, x, B)
 7:         x ← max{x − δe, 0}, δ ← ⌈δ/2⌉
 8:     end while
 9:     x* ← Greedy(1, x, B)
10:     return x*
11: end function

Algorithm 3 The greedy subroutine Greedy(δ, y, B) [17]

 1: Input: Integer step size δ; a vector y that satisfies the variable bound constraints of DRAP(s, e), i.e., y_i ∈ [0, d_i] for i ∈ [s : e]; the total amount of available resources B = b_e − a_{s−1}; and the value oracle f_i for i ∈ [s : e].
 2: Output: A solution x with the property that x_i ∈ [0, d_i] for i ∈ [s : e] and ∑_{i=s}^{e} x_i ≥ ∑_{i=s}^{e} y_i.
 3: function Greedy(δ, y, B)
 4:     Initialization: x ← y, R ← B − y⊤e, E ← {s, s + 1, . . . , e}.
 5:     while R > 0 and E ≠ ∅ do
 6:         Find i such that ∆_i(x_i) = min_{j∈E} ∆_j(x_j), where ∆_j(x_j) = f_j(x_j + 1) − f_j(x_j).    ▷ Find the index with the smallest marginal cost.
 7:         if x_i + 1 > d_i then E ← E \ {i}    ▷ Check if the variable bound constraint is violated.
 8:         else if x_i + δ > d_i or δ > R then
 9:             δ′ ← min{R, d_i − x_i}, x_i ← x_i + δ′, R ← R − δ′
10:             E ← E \ {i}
11:         else x_i ← x_i + δ, R ← R − δ
12:         end if
13:     end while
14:     return x
15: end function
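A Python rendering of Algorithms 2 and 3 specialized to DRAP may help. The heap of marginal costs is our implementation detail (the pseudocode does not prescribe one), and the function name is hypothetical; this is a sketch, not the paper's code.

```python
import heapq

def drap_scaled(f, d, B):
    """Sketch of Hochbaum's scaling algorithm (Algorithms 2-3) for DRAP:
    min sum f_i(x_i) s.t. sum x_i = B, 0 <= x_i <= d_i, x_i integer.
    Each greedy pass adds resources in steps of delta; after the pass the
    last (possibly erroneous) delta-increment of every variable is retracted
    and delta is halved.  Assumes sum(d) >= B (feasibility)."""
    n = len(f)

    def greedy(delta, x):
        R = B - sum(x)
        # heap entries: (marginal cost of the next single unit, index)
        E = [(f[i](x[i] + 1) - f[i](x[i]), i) for i in range(n) if x[i] < d[i]]
        heapq.heapify(E)
        while R > 0 and E:
            _, i = heapq.heappop(E)
            if x[i] + delta > d[i] or delta > R:
                step = min(R, d[i] - x[i])   # partial step, then retire i
                x[i] += step
                R -= step
            else:
                x[i] += delta
                R -= delta
                if x[i] < d[i]:              # re-insert with updated marginal
                    heapq.heappush(E, (f[i](x[i] + 1) - f[i](x[i]), i))
        return x

    delta = -(-B // (2 * n))                 # ceil(B / 2n)
    x = [0] * n
    while delta > 1:
        x = greedy(delta, x)
        x = [max(xi - delta, 0) for xi in x]  # undo possibly wrong last steps
        delta = -(-delta // 2)                # ceil(delta / 2)
    return greedy(1, x)
```

The retraction step is what the proximity theorem licenses: after a δ-pass, only the last δ-increment of each variable can be wrong, so removing it leaves a valid lower bound for the next, finer pass.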

4 Proof of Proposition 1

In this section, we provide a complete proof of Proposition 1. The proof relies on the connection between the optimal solution of the discrete resource allocation problem and the optimal solution of its continuous relaxation through piecewise linear functions. This connection was also used in proving the correctness of MDA in [33]. Given a convex function f(x), we define a piecewise linear function as follows:

    f^PL(x) = f(⌊x⌋) + (x − ⌊x⌋) × (f(⌈x⌉) − f(⌊x⌋)).    (4)
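Definition (4) is straightforward to realize in code. The sketch below (hypothetical helper name) builds f^PL from a value oracle f; by (4), f^PL interpolates f linearly between consecutive integers and agrees with f at integer points.

```python
import math

def piecewise_linear(f):
    """Return f^PL as in equation (4): linear interpolation of f between
    consecutive integers.  f^PL(x) = f(x) whenever x is an integer, and
    f^PL is convex whenever f is convex on the integers."""
    def fpl(x):
        lo, hi = math.floor(x), math.ceil(x)
        return f(lo) + (x - lo) * (f(hi) - f(lo))
    return fpl
```

For example, with f(x) = x², the extension gives f^PL(1.5) = 1 + 0.5 × (4 − 1) = 2.5, while f^PL(2) = f(2) = 4.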


It can be easily verified that f^PL(x) has the same values as f(x) at integer points. Consider any instance of DRAP-NC(s, e). We define DRAP-NC^PL(s, e) to be the problem obtained by replacing the function f_i in DRAP-NC(s, e) with the function f_i^PL for each i ∈ [s : e]. Similarly, we define RAP-NC^PL(s, e), DRAP^PL(s, e), and RAP^PL(s, e) to be the continuous relaxation of DRAP-NC^PL(s, e), DRAP-NC^PL(s, e) without the nested constraints, and the continuous relaxation of DRAP-NC^PL(s, e) without the nested constraints, respectively. We first introduce several propositions that will be needed later in the proof.

Proposition 2. Any optimal solution of DRAP-NC(s, e) is an optimal solution of DRAP-NC^PL(s, e) and vice versa. Any optimal solution of DRAP(s, e) is an optimal solution of DRAP^PL(s, e) and vice versa.

Proof. The first statement is easy to show. Since f_i(x_i) and f_i^PL(x_i) coincide on the integer domain for each i, each feasible solution of DRAP-NC(s, e) is a feasible solution of DRAP-NC^PL(s, e) with the same objective value, and vice versa. Therefore, any optimal solution of DRAP-NC(s, e) is an optimal solution of DRAP-NC^PL(s, e) and vice versa. The second statement follows directly from the fact that DRAP(s, e) and DRAP^PL(s, e) are special cases of DRAP-NC(s, e) and DRAP-NC^PL(s, e), respectively, obtained by setting a_i = −∞ and b_i = ∞ for each i.

Proposition 3. [33, Theorem 4] Any optimal solution of DRAP-NC^PL(s, e) is also an optimal solution of its continuous relaxation RAP-NC^PL(s, e).

Proposition 4. Any optimal solution of DRAP(s, e) is an optimal solution of RAP^PL(s, e) and vice versa.

Proof. By Proposition 2, any optimal solution of DRAP(s, e) is an optimal solution of DRAP^PL(s, e) and vice versa. By Proposition 3, any optimal solution of DRAP^PL(s, e) is an optimal solution of RAP^PL(s, e) and vice versa, by setting a_i = −∞ and b_i = ∞ for each i. The result then follows directly.

With Proposition 4, we are now ready to prove Proposition 1.

Proof of Proposition 1. Let x̂ denote the optimal solution of the relaxation DRAP(s, e). We will prove that there exists an optimal solution (x_s, . . . , x_e) of DRAP-NC(s, e) satisfying ∑_{k=s}^{K} x_k = b_K − a_{s−1} if ∑_{k=s}^{K} x̂_k > b_K − a_{s−1}. The result for the other case, in which ∑_{k=s}^{K} x̂_k < a_K − a_{s−1}, can be proven analogously.

We prove the result by contradiction. Suppose that there does not exist any optimal solution of DRAP-NC(s, e) such that the constraint ∑_{k=s}^{K} x_k ≤ b_K − a_{s−1} is satisfied at equality. Since any instance of DRAP-NC(s, e) has only a finite number of optimal solutions, we select the optimal solution that maximizes ∑_{k=s}^{K} x_k among all optimal solutions. Call this solution x̄ = (x̄_s, . . . , x̄_e); by assumption we have ∑_{k=s}^{K} x̄_k < b_K − a_{s−1}. We will show that we can create another optimal solution x̃ of DRAP-NC(s, e) such that ∑_{k=s}^{K} x̃_k > ∑_{k=s}^{K} x̄_k, contradicting the choice of x̄.

The construction works as follows. Let I = max{i | ∑_{k=s}^{i} x̄_k = b_i − a_{s−1}, s − 1 ≤ i < K} and J = min{i | ∑_{k=s}^{i} x̄_k = b_i − a_{s−1}, K < i ≤ e}. We claim that there exist integers p ∈ [I + 1 : K] and q ∈ [K + 1 : J] such that x̂_p > x̄_p and x̂_q < x̄_q. To see this, since the index of the most violated nested constraint under x̂ is K, we have ∑_{k=s}^{K} x̂_k − b_K ≥ ∑_{k=s}^{I} x̂_k − b_I, i.e., ∑_{k=s}^{K} x̂_k − ∑_{k=s}^{I} x̂_k ≥ b_K − b_I. Meanwhile, ∑_{k=s}^{K} x̄_k < b_K − a_{s−1} and ∑_{k=s}^{I} x̄_k = b_I − a_{s−1}. Thus

∑_{k=s}^{K} x̂_k − ∑_{k=s}^{I} x̂_k ≥ b_K − b_I > (∑_{k=s}^{K} x̄_k + a_{s−1}) − b_I = ∑_{k=s}^{K} x̄_k − ∑_{k=s}^{I} x̄_k.

Therefore ∑_{k=I+1}^{K} x̂_k > ∑_{k=I+1}^{K} x̄_k, so there must exist some p ∈ [I + 1 : K] such that x̂_p > x̄_p. Similarly, we can find some q ∈ [K + 1 : J] such that x̂_q < x̄_q.

We claim that for such p and q, the following inequalities hold:

sup ∂f_p^PL(x̄_p) ≤ inf ∂f_p^PL(x̂_p) ≤ sup ∂f_q^PL(x̂_q) ≤ inf ∂f_q^PL(x̄_q). (5)

To see this, since x̂ is an optimal solution of DRAP(s, e), by Proposition 4 it is also an optimal solution of the continuous optimization problem RAP^PL(s, e). In addition, x̂_p > x̄_p ≥ 0 and x̂_q < x̄_q ≤ d_q. Therefore, the solution x̂ satisfies the Karush-Kuhn-Tucker conditions for RAP^PL(s, e) (see Chapters 28-30 of [28] for Karush-Kuhn-Tucker conditions with subdifferentials for optimization problems involving non-smooth functions). In particular, there must exist some dual variable λ such that

inf ∂f_p^PL(x̂_p) ≤ λ ≤ sup ∂f_q^PL(x̂_q),



where ∂f_i^PL(x) is the subdifferential of f_i^PL at x for i ∈ [s : e]. By the monotonicity of the subdifferential of convex functions, we obtain (5).

Now we create an integer vector x̃ = (x̃_s, . . . , x̃_e) such that x̃_p = x̄_p + 1, x̃_q = x̄_q − 1, and x̃_i = x̄_i otherwise. We claim that x̃ is also an optimal solution of DRAP-NC(s, e). To check the feasibility of x̃, note that x̄_p < x̂_p ≤ d_p and x̄_p and x̂_p are integers, so x̃_p ≤ d_p. Similarly, we can show that x̃_q ≥ 0. To check that x̃ satisfies all nested constraints, note that only the prefix sums for j ∈ [p : q − 1] change, and by the choice of p and q, for each j ∈ [p : q − 1], ∑_{k=s}^{j} x̄_k < b_j − a_{s−1}. Since all parameters are integral, for each j ∈ [p : q − 1], ∑_{k=s}^{j} x̃_k ≤ b_j − a_{s−1}. To check the optimality of x̃, note that f_p^PL and f_q^PL are both piecewise linear functions with breakpoints only at integer points. Then from (5) we have f_p^PL(x̃_p) + f_q^PL(x̃_q) ≤ f_p^PL(x̄_p) + f_q^PL(x̄_q), so x̃ is optimal. But ∑_{k=s}^{K} x̃_k = ∑_{k=s}^{K} x̄_k + 1, contradicting the choice of x̄ as the optimal solution maximizing ∑_{k=s}^{K} x_k.

5 Time complexity of Algorithm 1

The upper bound on the running time of the algorithm is shown by the proposition below.

Proposition 5. The time complexity of Algorithm 1 is O(n² log(B/n)).

Proof. By Proposition 1, Algorithm 1 correctly solves DRAP-NC. For each subproblem DRAP-NC(s, e), it takes O((e − s + 1) log(B/(e − s + 1))) time to solve the relaxation DRAP(s, e) by Hochbaum's algorithm. In addition, it takes O(e − s + 1) operations to find the most violated nested constraint given an optimal solution to DRAP(s, e). Since each recursion of Algorithm 1 fixes one nested constraint at equality, we have at most n − 1 nested constraints to fix at equality. The depth of the recursion tree is thus bounded by n. At the same level of the recursion tree, the total size of the subproblems is n. Therefore, the complexity of Algorithm 1 is O(n² log(B/n)).
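To make the recursion analyzed above concrete, here is a schematic Python sketch of the divide-and-conquer step. All names are illustrative, the prefix bounds lo/up play the role of a_i − a_{s−1} and b_i − a_{s−1}, and a naive unit-by-unit greedy stands in for Hochbaum's algorithm (so this sketch does not achieve the stated time bound; it assumes 0 ≤ T ≤ ∑ d_i):

```python
def solve_drap(f, d, T):
    """Brute-force stand-in for the DRAP subroutine: allocate T units
    one at a time by smallest marginal cost (valid for convex f)."""
    x = [0] * len(d)
    for _ in range(T):
        i = min((j for j in range(len(d)) if x[j] < d[j]),
                key=lambda j: f[j](x[j] + 1) - f[j](x[j]))
        x[i] += 1
    return x

def dca(f, d, T, lo, up):
    """Relax the nested constraints, then split on the most violated
    prefix bound: lo[i] <= x_0 + ... + x_i <= up[i] for i < n - 1,
    with the total amount fixed to T."""
    n = len(d)
    x = solve_drap(f, d, T)
    prefix, viol, k, tight = 0, 0, -1, None
    for i in range(n - 1):
        prefix += x[i]
        if prefix - up[i] > viol:
            viol, k, tight = prefix - up[i], i, up[i]
        if lo[i] - prefix > viol:
            viol, k, tight = lo[i] - prefix, i, lo[i]
    if viol == 0:
        return x  # relaxation solution is feasible, hence optimal
    # Fix the most violated constraint at equality and recurse.
    left = dca(f[:k + 1], d[:k + 1], tight, lo[:k], up[:k])
    right = dca(f[k + 1:], d[k + 1:], T - tight,
                [v - tight for v in lo[k + 1:]],
                [v - tight for v in up[k + 1:]])
    return left + right
```

Each recursive call fixes one nested constraint at equality, mirroring the recursion-tree argument in the proof above.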

Proposition 5 gives an upper bound on the worst-case running time of our algorithm. The following instance shows that this upper bound is indeed tight. The instance is a minor modification of an instance proposed by Thibaut Vidal [32] for an earlier version of our algorithm, in which we did not assume a_i < b_i for each i. Consider the following instance of DRAP-NC:

min   ∑_{k=1}^{n} x_k²                                        (6a)
s.t.  ∑_{k=1}^{n} x_k = (−1)^n n,                              (6b)
      (−1)^i i ≤ ∑_{k=1}^{i} x_k ≤ (−1)^i i + 1,   ∀i ∈ [1 : n − 1],   (6c)
      −2n ≤ x_i ≤ 2n,  x_i ∈ Z,   ∀i ∈ [n].                    (6d)

This instance can be transformed into Formulation (1) through variable substitution. The instance is feasible since it admits the solution (−1, 3, −5, · · · , (−1)^n(2n − 1)).
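This feasibility claim is easy to verify mechanically; the following sketch (illustrative code, not from the paper) checks constraints (6b)-(6d) for the stated solution:

```python
def feasible_for_instance_6(n):
    """Check that x_k = (-1)^k (2k - 1) satisfies (6b)-(6d):
    its prefix sums equal (-1)^i i, so every lower bound in (6c)
    is tight."""
    x = [(-1) ** k * (2 * k - 1) for k in range(1, n + 1)]
    if sum(x) != (-1) ** n * n:                              # (6b)
        return False
    prefix = 0
    for i in range(1, n):
        prefix += x[i - 1]
        if not ((-1) ** i * i <= prefix <= (-1) ** i * i + 1):  # (6c)
            return False
    return all(-2 * n <= xi <= 2 * n for xi in x)            # (6d)
```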

We claim that DCA needs Ω(n) recursions to terminate when solving this instance. To see this, in the first iteration of DCA, the nested constraints (6c) are relaxed and we solve the problem DRAP(1, n) below (assuming n to be even here):

min   ∑_{k=1}^{n} x_k²                            (7a)
s.t.  ∑_{k=1}^{n} x_k = n,                         (7b)
      −2n ≤ x_i ≤ 2n,  x_i ∈ Z,   ∀i ∈ [n].        (7c)



The optimal solution is x*_1 = . . . = x*_n = 1. The most violated nested constraint under this optimal solution is the (n − 1)-th upper-bound constraint ∑_{k=1}^{n−1} x_k ≤ −(n − 1) + 1, which has a violation of (n − 1) − [−(n − 1) + 1] = 2n − 3. Therefore, the constraint ∑_{k=1}^{n−1} x_k ≤ −(n − 1) + 1 = −n + 2 is set at equality for the rest of the algorithm, and DRAP-NC(1, n) is divided into DRAP-NC(1, n − 1) and DRAP-NC(n, n).
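The arithmetic above is easy to verify: at x = (1, . . . , 1), the i-th constraint in (6c) is violated by 2i − 1 for odd i and satisfied for even i, so (for even n) the maximum violation is attained at i = n − 1. A small illustrative check:

```python
def most_violated_at_all_ones(n):
    """Index and size of the largest violation of (6c) at the
    DRAP(1, n) optimum x = (1, ..., 1), for even n."""
    best_i, best_v = None, 0
    for i in range(1, n):
        prefix = i                           # sum of the first i ones
        lo, up = (-1) ** i * i, (-1) ** i * i + 1
        v = max(lo - prefix, prefix - up, 0)
        if v > best_v:
            best_i, best_v = i, v
    return best_i, best_v
```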

Since ∑_{k=1}^{n−1} x_k = −n + 2 and ∑_{k=1}^{n} x_k = n, the problem DRAP-NC(n, n) is as follows:

min   x_n²                                        (8a)
s.t.  x_n = 2n − 2,                                (8b)
      −2n ≤ x_n ≤ 2n,  x_n ∈ Z.                    (8c)

It has a trivial optimal solution x*_n = 2n − 2.

The problem DRAP-NC(1, n − 1) is reduced to the following:

min   ∑_{i=1}^{n−1} x_i²                                      (9a)
s.t.  ∑_{k=1}^{n−1} x_k = −n + 2,                              (9b)
      (−1)^i i ≤ ∑_{k=1}^{i} x_k ≤ (−1)^i i + 1,   ∀i ∈ [1 : n − 2],   (9c)
      −2n ≤ x_i ≤ 2n,  x_i ∈ Z,   ∀i ∈ [1 : n − 1].            (9d)

We solve DRAP-NC(1, n − 1) similarly by first relaxing the nested constraints. An optimal solution for its relaxation DRAP(1, n − 1) is x*_1 = . . . = x*_{n−2} = −1 and x*_{n−1} = 0. The most violated nested constraint in (9c) under this optimal solution is the (n − 2)-th lower-bound constraint n − 2 ≤ ∑_{k=1}^{n−2} x_k, which has a violation of (n − 2) − [−(n − 2)] = 2n − 4. Therefore, the constraint n − 2 ≤ ∑_{k=1}^{n−2} x_k is set at equality for the rest of the algorithm, and the problem DRAP-NC(1, n − 1) is divided into DRAP-NC(1, n − 2) and DRAP-NC(n − 1, n − 1).

The problem DRAP-NC(n − 1, n − 1) has a trivial optimal solution. The problem DRAP-NC(1, n − 2) is reduced to the following:

min   ∑_{i=1}^{n−2} x_i²                                      (10a)
s.t.  ∑_{k=1}^{n−2} x_k = n − 2,                               (10b)
      (−1)^i i ≤ ∑_{k=1}^{i} x_k ≤ (−1)^i i + 1,   ∀i ∈ [1 : n − 3],   (10c)
      −2n ≤ x_i ≤ 2n,  x_i ∈ Z,   ∀i ∈ [1 : n − 2].            (10d)

DRAP-NC(1, n − 2) has the same structure and coefficients as DRAP-NC(1, n) with two fewer variables and nested bound constraints. It can be solved recursively as discussed before. Thus DCA only fixes one variable at each level of the recursion tree, and the depth of the recursion tree will be n for this instance. The overall running time for this instance will be Ω(n² log(B/n)) with B = 2n² + (−1)^n n. Combining this fact with Proposition 5, we have the following result.

Theorem 2. The time complexity of Algorithm 1 is Θ(n² log(B/n)).

We would like to point out that the analysis above is based on the worst-case scenario, which means the estimate that the running time of DCA grows proportionally with n² log(B/n) may be too conservative. As will be shown by the analysis in the rest of this section and the computational experiments in Section 6, our algorithm can be much more efficient than this asymptotic upper bound suggests.



5.1 An example beyond the worst-case analysis

The previous section shows that the time complexity of DCA is Θ(n² log(B/n)), which is worse than the time complexity O(n log n log B) of MDA [33]. This comparison, however, does not tell the full story of the efficiency of the two algorithms. The running time of DCA is much more dependent on the instance, similar to the way the actual running time of a branch-and-bound algorithm depends heavily on the particular integer program it solves. DCA can be very efficient for some instances, as the following instance illustrates.

min   ∑_{k=1}^{n+1} x_k² − M x_{n+1}                           (11a)
s.t.  ∑_{k=1}^{n+1} x_k = n,                                   (11b)
      i ≤ ∑_{k=1}^{i} x_k ≤ i + 1,   ∀i ∈ [n],                 (11c)
      0 ≤ x_i ≤ 2n,  x_i ∈ Z,   ∀i ∈ [1 : n + 1].              (11d)

The constant M in the objective function is sufficiently large such that the optimal solution of the problem with the objective (11a) and constraints (11b) and (11d) is x*_1 = . . . = x*_n = 0 and x*_{n+1} = n. This can be achieved, for example, by setting M = 100n². To solve this instance, DCA needs only two recursions and solves three DRAP subproblems; on the other hand, MDA needs 1 + ⌈log n⌉ recursions and solves Θ(n) DRAP subproblems.

To see this, slightly abusing notation, we call problem (11) DRAP-NC(1, n + 1). Observe that it has a unique optimal solution x*_1 = . . . = x*_n = 1 and x*_{n+1} = 0. When applying DCA to this instance, we first solve its relaxation without the nested constraints (11c), which we call DRAP(1, n + 1). The optimal solution is x*_1 = . . . = x*_n = 0 and x*_{n+1} = n. The nested constraint n ≤ ∑_{k=1}^{n} x_k has the largest violation under this optimal solution. Then we set ∑_{k=1}^{n} x_k = n for the rest of the algorithm, and divide the problem into two subproblems, DRAP-NC(1, n) and DRAP-NC(n + 1, n + 1), described as follows.

DRAP-NC(1, n):

min   ∑_{k=1}^{n} x_k²                                        (12a)
s.t.  ∑_{k=1}^{n} x_k = n,                                     (12b)
      i ≤ ∑_{k=1}^{i} x_k ≤ i + 1,   ∀i ∈ [1 : n − 1],         (12c)
      0 ≤ x_i ≤ 2n,  x_i ∈ Z,   ∀i ∈ [n].                      (12d)

DRAP-NC(n + 1, n + 1):

min   x_{n+1}² − M x_{n+1}                        (13a)
s.t.  x_{n+1} = 0,                                 (13b)
      0 ≤ x_{n+1} ≤ 2n,  x_{n+1} ∈ Z.              (13c)

The subproblem DRAP-NC(n + 1, n + 1) has a trivial optimal solution x*_{n+1} = 0. On the other hand, to solve the subproblem DRAP-NC(1, n), DCA first solves its relaxation without the nested constraints, DRAP(1, n). Observe that DRAP(1, n) has a unique optimal solution x*_1 = . . . = x*_n = 1, which does not violate any nested constraint. Therefore, we obtain the optimal solution to the original problem DRAP-NC(1, n + 1) by solving three DRAP subproblems in two recursions.
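For small n, the optimal solution claimed above can be cross-checked by brute-force enumeration of instance (11); the helper below is purely illustrative:

```python
from itertools import product

def brute_force_11(n, M):
    """Enumerate integer solutions of instance (11) for small n and
    return the minimizer of sum x_k^2 - M * x_{n+1}."""
    best, best_x = None, None
    for x in product(range(0, 2 * n + 1), repeat=n + 1):
        if sum(x) != n:                      # (11b)
            continue
        prefix, ok = 0, True
        for i in range(n):                   # (11c): i+1 <= prefix <= i+2
            prefix += x[i]
            if not (i + 1 <= prefix <= i + 2):
                ok = False
                break
        if not ok:
            continue
        val = sum(t * t for t in x) - M * x[-1]
        if best is None or val < best:
            best, best_x = val, x
    return best_x
```

For n = 3 and M = 100n² = 900, the enumeration confirms the unique optimum (1, 1, 1, 0).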



In comparison, MDA needs to solve Θ(n) DRAP subproblems over 1 + ⌈log n⌉ recursions to solve DRAP-NC(1, n + 1). To see this, at each recursion, MDA breaks a DRAP-NC instance into two subproblems of equal size, and each subproblem is further divided recursively until the subproblem has exactly one variable, which is then solved trivially. Using optimal solutions of the two subproblems of smaller sizes, MDA generates stronger variable bounds for DRAP-NC subproblems of larger sizes (one level higher in the recursion tree). In [33], Vidal et al. showed that the nested bound constraints become redundant with the new variable bounds and each DRAP-NC subproblem can be reduced to solving four DRAP problems. Since MDA always divides the problem evenly, the depth of the recursion tree for an instance with n variables will be 1 + ⌈log n⌉ and the total number of calls to the DRAP subroutine will be Θ(n).

We hope that the instance above provides some insight into why the worst-case analysis sometimes overestimates the running time of DCA: DCA only divides the problem when necessary, i.e., when the intermediate solution violates some nested bound constraint. The number of recursions needed may be significantly smaller than the Θ(n) of the worst-case scenario.

6 Numerical experiments

In this section, we evaluate the performance of DCA through two sets of experiments, using DRAP-NC instances with different types of convex objectives. The first experiment is a sanity test on the correctness of DCA. We apply DCA to a set of DRAP-NC instances with linear objectives, and compare its output with the output of the state-of-the-art optimization solver Gurobi [14]. For all the test instances we generate, the two approaches always produce the same optimal solutions. In the second experiment, we apply DCA to solve DRAP-NC instances with three classes of nonlinear objectives motivated by applications such as project crashing [7] and vessel fuel minimization [29]. We compare the results of DCA with those of MDA. Note that in [33], MDA was only implemented for continuous RAP-NC instances, and bisection search was used to solve the RAP relaxation. For comparison with DCA, we implement MDA on our own, replacing the bisection search for the RAP relaxation with Hochbaum's scaling-based algorithm for the DRAP subproblem. All programs are coded in Java and the experiments are conducted on a desktop computer with an i7-8700k CPU, 32 gigabytes of memory, an Ubuntu/Linux operating system, and Gurobi 8.0.1. The time limit is set to 1200 seconds for each instance. To accurately report the solution time for instances that are solved in less than one millisecond, we solve each such instance 100 times and report the average running time.

6.1 DRAP-NC with linear objectives

In the first experiment, we set the function f_i(x) in (1a) to be f_i(x) = p_i x, where p_i is drawn from a uniform distribution over [−1, 1]. The parameters in the constraints (1b) to (1d) are generated in the following way.

• The variable upper bounds d_i are drawn from a discrete uniform distribution over the set {1, 2, · · · , V_b}, where V_b is a parameter we can change.

• The bounds a_i and b_i in the nested constraints are generated following a procedure similar to the one used to generate RAP-NC instances in [33], in order to guarantee the feasibility of an instance. In particular, we first generate two sequences of integers {v_i}_{i=0}^{n} and {w_i}_{i=0}^{n} as follows:

v_0 = w_0 = 0,   v_i = v_{i−1} + X_i^v,   w_i = w_{i−1} + X_i^w,

where X_i^v and X_i^w are drawn independently from a discrete uniform distribution over [0 : d_i]. Then we set a_i = min{v_i, w_i} and b_i = max{v_i, w_i} for i ∈ [n]. Finally, we set B = max{a_n, b_n}.

We choose n = 2^i × 100 for i ∈ [5 : 15] and V_b ∈ {10, 100}. For each (n, V_b) pair, we generate 10 instances, so there are 11 × 2 × 10 = 220 test instances for the first experiment.
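The generation procedure above can be summarized in a short sketch (illustrative only; the actual experiments were coded in Java, and the function name and seed here are our own):

```python
import random

def generate_instance(n, Vb, seed=0):
    """Generate a feasible DRAP-NC instance following the procedure above:
    d_i uniform on {1, ..., Vb}; a_i, b_i from two random nondecreasing
    integer sequences; B set to the last upper bound."""
    rng = random.Random(seed)
    d = [rng.randint(1, Vb) for _ in range(n)]
    v = w = 0
    a, b = [], []
    for i in range(n):
        v += rng.randint(0, d[i])   # X^v_i drawn from [0 : d_i]
        w += rng.randint(0, d[i])   # X^w_i drawn from [0 : d_i]
        a.append(min(v, w))
        b.append(max(v, w))
    B = max(a[-1], b[-1])
    p = [rng.uniform(-1, 1) for _ in range(n)]  # linear cost coefficients
    return p, d, a, b, B
```

Since v and w are nondecreasing, the resulting a and b are nondecreasing with a_i ≤ b_i, which is what makes the instance feasible by construction.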

We apply DCA and Gurobi to solve each instance, and compare their optimal solutions. Normally Gurobi would have a difficult time solving mixed-integer linear programs with millions of integer variables, but in this case we can use Gurobi's linear programming (LP) solver. The reason is that the coefficient matrix in



the left-hand side of the constraints of DRAP-NC is totally unimodular, so DRAP-NC with a linear objective can be solved as a linear program by ignoring the integrality constraints (1e). This greatly reduces the solution time of Gurobi. We also notice that the following formulation trick helped further reduce the solution time of Gurobi's LP solver significantly. By introducing new variables y_i = ∑_{k=1}^{i} x_k − a_i for i ∈ [n − 1], we can reformulate DRAP-NC as follows:

min   ∑_{i=1}^{n} f_i(x_i)
s.t.  x_n + y_{n−1} = B − a_{n−1},
      x_1 − y_1 = a_1,
      x_i + y_{i−1} − y_i = a_i − a_{i−1},   ∀i ∈ [2 : n − 1],
      0 ≤ y_i ≤ b_i − a_i,   ∀i ∈ [n − 1],
      0 ≤ x_i ≤ d_i,   ∀i ∈ [n].

Such a formulation has a sparser coefficient matrix than that of the original formulation (1). We observe that Gurobi is on average thirty to one hundred times faster in solving the new formulation than the original formulation.
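The substitution behind this reformulation can be sanity-checked numerically. The illustrative sketch below verifies that setting y_i = ∑_{k=1}^{i} x_k − a_i turns a solution that is feasible for the nested form into one that satisfies the sparse constraint system above:

```python
def reformulate_check(x, a, b, d, B):
    """Given x feasible for the nested constraints a_i <= sum(x[:i+1]) <= b_i
    with total B, check that y_i = sum(x[:i+1]) - a_i satisfies the sparse
    reformulation (equality rows plus 0 <= y_i <= b_i - a_i)."""
    n = len(x)
    y = [sum(x[:i + 1]) - a[i] for i in range(n - 1)]
    ok = (x[-1] + y[-1] == B - a[n - 2])       # x_n + y_{n-1} = B - a_{n-1}
    ok &= (x[0] - y[0] == a[0])                # x_1 - y_1 = a_1
    for i in range(1, n - 1):                  # x_i + y_{i-1} - y_i = a_i - a_{i-1}
        ok &= (x[i] + y[i - 1] - y[i] == a[i] - a[i - 1])
    ok &= all(0 <= y[i] <= b[i] - a[i] for i in range(n - 1))
    ok &= all(0 <= x[i] <= d[i] for i in range(n))
    return ok
```

The equality rows hold for any x by construction; the bounds 0 ≤ y_i ≤ b_i − a_i hold exactly when x satisfies the original nested constraints, which is what makes the two formulations equivalent.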

Both DCA and Gurobi solve the 220 test instances very efficiently, in less than a minute for each instance. The optimal solutions obtained by the two approaches are exactly the same for each instance. This validates the correctness of DCA. We also summarize the running time of both approaches in Table 1 and Figure 1. The running time is averaged over the 10 instances for each (n, V_b) pair.

[Figure omitted: two panels, "Linear Objectives with VarBound = 10" and "Linear Objectives with VarBound = 100", plotting solution time (in seconds) against instance size.]

Figure 1: Solution time of DCA and Gurobi for instances with linear objectives.

n          Vb    DCA Time (s)   Gurobi Time (s)      n          Vb     DCA Time (s)   Gurobi Time (s)
3,200      10    0.003          0.012                3,200      100    0.003          0.012
6,400      10    0.006          0.027                6,400      100    0.006          0.026
12,800     10    0.013          0.069                12,800     100    0.014          0.069
25,600     10    0.034          0.157                25,600     100    0.032          0.155
51,200     10    0.065          0.350                51,200     100    0.061          0.348
102,400    10    0.136          0.762                102,400    100    0.132          0.762
204,800    10    0.245          1.629                204,800    100    0.273          1.697
409,600    10    0.693          3.619                409,600    100    0.687          3.437
819,200    10    1.411          7.834                819,200    100    1.877          7.994
1,638,400  10    3.318          16.232               1,638,400  100    3.428          16.695
3,276,800  10    5.313          35.591               3,276,800  100    8.005          36.572

Table 1: Solution statistics of DCA and Gurobi for instances with linear objectives.

It can be seen that the LP solver of Gurobi is able to solve instances with more than three million variables in less than one minute, but DCA is on average 5 times faster. From Figure 1, we can see that the running time of DCA grows much more slowly than that of Gurobi with the problem size n.

6.2 DRAP-NC with nonlinear objectives

In the following, we test the performance of DCA on DRAP-NC instances with three classes of convex nonlinear objectives used in the computational experiments for MDA in [33]: [F], [CRASH], and [FUEL].

[F]:      f_i(x) = x⁴/4 + p_i x,             (15)
[CRASH]:  f_i(x) = k_i + p_i/x,              (16)
[FUEL]:   f_i(x) = p_i · c_i · (c_i/x)³.     (17)

The instance of the largest size has more than 6 million variables, so it is beyond the capability of any mixed-integer convex optimization solver. We report two solution statistics for both DCA and MDA on these instances: the running time and the total number of DRAP subproblems solved. We report the second statistic because both DCA and MDA spend most of the solution time on solving DRAP subproblems; the total number of DRAP subproblems solved can shed some light on the different running times of the two algorithms.

6.2.1 DRAP-NC with objectives [F]

In each instance, the function f_i(x) has the form f_i(x) = x⁴/4 + p_i x, where p_i is drawn from a uniform distribution over [−1, 1]. We fix the parameter V_b to be 100, and set n = 2^i × 100 for i ∈ [3 : 16]. The parameters in the constraints are generated in the same way as in the linear case. For each value of n, we generate 10 instances, so there are 14 × 10 = 140 instances. The average running time as well as the average number of DRAP subproblems solved are reported in Table 2 and Figure 2.

Parameters   Time (s)                          Average number of DRAPs solved
n            DCA        MDA        Ratio       DCA         MDA           Ratio
800          0.01       0.02       52.38%      98.8        6,396         1.54%
1,600        0.02       0.04       39.47%      142.2       12,796        1.11%
3,200        0.03       0.09       36.67%      177.2       25,596        0.69%
6,400        0.07       0.20       36.04%      342.0       51,196        0.67%
12,800       0.16       0.45       34.68%      374.0       102,396       0.37%
25,600       0.38       0.99       37.92%      561.6       204,796       0.27%
51,200       1.22       2.49       48.98%      1310.2      409,596       0.32%
102,400      2.20       4.95       44.47%      1179.8      819,196       0.14%
204,800      5.69       11.16      51.00%      2598.2      1,638,396     0.16%
409,600      12.04      26.13      46.09%      2230.4      3,276,796     0.07%
819,200      32.20      62.63      51.41%      2762.6      6,553,596     0.04%
1,638,400    80.90      149.04     54.28%      4738.0      13,107,196    0.04%
3,276,800    148.14     363.45     40.76%      3731.2      26,214,396    0.01%
6,553,600    431.89     868.56     49.72%      9323.4      52,428,796    0.02%

Table 2: Solution statistics of DCA and MDA for instances with objectives [F].



[Figure omitted: two panels, "[F] Cost Objectives with VarBound = 100" (running time in seconds) and "Number of subproblems solved", each plotted against instance size.]

Figure 2: The growth of running time and the number of DRAP subproblems solved for instances with objectives [F].

It can be seen that the average running time of DCA is no more than 55% of the average running time of MDA for the same set of instances. As shown in Table 2, the ratio between the average running time of DCA and that of MDA does not seem to change much as n grows. This indicates that the running time of both algorithms grows at a similar rate with the instance size. This observation does not match the time complexity difference between the two algorithms, which suggests that the running time of DCA should grow much faster with the problem size n than that of MDA. As we discussed in the introduction, we hypothesize that this discrepancy is mainly caused by the worst-case nature of the complexity analysis of DCA. As shown by the proof of Proposition 5, the quadratic growth in running time only happens when we have to fix n − 1 nested bound constraints at equality. Then DCA will have n recursions and the number of DRAP subproblems will grow linearly with n. But the number of recursions needed for an instance may be significantly smaller than n. This is indeed observed in our experiments for all test instances. As Table 2 shows, the average number of DRAP subproblems solved by DCA grows much more slowly than n. For the largest n = 6,553,600, the number of DRAPs solved by DCA is almost negligible compared to that by MDA.

Finally, we observe that the number of DRAPs solved by MDA is the same for instances of the same size. This is due to the design of MDA, which recursively divides the problem into two subproblems of even size until each subproblem contains only one variable. This indicates that MDA solves many DRAP subproblems with a small number of variables. This is also the reason that, despite the much larger number of DRAPs solved by MDA, its overall solution time is comparable to that of DCA.

6.2.2 DRAP-NC with objectives [CRASH]

[CRASH] is a convex cost function that appears in the context of project crashing [7]. In each instance, the function f_i(x) has the form f_i(x) = k_i + p_i/x, where p_i and k_i are drawn independently from a uniform distribution over [0, 1]. We set n = 2^i × 100 for i ∈ [3 : 16] and V_b = 100. Other parameters are generated in the same way as in the instances with objectives [F]. We generate 10 instances for each value of n, and 140 instances in total. The average running time as well as the average number of DRAP subproblems solved are reported in Table 3 and Figure 3.



Parameters   Time (s)                          Average number of DRAPs solved
n            DCA        MDA        Ratio       DCA         MDA           Ratio
800          0.01       0.02       52.38%      110.2       6,396         1.72%
1,600        0.02       0.04       40.00%      130.8       12,796        1.02%
3,200        0.04       0.09       43.82%      256.6       25,596        1.00%
6,400        0.10       0.20       48.02%      361.4       51,196        0.71%
12,800       0.23       0.45       51.35%      654.4       102,396       0.64%
25,600       0.45       1.08       42.15%      636.6       204,796       0.31%
51,200       1.17       2.27       51.41%      1147.2      409,596       0.28%
102,400      2.42       5.23       46.31%      1010.8      819,196       0.12%
204,800      6.70       12.12      55.34%      1791.0      1,638,396     0.11%
409,600      16.76      27.38      61.21%      2930.6      3,276,796     0.09%
819,200      39.87      62.90      63.39%      3455.6      6,553,596     0.05%
1,638,400    86.28      146.51     58.89%      4758.8      13,107,196    0.04%
3,276,800    222.66     346.48     64.26%      9547.8      26,214,396    0.04%
6,553,600    470.37     846.71     55.55%      11013.2     52,428,796    0.02%

Table 3: Solution statistics of DCA and MDA for instances with objectives [CRASH].

[Figure omitted: two panels, "[CRASH] Cost Objectives with VarBound = 100" (running time in seconds) and "Number of subproblems solved", each plotted against instance size.]

Figure 3: The growth of running time and the number of DRAP subproblems solved for instances with objectives [CRASH].

The performance of DCA and MDA for the instances with objectives [CRASH] is similar to their performance for the instances with objectives [F]. The running time of both algorithms grows at a similar pace with the instance size, with the average running time of DCA no more than 65% of that of MDA for the same set of instances. The average number of DRAP instances solved by DCA grows much more slowly with n, which again implies that the worst-case running time is a very conservative estimate for this set of instances.



6.2.3 DRAP-NC with objectives [FUEL]

[FUEL] is a convex function used to measure fuel consumption in vessel speed optimization [29]. In each instance, the function f_i(x) has the form f_i(x) = p_i · c_i · (c_i/x)³, where p_i and c_i are drawn independently from a uniform distribution over [0, 1]. We set n = 2^i × 100 for i ∈ [3 : 16] and V_b = 100. Other parameters are generated in the same way as in the instances with objectives [F]. We generate 10 instances for each value of n, and 140 instances in total. The average running time as well as the average number of DRAP subproblems solved are reported in Table 4 and Figure 4.

Parameters   Time (s)                          Average number of DRAPs solved
n            DCA        MDA        Ratio       DCA         MDA           Ratio
800          0.01       0.02       54.17%      135.8       6,396         2.12%
1,600        0.02       0.04       45.45%      151.6       12,796        1.18%
3,200        0.04       0.10       40.20%      220.8       25,596        0.86%
6,400        0.10       0.23       44.00%      421.8       51,196        0.82%
12,800       0.23       0.51       46.25%      502.2       102,396       0.49%
25,600       0.54       1.13       47.66%      514.4       204,796       0.25%
51,200       1.22       2.49       48.98%      1310.2      409,596       0.32%
102,400      2.67       5.83       45.72%      912.2       819,196       0.11%
204,800      7.33       12.98      56.49%      2225.2      1,638,396     0.14%
409,600      18.62      30.03      61.99%      2916.2      3,276,796     0.09%
819,200      40.72      72.23      56.38%      2771.2      6,553,596     0.04%
1,638,400    108.05     167.96     64.33%      6643.6      13,107,196    0.05%
3,276,800    237.60     402.04     59.10%      7161.0      26,214,396    0.03%
6,553,600    586.07     936.21     62.60%      14240.2     52,428,796    0.03%

Table 4: Solution statistics of DCA and MDA for instances with objectives [FUEL].



[Figure omitted: two panels, "[FUEL] Cost Objectives with VarBound = 100" (running time in seconds) and "Number of subproblems solved", each plotted against instance size.]

Figure 4: The growth of running time and the number of DRAP subproblems solved for instances with objectives [FUEL].

The performance of DCA and MDA for the instances with objectives [FUEL] is again very similar to their performance for the instances with objectives [F] and [CRASH]. The running time of both algorithms grows at a similar pace with the instance size, with the average running time of DCA no more than 65% of that of MDA for the same set of instances. The average number of DRAP instances solved by DCA is at most 2.12% of that by MDA.

From the experiments above, we observe that the performance of DCA is very similar across the three classes of instances: the average running time grows more slowly than n² log(B/n), and the average number of DRAP subproblems solved grows slowly with n. In addition, the performance of DCA does not seem to be affected by the form of the convex costs. This is not surprising, since DCA does not use any information about the functional form.

7 Conclusions and future work

In this paper, we proposed a simple and efficient exact algorithm to solve the discrete resource allocation problem with nested bound constraints. The algorithm first solves the problem obtained by relaxing the nested bound constraints, and then recursively divides the problem into two smaller subproblems of the same type by fixing the most violated bound constraint at equality. This simple technique turns out to be very effective in solving large instances. The time complexity of our algorithm is Θ(n² log(B/n)), worse than that of the current best algorithm MDA, but the running time of our algorithm is no more than 65% of the running time of MDA on all the instances we test. An important direction to explore is whether this infeasibility-guided divide-and-conquer framework can be extended to resource allocation problems with other types of constraints, such as generalized upper-bound constraints, tree constraints, or the more general submodular constraints studied in [17]. Another direction worth exploring is how DCA could assist the design of exact algorithms for more complex problems such as the joint vehicle routing and speed optimization problem [9]. The current most efficient exact algorithms for these types of problems usually employ a branch-and-price framework, and the biggest challenge is to design an efficient algorithm for the pricing problem. It would be interesting to see how DCA could help solve the pricing problem.



Acknowledgement

We are extremely grateful to Thibaut Vidal for proposing an initial version of Instance (6) and sharing the implementation details of MDA. His comments on an earlier version of our algorithm helped greatly improve the quality of this paper.

References

[1] Ravindra K Ahuja and Dorit S Hochbaum. Solving linear cost dynamic lot-sizing problems in O(n log n) time. Operations Research, 56(1):255–261, 2008.

[2] Tolga Bektas and Gilbert Laporte. The pollution-routing problem. Transportation Research Part B: Methodological, 45(8):1232–1250, 2011.

[3] Peter Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163–166, 1984.

[4] Wei Chu and S Sathiya Keerthi. Support vector ordinal regression. Neural Computation, 19(3):792–815, 2007.

[5] ME Dyer, N Kayal, and J Walker. A branch and bound algorithm for solving the multiple-choice knapsack problem. Journal of Computational and Applied Mathematics, 11(2):231–249, 1984.

[6] Awi Federgruen and Henri Groenevelt. The greedy procedure for resource allocation problems: Necessary and sufficient conditions for optimality. Operations Research, 34(6):909–918, 1986.

[7] Stephan Foldes and Francois Soumis. PERT and crashing revisited: Mathematical generalizations. European Journal of Operational Research, 64(2):286–294, 1993.

[8] Greg N Frederickson and Donald B Johnson. The complexity of selection and ranking in X + Y and matrices with sorted columns. Journal of Computer and System Sciences, 24(2):197–208, 1982.

[9] Ricardo Fukasawa, Qie He, Fernando Santos, and Yongjia Song. A joint vehicle routing and speed optimization problem. INFORMS Journal on Computing, 30(4):694–709, 2018.

[10] Zvi Galil and Nimrod Megiddo. A fast selection algorithm and the problem of optimum distribution of effort. Journal of the ACM, 26(1):58–64, 1979.

[11] A Galperin and Z Waksman. A separable integer programming problem equivalent to its continual version. Journal of Computational and Applied Mathematics, 7(3):173–179, 1981.

[12] Henri Groenevelt. Two algorithms for maximizing a separable concave function over a polymatroid feasible region. European Journal of Operational Research, 54(2):227–236, 1991.

[13] O. Gross. A class of discrete type minimization problems. RM-1644, RAND Corporation, 1956.

[14] Gurobi Optimization, LLC. Gurobi optimizer reference manual, 2019.

[15] HO Hartley. Multiple purpose optimum allocation in stratified sampling. In Proceedings of the Social477

Statistics Section, American Statistical Association, pages 258–261, 1965.478

[16] Qie He, Xiaochen Zhang, and Kameng Nip. Speed optimization over a path with heterogeneous arc479

costs. Transportation Research Part B: Methodological, 104:198–214, 2017.480

[17] Dorit S Hochbaum. Lower and upper bounds for the allocation problem and other nonlinear optimization481

problems. Mathematics of Operations Research, 19(2):390–409, 1994.482

[18] HF Huddleston, PL Claypool, and RR Hocking. Optimal sample allocation to strata using convex483

programming. Journal of the Royal Statistical Society: Series C (Applied Statistics), 19(3):273–278,484

1970.485

[19] Toshihide Ibaraki and Naoki Katoh. Resource Allocation Problems: Algorithmic Approaches. MIT486

press, 1988.487

19

Page 20: A new combinatorial algorithm for separable convex resource … · 2019-09-17 · 1 A new combinatorial algorithm for separable convex resource 2 allocation with nested bound constraints

[20] Naoki Katoh, Akiyoshi Shioura, and Toshihide Ibaraki. Resource allocation problems. Handbook of488

combinatorial optimization, pages 2897–2988, 2013.489

[21] Bernard O Koopman. The optimum distribution of effort. Journal of the Operations Research Society490

of America, 1(2):52–63, 1953.491

[22] Raphael Kramer, Anand Subramanian, Thibaut Vidal, and F Cabral Lucıdio dos Anjos. A matheuristic492

approach for the pollution-routing problem. European Journal of Operational Research, 243(2):523–539,493

2015.494

[23] Stephen F Love. Bounded production and inventory models with piecewise concave costs. Management495

Science, 20(3):313–318, 1973.496

[24] Jerzy Neyman. On the two different aspects of the representative method: the method of stratified497

sampling and the method of purposive selection. Journal of the Royal Statistical Society, 97(4):558–498

625, 1934.499

[25] Inge Norstad, Kjetil Fagerholt, and Gilbert Laporte. Tramp ship routing and scheduling with speed500

optimization. Transportation Research Part C: Emerging Technologies, 19(5):853–865, 2011.501

[26] Michael Patriksson. A survey on the continuous nonlinear resource allocation problem. European502

Journal of Operational Research, 185(1):1–46, 2008.503

[27] Michael Patriksson and Christoffer Stromberg. Algorithms for the continuous nonlinear resource alloca-504

tion problem—new implementations and numerical studies. European Journal of Operational Research,505

243(3):703–722, 2015.506

[28] Ralph Tyrell Rockafellar. Convex analysis. Princeton university press, 1970.507

[29] David Ronen. The effect of oil price on the optimal speed of ships. Journal of the Operational Research508

Society, 33(11):1035–1040, 1982.509

[30] Arie Tamir. Efficient algorithms for a selection problem with nested constraints and its application to510

a production-sales planning model. SIAM Journal on Control and Optimization, 18(3):282–287, 1980.511

[31] Thijs van der Klauw, Marco ET Gerards, and Johann L Hurink. Resource allocation problems in512

decentralized energy management. OR Spectrum, 39(3):749–773, 2017.513

[32] Thibaut Vidal. personal communication, 2018.514

[33] Thibaut Vidal, Daniel Gribel, and Patrick Jaillet. Separable convex optimization with nested lower and515

upper constraints. INFORMS Journal on Optimization, 2018.516

[34] Thibaut Vidal, Patrick Jaillet, and Nelson Maculan. A decomposition algorithm for nested resource517

allocation problems. SIAM Journal on Optimization, 26(2):1322–1340, 2016.518

20