
To cite this version: Evgeny Gafarov, Alexandre Dolgui, Alexander Lazarev, Frank Werner. "A new effective dynamic program for an investment optimization problem." Automation and Remote Control / Avtomatika i Telemekhanika, MAIK Nauka/Interperiodica, 2016, 77 (9), pp. 1633-1648. doi:10.1134/S0005117916090101. HAL Id: hal-01435294 (https://hal.archives-ouvertes.fr/hal-01435294, submitted on 13 Jan 2017).

A New Effective Dynamic Program for an Investment Optimization Problem

Evgeny R. Gafarov (a,b), Alexandre Dolgui (a), Alexander A. Lazarev (b,c,d,e,*), Frank Werner (f)

(a) Ecole Nationale Superieure des Mines, LIMOS-UMR CNRS 6158, F-42023 Saint-Etienne, France
(b) V.A. Trapeznikov Institute of Control Sciences of the Russian Academy of Sciences, Profsoyuznaya st. 65, 117997 Moscow, Russian Federation
(c) Lomonosov Moscow State University, GSP-1, Leninskie Gory, 119991 Moscow, Russian Federation
(d) Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, 141700 Moscow Region, Russian Federation
(e) International Laboratory of Decision Choice and Analysis, National Research University Higher School of Economics, 20 Myasnitskaya street, 101000 Moscow, Russian Federation
(f) Fakultät für Mathematik, Otto-von-Guericke-Universität Magdeburg, PSF 4120, 39016 Magdeburg, Germany

Abstract

After a series of publications by T.E. O'Neil et al. (e.g. in 2010), dynamic programming seems to be the most promising way to solve knapsack problems. Some techniques are known to make dynamic programming algorithms (DPA) faster. One of them is the graphical method that deals with piecewise linear Bellman functions. For some problems, it was previously shown that the graphical algorithm has a smaller running time in comparison with the classical DPA and also some other advantages. In this paper, an exact graphical algorithm (GrA) and a fully polynomial-time approximation scheme based on it are presented for an investment optimization problem; they have the best known running time. The algorithms are based on new Bellman functional equations and a new way of implementing the GrA.

Keywords: Multi-choice knapsack problem, Investment problem, Dynamic programming, FPTAS
MSC: 90B30, 90C39, 90-08

* Corresponding author.
Email addresses: [email protected] (Evgeny R. Gafarov), [email protected] (Alexandre Dolgui), [email protected] (Alexander A. Lazarev), [email protected] (Frank Werner)


1. Introduction

The selection of projects is an important problem in science and engineering. In general, project portfolio selection problems are hard combinatorial problems, often formulated as knapsack or, in a more general form, as multi-dimensional multi-choice knapsack problems, possibly also with multiple objectives. They have a wide range of business applications in capital budgeting and production planning.

Different techniques have been used to solve the knapsack problem or its generalizations. Since the project data are often uncertain or imprecise, sometimes fuzzy techniques are used. For instance, in [1], a fuzzy multi-dimensional multiple-choice knapsack problem is formulated for the project selection problem, and then an efficient epsilon-constraint method and a multi-objective evolutionary algorithm are applied. In [2], an integrated model based on data envelopment analysis, a knapsack formulation and fuzzy set theory was suggested. In [3], the problem of selecting projects to be included in an R&D portfolio has also been formulated as a multi-dimensional knapsack problem. If partial funding and implementation are allowed, linear programming can be used, and the sensitivity of the project selection decisions is examined. The selection among ranked projects under segmentation, policy and logical constraints was discussed in [4]. After ranking the projects by a multi-criteria approach, integer programming was applied to get a final solution satisfying the constraints. At the integer programming phase, a knapsack formulation was applied. Dynamic order acceptance and capacity planning on a single bottleneck resource has been considered in [5]. Stochastic dynamic programming was applied to determine a profit threshold for the accept/reject decision and to allocate a single bottleneck resource to the accepted projects with the objective to maximize the expected revenue.

Often, dynamic programming algorithms (DPA) are used for the exact solution, and fully polynomial-time approximation schemes (FPTAS) based on these algorithms are derived for the approximate solution of such problems. In [6], a vector merging problem was considered in a dynamic programming context, which can be incorporated into an FPTAS for the knapsack problem. Approximation algorithms for knapsack problems with cardinality constraints, where an upper bound is imposed on the number of items that can be selected, were given in [7]. Improved algorithms for this problem were presented in [8], where hybrid rounding techniques were applied. An efficient FPTAS for the multi-objective knapsack problem was given in [9]. It uses general techniques such as, e.g., dominance relations in dynamic programming. Approximation algorithms for knapsack problems with sigmoid utilities were derived in [10]. The authors combined algorithms from discrete optimization with algorithms from continuous optimization. In [11], greedy algorithms for the knapsack problem were considered and improved approximation ratios for different variants of the problem were derived. Some applications of AI techniques to generation planning and investment were described in [12].

This paper deals with a project selection (or allocation) problem with the single criterion of maximizing the total profit under one budget constraint, which is also a generalization of the knapsack problem. More precisely, a set $N = \{1, 2, \ldots, n\}$ of $n$ potential projects and an investment budget (amount) $A > 0$, $A \in \mathbb{Z}$, are given. For each project $j \in N$, a profit function $f_j(x)$, $x \in [0, A]$, is also given, where the value $f_j(x')$ denotes the profit received if the amount $x'$ is invested into the project $j$. The objective is to determine an amount $x_j \in [0, A]$, $x_j \in \mathbb{Z}$, for each project $j \in N$ such that $\sum_{j=1}^{n} x_j \le A$ and the total profit $\sum_{j=1}^{n} f_j(x_j)$ is maximized. Closely related problems do not only exist in the area of project investment [13], but also in warehousing, economic lot sizing, etc. [14, 15].
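For concreteness, here is a minimal Python sketch of how a candidate investment plan for this problem can be checked and evaluated; the function name and the input format (a list of per-project profit functions) are illustrative assumptions, not part of the paper.

```python
def total_profit(x, profit_funcs, A):
    """x[j] is the integer amount invested in project j, and profit_funcs[j](x[j])
    returns f_j(x[j]).  Returns the total profit of the plan, or None if the
    budget constraint sum x_j <= A (with 0 <= x_j <= A) is violated."""
    if sum(x) > A or any(xj < 0 or xj > A for xj in x):
        return None
    return sum(f(xj) for f, xj in zip(profit_funcs, x))
```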

In the following, we work with piecewise linear functions $f_j(x)$. The interval $[0, A]$ can be written as a union of intervals in the form
$$[0, A] = [t_j^0, t_j^1] \cup (t_j^1, t_j^2] \cup \ldots \cup (t_j^{k-1}, t_j^k] \cup \ldots \cup (t_j^{k_j-1}, t_j^{k_j}]$$
such that the profit function has the form $f_j(x) = b_j^k + u_j^k (x - t_j^{k-1})$ for $x \in (t_j^{k-1}, t_j^k]$, where $k$ is the number of the interval, $b_j^k$ is the value of the function at the beginning of the interval, and $u_j^k$ is the slope of the function. Without loss of generality, assume that $b_j^1 \le b_j^2 \le \ldots \le b_j^{k_j}$, $t_j^k \in \mathbb{Z}$, $j \in N$, $k = 1, 2, \ldots, k_j$, and that $t_j^{k_j} = A$, $j = 1, 2, \ldots, n$.

A special case of the problem under consideration is similar to the well-known bounded knapsack problem:
$$\begin{array}{ll}
\text{maximize} & \sum_{j=1}^{n} p_j x_j \\
\text{s.t.} & \sum_{j=1}^{n} w_j x_j \le A, \\
& x_j \in [0, b_j],\ x_j \in \mathbb{Z},\ j = 1, 2, \ldots, n,
\end{array} \qquad (1)$$
for which a dynamic programming algorithm (DPA) of time complexity $O(nA)$ is known [16]. Dynamic programming algorithms and an FPTAS for the bounded set-up knapsack problem, which is a generalization of the bounded knapsack problem in which each item type has a set-up weight and a set-up value included in the objective function, were suggested in [17]. These algorithms can also be applied to the bounded knapsack problem, and one of the dynamic programming algorithms has the same time complexity as the best known algorithm for the bounded knapsack problem.
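For comparison, the following is a minimal textbook sketch of a dynamic program for the special case (1); it is the straightforward $O(A \cdot \sum b_j)$ version, not the refined $O(nA)$ algorithm of [16], and the function name and input format are illustrative.

```python
def bounded_knapsack(items, A):
    """items is a list of (p_j, w_j, b_j): profit, weight and copy bound of
    item type j.  Returns the maximal total profit with total weight <= A.
    Straightforward O(A * sum b_j) DP over capacities."""
    dp = [0] * (A + 1)                  # dp[c] = best profit using capacity c
    for p, w, b in items:
        for _ in range(b):              # treat each of the b copies separately
            for c in range(A, w - 1, -1):
                dp[c] = max(dp[c], dp[c - w] + p)
    return dp[A]

# max 3*x1 + 5*x2 s.t. 2*x1 + 4*x2 <= 10, x1 <= 3, x2 <= 2:  3*3 + 5*1 = 14
print(bounded_knapsack([(3, 2, 3), (5, 4, 2)], 10))
```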

It is known [18] that all branch and bound (B&B) algorithms with a lower and an upper bound calculated in polynomial time have an exponential running time close to $2^{O(x)}$ operations, unless $P = NP$, where $x$ is the input length. For example, for the one-dimensional knapsack problem, the time complexity is equal to or greater than $\frac{3}{2} \cdot \frac{2^{n+3/2}}{\sqrt{\pi(n+1)}}$, where $n$ is the number of items. This means that B&B algorithms are not far away from a complete enumeration.

In a series of publications by T.E. O'Neil et al. [19], it was shown that some of the well-known NP-hard problems can be solved by dynamic programming in sub-exponential time, i.e., in $2^{O(\sqrt{x})}$ operations. So, at the moment, dynamic programming seems to be the most promising way to solve knapsack problems. Some techniques are known to make dynamic programming faster; e.g., in [20, 21, 22], functional equations and techniques were considered that are different from the ones considered in this paper.

The following problem is also similar to the problem under consideration:
$$\begin{array}{ll}
\text{minimize} & \sum_{j=1}^{n} f_j(x_j) \\
\text{s.t.} & \sum_{j=1}^{n} x_j \ge A, \\
& x_j \in [0, A],\ x_j \in \mathbb{Z},\ j = 1, 2, \ldots, n,
\end{array} \qquad (2)$$
where the functions $f_j(x_j)$ are piecewise linear as well. For this problem, a DPA with a running time of $O(\sum k_j \cdot A)$ [23] and an FPTAS with a running time of $O((\sum k_j)^3 / \varepsilon)$ [24] are known.

In this paper, we present an alternative DPA based on a so-called graphical approach. This algorithm has a running time of $O(\sum k_j \min\{A, F^*\})$, where $F^*$ is the optimal objective function value. Thus, it outperforms an algorithm from [25] which has a worse running time close to $O(n k_{\max} A \log(k_{\max} A))$, where $k_{\max} = \max_{j=1,\ldots,n} k_j$. The second contribution of this paper is an FPTAS derived by a scaling argument from this new DPA. Note that an FPTAS was already proposed for the treated problem in [25], but the new FPTAS has an improved running time of $O(\sum k_j \cdot n \log\log n / \varepsilon)$.

While the running time of a similar DPA from [23] is proportional to the sum of all linear profit pieces times the budget $A$, the new algorithm replaces $A$ by the largest profit of a single project. The main idea is as follows. Instead of evaluating the dynamic programming functions for every budget value $t = 1, \ldots, A$, we keep the profit functions (depending on the budget value) as piecewise linear functions. These functions can be represented by a collection of linear pieces instead of a full table of the values. Since all relevant data are integer and the profit functions must be non-decreasing in the budget value, one can easily bound the number of relevant linear pieces to obtain the improved complexity.

The remainder of the paper is organized as follows. In Section 2, we present the Bellman equations to solve the problem under consideration. In Section 3, an exact graphical algorithm (GrA) based on an idea from [26] is presented. In Section 4, the GrA is illustrated by a numerical example, and in Section 5, an FPTAS based on this GrA is derived.

2. Dynamic programming algorithm

In this section, we present a DPA for the problem considered. For any project $j$ and any state $t \in [0, A]$, we define $F_j(t)$ as the maximal profit incurred for the projects $1, 2, \ldots, j$ when the remaining budget available for the projects $j+1, j+2, \ldots, n$ is equal to $t$. Thus, we have:
$$\begin{array}{ll}
F_j(t) = \max & \sum_{h=1}^{j} f_h(x_h) \\
\text{s.t.} & \sum_{h=1}^{j} x_h \le A - t, \\
& x_h \ge 0,\ x_h \in \mathbb{Z},\ h = 1, 2, \ldots, j.
\end{array} \qquad (3)$$

We define $F_j(t) = 0$ for $t \notin [0, A]$, $j = 1, 2, \ldots, n$, and $F_0(t) = 0$ for any $t$. Then we have the following recursive equations:
$$F_j(t) = \max_{x \in [0, A-t]} \{f_j(x) + F_{j-1}(t+x)\} = \max_{1 \le k \le k_j}\ \max_{x \in (t_j^{k-1}, t_j^k] \cap [0, A-t]} \{b_j^k - u_j^k t_j^{k-1} + u_j^k \cdot x + F_{j-1}(t+x)\}, \quad j = 1, 2, \ldots, n. \qquad (4)$$

Lemma 1. All functions $F_j(t)$, $j = 1, 2, \ldots, n$, are non-increasing on the interval $[0, A]$.

The proof of this lemma immediately follows from definition (3) of the functions $F_j(t)$.

The DPA based on the equations (4) can be organized as follows. For each stage $j = 1, \ldots, n$, for the state $t = 0$ we compute no more than $k_j A$ values $v_k^x = b_j^k - u_j^k t_j^{k-1} + u_j^k \cdot x + F_{j-1}(t+x)$, $1 \le k \le k_j$, $x \in (t_j^{k-1}, t_j^k]$, and put them into the corresponding lists $L_k$. If for the next value we have $v_k^x \le v_k^{x-1}$, we do not put it into the corresponding list. This means that the values in each list $L_k$ are ordered in non-decreasing order. For the next state $t = 1$, we only need to exclude the last element from the considered $L_k$, $k = 1, \ldots, k_j$, if it corresponds to an $x$ which is not in the interval $(t_j^{k-1}, t_j^k] \cap [0, A-t]$, and to compare the new $k_j$ last elements of the lists. If we continue in the same way for $t = 2, \ldots, A$, we can calculate $F_j(t)$, $t = 1, 2, \ldots, A$, in $O(k_j A)$ time. As a consequence, the running time of the DPA using such a type of Bellman equations is $O(\sum k_j \cdot A)$.

A similar idea was presented in [23].

The algorithms presented in [25] for the problem under consideration are based on the functional equations (3) and another technique to implement the graphical method. In contrast, the GrA presented in this paper is based on the equations (4).
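As a cross-check of the functional equations (3)-(4), the following minimal Python sketch evaluates the recursion directly (in $O(nA^2)$ time; the list technique described above brings this down to $O(\sum k_j \cdot A)$). The function name and the input format (each $f_j$ given as a callable on integer arguments) are illustrative assumptions.

```python
def solve_dp(profits, A):
    """Naive evaluation of recursion (4): profits[j-1](x) is f_j(x) for integer
    x in [0, A].  Returns F_n(0), the maximal total profit.  Runs in O(n * A^2)
    time; the list technique of Section 2 reduces this to O(sum k_j * A)."""
    n = len(profits)
    F_prev = [0] * (A + 1)          # F_0(t) = 0 for all t
    for j in range(1, n + 1):
        f = profits[j - 1]
        F_cur = [0] * (A + 1)
        for t in range(A + 1):
            # best profit of projects 1..j if budget t stays reserved for j+1..n
            F_cur[t] = max(f(x) + F_prev[t + x] for x in range(A - t + 1))
        F_prev = F_cur
    return F_prev[0]

print(solve_dp([lambda x: x, lambda x: 2 * x], 5))   # -> 10 (invest all 5 in project 2)
```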

3. Graphical algorithm

In this section, we develop an algorithm which constructs the functions $F_j(t)$, $j = 1, 2, \ldots, n$, in a more effective way by using the idea of the graphical approach proposed in [26]. We will use the name DPA for the algorithm presented in Section 2 and GrA for this new algorithm.

The underlying general idea of improving the DPA for piecewise linear functions is as follows: instead of going through all capacity values up to $A$, we keep the Bellman functions depending on the capacity. They are again piecewise linear functions. Keeping these function pieces well organized for all capacities involves a considerable amount of technicalities and allows for running time improvements if the data representation is done in a clever way.

Below we prove that the functions $F_j(t)$, $j = 1, 2, \ldots, n$, constructed in the GrA are piecewise linear. Any piecewise linear function $\varphi(x)$ can be defined by three sets of numbers: a set of break points $I$ (at each break point, a new linear segment of the piecewise linear function ends), a set of slopes $U$ and a set of values of the function at the beginning of the intervals $B$. Let $I[k]$ denote the $k$-th element of the ordered set $I$. The same notation will be used for the sets $U$ and $B$ as well. The notation $\varphi.I[k]$ denotes the $k$-th element of the set $I$ of the function $\varphi(x)$. Then, for example, for $x \in (t_j^{k-1}, t_j^k] = (f_j.I[k-1], f_j.I[k]]$, we have
$$f_j(x) = f_j.B[k] + f_j.U[k]\,(x - f_j.I[k-1]).$$
Note that $\varphi.I[k] < \varphi.I[k+1]$, $k = 1, 2, \ldots, |\varphi.I| - 1$, and $k_j = |f_j.I|$. In each step $j$, $j = 1, 2, \ldots, n$, of the subsequent algorithm, temporary piecewise linear functions $\Psi_j^i$ and $\Phi_j^i$ are constructed. These functions are used to define the functions $F_j(t)$, $j = 1, 2, \ldots, n$. The functions $F_j(t)$ are piecewise linear as well. For $t \in \mathbb{Z}$, their values are equal to the values of the functions $F_j(t)$ in the DPA.
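To make the $(I, U, B)$ representation concrete, here is a minimal Python sketch that stores a piecewise linear function as three lists and evaluates it. The class name and the 0-based indexing are illustrative assumptions; the sample data are those of $f_1$ from Table 1 in Section 4.

```python
class PiecewiseLinear:
    """A piecewise linear function phi given by three lists of equal length:
    I[k] = right end point of the (k+1)-th interval (break point),
    U[k] = slope on that interval,
    B[k] = value of phi at the beginning of that interval.
    The intervals are (I[k-1], I[k]], with I[-1] taken as 0."""
    def __init__(self, I, U, B):
        self.I, self.U, self.B = list(I), list(U), list(B)

    def __call__(self, x):
        # find the first interval whose right end point is >= x
        for k, right in enumerate(self.I):
            left = self.I[k - 1] if k > 0 else 0
            if x <= right:
                return self.B[k] + self.U[k] * (x - left)
        raise ValueError("x lies outside [0, A]")

# f1 from Table 1: I = {3, 10, 13, 25}, U = {0, 1, 1/3, 0}, B = {0, 0, 7, 8}
f1 = PiecewiseLinear([3, 10, 13, 25], [0, 1, 1 / 3, 0], [0, 0, 7, 8])
print(f1(10))   # 0 + 1 * (10 - 3) = 7
```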

Let $\varphi.I[0] = 0$ and $\varphi.I[|\varphi.I| + 1] = A$. The points $t \in \varphi.I$ and the other end points of the intervals of the piecewise linear functions considered in this article will be called break points. To construct a function in the GrA means to compute its sets $I$, $U$ and $B$. Then the GrA is as follows. At each stage $j = 1, \ldots, n$, we compute the temporary functions $\Psi_j^k(t)$ and $\Phi_j^k(t)$ which are used to compute $F_j(t)$. The key feature of the algorithm consists in the computation of $\Phi_j^k(t)$, modifying the piecewise linear function $F_{j-1}(t)$ by changing linear fragments.

Graphical algorithm

1. Let $F_0(t) = 0$, i.e., $F_0.I := \{A\}$, $F_0.U := \{0\}$, $F_0.B := \{0\}$;
2. FOR $j := 1$ TO $n$ DO
   2.1. FOR $k := 1$ TO $k_j$ DO
      2.1.1. Construct the temporary function
      $$\Psi_j^k(t) = f_j.B[k] - f_j.U[k] \cdot f_j.I[k-1] + f_j.U[k] \cdot t + F_{j-1}(t)$$
      according to Procedure 2.1.1;
      2.1.2. Construct the temporary function
      $$\Phi_j^k(t) = \max_{x \in (f_j.I[k-1],\, f_j.I[k]] \,\cap\, [0, A-t]} \{\Psi_j^k(t+x) - f_j.U[k] \cdot t\}$$
      according to Procedure 2.1.2;
      2.1.3. IF $k = 1$ THEN $F_j(t) := \Phi_j^k(t)$ ELSE $F_j(t) := \max\{F_j(t), \Phi_j^k(t)\}$.
   2.2. Modify the sets $I$, $U$, $B$ of the function $F_j(t)$ according to Procedure 2.2.
3. The optimal objective function value is equal to $F_n(0)$.

The above algorithm uses Procedures 2.1.1 and 2.1.2 described below. In Procedure 2.1.1, we shift the function $F_{j-1}(t)$ up by the value $f_j.B[k] - f_j.U[k] \cdot f_j.I[k-1]$ and increase all slopes in its diagram by $f_j.U[k]$. If all values $t \in F_{j-1}.I$ are integer, then all values from the set $\Psi_j^k.I$ are integer as well. It is obvious that Procedure 2.1.1 can be performed in $O(|F_{j-1}.I|)$ time.

Procedure 2.1.1

Given are $k$ and $j$;
$\Psi_j^k.I = \emptyset$, $\Psi_j^k.U = \emptyset$ and $\Psi_j^k.B = \emptyset$.
FOR $i := 1$ TO $|F_{j-1}.I|$ DO
   add the value $F_{j-1}.I[i]$ to the set $\Psi_j^k.I$;
   add the value $f_j.B[k] - f_j.U[k] \cdot f_j.I[k-1] + f_j.U[k] \cdot F_{j-1}.I[i-1] + F_{j-1}.B[i]$ to the set $\Psi_j^k.B$;
   add the value $f_j.U[k] + F_{j-1}.U[i]$ to the set $\Psi_j^k.U$;
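Reusing the illustrative PiecewiseLinear sketch from above (0-based indices), Procedure 2.1.1 amounts to copying the break points of $F_{j-1}$ and adjusting the slopes and interval start values. The following is a sketch under those assumptions, not the authors' implementation.

```python
def procedure_2_1_1(F_prev, fj, k):
    """Sketch of Procedure 2.1.1: build
    Psi_j^k(t) = fj.B[k] - fj.U[k] * fj.I[k-1] + fj.U[k] * t + F_{j-1}(t)
    in the (I, U, B) representation.  Indices are 0-based and fj.I[-1] is
    taken as 0 (the convention phi.I[0] = 0 of the paper)."""
    c = fj.B[k] - fj.U[k] * (fj.I[k - 1] if k > 0 else 0)    # constant shift
    I = list(F_prev.I)                                       # break points are copied
    U = [fj.U[k] + u for u in F_prev.U]                      # slopes increased by fj.U[k]
    B = [c + fj.U[k] * (F_prev.I[i - 1] if i > 0 else 0) + F_prev.B[i]
         for i in range(len(F_prev.I))]                      # value at each interval start
    return PiecewiseLinear(I, U, B)

# j = 1, k = 2 of the example in Section 4:  Psi(t) = t - 3 on [0, 25]
F0 = PiecewiseLinear([25], [0], [0])
psi = procedure_2_1_1(F0, f1, 1)
print(psi.I, psi.U, psi.B)    # [25] [1] [-3]
```

A quick check against the example of Section 4: for $j = 1$, $k = 2$ this yields $\Psi.I = \{25\}$, $\Psi.U = \{1\}$, $\Psi.B = \{-3\}$, as computed there.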

Before describing Procedure 2.1.2, we present Procedure FindMax, in which the maximum function $\varphi(t)$ of two linear fragments $\varphi_1(t)$ and $\varphi_2(t)$ is constructed.

Procedure FindMax

1. Given are the functions $\varphi_1(t) = b_1 + u_1 \cdot t$ and $\varphi_2(t) = b_2 + u_2 \cdot t$ and an interval $(t', t'']$. Let $u_1 \le u_2$;
2. IF $t'' - t' \le 1$ THEN RETURN $\varphi(t) = \max\{\varphi_1(t''), \varphi_2(t'')\} + 0 \cdot t$ defined on the interval $(t', t'']$;
3. Find the intersection point $t^*$ of $\varphi_1(t)$ and $\varphi_2(t)$;
4. IF $t^*$ does not exist OR $t^* \notin (t', t'']$ THEN
   IF $b_1 + u_1 \cdot t' > b_2 + u_2 \cdot t'$ THEN RETURN $\varphi(t) = \varphi_1(t)$ defined on the interval $(t', t'']$;
   ELSE RETURN $\varphi(t) = \varphi_2(t)$ defined on the interval $(t', t'']$;
5. ELSE
   IF $t^* \in \mathbb{Z}$ THEN
      $\varphi(t) := \varphi_1(t)$ on the interval $(t', t^*]$;
      $\varphi(t) := \varphi_2(t)$ on the interval $(t^*, t'']$;
      RETURN $\varphi(t)$;
   ELSE IF $t^* \notin \mathbb{Z}$ THEN
      $\varphi(t) := \varphi_1(t)$ on the interval $(t', \lfloor t^* \rfloor]$;
      $\varphi(t) := b_2 + u_2 \cdot \lfloor t^* \rfloor + 0 \cdot t$ on the interval $(\lfloor t^* \rfloor - 1, \lfloor t^* \rfloor]$;
      $\varphi(t) := \varphi_2(t)$ on the interval $(\lfloor t^* \rfloor, t'']$;
      RETURN $\varphi(t)$;

Figure 1: Procedure FindMax. Cutting of a non-integer point

The case when $t^* \notin \mathbb{Z}$ is presented in Fig. 1. So, if both points $t'$ and $t''$ are integer, then $\varphi.I$ contains only integer break points $t$. The running time of Procedure FindMax is constant.
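A simplified sketch of the envelope computation that underlies Procedure FindMax is given below: it returns the exact upper envelope of two linear fragments on an interval and omits the integer-break-point cut of step 5 (Fig. 1). The function name and the fragment format are illustrative assumptions.

```python
def upper_envelope(b1, u1, b2, u2, t_lo, t_hi):
    """Upper envelope of phi1(t) = b1 + u1*t and phi2(t) = b2 + u2*t on the
    interval (t_lo, t_hi], assuming u1 <= u2.  Returns a list of fragments
    (left, right, b, u), each meaning phi(t) = b + u*t on (left, right]."""
    if u1 == u2:                                    # parallel lines: one dominates
        return [(t_lo, t_hi, b1, u1)] if b1 >= b2 else [(t_lo, t_hi, b2, u2)]
    t_star = (b1 - b2) / (u2 - u1)                  # intersection point of the lines
    if not (t_lo < t_star < t_hi):                  # no crossing strictly inside
        if b1 + u1 * t_hi >= b2 + u2 * t_hi:
            return [(t_lo, t_hi, b1, u1)]
        return [(t_lo, t_hi, b2, u2)]
    # phi1 (smaller slope) dominates before t_star, phi2 after it
    return [(t_lo, t_star, b1, u1), (t_star, t_hi, b2, u2)]

# the fragments 7 + 0*t and 2 + 1*t cross at t = 5 on (0, 10]:
print(upper_envelope(7, 0, 2, 1, 0, 10))   # [(0, 5, 7, 0), (5, 10, 2, 1)]
```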

In the subsequent Procedure 2.1.2, we do the following. When we shift $s'$ to the right, we shift the interval $I' = [t_{\mathrm{left}}, t_{\mathrm{right}}]$ of the length $f_j.I[k] - f_j.I[k-1]$. We have to use the values $\Psi_j^k(x)$ for $x \in T'$ to calculate $\Phi_j^k(t)$ at the point $t = s'$. Since $\Psi_j^k(x)$ is piecewise linear, it is only necessary to consider the values $\Psi_j^k(x)$ at the break points belonging to $T'$ and at the end points of the interval $T'$. So, if we shift $s'$ to the right by a small value $x \in [0, \varepsilon]$ such that all the break points remain the same, then the value $\Phi_j^k(t)$ changes according to the value $\varphi_{\max}(x)$.

Procedure 2.1.2

2.1.2.1. Given are $k$, $j$ and $\Psi_j^k(t)$;
2.1.2.2. $\Phi_j^k.I := \emptyset$, $\Phi_j^k.U := \emptyset$ and $\Phi_j^k.B := \emptyset$;
2.1.2.3. $s' := 0$, $t_{\mathrm{left}} := s' + f_j.I[k-1]$, $t_{\mathrm{right}} := \min\{s' + f_j.I[k], A\}$;
2.1.2.4. Let $T' = \{\Psi_j^k.I[v], \Psi_j^k.I[v+1], \ldots, \Psi_j^k.I[w]\}$ be the maximal subset of $\Psi_j^k.I$, where $t_{\mathrm{left}} < \Psi_j^k.I[v] < \ldots < \Psi_j^k.I[w] < t_{\mathrm{right}}$. Let $T := \{t_{\mathrm{left}}\} \cup T' \cup \{t_{\mathrm{right}}\}$;
2.1.2.5. WHILE $s' \le A$ DO
   2.1.2.6. IF $T' = \emptyset$ THEN let $w+1 = \arg\max_{i=1,2,\ldots,|\Psi_j^k.I|} \{\Psi_j^k.I[i] \mid \Psi_j^k.I[i] > t_{\mathrm{right}}\}$ and $v = \arg\min_{i=1,2,\ldots,|\Psi_j^k.I|} \{\Psi_j^k.I[i] \mid \Psi_j^k.I[i] > t_{\mathrm{left}}\}$;
   2.1.2.7. IF $w+1$ is not defined THEN let $w+1 = |\Psi_j^k.I|$;
   2.1.2.8. IF $v$ is not defined THEN let $v = |\Psi_j^k.I|$;
   2.1.2.9. IF $t_{\mathrm{left}} < A$ THEN $\varepsilon_{\mathrm{left}} := \Psi_j^k.I[v] - t_{\mathrm{left}}$ ELSE $\varepsilon_{\mathrm{left}} := A - s'$;
   2.1.2.10. IF $t_{\mathrm{right}} < A$ THEN $\varepsilon_{\mathrm{right}} := \Psi_j^k.I[w+1] - t_{\mathrm{right}}$ ELSE $\varepsilon_{\mathrm{right}} := +\infty$;
   2.1.2.11. $\varepsilon := \min\{\varepsilon_{\mathrm{left}}, \varepsilon_{\mathrm{right}}\}$;
   2.1.2.12. IF $t_{\mathrm{left}} < A$ THEN $b_{\mathrm{left}} := \Psi_j^k.B[v] + \Psi_j^k.U[v] \cdot (t_{\mathrm{left}} - \Psi_j^k.I[v-1]) - f_j.U[k] \cdot s'$ ELSE $b_{\mathrm{left}} := 0$;
   2.1.2.13. IF $t_{\mathrm{right}} < A$ THEN $b_{\mathrm{right}} := \Psi_j^k.B[w+1] + \Psi_j^k.U[w+1] \cdot (t_{\mathrm{right}} - \Psi_j^k.I[w]) - f_j.U[k] \cdot s'$ ELSE $b_{\mathrm{right}} := 0$;
   2.1.2.14. IF $T' = \emptyset$ THEN $b_{\mathrm{inner}} := 0$ ELSE $b_{\mathrm{inner}} := \max_{s=v,v+1,\ldots,w} \{\Psi_j^k.B[s] + \Psi_j^k.U[s] \cdot (\Psi_j^k.I[s] - \Psi_j^k.I[s-1])\} - f_j.U[k] \cdot s'$;
   2.1.2.15. Denote the function $\varphi_{\mathrm{left}}(x) := b_{\mathrm{left}} - (f_j.U[k] - \Psi_j^k.U[v]) \cdot x$. IF $t_{\mathrm{left}} = A$ THEN $\varphi_{\mathrm{left}}(x) := 0$;
   2.1.2.16. Denote the function $\varphi_{\mathrm{right}}(x) := b_{\mathrm{right}} - (f_j.U[k] - \Psi_j^k.U[w+1]) \cdot x$. IF $t_{\mathrm{right}} = A$ THEN $\varphi_{\mathrm{right}}(x) := 0$;
   2.1.2.17. Denote the function $\varphi_{\mathrm{inner}}(x) := b_{\mathrm{inner}} - f_j.U[k] \cdot x$. IF $T' = \emptyset$ THEN $\varphi_{\mathrm{inner}}(x) := 0$;
   2.1.2.18. Construct the piecewise linear function $\varphi_{\max}(x) := \max_{x \in [0, \varepsilon]} \{\varphi_{\mathrm{left}}(x), \varphi_{\mathrm{right}}(x), \varphi_{\mathrm{inner}}(x)\}$ according to Procedure FindMax;
   2.1.2.19. Add the values from $\varphi_{\max}.I$, increased by $s'$, to the set $\Phi_j^k.I$;
   2.1.2.20. Add the values from $\varphi_{\max}.B$ to the set $\Phi_j^k.B$;
   2.1.2.21. Add the values from $\varphi_{\max}.U$ to the set $\Phi_j^k.U$;
   2.1.2.22. IF $\varepsilon = \varepsilon_{\mathrm{left}}$ THEN exclude $\Psi_j^k.I[v]$ from the set $T$ and $v := v + 1$;
   2.1.2.23. IF $\varepsilon = \varepsilon_{\mathrm{right}}$ THEN include $\Psi_j^k.I[w+1]$ in the set $T$ and $w := w + 1$;
   2.1.2.24. $s' := s' + \varepsilon$;
   2.1.2.25. $t_{\mathrm{left}} := s' + f_j.I[k-1]$, $t_{\mathrm{right}} := \min\{s' + f_j.I[k], A\}$, recompute $T'$ (for details, see the proof of Lemma 2);
2.1.2.26. Modify the function $\Phi_j^k$ according to Procedure 2.2 (described below).

Next, we present Procedure 2.2 used in step [2.1.2.26] of Procedure 2.1.2. In Procedure 2.2, we combine two adjoining linear fragments that lie on the same line. That means that if we have two adjacent linear fragments which are described by the values (slopes) $F_j.U[k]$, $F_j.U[k+1]$ and $F_j.B[k]$, $F_j.B[k+1]$, where $F_j.U[k] \cdot (F_j.I[k] - F_j.I[k-1]) + F_j.B[k] = F_j.B[k+1]$ (i.e., these fragments are on the same line), then, to reduce the number of intervals $|F_j.I|$ and thus the running time of the algorithm, we can join these two intervals into one interval.

Procedure 2.2

Given is $F_j(t)$;
FOR $k := 1$ TO $|F_j.I| - 1$ DO
   IF $F_j.U[k] = F_j.U[k+1]$ AND $F_j.U[k] \cdot (F_j.I[k] - F_j.I[k-1]) + F_j.B[k] = F_j.B[k+1]$ THEN
      $F_j.B[k+1] := F_j.B[k]$;
      Delete the $k$-th elements from $F_j.B$, $F_j.U$ and $F_j.I$;
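Reusing the illustrative PiecewiseLinear sketch from Section 3, Procedure 2.2 can be sketched as follows (0-based indices; an illustration, not the authors' code).

```python
def merge_collinear(F):
    """Sketch of Procedure 2.2: join adjacent fragments of F that lie on the
    same line, reducing |F.I|.  F is a PiecewiseLinear object."""
    I, U, B = list(F.I), list(F.U), list(F.B)
    k = 0
    while k < len(I) - 1:
        left = I[k - 1] if k > 0 else 0
        same_line = (U[k] == U[k + 1] and
                     B[k] + U[k] * (I[k] - left) == B[k + 1])
        if same_line:
            B[k + 1] = B[k]          # fragment k+1 now starts where fragment k started
            del I[k], U[k], B[k]     # drop the k-th break point, slope and start value
        else:
            k += 1
    return PiecewiseLinear(I, U, B)

# two fragments of one line through the origin with slope 2 are joined:
print(merge_collinear(PiecewiseLinear([5, 10], [2, 2], [0, 10])).I)   # [10]
```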

Lemma 2. Procedure 2.1.2 has a running time of $O(|F_{j-1}.I|)$.

Proof. Step [2.1.2.14] and the re-computation of $T'$ in step [2.1.2.25] have to be performed with the use of a simple data structure. Let $\{q_1, q_2, \ldots, q_r\}$ be a maximal subset of $T'$ having the following properties:
- $q_1 < q_2 < \ldots < q_r$;
- there is no $q \in T'$ such that $q_i \le q < q_{i+1}$ and
$$\Psi_j^k.B[q] + \Psi_j^k.U[q] \cdot (\Psi_j^k.I[q] - \Psi_j^k.I[q-1]) \ge \Psi_j^k.B[q_{i+1}] + \Psi_j^k.U[q_{i+1}] \cdot (\Psi_j^k.I[q_{i+1}] - \Psi_j^k.I[q_{i+1}-1]),$$
$i = 1, \ldots, r-1$.

We can keep track of the set $\{q_1, q_2, \ldots, q_r\}$ by storing its elements in increasing order in a Queue Stack, i.e., a list with the property that elements at the beginning can only be deleted, while at the end, elements can be deleted and added [27]. This data structure can easily be implemented such that each deletion and each addition requires constant time. So, steps [2.1.2.14] and [2.1.2.25] can be performed in constant time.

Each of the steps [2.1.2.6]-[2.1.2.25] can be performed in constant time. The loop [2.1.2.5] can be performed in $O(|\Psi_j^k.I|)$ time, where $|\Psi_j^k.I| = |F_{j-1}.I|$, since each time a break point from $\Psi_j^k.I$ is added or deleted. So, the lemma is true.
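The Queue Stack of [27] used in this proof keeps a set of candidate maxima whose stored values are monotone, with deletions at the front and insertions/deletions at the back in constant amortized time. This is the same bookkeeping as in the standard sliding-window-maximum technique, sketched below for illustration (not the authors' exact data structure for $T'$).

```python
from collections import deque

def sliding_window_max(values, width):
    """Maximum of every window of `width` consecutive values, in O(1) amortized
    time per shift: a deque of indices whose values are kept decreasing plays
    the role of the queue-stack {q_1, ..., q_r} in the proof of Lemma 2."""
    q, out = deque(), []
    for i, v in enumerate(values):
        while q and values[q[-1]] <= v:    # dominated candidates leave at the back
            q.pop()
        q.append(i)
        if q[0] <= i - width:              # candidate left the window: drop at the front
            q.popleft()
        if i >= width - 1:
            out.append(values[q[0]])
    return out

print(sliding_window_max([1, 3, 2, 5, 4, 1, 0, 2], 3))   # [3, 5, 5, 5, 4, 2]
```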

Recall that in the DPA, the functional equations (4) are considered. In fact, in Procedure 2.1.1, we construct the function
$$b_j^k - u_j^k t_j^{k-1} + u_j^k \cdot (t+x) + F_{j-1}(t+x),$$
and in Procedure 2.1.2, we construct the function
$$\Phi_j^k(t) = \max_{x \in (t_j^{k-1}, t_j^k] \cap [0, A-t]} \{b_j^k - u_j^k t_j^{k-1} + u_j^k \cdot (t+x) - u_j^k \cdot t + F_{j-1}(t+x)\}.$$
Unlike the DPA, to construct $\Phi_j^k(t)$ in the GrA, we do not consider all integer points $x \in (t_j^{k-1}, t_j^k] \cap [0, A-t]$, but only the break points from the interval, since only they influence the values of $\Phi_j^k(t)$ (and in addition $t_{\mathrm{left}}$, $t_{\mathrm{right}}$). Step [2.1.3] can be performed according to Procedure FindMax as well, i.e., to construct the function $F_j(t) := \max\{F_j(t), \Phi_j^k(t)\}$, their linear fragments have to be compared in each interval, organized by their break points. It is easy to see that we do the same operations with the integer points $t$ as in the DPA. So, the values $F_j(t)$, $t \in \mathbb{Z}$, are the same for the GrA and the DPA, and we can state the following:

Lemma 3. The values $F_j(t)$, $j = 1, 2, \ldots, n$, at the points $t \in [0, A] \cap \mathbb{Z}$ are equal to the values of the functions $F_j(t)$ considered in the DPA.

Lemma 4. All functions $F_j(t)$, $j = 1, 2, \ldots, n$, are piecewise linear on the interval $[0, A]$ with integer break points.

Proof. For $F_0(t)$, the lemma is true. In Procedure 2.1.1, all break points from the set $\Psi_1^i.I$ are integer as well (see the comments after Procedure 2.1.1). Since all points from $f_1.I$ are integer, we have $\varepsilon \in \mathbb{Z}$ and, as a consequence, $s' \in \mathbb{Z}$. According to Procedure FindMax, all points $\varphi_{\max}.I$ considered in Procedure 2.1.2 are integer. So, all break points from $\Phi_j^i.I$, $i = 1, 2, \ldots, k_j$, are integer as well. Thus, the break points of the function $F_1(t) := \max\{F_1(t), \Phi_1^i(t)\}$ are integer, if we use Procedure FindMax to compute the function $\max\{F_1(t), \Phi_1^i(t)\}$. Analogously, we can prove that all break points of $F_2(t)$ are integer, etc.

It is obvious that all functions $F_j(t)$, $j = 1, 2, \ldots, n$, constructed in the GrA are piecewise linear. Thus, the lemma is true.

$f_1.I = \{3, 10, 13, 25\}$, $f_1.U = \{0, 1, \frac{1}{3}, 0\}$, $f_1.B = \{0, 0, 7, 8\}$
$f_2.I = \{5, 25\}$, $f_2.U = \{\frac{2}{5}, 0\}$, $f_2.B = \{0, 2\}$
$f_3.I = \{2, 4, 6, 25\}$, $f_3.U = \{0, 2, \frac{1}{2}, 0\}$, $f_3.B = \{0, 0, 4, 5\}$
$f_4.I = \{3, 4, 25\}$, $f_4.U = \{0, 0, 0\}$, $f_4.B = \{0, 1, 4\}$

Table 1: Functions $f_j(t)$

Theorem 1. The GrA finds an optimal solution of the problem in
$$O\left(\sum k_j \cdot \min\left\{A,\ \max_{j=1,2,\ldots,n} |F_j.B|\right\}\right)$$
time.

Proof. Analogously to the proof of Lemma 4, after each step [2.1.3] of the GrA, the function $F_j(t)$, $j = 1, 2, \ldots, n$, has only integer break points from the interval $[0, A]$. Each function $\Phi_j^i$, $j = 1, 2, \ldots, n$, $i = 1, 2, \ldots, k_j$, has only integer break points from $[0, A]$ as well. So, to perform step [2.1.3], we need to perform Procedure FindMax on no more than $A + 1$ intervals. Thus, the running time of step [2.1.3] is $O(A)$. According to Lemmas 1 and 2, the running time of steps [2.1.1] and [2.1.2] is $O(|F_j.I|)$, where $|F_j.I| \le A$. The running time of step [2.2] is $O(|F_j.I|)$ as well.

Analogously to Section 2, it is easy to show that $F_j(t)$, $j = 1, 2, \ldots, n$, is a non-increasing function in $t$. Thus,
$$F_j.B[k] \ge F_j.B[k+1], \quad j = 1, 2, \ldots, n,\ k = 1, 2, \ldots, |F_j.I| - 1.$$
Then, according to Procedure 2.2, there are no more than $2 \cdot |F_j.B|$ different values in the set $F_j.I$.

Thus, the running time of the GrA is
$$O\left(\sum k_j \cdot \min\left\{A,\ \max_{j=1,2,\ldots,n} |F_j.B|\right\}\right).$$
In fact, the running time is less than $O(\sum k_j \min\{A, F^*\})$, where $F^*$ is the optimal objective function value, since $\max_{j=1,2,\ldots,n} |F_j.B| \le F^*$.

4. Example

Next, we will illustrate the idea of the GrA using the numerical example presented in Fig. 2. A full description of all calculations can be found in [25]. Here, we only present a short sketch. In this instance, we consider four projects with the profit functions $f_j(t)$, $j = 1, 2, 3, 4$ (see Table 1).

Figure 2: Functions $f_j(t)$

STEP $j = 1$, $k = 1$. According to Procedure 2.1.1, we have $\Psi_j^k(x) = 0$, $\Psi_j^k.I = \{0\}$, $\Psi_j^k.U = \{0\}$ and $\Psi_j^k.B = \{0\}$.

Next, we consider each iteration of the cycle [2.1.2.5] in Procedure 2.1.2. First, consider $s' = 0$. Before the first iteration, we have $T' = \emptyset$, $t_{\mathrm{left}} = 0$, $t_{\mathrm{right}} = 3$, and thus, in step [2.1.2.11], we have $\varepsilon = \min\{25 - 0, 25 - 3\} = 22$. In steps [2.1.2.12-14], we obtain $b_{\mathrm{left}} = 0$, $b_{\mathrm{right}} = 0$ and $b_{\mathrm{inner}} = 0$, and in steps [2.1.2.15-17], we have $\varphi_{\mathrm{left}}(x) = 0$, $\varphi_{\mathrm{right}}(x) = 0$, $\varphi_{\mathrm{inner}}(x) = 0$ and, as a consequence, $\varphi_{\max}(x) = 0$. In step [2.1.2.24], we get $s' = s' + 22 = 22$.

So, we have $\Phi_1^1(x) = \varphi_{\max}(x) = 0$ for $x \in [0, 22]$ (from the previous to the current value of $s'$).

Next, consider $s' = 22$. After steps [2.1.2.22-25] in the previous iteration, we have $T' = \{25\}$, $t_{\mathrm{left}} = 22$ and $t_{\mathrm{right}} = 25$. These values are used in this iteration. In step [2.1.2.11], we have $\varepsilon = 25 - 22 = 3$. Then we get $b_{\mathrm{left}} = 0$, $b_{\mathrm{right}} = 0$ and $b_{\mathrm{inner}} = 0$. Moreover, $\varphi_{\mathrm{left}}(x) = 0$, $\varphi_{\mathrm{right}}(x) = 0$ and $\varphi_{\mathrm{inner}}(x) = 0$. Thus, $\varphi_{\max}(x) = 0$. We get $s' = 22 + 3 = 25$.

So, we have $\Phi_1^1(x) = \varphi_{\max}(x) = 0$ for $x \in [22, 25]$ as well and, as a consequence, $\Phi_1^1(x) = 0$, $\Phi_1^1.I = \{0\}$, $\Phi_1^1.U = \{0\}$ and $\Phi_1^1.B = \{0\}$. Observe that instead of approximately 25 states $t$ in the DPA, here we considered only two states $s'$. Next, we present the detailed computations for the functions $\Phi_1^2$, $\Phi_1^3$ and $\Phi_1^4$.

STEP $j = 1$, $k = 2$. We have $\Psi_j^k(x) = x - 3$, $\Psi_j^k.I = \{25\}$, $\Psi_j^k.U = \{1\}$ and $\Psi_j^k.B = \{-3\}$.

First, consider $s' = 0$. We get $T' = \emptyset$, $t_{\mathrm{left}} = 3$, $t_{\mathrm{right}} = 10$ and $\varepsilon = \min\{25 - 3, 25 - 10\} = 15$. Moreover, $b_{\mathrm{left}} = 0$, $b_{\mathrm{right}} = 7$ and $b_{\mathrm{inner}} = 0$. Then $\varphi_{\mathrm{left}}(x) = 0 + (1-1)x$, $\varphi_{\mathrm{right}}(x) = 7 + (1-1)x$ and $\varphi_{\mathrm{inner}}(x) = 0$. Thus, $\varphi_{\max}(x) = 7$. We get $s' = s' + 15 = 15$.

Next, consider $s' = 15$. We have $T' = \{25\}$, $t_{\mathrm{left}} = 15 + 3 = 18$, $t_{\mathrm{right}} = 15 + 10 = 25$ and $\varepsilon = 25 - 18 = 7$. Moreover, $b_{\mathrm{left}} = -3 + 1 \cdot 18 - 1 \cdot 15 = 0$, $b_{\mathrm{right}} = 0$ and $b_{\mathrm{inner}} = -3 + 1 \cdot (25 - 0) - 1 \cdot 15 = 7$. We get $\varphi_{\mathrm{left}}(x) = 0 + (1-1)x$, $\varphi_{\mathrm{right}}(x) = 0$ and $\varphi_{\mathrm{inner}}(x) = 7 - x$. Thus, we obtain $\varphi_{\max}(x) = 7 - x$. We get $s' = s' + 7 = 22$.

Finally, consider $s' = 22$. We have $T' = \emptyset$, $t_{\mathrm{left}} = 25$, $t_{\mathrm{right}} = 22 + 10 = 32$ and $\varepsilon = A - s' = 25 - 22 = 3$. Moreover, $\varphi_{\mathrm{left}}(x) = \varphi_{\mathrm{right}}(x) = \varphi_{\mathrm{inner}}(x) = 0$. Then $\varphi_{\max}(x) = 0$. We get $s' = s' + 3 = 25$.

We have $\Phi_1^2.I = \{15, 22, 25\}$, $\Phi_1^2.U = \{0, -1, 0\}$ and $\Phi_1^2.B = \{7, 7, 0\}$.

STEP $j = 1$, $k = 3$. We have $\Psi_j^k(x) = \frac{x}{3} + 3\frac{2}{3}$, $\Psi_j^k.I = \{25\}$, $\Psi_j^k.U = \{\frac{1}{3}\}$ and $\Psi_j^k.B = \{3\frac{2}{3}\}$. This step is performed analogously. We have to consider $s' = 0, 12, 15$. We obtain $\Phi_1^3.I = \{12, 15, 25\}$, $\Phi_1^3.U = \{0, -\frac{1}{3}, 0\}$ and $\Phi_1^3.B = \{8, 7, 0\}$.

STEP $j = 1$, $k = 4$. We have $\Psi_j^k(x) = 8$, $\Psi_j^k.I = \{25\}$, $\Psi_j^k.U = \{0\}$ and $\Psi_j^k.B = \{8\}$. This step is performed analogously. We have to consider $s' = 0, 12$. We obtain $\Phi_1^4.I = \{12, 25\}$, $\Phi_1^4.U = \{0, 0\}$ and $\Phi_1^4.B = \{8, 0\}$.

So, after STEP $j = 1$, we have $F_1(t) = \max\{\Phi_1^1, \Phi_1^2, \Phi_1^3, \Phi_1^4\}$, $F_1.I = \{12, 15, 22, 25\}$, $F_1.U = \{0, -\frac{1}{3}, -1, 0\}$ and $F_1.B = \{8, 8, 7, 0\}$, see Fig. 3.1. In fact, the function $F_1(t)$ is obtained only from the two functions $\Phi_1^2$ and $\Phi_1^3$, where $\Phi_1^2$ is the maximum function on the interval [0, 15] and $\Phi_1^3$ is the maximum function on the interval [15, 25].

Next, we only present the states considered and the functions calculated in the steps $j = 2, 3, 4$. As mentioned before, the full description of all calculations can be found in [25].

STEP $j = 2$, $k = 1$. The states considered are $s' = 0, 7, 10, 12, 15, 17, 20, 22$. We have $\Phi_2^1.I = \{7, 10, 15, 20, 25\}$, $\Phi_2^1.U = \{0, -\frac{1}{3}, -\frac{2}{5}, -1, -\frac{2}{5}\}$ and $\Phi_2^1.B = \{10, 10, 9, 7, 2\}$, see Fig. 3.2.

STEP $j = 2$, $k = 2$. Since $f_2.U[2] = 0$, this step can be done in an easier way. It is only necessary to shift the diagram of the function $F_1(t)$ to the left by the value 5 and up by the value 2. So, we have $\Phi_2^2.I = \{12-5, 15-5, 22-5, 25-5\}$, $\Phi_2^2.U = \{0, -\frac{1}{3}, -1, 0\}$ and $\Phi_2^2.B = \{8+2, 8+2, 7+2, 0+2\}$.

In Fig. 3.3, the maximum function is presented. In fact, we have $F_2(t) = \Phi_2^1(t)$, i.e., $F_2.I = \{7, 10, 15, 20, 25\}$, $F_2.U = \{0, -\frac{1}{3}, -\frac{2}{5}, -1, -\frac{2}{5}\}$ and $F_2.B = \{10, 10, 9, 7, 2\}$.

STEP $j = 3$, $k = 1$. Since $f_3.U[1] = 0$, this step can be done in an easier way. To obtain the function $\Phi_3^1(t)$, it is only necessary to shift the diagram of the function $F_2(t)$ to the left by the value 0 and up by the value 0.

STEP $j = 3$, $k = 2$. The states considered are $s' = 0, 3, 5, 6, 8, 11, 13, 16, 18, 21, 23$. We have $\Phi_3^2.I = \{7-4, 10-4, 15-4, 20-4, 25-4, 23, 25\}$, $\Phi_3^2.U = \{0, -\frac{1}{3}, -\frac{2}{5}, -1, -\frac{2}{5}, -2, 0\}$ and $\Phi_3^2.B = \{14, 14, 13, 11, 6, 4, 0\}$.

STEP $j = 3$, $k = 3$. The states considered are $s' = 0, 1, 3, 4, 6, 9, 11, 14, 16, 19, 21$. We have $\Phi_3^3.I = \{1, 4, 9, 11, 15\frac{2}{3}, 19, 21, 25\}$, $\Phi_3^3.U = \{0, -\frac{1}{3}, -\frac{2}{5}, -\frac{1}{2}, -1, -\frac{2}{5}, -\frac{1}{2}, 0\}$ and $\Phi_3^3.B = \{15, 14, 12, 11, 6\frac{1}{3}, 5, 0\}$. In this example, we do not cut the point $15\frac{2}{3}$ as it is presented in Fig. 1. So, here we have two non-integer break points.

STEP $j = 3$, $k = 4$. Since $f_3.U[4] = 0$, this step can be done in an easier way. To obtain the function $\Phi_3^4(t)$, it is only necessary to shift the diagram of the function $F_2(t)$ to the left by the value 6 and up by the value 5.

The functions $\Phi_3^1(t)$ and $\Phi_3^2(t)$ are presented in Fig. 3.4, and the functions $\Phi_3^3(t)$ and $\Phi_3^4(t)$ are shown in Fig. 3.5. In Fig. 3.6, the maximum function
$$F_3(t) = \max\{\Phi_3^1(t), \Phi_3^2(t), \Phi_3^3(t), \Phi_3^4(t)\}$$
is presented. So, we have $F_3.I = \{1, 4, 9, 11, 15\frac{2}{3}, 21, 22\frac{1}{2}, 25\}$, $F_3.U = \{0, -\frac{1}{3}, -\frac{2}{5}, -\frac{1}{2}, -1, -\frac{2}{5}, -\frac{1}{2}, -\frac{2}{5}\}$ and $F_3.B = \{15, 14, 12, 11, 6\frac{1}{3}, 4, 1\}$.

Figure 3: Calculations in the example

STEPS $j = 4$, $k = 1, 2, 3$ are performed in an easy way, i.e., to obtain the functions $\Phi_4^1(t)$, $\Phi_4^2(t)$ and $\Phi_4^3(t)$, we have to shift the diagram of the function $F_3(t)$ to the left by the values 0, 3, 4 and up by the values 0, 1, 4, respectively. In Fig. 4, the maximum function $F_4(t)$ is presented.

Figure 4: Function $F_4(t)$

To find an optimal solution at the point $s = 0$, we can do backtracking. We have $x_4 = 4$ and $f_4(x_4) = 4$, $x_3 = 6$ and $f_3(x_3) = 5$, $x_2 = 5$ and $f_2(x_2) = 2$, as well as $x_1 = 10$ and $f_1(x_1) = 7$. So, the optimal objective function value is $F^*(0) = 18$.

In the GrA, we considered the following number of states $s'$: $2 + 3 + 3 + 2 = 10$ (for $j = 1$), $8 + 4 = 12$ (for $j = 2$, where 4 states were considered for $k = 2$), $5 + 10 + 11 + 5 = 31$ (for $j = 3$, where 5 states were considered for $k = 1$ and $k = 4$), and $7 + 7 + 7 = 21$ (for $j = 4$, i.e., during the shift of the diagram). So, in total we considered $10 + 12 + 31 + 21 = 74$ states $s'$. In the DPA, approximately $25 \cdot (3 + 2 + 4 + 3) = 300$ states will be considered. If we scale our instance by a large number $M$ (i.e., we multiply all input data by $M$), the running time of the DPA increases by the factor $M$, but the running time of the GrA remains the same. Of course, for each state in the GrA, we need more calculations than in the DPA. However, this number is constant, and the GrA has a better running time.


5. An FPTAS based on the GrA

In this section, a fully polynomial-time approximation scheme (FPTAS) is derived based on the GrA presented in Section 3.

First, we recall some relevant definitions. For the optimization problem of minimizing a function $F(\pi)$, a polynomial-time algorithm that finds a feasible solution $\pi'$ such that $F(\pi') \le \rho \cdot F(\pi^*)$, where $F(\pi^*)$ is the optimal value and $\rho \ge 1$, is called a $\rho$-approximation algorithm; the value of $\rho$ is called a worst-case ratio bound. If a problem admits a $\rho$-approximation algorithm, it is said to be approximable within a factor $\rho$. A family of $\rho$-approximation algorithms is called an FPTAS if $\rho = 1 + \varepsilon$ for any $\varepsilon > 0$ and the running time is polynomial with respect to both the length of the problem input and $1/\varepsilon$.

Let $LB = \max_{j=1,\ldots,n} f_j(A)$ be a lower bound and $UB = n \cdot LB$ be an upper bound on the optimal objective function value.

The idea of the FPTAS is as follows. Let $\delta = \frac{\varepsilon LB}{n}$. To reduce the time complexity of the GrA, we have to diminish the number of columns $|F_j.B|$ considered, which corresponds to the number of different objective function values $b \in F_j.B$, $b \le UB$. If we do not consider the original values $b \in F_j.B$ but instead the values $\bar{b}$ which are rounded up or down to the nearest multiple of $\delta$, there are no more than $\frac{UB}{\delta} = \frac{n^2}{\varepsilon}$ different values $\bar{b}$. Then we will be able to approximate the function $F_j(t)$ by a similar function with no more than $\frac{2n^2}{\varepsilon}$ break points (see Fig. 5). Furthermore, for such a modified table representing a function $\overline{F}_j(t)$, we will have
$$|\overline{F}_j(t) - F_j(t)| < \delta \le \frac{\varepsilon F(\pi^*)}{n}.$$
If we do the rounding and modification after each step [2.2], the cumulative error will be no more than $n\delta \le \varepsilon F(\pi^*)$, and the total running time of the $n$ runs of the step [2.2] will be
$$O\left(\frac{n^2 \sum k_j}{\varepsilon}\right),$$

i.e., an FPTAS is obtained.

In [28], a technique was proposed to improve the complexity of an approximation algorithm for optimization problems. This technique can be described as follows. Let there exist an FPTAS for a problem with a running time bounded by a polynomial $P(L, \frac{1}{\varepsilon}, \frac{UB}{LB})$, where $L$ is the input length of the problem instance and $UB$, $LB$ are known upper and lower bounds, respectively. Let the value $\frac{UB}{LB}$ not be bounded by a constant. Then this technique enables us to find in $P(L, \log\log\frac{UB}{LB})$ time values $UB_0$ and $LB_0$ such that
$$LB_0 \le F^* \le UB_0 < 3 LB_0,$$
i.e., $\frac{UB_0}{LB_0}$ is bounded by the constant 3. By using such values $UB_0$ and $LB_0$, the running time of the FPTAS will be reduced to $P(L, \frac{1}{\varepsilon})$, where $P$ is the same polynomial. So, by using this technique, we can improve the FPTAS to have a running time of
$$O\left(\frac{n \cdot \sum k_j}{\varepsilon}\,(1 + \log\log n)\right).$$

Figure 5: Substitution of columns and modification of $F_l(t)$

Finally, we only note that an FPTAS based on a GrA was presented in [29] for some single machine scheduling problems.

6. Concluding Remarks

In this paper, we used a graphical approach to improve a known pseudo-polynomial algorithm for a project investment problem and to derive an FPTAS with the best known running time.

The practical usefulness of the graphical approach is not limited to this project investment problem or similar warehousing and lot sizing problems. The graphical approach can be applied to problems for which a pseudo-polynomial algorithm exists and Boolean variables are used in the sense that yes/no decisions have to be made. This is the case for many applications of capital budgeting in science and engineering. However, e.g., for the knapsack problem, the graphical algorithm mostly reduces substantially the number of states to be considered, but the time complexity of the algorithm remains pseudo-polynomial [26]. On the other hand, e.g., for the single machine scheduling problem of maximizing total tardiness, such a graphical algorithm improved the complexity from $O(n \sum p_j)$ to $O(n^2)$ [30]. It seems to be a challenging task for future research to study other well-known combinatorial optimization problems with this approach, to know exactly its performance for different problems and to deduce more general properties.

Acknowledgments

This work has been supported by the grants RFBR 13-01-12108, 13-08-13190, 15-07-03141 and 15-07-07489, by the DAAD grant A/1400328, and by the HSE Faculty of Economics.


References

[1] Tavana, M., Khalili-Danghani, K., and Abtahi, A.R. 2013. "A fuzzy multi-dimensional multiple-choice model for project portfolio selection using an evolutionary algorithm." Annals of Operations Research 206 (1): 449-483.

[2] Chang, P.-T., and Lee, J.-H. 2012. "A fuzzy DEA and knapsack formulation integrated model for project selection." Computers & Operations Research 30: 112-125.

[3] Beaujon, G.J., Marin, S.P., and McDonald, G.C. 2001. "Balancing and optimizing a portfolio of R&D projects." Naval Research Logistics 48: 18-40.

[4] Mavrotas, G., Diakoulaki, D., and Kourentzis, A. 2008. "Selection among ranked projects under segmentation, policy and logical constraints." European Journal of Operational Research 187 (1): 177-192.

[5] Herbots, J., Herroelen, W., and Leus, R. 2007. "Dynamic order acceptance and capacity planning on a single bottleneck resource." Naval Research Logistics 54: 874-889.

[6] Kellerer, H., and Pferschy, U. 2004. "Improved dynamic programming in connection with a FPTAS for the knapsack problem." Journal of Combinatorial Optimization 8: 5-11.

[7] Caprara, A., Kellerer, H., Pferschy, U., and Pisinger, D. 2000. "Approximation algorithms for knapsack problems with cardinality constraints." European Journal of Operational Research 123 (2): 333-345.

[8] Mastrolilli, M., and Hutter, M. 2006. "Hybrid rounding techniques for knapsack problems." Discrete Applied Mathematics 154 (4): 640-649.

[9] Bazgan, C., Hugot, H., and Vanderpoorten, D. 2009. "Implementing an efficient FPTAS for the 0-1 multi-objective knapsack problem." European Journal of Operational Research 198 (1): 47-56.

[10] Sristava, V., and Bullo, F. 2014. "Knapsack problems with sigmoid utilities: Approximation algorithms via hybrid optimization." European Journal of Operational Research 236 (2): 488-498.

[11] Guler, A., Nuriyev, U.G., Berberler, M.E., and Nurieva, F. 2012. "Algorithms with guarantee value for knapsack problems." Optimization 61 (4): 477-488.

[12] Wu, F.L., Yen, Z., Hou, Y.H., and Ni, Y.X. 2004. "Applications of AI techniques to generation planning and investment." IEEE Power Engineering Society General Meeting, Denver, 936-940.

[13] Li, X.M., Fang, S.-C., Tian, Y., and Guo, X.L. 2014. "Expanded Model of the Project Portfolio Selection Problem with Divisibility, Time Profile Factors and Cardinality Constraints." Journal of the Operational Research Society, doi:10.1057/jors.2014.75.

[14] Dolgui, A., and Proth, J.-M. 2010. Supply Chain Engineering: Useful Methods and Techniques. Springer-Verlag.

[15] Schemeleva, K., Delorme, X., Dolgui, A., Grimaud, F., and Kovalyov, M.Y. 2013. "Lot-sizing on a Single Imperfect Machine: ILP Models and FPTAS Extensions." Computers and Industrial Engineering 65 (4): 561-569.

[16] Kellerer, H., Pferschy, U., and Pisinger, D. 2004. Knapsack Problems. Springer-Verlag, Berlin.

[17] McLay, L.A., and Jacobson, S.H. 2007. "Algorithms for the bounded set-up knapsack problem." Discrete Optimization 4: 206-412.

[18] Posypkin, M.A., and Sigal, I.Kh. 2006. "Speedup Estimates for Some Variants of the Parallel Implementations of the Branch-and-Bound Method." Computational Mathematics and Mathematical Physics 46 (12): 2189-2202.

[19] O'Neil, E.T., and Kerlin, S. 2010. "A Simple $2^{O(\sqrt{x})}$ Algorithm for PARTITION and SUBSET SUM." http://www.lidi.info.unlp.edu.ar/WorldComp2011-Mirror/FCS8171.pdf

[20] Bar-Noy, A., Golin, M.J., and Zhang, Y. 2009. "Online Dynamic Programming Speedups." Theory of Computing Systems 45 (3): 429-445.

[21] Eppstein, D., Galil, Z., and Giancarlo, R. 1988. "Speeding up Dynamic Programming." In Proc. 29th Symp. Foundations of Computer Science.

[22] Wagelmans, A.P.M., and Gerodimos, A.E. 2000. "Improved Dynamic Programs for Some Batching Problems Involving the Maximum Lateness Criterion." Operations Research Letters 27: 109-118.

[23] Shaw, D.X., and Wagelmans, A.P.M. 1998. "An Algorithm for Single-Item Capacitated Economic Lot Sizing with Piecewise Linear Production Costs and General Holding Costs." Management Science 44 (6): 831-838.

[24] Kameshwaran, S., and Narahari, Y. 2009. "Nonconvex Piecewise Linear Knapsack Problems." European Journal of Operational Research 192: 56-68.

[25] Gafarov, E.R., Dolgui, A., Lazarev, A.A., and Werner, F. 2014. "A Graphical Approach to Solve an Investment Optimization Problem." Journal of Mathematical Modelling and Algorithms in Operations Research 13 (4): 597-614.

[26] Lazarev, A.A., and Werner, F. 2009. "A Graphical Realization of the Dynamic Programming Method for Solving NP-hard Problems." Computers & Mathematics with Applications 58 (4): 619-631.

[27] Aho, A.V., Hopcroft, J.E., and Ullman, J.D. 1983. Data Structures and Algorithms. Addison-Wesley, London.

[28] Chubanov, S., Kovalyov, M.Y., and Pesch, E. 2006. "An FPTAS for a Single-Item Capacitated Economic Lot-Sizing Problem with Monotone Cost Structure." Mathematical Programming 106: 453-466.

[29] Gafarov, E.R., Dolgui, A., and Werner, F. 2014. "A Graphical Approach for Solving Single Machine Scheduling Problems Approximately." International Journal of Production Research 52 (13): 3762-3777.

[30] Gafarov, E.R., Lazarev, A.A., and Werner, F. 2012. "Transforming a Pseudo-Polynomial Algorithm for the Single Machine Total Tardiness Problem into a Polynomial One." Annals of Operations Research 196: 247-261.