
Theory and Methodology

Computing lower bounds by destructive improvement: An application to resource-constrained project scheduling

Robert Klein *, Armin Scholl

Institut für Betriebswirtschaftslehre, Technische Hochschule Darmstadt, Fachgebiet OR, Hochschulstrasse 1, D-64289 Darmstadt,

Germany

Received 1 July 1997; accepted 30 October 1997

Abstract

In this paper, two meta-strategies for computing lower bounds (for minimization problems) are described. Constructive (direct) methods directly calculate a bound value by relaxing a problem and solving this relaxation. Destructive improvement techniques restrict a problem by setting a maximal objective function value F and try to contradict (destruct) the feasibility of this reduced problem. In case of success, F or even F + 1 is a valid lower bound value. The fundamental properties and differences of both meta-strategies are explained by applying them to the well-known resource-constrained project scheduling problem (RCPSP). For this problem, only a few constructive bound arguments are available in the literature. We present a number of extensions and new methods as well as techniques for reducing problem data which can be exploited within the destructive improvement framework. Comprehensive numerical experiments show that the new constructive bound arguments clearly provide better bounds than the former ones, and that further significant improvements are obtained through an appropriate application of destructive improvement. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Scheduling theory; Project management; Bounding techniques

1. The problem

We present different methods for computing lower bounds for the NP-hard resource-constrained project scheduling problem (RCPSP). This well-known problem can be stated as follows: A project has to be realized such that its duration or, equivalently, its total completion time is minimized.

Realizing the project requires performing a set of jobs J = {1, ..., n}, each of which has a given duration d_j (in number of periods, with j = 1, ..., n). No preemption of jobs is allowed, i.e., whenever a job has started in a period t, it must be performed consecutively during the periods t, t + 1, ..., t + d_j − 1. Due to technological restrictions, precedence constraints, defining end-start relationships between the jobs, may exist.

Within a project, jobs which may be performed concurrently compete for a number of different resource types (workers, machines, budgets).

European Journal of Operational Research 112 (1999) 322–346

* Corresponding author. Tel.: +49 6151 16 3964; fax: +49 6151 16 6504; e-mail: [email protected]

0377-2217/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved.

PII: S0377-2217(97)00442-6


Whenever the availability of a resource type is not sufficient to satisfy the requirements, those jobs must not be performed simultaneously, and resource constraints have to be observed. The basic type of the RCPSP considers m (renewable) resource types, defined by the index set R = {1, ..., m}. In each period of the planning horizon, a type r is available in a constant number a_r of units (r = 1, ..., m). Performing a job j requires u_jr units of type r in each period. The RCPSP is to find a non-preemptive schedule of the jobs, i.e., a set of starting times, such that the precedence and resource constraints are satisfied and the project duration is minimized.

The structure of a project can be represented by a network diagram with jobs as nodes, durations as node weights, and precedence constraints as arcs. Arcs between two jobs which are also connected by a path containing more than one arc are redundant and can be omitted. Fig. 1 shows a project with n = 12 jobs and one renewable resource (m = 1), which is available with a_1 = 4 units per period. Job 1 and job n are dummy start and terminal nodes with duration 0.

In the last decades, a large variety of solution procedures has been proposed for RCPSP. Among the heuristics, priority-rule based approaches, building on the serial and parallel scheduling schemes, are the most important (e.g. Alvarez-Valdez and Tamarit, 1989; Boctor, 1990; Kolisch, 1996; Thomas and Salhi, 1997). For exactly solving RCPSP, a number of branch and bound procedures have been developed (e.g. Stinson et al., 1978; Talbot and Patterson, 1978; Christofides et al., 1987; Carlier and Latapie, 1991; Demeulemeester and Herroelen, 1992; Mingozzi et al., 1995; Brucker et al., 1996; Sprecher, 1996). However, only small-sized and medium-sized problem instances with up to 30 jobs can be solved exactly in a satisfactory manner. Since clever branching strategies are already available, this limitation is mainly due to the lack of strong lower bounds.

In Section 2, we present basic concepts for computing lower bounds for RCPSP proposed in the literature so far. We restrict ourselves to methods which can be realized with small computational effort by exploiting the special problem structure of RCPSP, and exclude approaches based on LP or Lagrangean relaxation. LP relaxation based approaches have been presented by Christofides et al. (1987), Mingozzi et al. (1995) as well as Brucker et al. (1996). The latter two consider a reformulation of RCPSP based on all "feasible" subsets of jobs, i.e., all combinations of jobs which may be performed simultaneously without violating precedence or resource constraints. Its LP relaxation corresponds to allowing preemption of jobs. Since the number of feasible subsets grows

Fig. 1. Project network.



exponentially with the number of jobs, the computational effort is very high, though the quality of the bounds obtained is good. Determining all feasible subsets in advance can be avoided by using the technique of column generation for solving the RCPSP with preemption, as proposed by Weglarz et al. (1977). Lagrangean relaxation based approaches can be found in Fisher (1973) and Christofides et al. (1987).

Section 3 is devoted to introducing new direct bound arguments which utilize relationships to other optimization problems and can favourably exploit the special problem structure of RCPSP. In Section 4, the new meta-strategy "destructive improvement" is discussed and contrasted with traditional concepts of computing bounds. Section 5 shows its application to RCPSP, based on techniques for reducing the problem data and adapted direct bound arguments. Computational results indicating considerably improved bound values for standard benchmark data sets are reported in Section 6. Section 7 contains conclusions which can be drawn from this research and discusses future research tasks.

2. Literature-based bound arguments for RCPSP

Critical path bound (LB1): The most obvious lower bound is based on omitting the capacity restrictions. Then the minimal project duration is obtained by computing the length of a critical (longest) path in the project network. For describing this calculation, the following notation is introduced:

A forward pass, which can be realized in O(n²) time, yields earliest starting and finishing times for all jobs, with LB1 (the length of the critical path) being equal to the earliest starting time of the terminal dummy job. Starting with ES_1 = EF_1 = 0, we obtain:

ES_j = max{EF_i | i ∈ P_j},  EF_j = ES_j + d_j  for j = 2, ..., n.   (1)

Subsequently, latest finishing and starting times may be computed by a backward pass beginning with LF_n = LS_n = LB1:

LF_j = min{LS_h | h ∈ F_j},  LS_j = LF_j − d_j  for j = n − 1, ..., 1.   (2)

For each job j, the interval [ES_j, LF_j] defines a time window in which j has to be processed if the project is to be completed within LB1 time units. In our example, the critical path is C = ⟨1, 3, 6, 8, 11, 12⟩ with total duration LB1 = 16 periods (cf. Fig. 1).
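The forward and backward passes of Eqs. (1) and (2) can be sketched in a few lines. The following is a minimal illustration on a hypothetical 5-job project (jobs 0 and 4 as dummy start and terminal nodes), not the instance of Fig. 1.

```python
def critical_path_bound(d, preds):
    """Forward/backward pass per Eqs. (1) and (2).
    d: duration per job (jobs in topological order, first/last are dummies);
    preds[j]: set of immediate predecessors of job j."""
    n = len(d)
    ES, EF = [0] * n, [0] * n
    for j in range(n):                       # forward pass: Eq. (1)
        ES[j] = max((EF[i] for i in preds[j]), default=0)
        EF[j] = ES[j] + d[j]
    LB1 = ES[n - 1]                          # earliest start of terminal dummy
    succs = [[h for h in range(n) if j in preds[h]] for j in range(n)]
    LF, LS = [LB1] * n, [0] * n
    for j in range(n - 1, -1, -1):           # backward pass: Eq. (2)
        LF[j] = min((LS[h] for h in succs[j]), default=LB1)
        LS[j] = LF[j] - d[j]
    return LB1, ES, LF

# Hypothetical project: 0 and 4 are dummy start/terminal jobs.
d = [0, 3, 2, 4, 0]
preds = [set(), {0}, {0}, {1}, {2, 3}]
LB1, ES, LF = critical_path_bound(d, preds)
```

The time window of job j is then [ES[j], LF[j]], as used by the bound arguments below.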

Capacity bound (LB2): This rather simple bound argument, which can be computed in O(mn) time, discards all but one resource type. A bound value is computed as the total requirement of this resource divided by its per-period availability and rounded up to the next larger integer. In the example, we obtain LB2 = ⌈66/4⌉ = 17.

LB2 = max{ ⌈(Σ_{j=1}^{n} u_jr · d_j) / a_r⌉ | r = 1, ..., m }.   (3)

Critical sequence bound (LB3): A job j is said to be incompatible to another job h if both jobs cannot be processed in parallel. This is true if they are either related by precedence or their cumulated capacity requirement exceeds the supply of some resource r. The critical sequence bound adds capacity considerations to the critical path bound (cf. Stinson et al., 1978). Imagine the jobs of a critical path C being scheduled each immediately after its predecessor. Processing some job j ∉ C in parallel to the critical path requires that within the time window [ES_j, LF_j] there is an interval with minimum length d_j during which the residual capacity is no smaller than u_jr for each resource type r. Let e_j^max be the maximal length of such an interval. In case of e_j^max < d_j, job j cannot be processed completely, and the project cannot be terminated within less than LB1 + d_j − e_j^max periods. Hence, a lower bound LB3 is defined as:

LB3 = LB1 + max{ 0, max{ d_j − e_j^max | j ∉ C } }.   (4)
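Given the per-period requirements and, for LB3, precomputed values e_j^max, formulas (3) and (4) reduce to a few lines. The instance data below are hypothetical stand-ins, not the Fig. 1 project.

```python
import math

def capacity_bound(d, u, a):
    """LB2, formula (3): total requirement per resource divided by its
    per-period availability, rounded up; maximum over resources."""
    return max(math.ceil(sum(u[j][r] * d[j] for j in range(len(d))) / a[r])
               for r in range(len(a)))

def critical_sequence_bound(lb1, d, e_max):
    """LB3, formula (4): d and e_max indexed by the non-critical jobs only."""
    return lb1 + max(0, max(d[j] - e_max[j] for j in d))

# Hypothetical data: two jobs, one resource with 4 units per period ...
lb2 = capacity_bound(d=[2, 3], u=[[3], [2]], a=[4])   # ceil(12/4) = 3
# ... and two non-critical jobs with maximal free intervals e_max.
lb3 = critical_sequence_bound(lb1=10, d={5: 4, 6: 2}, e_max={5: 1, 6: 3})
```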

P_j / F_j   set of jobs which immediately precede / follow job j
ES_j / LS_j earliest / latest starting time of job j
EF_j / LF_j earliest / latest finishing time of job j



Example: Consider the partial schedule built by the critical path C = ⟨1, 3, 6, 8, 11, 12⟩ of our example, given in Fig. 2. With LB1 = 16, we obtain the time windows and values e_j^max of the remaining jobs shown in Table 1. For example, consider job 7, which is incompatible to all jobs but 6. Due to ES_7 = 6, we get e_7^max = 1. Hence, job 7 requires 3 further periods. The same is true for job 2, and LB3 = 19.

Both determining the critical path and computing the values e_j^max can be done in O(n²) time. Since this is done consecutively, the overall complexity of LB3 is O(n²), too. Two extensions of LB3 have been proposed which try to consider several jobs that cannot be processed within their time windows simultaneously:
· Demeulemeester (1992) determines a lower bound on the minimal project completion time necessary to schedule a further sequence of jobs along with those on a critical path. The sequence starts with a job h and ends with one of its (indirect) successors j. Both jobs are chosen such that they cannot be processed completely within their time windows, i.e., e_h^max < d_h and e_j^max < d_j. Additionally, the sequence contains all jobs building a single path between h and j. The lower bound value can be obtained by a pseudo-polynomial dynamic programming approach and is calculated for all possible combinations of h and j.
· Brucker et al. (1996) consider a critical path as well as an additional disjoint path through the project network. The nodes on each path are interpreted as a task sequence building two jobs in a job-shop problem with the objective of minimizing the completion time. Task pairs from the different jobs for which simultaneous processing causes a resource conflict in the original RCPSP instance must not be executed in parallel, whereas this is possible for all other pairs. Identifying such incompatible task pairs takes O(mn²) time, and subsequently solving the job-shop problem, which gives a lower bound for RCPSP, has complexity O(n log n). In order to improve on this bound, additional jobs which still cannot be scheduled within their time windows are considered.

Fig. 2. Computation of LB3.

Node packing bound (LB4): Another lower bound is based on solving a weighted node packing problem (cf. Mingozzi et al., 1995). It consists of finding a subset S of jobs, each of which is incompatible to every other one within this set. These jobs must be scheduled sequentially, thereby defining a lower bound equal to the sum of their durations. A heuristic solution to this weighted node packing problem can be found as follows. First, the set of all jobs is sorted and stored in an ordered list. The first job is removed from the list and added to S. All jobs being compatible to this job are deleted from the list. This process is repeated until the list is empty. The total duration of the jobs in S defines a lower bound LB4. Different solutions can be obtained by applying several sorting criteria (cf. Mingozzi et al., 1995; Demeulemeester and Herroelen, 1997). An obvious approach is to place the jobs of a critical path at the first positions, followed by the remaining jobs in increasing order of the number of jobs which can be processed in parallel to the job considered. This rule reflects the fact that as many jobs as possible should remain in the list after each iteration.

Table 1
Time windows

Job       2   4   5   7   9  10
d_j       3   1   3   4   6   4
ES_j      0   0   3   6   7   7
LF_j      5   5   8  12  16  16
e_j^max   0   0   5   1   9   9



Additionally, the list is cyclically rotated and the procedure is applied repeatedly. An improved version of the node packing bound does not completely delete from the list a job h that is compatible to a job j just added to S (De Reyck and Herroelen, 1996b). Instead, if d_h > d_j, job h stays in the list with modified duration d_h := d_h − d_j. Since the list has to be processed once completely and updating the remaining jobs requires checking all resource constraints, the computational complexity of LB4 is O(mn²) if rotation is omitted.

Example: The list may be L = ⟨3, 6, 8, 11, 4, 2, 7, 5, 9, 10⟩ according to the sorting criterion mentioned above. The jobs 3, 6, 8, 11 of the critical path are successively included in S, because they are incompatible to each other. After considering job 3, job 5 is deleted. Job 6 requires reducing the duration of jobs 2 and 7 to d_2 = 1 and d_7 = 2. Including jobs 8 and 11 results in deleting the jobs 9 and 10. Now only job 4 and the remainders of 2 and 7, which are incompatible to each other, remain. Therefore, they are included in S, resulting in a bound LB4 = 16 + 1 + 1 + 2 = 20. The original version of the node packing bound, which considers complete jobs only, would give a lower bound value of 17.
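The improved node-packing heuristic can be sketched as follows. The priority list, durations, and the `incompatible` predicate below are hypothetical stand-ins for the sorting criteria and resource checks described above.

```python
def node_packing_bound(order, d, incompatible):
    """order: priority-sorted job ids; d: durations; incompatible(h, j): True
    iff h and j cannot run in parallel. Improved rule (De Reyck/Herroelen):
    a compatible job keeps the reduced duration d_h - d_j instead of being
    deleted outright."""
    remaining = {j: d[j] for j in order}       # residual durations
    bound = 0
    for j in order:
        if j not in remaining:
            continue
        dj = remaining.pop(j)                  # add j to S with its residual
        bound += dj
        for h in list(remaining):
            if not incompatible(h, j):         # h could overlap with j
                remaining[h] -= dj             # shrink instead of delete
                if remaining[h] <= 0:
                    del remaining[h]
    return bound

# Hypothetical 3-job instance: job 0 clashes with 1 and 2; 1 and 2 are compatible.
pairs = {frozenset((0, 1)), frozenset((0, 2))}
lb4 = node_packing_bound([0, 1, 2], [3, 2, 2],
                         lambda h, j: frozenset((h, j)) in pairs)
```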

Parallel-machine bound (LB5): This bound is based on the relaxation of RCPSP to a special parallel-machine scheduling problem (cf. Carlier and Latapie, 1991), which can be described as follows: A set S of jobs has to be performed on k identical parallel machines, with at most one job being processed on each machine at a time. At the beginning of the planning horizon, not all of the jobs are available immediately. This may, e.g., be due to preprocessing or restricted availability of materials. The amount of time which has to pass before a job j becomes available on the machines is called the head a_j of j. Analogously, terminating a job j may require additional time after leaving the machines, e.g., for post-processing, called the tail q_j of j. The objective of the parallel-machine problem is to find a sequence of all jobs for which the makespan, i.e., the amount of time necessary for terminating all jobs, is minimized.

A subset of jobs of the original RCPSP instance for which it is known that at most k jobs of the subset can be performed at a time may be interpreted as a set S of jobs to be scheduled on k parallel machines. Additionally, heads and tails for those jobs may be derived using the information inherent in earliest and latest starting and finishing times. Since ES_j is a lower bound on the starting time of job j in the RCPSP instance, j cannot be available earlier on a "machine". Hence, a_j = ES_j for all j ∈ S. Accordingly, tails q_j may be determined: at least q_j = LF_n − LF_j periods have to pass after performing j before the whole project is completed. By analogy, having processed job j on one of the machines, a tail of q_j periods is necessary for its termination.

To determine a set S of jobs for a given value of k, Carlier and Latapie (1991) discard all but one resource type r. Since for each type r several subsets of jobs defining a parallel-machine problem may exist, they present an approach for enumerating all of them, which results in a rather high total computational effort. Therefore, we describe a restricted approach to determine a single subset S for each resource type r: S contains all jobs j whose demand u_jr is not less than u_r^min = ⌊a_r/(k + 1)⌋ + 1, because at most k of them can be processed in parallel. Performing a job j requires at least b = ⌊u_jr/u_r^min⌋ machines. Therefore, b copies of j are included in S, i.e., j is replicated b − 1 times.

Exactly solving a parallel-machine problem defined by a set S of jobs yields a lower bound on the completion time of the underlying RCPSP instance. Since the parallel-machine problem is NP-hard, the effort may usually be too high. Nevertheless, bound arguments available for the parallel-machine problem can be applied. For S' ⊆ S being an arbitrary subset of S, a lower bound on the makespan can be calculated by

M(S') = min{ a_j | j ∈ S' } + ⌈(Σ_{j∈S'} d_j) / k⌉ + min{ q_j | j ∈ S' }.   (5)

The maximum of all those values M(S') for any S' ⊆ S defines a lower bound on RCPSP. It can be computed in O(n³) time by systematically considering all pairs of values a_h and q_i with h, i ∈ S and



a_h ≤ a_i, q_h ≥ q_i. The maximal set S' corresponding to a_h and q_i being the minimum head and tail in Eq. (5) contains all jobs j ∈ S (including h and i) which fulfill a_h ≤ a_j and q_j ≥ q_i. The computational effort can be reduced to O(n²) by considering the pairs (a_h, q_i) following a lexicographic ordering. Then the sets S' and the corresponding sums of job durations are derived from the preceding pair by a constant-time update. An alternative and more complex method of computing S' is presented in Carlier and Latapie (1991).

For each resource type r, a set S is constructed as described above and the maximal M(S') with S' ⊆ S is computed. The maximum of all bound values obtained in this manner serves as LB5.
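A straightforward evaluation of Eq. (5) over all head/tail threshold pairs might look as follows; the O(n²) constant-time-update refinement is omitted, and the job tuples (a_j, d_j, q_j) are hypothetical.

```python
import math

def parallel_machine_bound(jobs, k):
    """jobs: list of (head a_j, duration d_j, tail q_j); k: machine count.
    Maximizes, over subsets S' induced by a head/tail threshold pair,
    M(S') = min head + ceil(sum of durations / k) + min tail (Eq. (5))."""
    best = 0
    for ah, _, _ in jobs:
        for _, _, qi in jobs:
            # S': all jobs with head >= a_h and tail >= q_i
            total = sum(dj for aj, dj, qj in jobs if aj >= ah and qj >= qi)
            if total > 0:
                best = max(best, ah + math.ceil(total / k) + qi)
    return best

# Hypothetical one-machine instance (k = 1).
lb = parallel_machine_bound([(0, 3, 2), (1, 2, 0)], k=1)
```

Using the threshold pair (a_h, q_i) instead of the true minima of S' only weakens individual terms, never invalidates them, and the exact value is attained when the thresholds coincide with jobs in S'.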

Example: Applying a forward and backward pass to our project of Fig. 1 results in the heads and tails of jobs given in Table 2. If we consider k = 1 (one-machine problem) for the single resource r = 1, we get u_r^min = 3 and S = {3, 4, 8}. Evaluating Eq. (5) for all subsets of S gives the maximal bound value M(S' = {3, 4}) = 0 + (5 + 1) + 11 = 17, whereas M({3, 4, 8}) = 15. For k = 2, we obtain u_r^min = 2 and S = {3, 4, 8, 7, 2}, all jobs requiring at least b = 1 machine. The maximal bound value is determined by M({2, 3, 4}) = 0 + ⌈9/2⌉ + 11 = 16. For k = 3, we again obtain u_r^min = 2 and S = {3, 4, 8, 7, 2}, yielding the weaker value M({2, 3, 4}) = 0 + ⌈9/3⌉ + 11 = 14. In case of k = 4, we have u_r^min = 1, so that each job is contained in S in b = u_jr copies. Hence, M(S) = 0 + ⌈(Σ_j u_jr · d_j)/a_r⌉ + 0 = 17. This value is maximal, because no subset of S gives a better value in Eq. (5). The best overall bound value is LB5 = 17.

Remark 1. For each value of k, the computational complexity of LB5 is O(mn²). As indicated by the example in case of k = 4, LB5 always gives results at least as good as LB2 when k = a_r is examined for each resource type r. Furthermore, it can be seen that it may be helpful, in particular when a_r is small, to determine the number b of copies more carefully, because, e.g., for k = 2 the jobs with u_jr = 3 actually could be duplicated.

3. New lower bound arguments for RCPSP

Extended node packing bound (LB6): LB4 can be improved by applying additional bound arguments to a residual project instance consisting of the jobs remaining in the list after each iteration. These jobs are considered with their reduced durations. The corresponding precedence network is derived from the original one by deleting all nodes representing jobs removed from the list, preserving original direct and indirect precedence relations. The best bound value obtained for the residual instance may be added to the total duration of the jobs currently in S, thereby defining a lower bound on the initial instance in each iteration. The maximum of those values serves as LB6.

To keep the computational effort low, we only use the simple bound arguments LB1 and LB2 for evaluating the residual projects. Due to applying LB1 anyway, it is no longer appropriate to put the jobs of the critical path at the beginning of the list. Instead, jobs are sorted in increasing order of the number of jobs which may be processed in parallel, as proposed by Demeulemeester and Herroelen (1997). Ties are broken according to longer job durations, so that among a set of pairwise compatible jobs the longest one is preferred.

In each of O(n) iterations, updating S and removing jobs from the list requires time O(mn). Applying LB1 and LB2 to the residual project takes time O(n²) and O(m), respectively. Thus, the total complexity is O(n(mn + n²)).

Table 2
Heads and tails of jobs

Job                 2   3   4   5   6   7   8   9  10  11
a_j = ES_j          0   0   0   3   5   6   7   7   7  12
LF_j                5   5   5   8   7  12  12  16  16  16
q_j = LF_n − LF_j  11  11  11   8   9   4   4   0   0   0



Example: Using the adapted sorting criterion, we obtain the list L = ⟨3, 4, 11, 8, 7, 2, 6, 9, 10, 5⟩. After successively including the pairwise incompatible jobs 3, 4, 11, and 8 in S and removing 5, 9, and 10 from the list, we obtain Σ_{j∈S} d_j = 15. For the residual instance defined by the remaining jobs 2, 6, and 7 with original durations, the critical path consists of the jobs 2 and 7 with a length of LB1 = 7. The capacity bound only gives LB2 = ⌈(3·2 + 2·1 + 4·2)/4⌉ = 4. Hence, a bound value of 22 is reached in this iteration. Since no better value is found in another iteration, LB6 = 22.

Generalized node packing bound (LB7): We describe a further extension of the node packing bound LB4. In general, one may identify a subset S of the jobs any triplet of which is incompatible, i.e., no subset of three or more jobs of S can be scheduled in parallel, either due to precedence or resource constraints. Furthermore, some jobs from S (forming a subset S1) may be incompatible to any other (single) job of S and must be performed exclusively. Then a lower bound on the project duration is defined as follows (LB2(S) denotes the application of LB2 to a subset S of jobs):

LB7(S) = Σ_{j∈S1} d_j + max{ LB2(S − S1), ⌈(Σ_{j∈S−S1} d_j) / 2⌉ }.   (6)

The set S may be determined heuristically as in the case of LB4. For example, each job may be given a priority value equal to the number of compatible triplets in which it is contained. The jobs are considered in increasing order of the priority values. Ties are broken in favour of longer job durations. A job is added to S whenever it is incompatible to every pair of jobs already contained. If it is even incompatible to every single job of S, it also becomes a member of S1. Otherwise, its inclusion causes at least one job to be removed from S1 (but not from S). After each addition of a job, the expression LB7(S) is evaluated, and the maximum of all those values serves as LB7. Due to checking triplets of jobs concerning m resource types, LB7 requires O(mn³) time.

Example: Sorting the jobs as mentioned above yields the list L = ⟨3, 8, 6, 4, 7, 11, 2, 5, 9, 10⟩. After four iterations, S1 = S = {3, 8, 6, 4} is obtained. Now, job 7 has to be added. Since it is compatible to 6, both are members of S but not of S1. The current bound is LB7(S) = 11 + max{⌈(2·1 + 4·2)/4⌉, ⌈6/2⌉} = 14. The maximum bound value LB7(S) = 20 is obtained two iterations later for S1 = {3, 8, 4, 11} and S = S1 ∪ {6, 7, 2}. After including 5 and 9, the process stops, because job 10 cannot be added due to, e.g., the compatible triplet 2, 9, 10.
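Evaluating formula (6) for a given S and S1 is then a one-liner; the durations and the precomputed value LB2(S − S1) below are hypothetical, not taken from the example project.

```python
import math

def lb7_value(d, S, S1, lb2_rest):
    """Formula (6): jobs in S1 run exclusively; the remaining jobs of S run at
    most pairwise in parallel. d: durations by job id; lb2_rest: LB2 applied
    to S - S1 (precomputed externally)."""
    rest = S - S1
    return sum(d[j] for j in S1) + max(lb2_rest,
                                       math.ceil(sum(d[j] for j in rest) / 2))

# Hypothetical data: one exclusive job of duration 3; two remaining jobs of
# durations 2 and 4; LB2 restricted to the remainder assumed to be 2.
val = lb7_value({1: 3, 2: 2, 3: 4}, S={1, 2, 3}, S1={1}, lb2_rest=2)
```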

Remark 2. LB7 may be extended. As with LB4, jobs may be included in S or S1 with reduced durations rather than being deleted completely. So, one part of a job may belong only to S and another one also to S1. In the example, a bound value of LB7(S) = 21 is obtained for S = {3, 8, 6, 4, 7, 11, 2}. The increase compared to above is due to the jobs 7 and 2, now partially contained in S1 with durations 2 and 1, respectively.

Additionally, LB1 and LB2 are applied to an appropriately defined residual instance, as done for LB6. Beyond these extensions, LB6 and LB7 can be generalized by considering k-tuples (k ≥ 4) of jobs which cannot be processed simultaneously. The computational complexity is O(mn^k). Instead, in order to avoid such high computational effort, all but one capacity constraint can be discarded. Then, different sets S could be determined as in the case of LB5.

One-machine bound (LB8): Within the framework of LB4, a weighted node packing problem is solved in order to find a subset S of pairwise incompatible jobs. Since at most one job out of this set can be performed at a time, a corresponding one-machine problem can be defined, which is a special case of the parallel-machine problem (k = 1) considered by LB5. Following this idea, LB8 proceeds as follows. A subset S of jobs is determined heuristically according to LB4. In each iteration of constructing S, formula (5) is applied for S' = S, yielding a lower bound M(S). Note that Eq. (5) can easily be evaluated by updating its expressions in constant time. Therefore, the complete process takes time O(mn²).



Since LB8, in contrast to LB4, considers heads and tails, a more suitable sorting criterion for the priority list is used. For each job j, the center of its time window is computed as CT_j = (ES_j + LF_j)/2. Starting with an arbitrary job q, the jobs are added to S in order of increasing absolute distances of their centers CT_j to CT_q. Ties are broken in favour of the smallest number of jobs which can be processed in parallel to the job considered. This sorting rule follows the idea that it is advantageous to select jobs for S which have similar heads and tails, respectively. In this case, the sum of the minimum terms of Eq. (5) tends to become large. The maximum of the values M(S) obtained during the construction of S defines a lower bound LB8(q).

Example: We consider q = 3 with ES_3 = 0, LF_3 = 5 and CT_3 = 2.5 (see Table 2). Applying the sorting criterion results in the list L = ⟨3, 4, 2, 5, 6, 7, 8, 9, 10, 11⟩. Following the usual process of constructing S, we add the jobs 3, 4, 2, thereby discarding 5 and 6 and reducing the durations of 9 and 10 to 3 and 1, respectively. The jobs j ∈ {3, 4, 2} may start immediately at time a_j = 0. Since all have the tail q_j = 11, we obtain M({3, 4, 2}) = 0 + 9 + 11 = 20. Two iterations later, we get M({3, 4, 2, 7, 8}) = 0 + 18 + 4 = 22, which is the maximal value, defining LB8(q = 3).

In order to construct different sets S, each job in turn is used as the start job q, yielding an overall lower bound LB8 = max{LB8(q) | q = 2, ..., n − 1}.

Two-machine bound (LB9): Obviously, LB7 and LB5 may be combined in the same fashion as described above. In this case, a parallel-machine problem with k = 2 is present, because at most two jobs may be processed simultaneously. The set of tasks to be considered is determined as follows. Starting with an arbitrary job q, the heuristic of LB7 is applied, combined with the sorting criterion of LB8. This results in a set S and a subset S1, which contains all jobs being incompatible to all other jobs in S. As is the case with LB8, formula (5) is evaluated after each addition of a job to S. For this purpose, a set S' is formed which contains all jobs from S and a copy of each job currently in S1. Following Remark 2, a job may be part of S1 with a reduced duration. Then, only this part is duplicated. By analogy with LB8, the bound LB9 is determined as the maximum value found when starting with each job as q once. Each of those iterations takes time O(mn³).

Example: The jobs are added to S according to the list described for LB8, starting with q = 3. In the fifth iteration, S = {3, 4, 2, 5, 6}. The subset S1 consists of a part of job 3 with reduced duration 2 and a part of job 2 with reduced duration 1. The remainder of those jobs has been removed from this set due to adding the compatible jobs 5 and 6, respectively. The set S' for applying Eq. (5) is composed of all jobs in S and additional copies of the jobs 3 and 2 with their reduced durations. We get M({3, 2, 3', 2', 4, 5, 6}) = 0 + ⌈17/2⌉ + 8 = 17. After adding the jobs 8 and 7, which additionally become members of S1 with reduced durations of 2, M({3, 4, 2, 3', 2', 5, 6, 7, 8, 7', 8'}) = 0 + ⌈30/2⌉ + 4 = 19 is found, defining the maximal bound value for q = 3.

Precedence bound 1 (LB10): This bound argument is based on considering incompatible pairs of jobs. For any such pair (h, j), either a precedence relation exists or their cumulated capacity requirement exceeds the supply of some resource type r. In the latter case, a so-called disjunctive precedence relation can be defined, because h either must be finished before j can be started or vice versa (cf. Balas, 1970).

Therefore, lower bound values can be computed by testing both directions of the disjunctive precedence relation between any incompatible pair of jobs. Consider such an incompatible pair (h, j). First, a directed arc is temporarily introduced from h to j, thereby defining a test project TP(h, j). To this test project, any bound argument for RCPSP can be applied to compute a lower bound LB(h, j) on the project duration given that h is performed before j. Analogously, a second test project TP(j, h) is built by introducing a directed arc from j to h, and a lower bound LB(j, h) is determined. A lower bound on the original RCPSP instance is given by the minimum of LB(h, j) and LB(j, h). The lower bound LB10 is obtained by applying this test to all incompatible pairs, computing LB1 for all test projects and taking the maximal value found.

Example: Consider the incompatible job pair 7 and 8. Introducing the directed arc (7,8) and applying LB1 results in LB(7,8) = 19, while the reverse arc (8,7) results in LB(8,7) = 20. Since evaluating the other incompatible pairs does not give improved bounds, LB10 = 19.

R. Klein, A. Scholl / European Journal of Operational Research 112 (1999) 322–346

Precedence bound 2 (LB11): LB11 is an extension of LB10 to incompatible triplets of jobs. If no pair of jobs contained in such a triplet is incompatible, the resource conflict can be resolved by adding a single directed arc between one pair at a time, in total defining six test projects (three pairs, two directions). Otherwise, disjunctive precedence relations between the jobs of each resource-incompatible pair have to be directed, whereas existing precedence relations are preserved. Since six sequences of three jobs are possible, up to six test projects have to be considered. These test projects are evaluated by applying a bound procedure, giving a lower bound for each of them. Among the values obtained, the minimum represents a lower bound on the original RCPSP instance. Examining all triplets of incompatible jobs and taking the maximal value yields a lower bound LB11.

Example: Consider the triplet (2,3,4). Each pair of those jobs is incompatible due to resource constraints. For each of the six test projects, representing all possible sequences, LB1 = 20. Since none of the other triplets yields an improved bound, LB11 = 20.

Remark 3. If the evaluation of test projects is restricted to computing LB1, LB10 and LB11 can be calculated efficiently using heads and tails. Consider an incompatible pair (h, j). The length of the longest path using arc (h, j) is equal to a_h + d_h + d_j + ω_j, where ω_j denotes the tail of j. If this length exceeds LB1 for the initial project, introducing (h, j) leads to a new critical path in TP(h, j) containing the jobs h and j. Sequences for incompatible triplets can be evaluated in the same fashion. Since these computations take constant time, LB10 can be computed in O(mn²) time, which is needed for determining all incompatible pairs. The time complexity of LB11 is O(mn³).
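The head/tail evaluation of Remark 3 can be sketched as follows. The heads, durations, and tails used below are illustrative values only (chosen so that the pair (7,8) reproduces LB(7,8) = 19 and LB(8,7) = 20 from the example); the paper's actual instance data from Table 2 are not restated here.

```python
def lb10(head, dur, tail, lb1, incompatible_pairs):
    """Precedence bound 1: for each incompatible pair (h, j), evaluate
    both test projects via the longest path through the new arc
    (head of h + d_h + d_j + tail of j), take the minimum of the two
    directions, and maximize over all pairs."""
    best = lb1
    for h, j in incompatible_pairs:
        lb_hj = max(lb1, head[h] + dur[h] + dur[j] + tail[j])  # TP(h, j)
        lb_jh = max(lb1, head[j] + dur[j] + dur[h] + tail[h])  # TP(j, h)
        best = max(best, min(lb_hj, lb_jh))
    return best

# Illustrative data (assumed, not taken from the paper):
head = {7: 6, 8: 4}
dur = {7: 5, 8: 4}
tail = {7: 7, 8: 4}
print(lb10(head, dur, tail, 16, [(7, 8)]))  # -> 19, as in the example
```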

4. Meta-strategies for computing lower bounds

Traditionally, lower bounds are computed by relaxing the problem considered and solving the relaxation exactly (or solving the dual of the relaxation heuristically). Usually, the most complicating restrictions are relaxed or even omitted. By relaxing complicating restrictions, important information about the special problem structure often gets lost, such that the bounds may be rather weak. Therefore, Lagrangean relaxation methods incorporate restrictions into the objective function by penalizing violations, such that as many as possible of those constraints can be fulfilled within the solution of the relaxation (see e.g. Fisher, 1973). Another approach, which is called additive bounding, successively solves different relaxations of the problem and combines the results via the concept of reduced costs (see e.g. Fischetti and Toth, 1989). However, such bound computations are usually expensive and success in achieving strong bounds is questionable. These traditional techniques of computing lower bounds are referred to as constructive or direct methods because they directly tackle the problem by constructing solutions to relaxed problems.

We want to introduce a different approach which is called destructive improvement. Within this new meta-strategy, lower bounds are obtained by thoroughly examining trial bound values LB. For example, one may consider a bound value LB for a problem P determined by any constructive method. The value LB is utilized to restrict the set of feasible solutions by setting LB as the maximal value of the objective function allowed. If this restricted set of feasible solutions is empty, the objective function value of the optimal solution of P must exceed the value LB. Therefore, LB + 1 is a new lower bound (provided that the objective function values are integral). In order to find out whether a feasible solution with an objective function value not exceeding LB may exist or not, two steps are performed.

To be more specific, let us consider a general formulation of problem P as an integer linear program, where the linear objective function f(x) is restricted to integer values whenever the vector x is integral:

(P)  Minimize f(x)                    (7)
     subject to A·x ≥ b,
     x ≥ 0 and integer.               (8)



A restricted problem P(LB) is obtained by additionally enforcing f(x) to be equal to LB:

f(x) = LB.                            (9)

The first step consists of applying reduction techniques to the restricted problem P(LB) in order to find additional constraints (cutting planes) or to modify problem parameters so as to strengthen the problem data. Possibly, a contradiction occurs and feasibility of P(LB) is precluded. Otherwise, in a second step, the restricted and strengthened problem is relaxed and solved, e.g. by applying a constructive bound argument. If the relaxation has no feasible solution, P(LB) is infeasible, too, and LB is no valid objective function value. If neither step is able to exclude the possible feasibility of P(LB), the process stops and LB serves as the lower bound value. (Sometimes, a feasible solution of the relaxation is even feasible for P and the process stops with an optimum.) Otherwise, LB is increased by 1 and the two steps are repeated. The principle of destructive improvement just outlined is illustrated in Fig. 3.
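The two-step loop just described can be sketched generically. Here `reduce_precludes` and `relax_precludes` stand for the reduction phase and the relaxation-based check; each returns True when it can rule out a feasible solution with value LB (both names are ours, not the paper's).

```python
def destructive_improvement(lb, reduce_precludes, relax_precludes):
    """Raise the trial value LB as long as either step can destruct
    the feasibility of the restricted problem P(LB)."""
    while reduce_precludes(lb) or relax_precludes(lb):
        lb += 1  # P(lb) has no feasible solution, so lb + 1 is valid
    return lb    # neither step can exclude feasibility: stop

# Toy illustration: suppose the reduction step detects infeasibility
# below 18 and the relaxation detects it below 20 (optimum >= 20).
print(destructive_improvement(16, lambda lb: lb < 18, lambda lb: lb < 20))
# -> 20
```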

An example problem P:

Minimize f(x) = 3·x1 + 2·x2            (I)
subject to 9·x1 + x2 ≥ 25,             (II)
           4·x1 + 7·x2 ≥ 50,           (III)
           x1, x2 ≥ 0 and integer.     (IV)

Traditionally, the LP-relaxation of P is built by dropping the integrality constraints of the variables and solved by some LP-solver. The optimal LP-solution of the example is x* = (2.12, 5.93) with f(x*) = 18.22 (Fig. 4). Since the cost coefficients in the objective function are integral, LB = 19 is a first lower bound on the optimal objective function value.

The destructive improvement method starts with this value of LB and introduces the additional constraint 3·x1 + 2·x2 = 19 to define the restricted problem P(19). In order to strengthen P(19), (sharper) lower and upper bounds on the values of x1 and x2 are computed. This is achieved by solving the LP-relaxation of P(19) with the objectives of minimizing and maximizing x1 and x2, respectively. This results in the restrictions x1 ∈ [2.06, 2.53] and x2 ∈ [5.69, 6.40].

Obviously, no integer value is feasible for x1 within P(19) and, hence, LB is no feasible objective function value of P. Thus, LB is increased to 20 and P(20) is constructed from P by adding the equation 3·x1 + 2·x2 = 20. As intervals of the variables we obtain x1 ∈ [2, 3.08] and x2 ∈ [5.38, 7]. Due to the integrality of the variables, the upper bound on x1 and the lower bound on x2 can be set to 3 and 6, respectively. Adding those boundaries of the variables to P(20) and solving the LP-relaxation yields the solution x* = (2, 7) with f(x*) = 20. This solution is integral and, therefore, optimal for P(20). Since LB = 20 is a lower bound for problem P, the solution is optimal for P, too, and the procedure stops. The additional constraints of P(19) and P(20) are shown in Fig. 4. The bold parts of the lines indicate the feasible regions of the LP-relaxations.

Fig. 3. Destructive improvement.
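The interval computations for P(19) and P(20) can be reproduced exactly with rational arithmetic. Substituting x2 = (F − 3·x1)/2 into constraints (II) and (III) gives x1 ≥ (50 − F)/15 and x1 ≤ (7·F − 100)/13; the helper below is our own sketch of that bound computation, not code from the paper.

```python
from fractions import Fraction
from math import ceil, floor

def x1_interval(F):
    """Feasible range of x1 in P(F), i.e. under 3*x1 + 2*x2 = F,
    9*x1 + x2 >= 25, 4*x1 + 7*x2 >= 50 and x1, x2 >= 0."""
    lo = max(Fraction(0), Fraction(50 - F, 15))          # from (II)
    hi = min(Fraction(F, 3), Fraction(7 * F - 100, 13))  # from (III), x2 >= 0
    return lo, hi

for F in (19, 20):
    lo, hi = x1_interval(F)
    has_integer = ceil(lo) <= floor(hi)
    print(F, float(lo), float(hi), has_integer)
# F = 19: x1 in [2.066..., 2.538...] -> no integer, P(19) infeasible
# F = 20: x1 in [2, 3.076...]       -> integers 2 and 3 remain
```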

Remark 4. Whenever the objective function has a large number of possible values, one should prefer another search method (e.g. binary search) to successively considering the values LB, LB + 1, …. Starting with an interval [LB, UB] of possible values, the mean element M = ⌊(LB + UB)/2⌋ is chosen and the restriction f(x) ∈ [LB, M] is added to build P(M). If feasibility of P(M) can be precluded, then LB is set to M + 1. Otherwise, UB is set to M. The search is repeated until LB = UB. If the objective function is real-valued, M must not be rounded and LB is set to M instead of M + 1 in case of an infeasible P(M). The search stops if the remaining interval is sufficiently small.
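For an integral objective, the interval search of Remark 4 can be sketched as follows; `precluded(m)` stands for the test that tries to destruct the feasibility of P(m) with f(x) ∈ [LB, m] (the oracle below is a toy stand-in).

```python
def binary_search_bound(lb, ub, precluded):
    """Binary search over [lb, ub]: precluded(m) must return True iff
    no feasible solution with objective value <= m can exist."""
    while lb < ub:
        m = (lb + ub) // 2
        if precluded(m):
            lb = m + 1   # feasibility of P(m) destructed
        else:
            ub = m       # a value <= m might still be feasible
    return lb

# Toy oracle: the (unknown) optimum is 20.
print(binary_search_bound(16, 30, lambda m: m < 20))  # -> 20
```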

Remark 5. Sometimes, it is possible to utilize information drawn from the reduced P(LB) or the solution of its relaxation in order to increase LB by more than 1. As an example, consider the critical sequence bound LB3 from Section 2, which may be interpreted as some type of destructive improvement method. It starts with a critical path and restricts the schedule to the critical path length LB. If it can identify another job which cannot be processed (completely) within its time window, the lower bound can immediately be increased by d_j − e_j^max. The other bounds of Sections 2 and 3 are typical direct methods.

5. Destructive improvement for RCPSP

In the following, we describe how the destructive improvement technique can be adapted to RCPSP. First, different reduction techniques are introduced. Second, modifications of direct bound arguments which can benefit from the reduction of the problem data are presented.

5.1. Reduction techniques

Reduction by forward and backward pass: The most intuitive reduction technique has already been presented in Section 2. It consists of computing earliest and latest starting and finishing times by a forward and a backward pass, using LF_n = LB1 to start the backward pass. The resulting time windows of the jobs can be exploited by other reduction techniques and adapted bound arguments.

Reduction by subprojects: This technique is based on identifying and evaluating subprojects of the original RCPSP instance. A subproject SP(h, j) is defined by a pair of jobs (h, j) with j being a transitive successor of h and contains all jobs i which are successors of h as well as predecessors of j. Between h, j and the jobs of SP(h, j), the original precedence constraints are preserved. The jobs h and j serve as dummy start and terminal nodes (with duration 0) of the subproject, respectively. If the optimal project completion time of SP(h, j) is T*, the processing of job j starts at least T* periods after job h has been finished. Hence, T* or a lower bound on this value defines a minimum time lag k_hj between h and j. Whenever k_hj exceeds the critical path length of at least one subproject SP(h, j), applying a forward and backward pass to the overall project considering minimum time lags results in sharper time windows. Note that this may improve the bound by more than 1, as mentioned in Remark 5.

Fig. 4. Problems P and P(LB).

Minimum time lags may be derived by applying any (computationally inexpensive) bound to a subproject. Thus, the computational complexity of this reduction technique depends on the complexity of the bound arguments used and the number of subprojects, which is of order O(n²).

Example: The pair (1,6) builds a subproject containing the jobs 3 and 4. The nodes 1 and 6 are the dummy start and terminal node, respectively. Since 3 and 4 cannot be processed concurrently, a lower bound on the completion time of SP(1,6) is 6. Hence, due to LS6 = 5, no feasible schedule can exist for LB = 16.

In general, any pair of jobs (h, j) with j being a transitive successor of h, excluding (1, n), forms a subproject. But not all of these subprojects have to be evaluated. Consider a subproject SP(h, j) with job j having only one predecessor i. Since i does not compete for resources with other jobs of SP(h, j) and is the terminal job of a smaller subproject SP(h, i), the time lag between h and j can simply be computed by k_hj = k_hi + d_i. Analogously, the time lag k_hj of a subproject SP(h, j) whose start node has only a single successor i is obtained by k_hj = d_i + k_ij. Hence, only those subprojects SP(h, j) whose start and terminal nodes have at least two successors and two predecessors, respectively, have to be evaluated. For all pairs of jobs (h, j) with j being a direct successor of h, k_hj = 0.

When computing earliest and latest starting and finishing times using minimum time lags, the sets of all (direct and indirect) predecessors P*_j and followers F*_j of a job have to be considered. Then the following formulae have to be used in the forward and the backward pass, the latter being started with LF_n = EF_n:

ES_j = max{EF_h + k_hj | h ∈ P*_j},
LF_j = min{LS_h − k_jh | h ∈ F*_j}.   (10)

If EF_n > LB, a contradiction is present and LB is increased to EF_n.

Example: The following subprojects have to be evaluated: SP(1,6), SP(1,7), SP(1,11), SP(3,11), SP(3,12), and SP(6,12). The time lags obtained for these subprojects by computing LB1 through LB4 are given in Table 3. Since a subsequent forward pass results in ES12 = 19, LB can be increased from 16 (= LB1) to 19.
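A forward pass using minimum time lags, as in formula (10), can be sketched as follows. For simplicity, the sketch stores lags only for the pairs for which they were computed (defaulting to 0 for ordinary arcs) and assumes the jobs are given in topological order; the three-job chain below is made up for illustration, not the paper's example project.

```python
def forward_pass(jobs, dur, preds, lag):
    """ES_j = max over predecessors h of EF_h + k_hj; EF_j = ES_j + d_j.
    `jobs` must be topologically ordered; missing lags default to 0."""
    ES, EF = {}, {}
    for j in jobs:
        ES[j] = max((EF[h] + lag.get((h, j), 0) for h in preds[j]), default=0)
        EF[j] = ES[j] + dur[j]
    return ES, EF

# Made-up chain 1 -> 2 -> 3 with a minimum time lag of 3 between 2 and 3:
ES, EF = forward_pass([1, 2, 3], {1: 0, 2: 4, 3: 0},
                      {1: [], 2: [1], 3: [2]}, {(2, 3): 3})
print(EF[3])  # -> 7: job 2 finishes at 4, plus the lag of 3
```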

Reduction by precedence: If the project completion time is restricted to LB, some jobs must precede others to which they are not related by precedence. Following the logic of LB10, two test projects are evaluated for each incompatible pair of jobs h and j by computing lower bounds LB(h, j) and LB(j, h) on the project durations. Going beyond the capabilities of LB10, one of the arcs, say (h, j), can temporarily be fixed whenever LB(h, j) ≤ LB < LB(j, h). Such a problem reduction (which is independent of being able to increase LB) is valid as long as LB < LB(j, h). That is, additional arcs can be fixed using former reductions which are still valid for the current LB. If LB < LB(h, j) ≤ LB(j, h) is true for some incompatible pair currently considered, a contradiction occurs and LB can be increased. The new value is set to LB(h, j) or the minimum project duration for which any of the arcs fixed earlier becomes invalid, whichever is smaller. Note that thereby bound increments larger than 1 are possible (cf. Remark 5).

By considering such additional precedence relations in the forward and the backward pass, sharper time windows for the jobs are obtained, which may be exploited by other reduction techniques. Further precedence relations are added by considering incompatible triplets as in the case of LB11. Then, only such arcs (h, j) can be fixed for which every test project containing the reverse arc (j, h) gets a bound exceeding LB.

Table 3
Minimum time lags

SP(h, j)   SP(1,6)   SP(1,7)   SP(1,11)   SP(3,11)   SP(3,12)   SP(6,12)
k_hj       6         8         15         9          13         9

Reduction by precedence is an iterative process which remains active as long as reductions are possible. Initially, all incompatible pairs and triplets are determined, which takes O(mn³) time. Each time after increasing LB or fixing an arc, another iteration is performed, which has complexity O(n³) due to checking all (predetermined) pairs and triplets again, provided that LB1 is used to evaluate the test projects.

Example: Examining the incompatible pair of jobs 7 and 8 gives LB(7,8) = 19 and LB(8,7) = 20. Hence, the arc (7,8) can be fixed as long as LB = 19 holds. The feasibility of LB = 19 is precluded by additionally considering the incompatible jobs 2 and 3, because of LB(2,3) = 21 and LB(3,2) = 24. As a consequence, the overall bound LB can be increased to 20 and the arc (7,8) must be removed. Note that it is not possible to increase LB to 21, because this value, obtained for TP(2,3), was based on fixing the arc (7,8), which is no longer valid.

Reduction by core times: This reduction technique is based on the following idea: For a certain LB, earliest and latest starting and finishing times have been computed. Whenever the latest starting time LS_j of a job j is smaller than its earliest finishing time EF_j, job j must be active during [LS_j, EF_j] as long as LB is valid. This time interval is called the core time of a job.

The reduction is started by scheduling all jobs (only) during their core times. If a resource conflict occurs in any period, LB can be increased. Otherwise, for each job we look for feasible time intervals during which it can be processed completely (in addition to the schedule built by the core times of the other jobs). In case only one interval is available for a job j, the time window [ES_j, LF_j] is restricted to this interval, thereby also restricting the time windows of its predecessors and successors, respectively. This may result in new or enlarged core times. If no such interval exists, LB can be increased.

Each time after increasing LB or obtaining improved core times, another iteration of the reduction process is started. The time complexity O(mn²) of each iteration is due to searching possible time intervals for each of the n jobs. The search requires checking O(n) changes in the resource profile, which occur when core times start or end, and observing m resource types.

Example: The Gantt chart in Fig. 5 shows each job starting as early (light grey beam) and as late (dark grey beam) as possible (for LB = LB1 = 16). The hatched bars are the core times. For example, the core time of job 3, which lies on the critical path, is equal to its complete duration, whereas job 2 has a core time of only a single period and jobs 4 and 10 have none.

Fig. 6 contains capacity-oriented Gantt charts for core times. Jobs are represented by rectangles with horizontal extents equal to the lengths of their core times and vertical extents equal to u_j1. Starting with the critical path length LB = 16 (cf. Fig. 6(a)), we recognize that in some periods the capacity requirements exceed the availability (periods 3, 9, 10), and LB is increased to 17.

For LB = 17, the latest starting times are increased by one, reducing the core times (Fig. 6(b)). Since there is still a resource conflict in period 10, LB is increased again, giving Fig. 6(c) without a resource conflict caused by core times. Checking for feasible time intervals yields that processing job 7 within its time window [6, 14] completely before or after job 8, which is incompatible with 7, is impossible (Fig. 6(c)). LB is increased to 19, resulting in Fig. 6(d).

The only time interval during which job 7 can now be processed is [6, 10]. Hence, its latest starting time is set to LS7 = 6. Due to this reduction, the latest starting and finishing times of its predecessors can be decreased, too. With LS2 = 0 and LS5 = 3, the modified chart in Fig. 7 is obtained. Due to the core times of jobs 2 and 7, job 3 cannot be processed completely. For the resulting LB = 20, no reduction is possible at all.
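The per-period check underlying this reduction can be sketched for a single resource type: schedule each job only during its core time [LS_j, EF_j) and test the resulting load profile against the capacity. The two-job data below are illustrative, not the instance of Fig. 6.

```python
def core_time_conflict(jobs, ES, LS, dur, u, a, horizon):
    """True if the core times [LS_j, EF_j) alone already overload the
    resource (capacity a) in some period; then LB can be increased."""
    load = [0] * horizon
    for j in jobs:
        ef = ES[j] + dur[j]
        for t in range(LS[j], min(ef, horizon)):  # empty range if no core time
            load[t] += u[j]
    return any(x > a for x in load)

# Illustrative data: both core times cover periods 3 and 4, so the
# demand there is 3 + 3 = 6, exceeding the capacity of 4.
print(core_time_conflict([1, 2], {1: 0, 2: 1}, {1: 2, 2: 3},
                         {1: 5, 2: 5}, {1: 3, 2: 3}, 4, 8))  # -> True
```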

Fig. 5. Gantt chart for LB = 16.

Fig. 6. Reduction by core times.

Applying different reduction techniques: Usually, not only one reduction technique will be applied. Instead, it is more promising to use different techniques in sequence before trying to contradict the feasibility of the reduced problem instance. Therefore, it is necessary to put the general framework described in Section 4 (cf. Fig. 3) into more concrete terms. The following steps are performed after determining an initial bound value LB by direct methods: Using the current LB as trial project duration, the reduction process is started by a forward and a backward pass using minimum time lags (see below). Afterwards, additional precedence constraints are introduced and core times are evaluated. This is continued until no further reduction is possible or a contradiction in the problem data occurs. In the first case, direct bound arguments are applied to the reduced instance (see Section 5.2). If the feasibility of LB can be precluded, the bound value is increased and the reduction process is restarted, maintaining only those reductions which are still valid for the new LB.

Reduction by subprojects is computationally more expensive than the other reduction concepts (cf. Section 6.3). Therefore, this technique is applied only once, before starting the destructive improvement process, to determine minimum time lags between the jobs. All relevant subprojects are determined and sorted according to non-decreasing numbers of jobs contained. Considering them in this order ensures that a subproject is considered only after all the subprojects it includes. That is, larger subprojects can use minimum time lags between their jobs obtained by evaluating smaller subprojects. A minimum time lag for any subproject SP(h, j) is computed in the same manner as the lower bound for the complete project. That is, we start by applying direct bound arguments to the RCPSP instance defined by SP(h, j) and then perform the destructive improvement process as described above.

5.2. Lower bound arguments for contradicting feasibility

Within the destructive improvement approach, lower bound arguments are used in order to contradict the feasibility of a reduced problem when this is not possible during the reduction phase. Hence, not the value obtained is of interest, but an answer to the question whether a feasible schedule can exist or not. LB3, LB5, and LB8 through LB11 can be applied directly to the reduced problem, profiting from the sharper time windows (heads and tails), whereas LB2, LB4, LB6 and LB7 must be strengthened in order to be successful in the destructive improvement context.

LB2: Consider a time interval I = [t_s, t_t] ⊆ [0, LB] and the set S_I of jobs which have to be performed completely or partially within I, given LB as the maximal project duration. That is, S_I contains all jobs j with EF_j > t_s and LS_j < t_t, which are active within I for at least d'_j = min{EF_j − t_s, t_t − LS_j, d_j} periods. Computing LB2 for this interval reveals whether it is precluded to perform all jobs from S_I within an interval of length t_t − t_s:

LB2'(S_I) = max{⌈Σ_{j ∈ S_I} u_jr · d'_j / a_r⌉ | r = 1, …, m}.   (11)

If LB2'(S_I) > t_t − t_s, the current bound value LB is contradicted and increased. In order not to examine all possible intervals, t_s and t_t are restricted to the earliest starting and latest finishing times of the jobs.

Example: With LB = 16, we obtain the Gantt chart in Fig. 5. Examining I = [2, 12], S_I contains the jobs 5, 6, 7, 8 with their initial durations and the jobs 2, 3 and 9 with d'_2 = 1, d'_3 = 3 and d'_9 = 2. Since ⌈(3·1 + 2·1 + 4·2 + 5·3 + 1·2 + 3·3 + 2·1)/4⌉ = ⌈41/4⌉ = 11 exceeds the length of I, no feasible schedule can exist and LB is increased to 17.
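For a single resource type, formula (11) reduces to a ceiling of a weighted sum. The (u_j, d'_j) pairs below reproduce the example's total of 41; only the sum is taken from the text, while the assignment of individual products to jobs is our assumption.

```python
def lb2_prime(reqs, a):
    """LB2'(S_I) for one resource type: ceil(sum(u_j * d'_j) / a)."""
    total = sum(u * dp for u, dp in reqs)
    return -(-total // a)  # ceiling division on integers

# (u_j, d'_j) pairs summing to 41 as in the example, capacity a = 4:
reqs = [(3, 1), (2, 1), (4, 2), (5, 3), (1, 2), (3, 3), (2, 1)]
value = lb2_prime(reqs, 4)
print(value, value > 12 - 2)  # -> 11 True: the interval I = [2, 12] is contradicted
```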

Fig. 7. New core times for LB = 19.

Node packing bounds LB4, LB6 and LB7: Within the framework of destructive improvement, the notion of incompatibility may be extended. Besides precedence and resource incompatibility, two jobs are also incompatible with each other if concurrent processing is impossible due to their time windows. Hence, if the time windows of a job j just added to the set S and a job h remaining in the list only overlap partially, the duration of h is reduced by the maximal number of periods the two jobs can be performed in parallel.

6. Computational results

We report on the results of comprehensive numerical experiments based on well-known standard benchmark data sets. After comparing the effectiveness of direct bounding procedures, the approach of destructive improvement is examined thoroughly by evaluating different combinations of reduction techniques and bound arguments. All tests are performed on a Pentium PC with 166 MHz running Windows 95. The bound arguments and the destructive improvement procedures have been coded in C and compiled by means of Borland C++ 5.0.

6.1. Data sets and complexity measures

The first standard benchmark set (PATT) has been collected from the literature by Patterson (1984). Since all instances of this data set can be solved very efficiently by modern branch and bound procedures, Kolisch et al. (1995) have developed a problem generator in order to create more challenging instances. With this generator, two new data sets (SMFF, SMCP) have been assembled, serving as reference sets in most of the recent publications (e.g. Mingozzi et al., 1995; Brucker et al., 1996; Demeulemeester and Herroelen, 1997).

Comprehensive computational experiments can be found in the literature comparing and evaluating the effectiveness of exact as well as heuristic procedures for solving RCPSP. They all show that even though RCPSP is NP-hard, a large number of the instances considered can be solved very quickly, whereas others remain computationally intractable for a restricted amount of computation time. Hence, different complexity measures have been proposed which try to predict the complexity of an instance in terms of computation time (Kolisch et al., 1995; de Reyck and Herroelen, 1996a; Thomas and Salhi, 1997). In this paper, the measures are used in order to describe the data sets considered as well as to examine the effectiveness of destructive improvement more systematically. The following measures are most popular. A detailed discussion can be found in Kolisch et al. (1995).
· The network complexity NC is defined as the ratio of nonredundant precedence relations to the number of jobs.
· The resource factor RF equals the average number of resource types used by a job.
· The resource strength RS is determined for each resource type r. It considers the per period availability a_r, the maximum demand of a single job u_r^max = max{u_jr | j = 2, …, n − 1} as well as the peak demand u_r^peak which arises when all jobs are started at their earliest starting times obtained by a simple forward pass. These parameters are related by computing RS_r = (a_r − u_r^max)/(u_r^peak − u_r^max). If a_r < u_r^max (RS_r < 0) for any resource type r, there obviously does not exist a feasible schedule, whereas in case of a_r = u_r^peak (RS_r = 1) for all resource types r = 1, …, m, an optimal solution can be obtained by a simple forward pass.
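The resource strength can be computed directly from its definition; since u_r^peak comes from a forward-pass schedule, it is simply passed in as a number here. The demand values are illustrative.

```python
def resource_strength(a_r, demands, u_peak):
    """RS_r = (a_r - u_max) / (u_peak - u_max), where u_max is the
    maximum demand of a single (non-dummy) job for resource type r."""
    u_max = max(demands)
    return (a_r - u_max) / (u_peak - u_max)

# Illustrative: job demands 2, 4, 3 and a peak demand of 10 at earliest starts.
print(resource_strength(4, [2, 4, 3], 10))   # -> 0.0 (scarce resource)
print(resource_strength(10, [2, 4, 3], 10))  # -> 1.0 (forward pass is optimal)
```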

In general, the following relationships between the complexity measures and expected computation times have been observed in the literature. A decreasing network complexity leads to longer computation times, because more combinations of jobs become feasible due to the decreasing number of precedence relations. Considering the resource measures, computation times tend to get longer when the average number of jobs using a resource type increases or when the availability of a resource type is scarce compared to its demand. This is the case for an increasing resource factor and a decreasing resource strength.

Table 4 shows the values of the main characteristics and complexity measures of the data sets considered. "#inst." denotes the number of instances contained in each set. Since PATT has been collected from the literature, every instance has different values of the measures. Therefore, a range of the values obtained is given in each column, with ØRS containing the average value over all resource types. For three instances, RS_r is larger than 1 for all resource types and, hence, the optimal objective function value equals the length of the critical path. Though the different bound arguments as well as the destructive improvement technique may not improve on LB1 in this case, these instances have been considered for the sake of completeness.

SMFF and SMCP have been generated by Kolisch et al. (1995) using the measures as control parameters for the generating process. For each instance, RS_r possesses the same value for all resource types. In case of SMFF, the values given for NC, RF, and ØRS have systematically been combined, resulting in 48 different combinations. For each of them, a group of ten instances has been generated. Those 12 groups with an average resource strength of 1.0 have been excluded from the tests, because LB1 always yields the optimal objective function value. That is, only 360 instances of this set are considered.

6.2. Lower bounds

We examine the effectiveness of existing and new bound arguments. They are compared with respect to the following measures (UB is the optimal or best known objective function value):
· av.dev: average relative deviation of the bound LBi from UB in % (relative deviation: (UB − LBi)/UB · 100%)
· max.dev: maximum relative deviation of LBi from UB in %
For PATT and SMFF, the optimal solutions to all instances are known. In case of SMCP, the best known solutions are used. Computation times are omitted, because they are negligible for all instances.

Tables 5 and 6 show the results obtained for existing and new bound arguments, respectively. The procedures performing best for each set are marked with an asterisk. LB5 has been computed for all resource types and up to five "machines". For LB4, LB6, and LB7, different sets S are obtained by rotating the priority list as described for LB4.

Comparing LB4 and LB6 reveals that the ex-tension is very successful. LB5 is clearly outper-formed by the arguments LB6 to LB9 which arebased on incompatible pairs and triplets and con-sider all resources. This indicates that discardingall but one resource is often a too strong relaxation

Table 5

E�ectiveness of existing lower bound arguments

Data set LB1 LB2 LB3 LB4 LB5 LBold

PATT

av.dev 13.53 24.17 10.73 11.32 10.62* 7.62

max.dev 51.56 57.89 43.75 36.00 32.81 28.00

SMFF

av.dev 12.28 38.50 8.87 8.19* 8.76 5.41

max.dev 54.72 68.85 45.28 36.21 32.22 26.09

SMCP

av.dev 9.22 39.92 6.31 6.00* 6.95 4.29

max.dev 50.67 64.00 40.00 27.50 36.36 26.58

Table 4

Characteristics of data sets

Data sets #inst. n m NC RF £RS

PATT 110 7±51 1±3 1.08±3.05 0.25±1 0.00±1.42

SMFF 480 (360) 32 4 1.5, 1.8, 2.1 0.25, 0.5, 0.75, 1 0.2, 0.5, 0.7, 1

SMCP 200 12, 22, 32, 42 1±6 1.5 0.5 0.1, 0.3, 0.4, 0.5, 0.6, 0.8, 0.9

338 R. Klein, A. Scholl / European Journal of Operational Research 112 (1999) 322±346

Page 18: Computing lower bounds by destructive improvement: An application to resource-constrained project scheduling

for the data sets considered. This observation has been confirmed by adapting bounds for the related bin packing problem, as described in Scholl et al. (1997), which also show large deviations. The success of the procedures LB7 and LB9 shows that the increase in computational complexity caused by checking for triplets pays off. Comparing LB6, LB7 and their relatives LB8, LB9 indicates that different strategies should be applied in order to find incompatible subsets. Despite the simplicity of their approaches, the precedence based bound arguments LB10 and LB11 perform nearly as well as the old bounds, especially for PATT.

Comparing the best bound values obtained by existing and new procedures (columns LBold and LBnew) shows that the new bound arguments clearly outperform the existing ones. For PATT, the average as well as the maximum deviations are nearly halved. The superiority is underlined by comparing LBnew with the maximum LBmax of all bound arguments: the two coincide, that is, the existing bound arguments can be omitted without any loss in bound quality.

Remark 6. Partially, the superiority of the new (and extended) bounds is caused by their construction. For example, the bounds LB3, LB10 and LB11 start from computing LB1, such that they dominate this basic bound. Furthermore, the bound LB4 and its derivatives LB6 and LB8 dominate LB1 if the set S generated contains the jobs of the critical path. Since LB6 directly extends LB4, there is also a dominance relationship when the same sorting criterion is used. Finally, LB5 dominates LB2 as indicated in Remark 1.

Some of the bound arguments show a poor performance with respect to the measures av.dev and max.dev. So it has to be examined whether their application can be justified at all. Table 7 shows the quota of instances over all data sets for which LBi is one of the arguments yielding the maximum bound value (denoted by % max). It reveals that some of the bound arguments showing large deviations surprisingly obtain the best bound for a large portion of the instances, in particular LB3, LB10 and LB11. This indicates that their performance strongly depends on the problem structure.

6.3. Destructive improvement

In the following, we examine the effectiveness of the destructive improvement approach by combining different reduction techniques and bounding procedures and applying them to the benchmark sets. The combinations of the reduction techniques considered are given in Table 8.

Table 7
Quotas of maximum bound values

LBi    LB1    LB2   LB3    LB4    LB5    LB6    LB7    LB8    LB9    LB10   LB11
% max  44.54  1.97  59.70  46.36  52.88  60.61  59.70  69.39  59.40  57.42  62.73

Table 6
Effectiveness of the new lower bound arguments

Data set    LB6     LB7     LB8     LB9     LB10    LB11    LBnew   LBmax
PATT
  av.dev     7.43    5.89*   7.78    5.87*  11.16    8.63    3.74    3.74
  max.dev   24.00   25.00   28.00   25.00   43.75   39.06   13.04   13.04
SMFF
  av.dev     6.05    6.48    5.28*   7.00    9.69    8.98    4.05    4.05
  max.dev   26.98   23.88   34.33   25.86   49.60   44.34   19.74   19.74
SMCP
  av.dev     5.07    5.33    4.03*   5.83    7.21    6.34    3.43    3.43
  max.dev   26.58   35.38   27.85   32.31   44.00   40.00   26.58   26.58


If reduction by subprojects is employed, the lower bound arguments and the destructive improvement technique are applied to the initial project as well as to all subprojects. Since the computational effort per instance strongly depends on the chosen combination, the average computation time (av.cpu) in seconds is introduced in addition to the measures used above.

Pure reduction process: In order to compare the potential of the different reduction techniques, they have been applied in the destructive improvement process without using any lower bound arguments except for LB1. That is, only the reduction step is performed (cf. Fig. 3). The results are given in Table 9. The combination RS has been omitted from this test, because it does not improve on LB1 unless some other bound argument or reduction technique is applied. The settings RP and RC perform nearly identically concerning av.dev, whereas for max.dev RP is slightly better than RC but requires considerably larger computation times. Nevertheless, each technique yields smaller values for av.dev than the best existing bounds, except for SMFF. Combining them into RPC leads to a further decrease of av.dev. This combination already obtains better average deviations than LBmax for SMCP while requiring less computation time.

If subprojects are evaluated, the bound quality is greatly improved, in particular for PATT and SMFF. However, the average computation times increase, too. This is particularly true if subprojects as well as additional precedence relations are examined. Using all reduction techniques clearly yields the best results, indicating that they complement each other well. RSPC outperforms LBmax for all data sets concerning av.dev. In general, the bound arguments lead to smaller maximum deviations, because the large variety of approaches is successful even for special structures of problem instances.

Destructive improvement with all bound arguments: In order to examine the full potential of the destructive improvement technique, the same test is repeated using all bounding procedures. The reduction process is started with the best bound value LBmax. The results in Table 10 show that if only one reduction technique is applied, RC is most suc-

Table 9
Destructive improvement without bound arguments

Data set    LBold   LBmax   RP      RC      RPC     RSP     RSC     RSPC
PATT
  av.dev     7.62    3.74    6.37    6.21    5.70    4.06    3.94    3.16
  max.dev   28.00   13.04   32.00   38.00   32.00   23.44   26.56   23.44
  av.cpu     0.01    0.08    0.03    0.01    0.03    0.35    0.07    0.36
SMFF
  av.dev     5.41    4.05    5.92    6.14    5.54    4.24    4.50    3.81
  max.dev   26.09   19.74   32.80   33.71   32.80   29.35   31.25   29.35
  av.cpu     0.01    0.09    0.05    0.01    0.05    0.25    0.07    0.27
SMCP
  av.dev     4.29    3.43    3.67    3.71    3.24    2.82    2.90    2.39
  max.dev   26.58   26.58   29.33   31.71   29.33   29.23   29.33   26.67
  av.cpu     0.01    0.08    0.05    0.01    0.05    0.14    0.04    0.14

Table 8
Combinations of reduction techniques

Reduction by           RS   RP   RC   RPC   RSP   RSC   RSPC
Subprojects            ×                    ×     ×     ×
Precedence relations        ×         ×     ×           ×
Core times                       ×    ×           ×     ×


cessful. It obtains the smallest values for av.dev while taking the shortest computation times. By way of contrast, RS takes much more computation time but hardly improves on LBmax. If two reduction techniques are combined, RPC is the fastest strategy, though it obtains clearly worse results than RSP and RSC. Evaluating subprojects by applying all bound arguments and reduction techniques is computationally rather expensive. Nevertheless, the results for RSPC underline the power of destructive improvement: the values of av.dev are nearly halved compared to those of LBmax.

Restricted application of bounds: In order to reduce computation times, it seems appropriate to compute only a subset of the bounding procedures within the destructive improvement process. The subset which turned out to be most successful in relation to computation times contains the arguments LB2, LB8, and LB9. Table 11 shows that restricting the initial bound and the reduction process to this subset reduces the computation times to approximately one third. In general, the rather small loss in bound quality is more than compensated by the gain in speed. For example, RPC can be computed as fast as LBmax but yields much smaller average deviations, in particular for SMCP. The fastest combination, RC, still clearly outperforms LBmax for all data sets.

Influence of problem complexity: Most of the existing and new bound arguments are based on the incompatibility of pairs or triplets of jobs. For the data sets considered, the number of incompatible pairs and triplets strongly depends on the average resource strength of an instance. For example, the maximum number of incompatible pairs of an SMFF instance with ØRS = 0.2 is 151, whereas it is only 22 for ØRS = 0.7. Therefore, it may be promising to select the bound arguments and the reduction techniques used according to, e.g., the resource strength of an instance. In order to find out which combination should be selected, the instances of each data set have been divided into three classes subject to their resource strength, containing those instances with ØRS ∈ [0, 1/3), ØRS ∈ [1/3, 2/3), and ØRS ≥ 2/3, respectively. The results obtained as well as a more detailed discussion can be found in the appendix. In general, the following observations can be made. For the first class, the combination RPC together with the bound arguments LB2, LB8 and LB9 represents the best compromise between bound quality and computational effort. For the two other classes, the computation of lower bound arguments should be omitted completely. The combination RSC clearly outperforms LBmax for those classes concerning the measures av.dev and av.cpu. For the last class of PATT and SMCP, the lower bounds computed by RSC equal the optimal project durations for all instances.
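The three-way partition by resource strength can be written as a small helper; the class labels C1 to C3 and the half-open interval boundaries follow the description above, while the function name is ours:

```python
def rs_class(avg_rs: float) -> str:
    """Assign an instance to a resource-strength class:
    C1 for avg_rs in [0, 1/3), C2 for [1/3, 2/3), C3 for avg_rs >= 2/3."""
    if avg_rs < 1 / 3:
        return "C1"
    if avg_rs < 2 / 3:
        return "C2"
    return "C3"
```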

The influence of other measures of problem complexity such as NC and RF on the performance of bound arguments is much less significant

Table 10
Destructive improvement with all bound arguments

Data set    LBold   LBmax   RS      RP      RC      RPC     RSP     RSC     RSPC
PATT
  av.dev     7.62    3.74    3.43    3.10    3.03    2.89    2.28    2.14    1.97
  max.dev   28.00   13.04   13.04   13.04   13.04   13.04   13.04   13.04   13.04
  av.cpu     0.01    0.08    1.19    0.18    0.16    0.18    2.85    2.56    2.85
SMFF
  av.dev     5.41    4.05    3.99    3.25    3.06    2.88    2.64    2.48    2.25
  max.dev   26.09   19.74   19.74   19.74   19.74   19.74   19.74   19.74   19.74
  av.cpu     0.01    0.09    0.67    0.22    0.18    0.22    1.63    1.44    1.64
SMCP
  av.dev     4.29    3.43    3.43    2.09    1.90    1.67    1.74    1.57    1.33
  max.dev   26.58   26.58   26.58   24.05   26.58   24.05   20.00   20.00   20.00
  av.cpu     0.01    0.08    0.33    0.21    0.17    0.21    0.82    0.72    0.82


than for ØRS. Hence, it does not seem appropriate to select reduction techniques and bound arguments on the basis of these measures.

Comparison with LP-based bounds: Finally, we compare destructive improvement with LP-based bound arguments derived from a reformulation of RCPSP as presented by Mingozzi et al. (1995) and Brucker et al. (1996). In both papers, the results given indicate that the LP-based approach clearly outperforms existing methods such as those described in Section 2. Table 12 shows the results for PATT and SMFF obtained by Mingozzi et al. (1995), all bound arguments, and destructive improvement. The results for SMCP have been omitted because the optimal solutions are not available for all instances. The values for SMFF refer to the subset of instances evaluated by Mingozzi et al. (1995). The indices of the different destructive improvement configurations indicate the bound arguments used.

RPC289 already obtains better results than the LP-based approach. Whereas the latter has proved to be computationally too expensive within a branch and bound procedure (cf. Mingozzi et al., 1995), this configuration of destructive improvement can be evaluated in an average time of less than 0.1 s. RSPC reduces the average deviation for PATT and SMFF by more than 40% and 30%, respectively.

7. Summary and conclusions

The first part of the article (Sections 1–3) is devoted to describing existing direct bound arguments and presenting a number of new ones for the resource-constrained project scheduling problem (RCPSP). The new arguments advantageously exploit special problem structures and relationships to other combinatorial optimization problems such as machine scheduling. They allow tackling a problem from different directions in order to obtain sharper bound values than are possible with former bound arguments.

Table 12
Destructive improvement vs. LP-based approaches

Data set    LP-based   LBmax   RPC289   RPCall   RSPCall
PATT
  av.dev     3.29       3.74    3.00     2.89     1.97
  max.dev   14.60      13.04   13.04    13.04    13.04
SMFF
  av.dev     6.38       6.77    5.72     5.49     4.42
  max.dev   23.71      19.74   19.74    19.74    19.74

Table 11
Destructive improvement with LB2, LB8, LB9

Data set    LBold   LBmax   RS      RP      RC      RPC     RSP     RSC     RSPC
PATT
  av.dev     7.62    3.74    4.51    3.31    3.19    3.00    2.38    2.21    2.08
  max.dev   28.00   13.04   21.74   15.38   13.04   13.04   13.04   13.04   13.04
  av.cpu     0.01    0.08    0.26    0.06    0.04    0.06    0.88    0.61    0.90
SMFF
  av.dev     5.41    4.05    4.61    3.37    3.18    2.99    2.73    2.56    2.34
  max.dev   26.09   19.74   19.74   19.74   19.74   19.74   19.74   19.74   19.74
  av.cpu     0.01    0.09    0.14    0.09    0.06    0.09    0.54    0.36    0.55
SMCP
  av.dev     4.29    3.43    3.95    2.18    1.97    1.76    1.81    1.63    1.41
  max.dev   26.58   26.58   27.85   26.58   27.85   26.58   20.00   20.00   20.00
  av.cpu     0.01    0.08    0.08    0.09    0.06    0.10    0.29    0.20    0.30


In Section 4, we introduce a new meta-strategy for computing lower bounds (on minimization problems), which is called destructive improvement. The basic idea of this approach is to prove that no feasible solution for a trial bound value can exist. In case of success, the trial value may be increased and the process is repeated for the larger value. Two steps are performed to furnish the proof. Assuming that the objective function value must not exceed the trial one, the problem data are reduced. If this does not result in a contradiction, modified direct bounding procedures, taking advantage of the reduced data, are applied.
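The two-step proof loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a reduction routine that returns None when it detects a contradiction and a direct bound computed on the reduced data; all function names are ours.

```python
def destructive_improvement(lb0, data, reduce_data, direct_bound):
    """Raise a trial bound F as long as the restricted problem
    (objective value <= F) can be proved infeasible."""
    F = lb0
    while True:
        reduced = reduce_data(data, F)   # step 1: reduce data under "objective <= F"
        if reduced is None:              # reduction already yields a contradiction
            F += 1
            continue
        if direct_bound(reduced) > F:    # step 2: direct bound on reduced data refutes F
            F += 1
            continue
        return F                         # no contradiction found: F is the lower bound

# toy usage: the reduction never refutes, a direct bound of 7 refutes all F < 7
lb = destructive_improvement(
    3, {"jobs": []},
    reduce_data=lambda d, F: d,
    direct_bound=lambda reduced: 7,
)
```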

Section 5 describes how to apply the meta-strategy to RCPSP. Different techniques for reducing the problem data are proposed. Furthermore, direct lower bound methods are adapted such that they can successfully be used within the destructive improvement framework.

Comprehensive numerical tests have been performed on different standard data sets (Section 6). They show that the new bounding procedures clearly outperform the ones described in the literature. The results obtained are further improved by the destructive improvement technique. It turns out that, for some problem classes, the direct bound arguments may even be omitted completely without a severe loss in bound quality. Nevertheless, depending on the reduction techniques applied, the destructive improvement technique may consume a considerable amount of computation time. This leads to the following conclusions:

· Considering subprojects is the computationally most expensive reduction technique. In order to reduce the overall computational effort, heuristic rules can be incorporated to determine which subprojects should be evaluated.

· In case the destructive improvement process considerably improves on a lower bound obtained by direct methods, a lot of trial bound values have to be considered if they are examined incrementally. Hence, more versatile search procedures such as binary search should be applied.

· Within a branch and bound procedure, different configurations of destructive improvement may be applied depending on the state of the enumeration process. In particular, the root node should be tackled with the full power of destructive improvement. This effort can be compensated if the branching process is able to take advantage of the information gained during the reduction process.

· In order to examine the potential of the destructive improvement technique as a meta-strategy for calculating lower bounds, it has to be applied to other capacity-constrained combinatorial optimization problems such as bin packing, assembly line balancing and job shop scheduling problems.
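The binary search mentioned in the conclusions could replace the incremental examination of trial values. The sketch below is an assumption-laden illustration: it requires the infeasibility prover to be monotone (if it refutes a trial value F, it also refutes every smaller one), and all names are ours.

```python
def binary_search_lower_bound(lo, hi, can_refute):
    """Largest refutable trial value plus one, found by binary search.
    Assumes can_refute is monotone: refuting F implies refuting F - 1.
    lo: an initial lower bound; hi: a known feasible value (e.g. from a heuristic)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if can_refute(mid):
            lo = mid + 1   # no feasible solution with objective <= mid
        else:
            hi = mid       # mid could not be refuted; the bound is at most mid
    return lo
```

With an initial bound of 0, a heuristic value of 20, and a prover that refutes every trial value below 7, the search needs only five probes instead of seven incremental ones.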

Acknowledgements

We want to thank Professor Dr. Andreas Drexl, Dr. Rainer Kolisch, and Dr. Arno Sprecher for providing us with the data sets. Furthermore, we are grateful to two anonymous referees who gave valuable suggestions for improving the paper.

Appendix A

We examine the influence of the resource strength on the performance of the destructive improvement technique. The instances of each data set are subdivided into three classes C1, C2, and C3 subject to their resource strengths. The classes contain the instances with ØRS ∈ [0, 1/3), ØRS ∈ [1/3, 2/3), and ØRS ≥ 2/3, respectively. Among the combinations of reduction techniques given in Table 8, RPC, RSC and RSPC have been selected for examination. The first two represent the most promising compromise concerning computation time and bound quality. The last one is chosen because it uses the full potential of the destructive improvement technique. These combinations are applied without any bound arguments, together with LB2, LB8, and LB9, as well as with all arguments.

The results obtained can be found in Table 13. Combinations giving the best bound quality are marked with `*', whereas `+' marks those


Table 13
Results for different resource strengths

                                  Destructive improvement
                                  Without arguments          With LB2, LB8, LB9         With all arguments
            LBold   LBmax         RPC     RSC     RSPC       RPC     RSC     RSPC       RPC     RSC     RSPC
PATT, C1
  av.dev     5.02    1.77         13.96   11.65    7.50       1.21+*  1.77    1.21       1.21    1.77    1.21
  max.dev   10.53    5.26         32.00   26.56   23.44       5.26    5.26    5.26       5.26    5.26    5.26
  av.cpu     0.01    0.04          0.02    0.02    0.12       0.03    0.12    0.20       0.09    0.60    0.68
PATT, C2
  av.dev     8.49    4.35          5.56    3.74    3.18+      3.48    2.59    2.47       3.35    2.49    2.34*
  max.dev   28.00   13.04         21.88   15.62   13.04      13.04   13.04   13.04      13.04   13.04   13.04
  av.cpu     0.01    0.07          0.03    0.07    0.35       0.06    0.57    0.84       0.18    2.38    2.66
PATT, C3
  av.dev     3.49    0.96          0.96+   0.00*   0.00       0.96    0.00    0.00       0.96    0.00    0.00
  max.dev   25.00   12.50         12.50    0.00    0.00      12.50    0.00    0.00      12.50    0.00    0.00
  av.cpu     0.01    0.12          0.03    0.11    0.60       0.09    1.24    1.75       0.28    5.13    5.62
SMFF, C1
  av.dev    10.55    7.58         14.50   11.55    9.65       6.91+   5.75    5.22       6.59    5.53    4.95*
  max.dev   26.09   19.74         32.80   31.52   29.35      19.74   19.74   19.74      19.74   19.74   19.74
  av.cpu     0.01    0.10          0.08    0.08    0.39       0.11    0.36    0.65       0.22    1.58    1.90
SMFF, C2
  av.dev     4.27    3.52          1.77+   1.60+   1.47*      1.71    1.60    1.47       1.68    1.60    1.47
  max.dev   18.87   15.09         11.32   11.32   11.32      11.32   11.32   11.32      11.32   11.32   11.32
  av.cpu     0.01    0.08          0.04    0.07    0.22       0.09    0.36    0.51       0.21    1.36    1.52
SMFF, C3
  av.dev     1.29    0.94          0.30+   0.28    0.27*      0.30    0.28    0.27       0.30    0.27    0.27
  max.dev   10.91    9.09          5.45    5.45    5.45       5.45    5.45    5.45       5.45    5.45    5.45
  av.cpu     0.01    0.08          0.03    0.07    0.19       0.08    0.36    0.48       0.20    1.38    1.51
SMCP, C1
  av.dev    12.15    9.25         16.75   15.34   13.13       8.26    7.32+   6.77+      7.70    6.86    6.25*
  max.dev   26.58   26.58         29.33   29.33   26.67      26.58   20.00   20.00      24.05   20.00   20.00
  av.cpu     0.01    0.10          0.09    0.04    0.24       0.12    0.19    0.36       0.28    0.79    0.97
SMCP, C2
  av.dev     3.21    2.67          0.98    0.80    0.55+      0.69    0.72    0.52*      0.69    0.72    0.52
  max.dev   23.53   20.59         21.82    8.82    8.82       8.82    8.82    8.82       8.82    8.82    8.82
  av.cpu     0.01    0.07          0.05    0.04    0.13       0.09    0.20    0.29       0.20    0.70    0.79
SMCP, C3
  av.dev     0.56    0.43          0.00+*  0.00    0.00       0.00    0.00    0.00       0.00    0.00    0.00
  max.dev    5.26    3.92          0.00    0.00    0.00       0.00    0.00    0.00       0.00    0.00    0.00
  av.cpu     0.01    0.08          0.03    0.03    0.10       0.08    0.20    0.26       0.19    0.75    0.80


combinations which yield a good bound quality with short computation times and thus appear most favourable. In general, the results for the Patterson data set are less conclusive than those for the others, because it has not been generated systematically. For example, the maximum number of jobs varies considerably over the different classes. Therefore, the following indications are mainly based on the new data sets. For C1, RSPC combined with all bound arguments must be applied to obtain the best results. In order to avoid the high computation times, reduction by subprojects as well as the bound arguments except for LB2, LB8, and LB9 should be omitted. Considering C2 and C3, even those arguments can be excluded without considerable loss in bound quality. It is remarkable that pure RSC without any bound arguments always outperforms LBmax for C2 and C3, while requiring shorter computation times.

References

Alvarez-Valdés, R., Tamarit, J.M., 1989. Heuristic algorithms for resource-constrained project scheduling: A review and an empirical analysis. In: Slowinski, R., Weglarz, J. (Eds.), Advances in Project Scheduling. Elsevier, Amsterdam, pp. 113–134.

Balas, E., 1970. Project scheduling with resource constraints. In: Beale, E.M.L. (Ed.), Applications of Mathematical Programming Techniques. Elsevier, New York, pp. 187–200.

Boctor, F.F., 1990. Some efficient multi-heuristic procedures for resource-constrained project scheduling. European Journal of Operational Research 49, 3–13.

Brucker, P., Schoo, A., Thiele, O., 1996. A branch and bound algorithm for the resource-constrained project scheduling problem. European Journal of Operational Research (to appear).

Carlier, J., Latapie, B., 1991. Une méthode arborescente pour résoudre les problèmes cumulatifs. Recherche Opérationnelle 25, 311–340.

Christofides, N., Alvarez-Valdés, R., Tamarit, J.M., 1987. Project scheduling with resource constraints: A branch and bound approach. European Journal of Operational Research 29, 262–273.

Demeulemeester, E., 1992. Optimal Algorithms for Various Classes of Multiple Resource-Constrained Project Scheduling Problems. Unpublished Ph.D. Thesis, Department of Applied Economic Sciences, Katholieke Universiteit Leuven.

Demeulemeester, E., Herroelen, W., 1992. A branch-and-bound procedure for the multiple resource-constrained project scheduling problem. Management Science 38, 1803–1818.

Demeulemeester, E., Herroelen, W., 1997. New benchmark results for the resource-constrained project scheduling problem. Management Science 43, 1485–1492.

De Reyck, B., Herroelen, W., 1996a. On the use of the complexity index as a measure of complexity in activity networks. European Journal of Operational Research 91, 347–366.

De Reyck, B., Herroelen, W., 1996b. A branch-and-bound procedure for the resource-constrained project scheduling problem with generalized precedence relations. Research Report No. 9613, Department of Applied Economic Sciences, Katholieke Universiteit Leuven.

Fischetti, M., Toth, P., 1989. An additive bounding procedure for combinatorial optimization problems. Operations Research 37, 319–328.

Fisher, M.L., 1973. Optimal solution of scheduling problems using Lagrange multipliers: Part I. Operations Research 21, 1114–1127.

Kolisch, R., 1996. Serial and parallel resource-constrained project scheduling methods revisited – Theory and computation. European Journal of Operational Research 90, 320–333.

Kolisch, R., Sprecher, A., Drexl, A., 1995. Characterization and generation of a general class of resource-constrained project scheduling problems. Management Science 41, 1693–1703.

Kolisch, R., Sprecher, A., 1996. PSPLIB – A project scheduling problem library. European Journal of Operational Research 96, 205–216.

Mingozzi, A., Maniezzo, V., Ricciardelli, S., Bianco, L., 1995. An exact algorithm for project scheduling with resource constraints based on a new mathematical formulation. Management Science (to appear).

Patterson, J.H., 1984. A comparison of exact approaches for solving the multiple constrained resource, project scheduling problem. Management Science 30, 854–867.

Scholl, A., Klein, R., Jürgens, C., 1997. BISON: A fast hybrid procedure for solving the one-dimensional bin packing problem. Computers and Operations Research 24, 627–645.

Sprecher, A., 1996. Solving the RCPSP efficiently at modest memory requirements. Manuskripte aus den Instituten für Betriebswirtschaftslehre der Universität Kiel, No. 425.

Stinson, J.P., Davis, E.W., Khumawala, B.M., 1978. Multiple resource-constrained scheduling using branch and bound. AIIE Transactions 10, 252–259.

Talbot, F.B., Patterson, J.H., 1978. An efficient integer programming algorithm with network cuts for solving resource-constrained scheduling problems. Management Science 24, 1163–1175.

Thomas, P.R., Salhi, S., 1997. An investigation into the relationship of heuristic performance with network-resource characteristics. Journal of the Operational Research Society 48, 34–43.

Weglarz, J., Blazewicz, J., Cellary, W., Slowinski, R., 1977. ALGORITHM 520: An automatic revised Simplex method for constrained resource network scheduling. ACM Transactions on Mathematical Software 3, 295–300.
