
OPERATIONS RESEARCH
Vol. 52, No. 4, July–August 2004, pp. 606–622
issn 0030-364X | eissn 1526-5463 | 04 | 5204 | 0606
doi 10.1287/opre.1040.0125
© 2004 INFORMS

Concurrent Crashing and Overlapping in Product Development

Thomas A. Roemer
Sloan School of Management, Massachusetts Institute of Technology, 30 Wadsworth Street, E53-387, Cambridge, Massachusetts 02142, [email protected]

Reza Ahmadi
Anderson School of Management, University of California at Los Angeles, Los Angeles, California 90095, [email protected]

This research addresses two common tools for reducing product development lead times: overlapping of development stages and crashing of development times. For the first time in the product development literature, a formal model addresses both tools concurrently, thus facilitating analysis of the interdependencies between overlapping and crashing.

The results exhibit the necessity of addressing overlapping and crashing concurrently, and exhibit general characteristics of optimal overlapping/crashing policies. The impact of different evolution/sensitivity constellations on optimal policies is investigated, and comprehensive guidelines for structuring development processes are provided.

For the special case of linear costs, an efficient procedure is presented that generates the efficient time-cost trade-off curves and determines the corresponding optimal overlapping/crashing policies. The impact of key parameters and the robustness regarding their estimates is illustrated with a simple two-stage example.

Subject classifications: product development processes; overlapping; crashing; development lead times; development costs.
Area of review: Manufacturing, Service, and Supply Chain Operations.
History: Received June 2001; revision received May 2002; accepted August 2003.

1. Introduction

The increasing importance of fast product development has given rise to a large body of literature dedicated to decreasing product development cycles. Traditional discussions have focused primarily on "crashing," that is, working with higher work intensities to reduce development times. A more recent body of literature addresses the challenges and benefits of "overlapping" traditionally sequential stages, yet to date, no literature exists that integrates both concepts in one modeling framework. In this research, we therefore look at the costs of accelerated product development and address both overlapping and crashing. By exhibiting the interdependencies between the two, we illustrate the need to address them concurrently and provide general guidelines for overlapping and crashing policies. In particular, by adopting a framework from Krishnan et al. (1997), we investigate how upstream evolution and downstream sensitivity interact with crashing decisions. Based on a simple example, we provide an in-depth discussion of the benefits of optimal policies for a variety of different scenarios and provide a comprehensive toolset for managers to structure their processes, to allocate and plan required resources, and to make budget and timing decisions for new product introductions.

1.1. Example

The central notion of this research is presented in the simplified two-stage problem depicted in Figure 1. Consider a process that consists of two stages, up- and downstream, where downstream depends on the output information from upstream. If no measures are taken to reduce the completion time, then the natural process structure is to first complete the upstream task and then proceed with the downstream task under complete information (Figure 1a). When crashed, the work intensities during the stages are increased (as indicated by the wider bars in Figure 1b) and the duration of each task is reduced from, say, $T_i$ to $\tilde{T}_i \in [\bar{r}_i \cdot T_i, T_i]$, where $0 < \bar{r}_i \le 1$ expresses the maximum level of crashing at stage $i$.

Alternatively, to reduce the lead time, the stages may be performed partially in parallel, or overlapped (Figure 1c). Now, however, because downstream starts without complete information, additional work might be necessary to accommodate unforeseen upstream developments. Consequently, total downstream time increases to, say, $T_i + h_i(y_i)$, where the amount of additional work $h_i(y_i)$ is a function of the overlap $y_i$ between the stages. The dark gray area in Figure 1c indicates the additional downstream work caused by overlapping. Finally, the two approaches can be combined as in Figure 1d, where both methods are concurrently employed. For the remainder of the paper, we will refer to reducing development lead times, be it through crashing, through overlapping, or through a combination of both, as process compression.


Figure 1. Different compression strategies. (Four panels show the upstream and downstream bars: (a) no overlapping or crashing; (b) crashing only; (c) overlapping only; (d) crashing and overlapping. Shading indicates the additional effort caused by crashing, by overlapping, or by both.)

If the functional form for $h_i$ is given and the associated crashing costs are known, then under fairly general conditions, the procedure suggested in Roemer et al. (2000) can be employed to calculate the time-cost trade-off curves for overlapping. Deriving the corresponding efficient frontier for crashing is straightforward if crashing costs are known.

To illustrate, suppose crashing costs $K_i$ for stage $i$ are linear in the amount of time crashed, i.e., $K_i = k_i \cdot (T_i - \tilde{T}_i)$, and overlapping costs $C$ are linear in $h$, i.e., $C = c \cdot h$. Furthermore, let $h = y - (1 - e^{-ay})/a$, with $a$ being a constant. For this scenario, Figure 2 depicts the efficient frontiers assuming the values in Table 1. Figure 2 shows that both curves are convex increasing over the development time decrease. The range of feasible development times is between 160 units of time (days), the standard duration, and 131 days for either overlapping or crashing. Due to the linearity of crashing costs, overlapping is initially the more lucrative alternative, but eventually crashing will become preferable for large decreases in development time.

After reviewing some of the related literature in the next section, we will formally model the problem of jointly overlapping and crashing for a multistage process in §3. Our focus will be on structuring the development process in terms of overlapping and crashing, under particular consideration of time and budget constraints. Section 4 addresses the interdependencies between overlapping and crashing and demonstrates the necessity for an integrative framework. Properties of optimal solution policies are derived in §5, and a solution procedure for the special case of linear costs is presented. General insights and a concluding discussion follow in the subsequent sections.

Table 1. Two-stage example.

Standard duration $T_i$: upstream 70, downstream 90
Maximum crashing $\bar{r}_i$: upstream 0.80, downstream 0.83
Crashing costs $k_i$: upstream 10, downstream 20
Rework costs $c$: 17
Constant $a$: 0.03
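To make the example concrete, the following sketch recomputes the two pure efficient frontiers from the Table 1 values under the stated linear-cost assumptions. It is a minimal Python illustration, not the paper's own procedure; the helper names and grid resolution are our own choices.

```python
import numpy as np

# Two-stage example of Table 1: index 0 = upstream, 1 = downstream.
T = [70.0, 90.0]        # standard durations (days)
r_bar = [0.80, 0.83]    # maximum crashing levels
k = [10.0, 20.0]        # crashing cost per day crashed
c = 17.0                # rework cost per day of rework
a = 0.03                # evolution constant

def rework(y):
    """Additional downstream work h(y) = y - (1 - exp(-a*y)) / a."""
    return y - (1.0 - np.exp(-a * y)) / a

def crashing_only_cost(reduction):
    """Cheapest way to cut 'reduction' days by crashing alone (cheaper stage first)."""
    max_cut = [(1.0 - r_bar[i]) * T[i] for i in range(2)]
    cost, remaining = 0.0, reduction
    for i in sorted(range(2), key=lambda j: k[j]):
        cut = min(remaining, max_cut[i])
        cost += k[i] * cut
        remaining -= cut
    return cost if remaining <= 1e-9 else float("inf")

def overlapping_only_cost(reduction):
    """Rework cost when the cut comes from overlap alone; lead time is T0 + T1 + h(y) - y."""
    y_grid = np.linspace(0.0, T[0], 100_001)
    net = y_grid - rework(y_grid)            # net lead-time reduction, increasing in y
    if reduction > net[-1]:
        return float("inf")
    y = np.interp(reduction, net, y_grid)
    return c * rework(y)

for days in (160, 150, 140, 131):
    cut = 160 - days
    print(days, round(overlapping_only_cost(cut), 1), round(crashing_only_cost(cut), 1))
```

Under these assumptions the overlapping curve is the cheaper option for small reductions and the crashing curve overtakes it for large reductions, mirroring the qualitative shape described for Figure 2.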

Figure 2. Overlapping vs. crashing costs. (Cost in dollars, from roughly $100 to $700, plotted against development duration in days from 160 down to about 131, with separate curves for overlapping only and crashing only.)

2. Literature Review

Management of product development projects has received considerable attention during the last decade from both the academic community and practitioners (Krishnan and Ulrich 2001). One focal point of this literature is the structure of development processes (e.g., Eppinger 2001) and corresponding product architectures (e.g., McCord and Eppinger 1993) and modularization (e.g., Baldwin and Clark 2000 or Hoedemaker et al. 1999). A second focal point of the literature is how to narrow down design choices over the course of the development project. Recommendations include clearly defined sequential stage-gate processes (e.g., Cooper and Kleinschmidt 1996), early problem solving or "frontloading" (Thomke and Fujimoto 2000), and highly flexible development processes with delayed real-time definition of product attributes (Iansiti and MacCormack 1997, Kalyanaram and Krishnan 1997, Bhattacharya et al. 1998).

Common to all of these approaches is that they address the overall structure of the development process, and that they therefore take a relatively high-level look at product development. Implementation of these approaches typically involves upper management and spans across functional and organizational borders. Rewards can be ample, and more efficient development processes often lead to shorter development times, lower costs, and higher product quality. We therefore strongly recommend exploring the full potential of these approaches before employing the methods suggested here. Indeed, in an earlier and related paper (Ahmadi et al. 2001), we have proposed such a global approach to (re-)structure a product development process, yielding drastic improvements in both development time and costs. However, at some point management will have to decide on an overall approach to the product development process. Once this decision stands, managers typically must (at least in the short term) operate within the parameters of the established process structures. At this point, flexibility in accelerating development processes is often limited to the tools identified by Graves (1989), particularly overlapping and crashing.


In this paper, we take the principal view that the overall structure of the process has been established and—for the time being—cannot be challenged. As such, we are more concerned with the operational approach of when to schedule work, as opposed to the more strategic approach of how to schedule what type of work in the extant literature.

Implicitly, such a distinction already exists within the overlapping literature, giving rise to two somewhat dissimilar views of overlapping. In the first view, overlapping is considered necessary to avoid costly late changes or product obsolescence (e.g., MacCormack et al. 2001). Thus, in addition to accelerating the development process, overlapping also leads to an overall reduction in costs. The findings by Clark and Fujimoto (1989), Imai et al. (1985), and Takeuchi and Nonaka (1986) greatly contributed to this view. The alternative view considers overlapping "a calculated risk" (Smith and Reinertsen 1995), where inherently sequential activities are performed in parallel to reduce development time. Mansfield et al. (1972) provide early empirical support for this view. Whereas in the first view overlapping is particularly promising if the outside world is highly dynamic and characterized by uncertainty, this view considers overlapping most appropriate for stable environments (e.g., Cordero 1991). Two of the most influential and rigorous empirical studies of development projects (Eisenhardt and Tabrizi 1995, Terwiesch and Loch 1999) lend support to this claim.

Common to almost all surveys is the conclusion that observed overlapping levels were typically not aligned with overall goals such as profit maximization or lead time minimization. Evidently, overlapping levels are commonly determined on an ad hoc basis, rather than on solid analytical grounds, yielding inefficient overlapping strategies. In part, this may be due to the lack of analytical approaches to overlapping, because to date only a few analytical models of overlapping exist. Ha and Porteus (1995) discuss how to optimally schedule design reviews between two overlapping stages under informational interdependencies between the stages. Krishnan et al. (1997) provide a model-based framework for overlapping that identifies conditions under which various types of overlapping are appropriate for a pair of coupled activities. Loch and Terwiesch (1998) study the impact of upstream engineering changes on optimal overlapping policies, and Roemer et al. (2000) discuss time-cost trade-offs for multistage overlapping processes.

In this research we will also address time-cost trade-offs in a multistage setting, but we will explicitly model the impact of upstream changes on downstream progress. More central, though, to this research is the question of how overlapping and crashing of development processes interrelate. To keep our analysis simple and aligned with the extant literature, our model of crashing closely resembles those initially introduced by Kelley and Walker (1959) and Berman (1964). In particular, we will assume that crashing costs are either linear or convex, and that crashing is limited both from above and below. In particular, working below minimum work intensity implies increasingly inefficient usage of available resources. Conversely, there is also ample evidence that excessive crashing can become counterproductive, leading to Brooks' (1995) famous statement that "…adding manpower to a late software project makes it later." Other researchers make similar observations (e.g., Abdel-Hamid and Madnick 1990), and either suggest strongly convex crashing costs (e.g., Putnam and Myers 1992) or assume explicit upper limits on productivity (e.g., Boehm 1981). As in the extant modeling-based crashing literature, we will assume that the level of crashing beyond which it becomes counterproductive is known and limit our analysis to crashing strategies below this level.

As with overlapping, we thus consider crashing as a tool that helps compress product development time at additional cost. This does not imply that any attempt to reduce development times will invariably lead to an increase in development effort. In fact, findings in Clark and Fujimoto (1991), that Japanese car manufacturers spend fewer person hours on development projects, yet have shorter lead times than their competitors, seem to indicate the opposite. Similarly, Pisano (1997) shows that development hours and lead times are positively correlated in a sample of 23 chemical and biotechnology companies. These results reconfirm our earlier statement that finding an appropriate structure for the product development process is paramount. By no means do they suggest that within the same overall process structure, increased effort levels will invariably lead to longer development times.

This, then, is the principal setting of our paper. We investigate how, in the short term, a company can decrease time to market by employing a mixed strategy of overlapping and crashing. Thus, our view is primarily an operational one concerned with time-cost trade-offs, rather than a long-term strategic one that aims to improve overall process efficiency. Under fairly general assumptions, we will show how crashing and overlapping are interdependent and provide general guidelines for compressing development times. Because our definition of overlapping is based on the notion of working under uncertainty in a mostly sequential process, our analysis presupposes largely sequential processes. Because these have been shown to be particularly successful in stable environments, our analysis is naturally better suited for stable environments than for highly dynamic ones.

3. The General Model

Throughout the paper we assume a product development process that consists of $n+1$ sequential stages, where the principal information exchange between consecutive design stages is unidirectional—from upstream to downstream. In addition, the regular duration $T_i$ of stage $i$ is known, where the regular duration of stage $i$ is the expected time it takes to perform stage $i$ without overlapping and crashing. Ahmadi et al. (2001) show how to obtain such an overall process structure and how to estimate stage durations.


In contrast, the actual stage duration depends on both crashing and overlapping. In particular, the actual stage duration decreases under crashing. To keep the model as general as possible and to reflect existing crashing practices more accurately, we explicitly allow for different crashing levels during a single stage. In particular, we assume that any given time interval $dt$ of stage $i$ can be crashed to $r_i(t) \cdot dt$, with $r_i(t) \in [\bar{r}_i, 1]$. Note that crashing is essentially a consequence of increased work intensities and that both terms are directly related. We therefore say that if interval $dt$ has been crashed to length $r_i\,dt$, then the work intensity during interval $dt$ was $r_i^{-1}$. For the remainder of the paper we will largely use the terms work intensities and crashing interchangeably, unless the context requires a more precise distinction. Means to increase work intensities throughout a project include working overtime, adding staff, or utilizing more experienced staff or better equipment.

In contrast to crashing, the actual stage duration increases under overlapping. Because overlapping between stages $i-1$ and $i$ increases uncertainty in stage $i$ (downstream), some of the downstream work during the overlap may be obsolete due to unforeseen upstream developments. Consequently, the overall downstream duration increases by the amount of obsolete work, henceforth referred to as rework $H_i$. Throughout the model-based overlapping literature, downstream uncertainty is considered to decrease as upstream progresses. Therefore, $H_i$ should be an increasing function of the overlap. Moreover, because upstream progress and downstream work content depend on the respective work intensities, $H_i$ should also be a function of the work intensities during stage $i$ and stage $i-1$. We will formally establish this functional relationship in §4.

Similar to most models in the crashing literature, we will assume that an earliest project-starting time $S$ is given and that the project must be completed by a given and fixed due date $D$. Similarly, to allow for a better comparison between project structures, we will hold the quality of the project outcome constant. Finally, we assume that the decision maker is risk neutral, strictly focusing on development costs. The objective is therefore to find feasible crashing and overlapping policies that minimize total development costs subject to a due-date constraint. For ease of exposition, we impose the additional constraints that no stage may commence or terminate before its predecessors—that is, the principal process structure must remain intact—and that at most two stages can be processed concurrently. The latter restriction enforces a classical Sashimi-style solution (Imai et al. 1985), where each stage overlaps only with its immediate predecessor. This structure often obtains naturally if the principal information flow is unidirectional and is therefore not overly restrictive. Imposing it provides a clearer picture of the interdependence between crashing and overlapping and helps isolate different phenomena. To formulate the problem, we introduce the following notation:

$n+1$: number of process stages.
$i$: index for the process stages, $i = 0, 1, \ldots, n$.
$T_i$: time required to complete design stage $i$ without overlapping and crashing.
$y_i$: overlap duration between design stages $i-1$ and $i$; $y_0 \equiv 0$.
$r_i^{-1}(t)$: work intensity of stage $i$ at time $t$.
$\bar{r}_i^{-1}$: maximum work intensity.
$H_i(y_i, r_i, r_{i-1})$: expected time increase at stage $i$ as a function of overlapping and crashing; $H_0 \equiv 0$.
$c_i(H_i)$: rework costs at stage $i$ as a convex increasing function of the design time increase.
$k_i(r_i^{-1})$: crashing costs for stage $i$ as a convex increasing function of the work intensity.
$s_i$: starting time of stage $i$.
$d_i$: completion time of stage $i$; $d_{-1} \equiv s_0$.
$S$: earliest project starting time.
$D$: project deadline.
$C$: project cost.

We define the Overlapping-Crashing Problem (OCP) as follows:

$$\min C = \sum_{i=1}^{n} c_i(H_i) + \sum_{i=0}^{n} \int_{s_i}^{d_i} k_i(r_i^{-1}(t))\,dt \qquad (1)$$

subject to

$$\int_{s_i}^{d_i} \frac{dt}{r_i(t)} = T_i + H_i \quad \text{for } i = 0, 1, \ldots, n, \qquad (2)$$

$$y_i = d_{i-1} - s_i \quad \text{for } i = 1, 2, \ldots, n, \qquad (3)$$

$$s_0 \ge S, \qquad (4)$$

$$d_n \le D, \qquad (5)$$

$$0 \le y_i \le \min\{d_{i-1} - d_{i-2},\, d_i - s_i\} \quad \text{for } i = 1, 2, \ldots, n, \qquad (6)$$

$$1 \le r_i^{-1}(t) \le \bar{r}_i^{-1} \quad \text{for } i = 0, 1, \ldots, n. \qquad (7)$$

The objective function (1) minimizes the additional costs caused by rework and crashing over the decision vectors $y \equiv (y_1, \ldots, y_n)$ and $r^{-1}(t) \equiv (r_0^{-1}(t), \ldots, r_n^{-1}(t))$. The first term in the objective function addresses the costs caused by the additional rework at each stage. We assume that the costs of rework are a convex function of the amount of rework and note that linear relationships prevail in many applications. The second term captures crashing costs. Following the general consensus in the crashing literature (e.g., Foldes and Soumis 1993), we model crashing costs as a convex, increasing function $k_i(r_i^{-1})$ of the work intensity. Diminishing returns, labor regulations on overtime, or additional training and communication requirements (e.g., Brooks 1995) are all contributing factors for convex increasing crashing costs. Assuming that $k_i(r_i^{-1})$ is Riemann integrable, crashing costs for each stage can be determined by integration over the stage duration.

Constraints (2) and (3) recursively define the start and completion times for the stages. Note that the difference between the boundaries on the left-hand side of (2), $d_i - s_i$, yields the actual stage duration, whereas the right-hand side is the stage duration without crashing. The overlaps $y_i$ are defined in (3) as the difference between the completion time of stage $i-1$ and the starting time of stage $i$.


Constraints (4) and (5) maintain that the project is not started prematurely and that it completes on time. Note that any feasible solution to the OCP with $s_0 > S$ can be replaced by an otherwise identical solution where $s_0 = S$. Because the resulting solution is also feasible but finishes earlier at the same costs, we can without loss of generality limit ourselves to solutions where constraint (5) is tight. Constraints (6) preserve the Sashimi-style character of the solution and warrant, in conjunction with constraints (3), that no stage commences or terminates before any of its predecessors.

In accordance with the extant literature (e.g., Berman 1964) and without loss of generality, we define the regular work intensity as the cost-minimizing work intensity. Working below standard intensity leads to higher costs because of inefficiencies such as underutilization of equipment or suboptimal numbers of setup times. Consequently, constraints (7) bound the work intensities from below by unity. In the classical crashing literature, crashing is either explicitly bounded from above (e.g., Kanda and Rao 1984 or Leachman and Kim 1993) or a bound is implied by the shape of the cost curves (e.g., Berman 1964 or Falk and Horowitz 1972). In this research, we follow the former approach and assume that crashing costs are convex in the work intensity, which is explicitly bounded from above. Such limits occur, for example, if technical systems reach their capacity limits, if personnel requirements exceed the available person power, or if adding additional personnel becomes counterproductive due to increased communication requirements.

Allowing flexible work intensities during each stage does not just model reality more accurately, but also allows the study of the impact of different crashing strategies on the amount of rework. Because the amount of rework at a stage depends on how much work has been accomplished upstream, the timing of crashing within a stage impacts the amount of rework. The next section formally explores the interrelationship between crashing and rework and establishes a functional relationship.

4. Interdependencies Between Crashing and Overlapping

To identify optimal policies, it is essential to determine the extended rework $H_i$ and how it is impacted by the decision vectors $y$ and $r^{-1}(t)$. In the overlapping literature, the uncertainty resolution between up- and downstream has been modeled in several closely related ways. Krishnan et al. (1997) introduce the concept of upstream evolution and downstream sensitivity to capture how uncertainty decreases during the period of overlapping. A similar concept is employed by Loch and Terwiesch (1998), who study the effects of upstream evolution and downstream dependence, a concept akin to that of sensitivity, on the optimal overlap duration. In their model, the influence of upstream modifications on downstream rework is reflected in an impact function, a concept also employed by Carrascosa et al. (1998) in conjunction with their probability of change function as a measure of upstream evolution. Similarly, Roemer et al. (2000) introduced the probability of rework as a (nondecreasing) function $P_i$ of the overlap between two stages. While this framework directly addresses upstream evolution through the shape of $P_i$, downstream sensitivity is addressed only indirectly by its magnitude and assumed to be constant over the progress of downstream. In this research, we will integrate the probability of rework function with the impact function to more generally address the interdependencies between upstream evolution, downstream sensitivity, rework, and work intensities.

In particular, we assume that under normal work intensities, if stage $i$ makes a prediction at time $t \in [s_i, d_{i-1}]$, to be updated $dt$ units of time later, then with probability $P_i(t)$ this prediction will turn out to be wrong and some portion $q_i(t) \cdot dt$ of the work performed between prediction and update must be performed again. We refer to the functions $P_i(t)$ and $q_i(t)$ as the standard probability and standard impact function, respectively, indicating that those functions apply to the case of standard work intensities. In accordance with the extant literature, we assume that $P_i(t)$ is nonincreasing and $q_i(t)$ nondecreasing. Note that the total rework $h_i(y_i)$ under standard work intensities can be obtained by integrating the product of impact and rework probability (PI-curve) over the entire interval $[s_i, d_{i-1}]$, that is, over the overlap $y_i$, as indicated in Figure 3.

Thus, whereas Roemer et al. (2000) assume that invariably all work under erroneous assumptions was futile, we acknowledge that some fraction of the work performed might be salvageable, and that this amount might change (decrease) over time, as more and more downstream work has been completed already. In a second critical departure, we explicitly take work intensities into account as a measure for work accomplished. The implications are threefold. First, because the impact of upstream changes is predominantly dependent on how much downstream work has been performed already (e.g., Loch and Terwiesch 1998), and because with work intensities $r_i^{-1}(\tau)$ the amount of work performed by time $t$ is $\int_{s_i}^{t} d\tau / r_i(\tau)$, the impact $Q_i(t)$ at time $t$ given work intensities $r_i^{-1}(\tau)$ must be

$$Q_i(t) = q_i\!\left(\int_{s_i}^{t} \frac{d\tau}{r_i(\tau)}\right). \qquad (8)$$

Figure 3. Probability of rework, impact, and PI-functions. (The probability of rework $P_i(t)$, the impact $q_i(t)$, and their product, the PI-function, are plotted over the interval from $s_i$ to $d_{i-1}$; the area under the PI-function over the overlap $y_i$ is the expected rework.)


Figure 4. Impact of crashing strategies on probability of rework. (The probability curve $\Pi_i(t)$ over the overlap is shown for five upstream intensity patterns: (I) low intensity, (II) low-then-high intensity, (III) medium intensity, (IV) high-then-low intensity, and (V) high intensity.)

Second, the probability of rework is no longer simply a function of time, but rather of how much upstream work still has to be accomplished. In particular, at time $t \in [s_i, d_{i-1}]$, $d_{i-1} - t$ units of time are left until upstream terminates. However, because the work intensity during the remaining time might be above regular intensity, the remaining work content is $\int_{t}^{d_{i-1}} d\tau / r_{i-1}(\tau)$. Because unfinished upstream work is the primary cause of downstream uncertainty, the probability of rework with upstream crashing becomes

$$\Pi_i(t) = P_i\!\left(\int_{t}^{d_{i-1}} \frac{d\tau}{r_{i-1}(\tau)}\right), \qquad (9)$$

where $P_i(\cdot)$ is the standard probability of rework function for regular work intensity.

Figure 4 demonstrates how the probability function (I) for standard work intensities changes with the work intensity of stage $i-1$. In all five scenarios, stage $i$ starts at the same time, but the upstream (stage $i-1$) work intensities differ after that point. The average work intensities in Scenarios II–IV are the same, yielding identical upstream termination times. The different shapes are caused solely by different timing patterns of work intensities. Scenario II starts out with regular work intensity before eventually proceeding with maximum work intensity. Conversely, Scenario IV begins with maximum work intensity and then returns to regular work intensity. Finally, in Scenario III the work intensity remains constant throughout the stage. Note how the cumulative probability—that is, the area under the curve—becomes smaller the earlier work intensities increase. When working with maximum work intensity throughout, as in Scenario V, the probability curve is pushed towards the origin, further reducing the area under the curve. Note also that the overall shape remains the same for Scenarios I, III, and V, where the work intensities are constant over the stage, whereas the shapes in Scenarios II and IV are different due to the varying work intensities.

The final implication of changing work intensities is to note that if the probability of rework at time $t \in [s_i, d_{i-1}]$ is $\Pi_i(t)$, and if the work intensity of stage $i$ at this time is $r_i^{-1}(t)$, then in terms of work content at normal work intensity, the expected amount of rework caused during the interval $[t, t+dt]$ is $Q_i(t)\,\Pi_i(t) \cdot r_i^{-1}(t)\,dt$. Thus, the total expected amount of rework is

$$H_i = \int_{s_i}^{d_{i-1}} \frac{Q_i(t) \cdot \Pi_i(t)}{r_i(t)}\,dt, \qquad (10)$$

and substituting (9) and (8) into (10) yields

$$H_i = \int_{s_i}^{d_{i-1}} \left[ \frac{q_i\!\left(\int_{s_i}^{t} d\tau / r_i(\tau)\right)}{r_i(t)} \cdot P_i\!\left(\int_{t}^{d_{i-1}} \frac{d\tau}{r_{i-1}(\tau)}\right) \right] dt. \qquad (11)$$

We thus have shown that the expected rework $H_i$ at stage $i$ is a function of its overlap with stage $i-1$ and the work intensities during stages $i-1$ and $i$. Assuming that the probability of rework and impact functions can be obtained for standard work intensities, rework can easily be computed for a given overlapping-crashing policy.
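As a concrete illustration of how (11) can be evaluated, the following Python sketch integrates the rework numerically for given intensity profiles over the overlap window. The probability and impact functions used here are illustrative assumptions, not the ones estimated in the paper.

```python
import numpy as np

def expected_rework(y, r_down, r_up, P, q, n=4000):
    """Numerical evaluation of the rework integral (11) over an overlap of length y.

    r_down(t), r_up(t): crashing factors of the down- and upstream stage at
        overlap time t (1 = regular intensity, r_bar = maximum crashing).
    P(w): standard probability of rework, given the upstream work content
        still outstanding (illustrative form below).
    q(w): standard impact, given the downstream work content already performed.
    """
    dt = y / n
    t = np.arange(n) * dt                                    # left grid points
    inv_rd = 1.0 / np.array([r_down(s) for s in t])          # downstream work rate
    inv_ru = 1.0 / np.array([r_up(s) for s in t])            # upstream work rate
    done = np.concatenate(([0.0], np.cumsum(inv_rd)[:-1])) * dt   # downstream work done before t
    left = (inv_ru.sum() - np.cumsum(inv_ru) + inv_ru) * dt       # upstream work left from t on
    return float(np.sum([q(d) * P(l) * w for d, l, w in zip(done, left, inv_rd)]) * dt)

# Illustrative (assumed) inputs: constant impact; rework probability that
# vanishes as the outstanding upstream work approaches zero.
P = lambda w: 1.0 - np.exp(-0.1 * w)
q = lambda w: 1.0
H = expected_rework(
    y=30.0,
    r_down=lambda t: 1.0,                        # downstream not crashed during the overlap
    r_up=lambda t: 0.8 if t > 15.0 else 1.0,     # upstream crashes its second half
    P=P, q=q)
print(round(H, 2))
```

Varying the upstream profile r_up reproduces the pattern discussed for Figure 4: the earlier upstream intensity is raised, the smaller the probability term and hence the rework.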

5. Optimal Overlapping-Crashing Strategies

Based on the derived expression for the rework (11), this section characterizes optimal solution structures that can be exploited to solve special instances of the OCP. In any Sashimi-style approach to overlapping, three distinct phases arise within each stage, as depicted in Figure 5. In phase i/1, stage $i$ overlaps with stage $i-1$ and, according to (11), the crashing policy during this interval affects only the amount of rework of stage $i$. In phase i/2, stage $i$ is the only stage in process. As can be seen from (11), the crashing policy during this interval has no direct impact on the amount of rework at any stage. Finally, in phase i/3, stage $i+1$ is concurrently being processed. The crashing policy during this interval now affects exclusively the amount of rework at stage $i+1$.

This structure enables us to investigate the effects of the crashing policies on the amount of rework in isolation. In particular, we can isolate the impact of downstream crashing (during phase i/1) from that of upstream crashing (during phase i/3). Focusing on these three phases, we will characterize optimal solution structures for the OCP in this section. For the special case of linear costs, we develop an efficient solution procedure that easily extends to the case of piecewise linear (convex) cost curves.

Figure 5. Three phases of a stage. (Stage $i$ runs from $s_i$ to $d_i$; phase i/1 is the overlap with stage $i-1$ up to $d_{i-1}$, phase i/2 runs from $d_{i-1}$ to $s_{i+1}$, and phase i/3 is the overlap with stage $i+1$ from $s_{i+1}$ to $d_i$.)


Figure 6. Delayed downstream crashing. (During its overlap with stage $i-1$, stage $i$ starts at regular intensity, $r_i = 1$, and switches to maximum crashing, $r_i = \bar{r}_i$, only later in the overlap.)

5.1. Optimal Solution Structure

To develop a general structure for optimal crashing policies, we next investigate the impact of crashing during each phase on the amount of rework. Note that at the beginning of stage $i$, that is, during phase i/1, there is uncertainty regarding the output of stage $i-1$. The harder stage $i$ works under this uncertainty, the more work is wasted if it turns out to be based on wrong assumptions. Consequently, during phase i/1, stage $i$ has an incentive to delay as much work content as possible, that is, to work with low intensity. We will refer to this phenomenon as delayed downstream crashing. In Proposition 1 we will show that this intuition is indeed correct, and that in any optimal solution, the work intensity is nondecreasing during phase i/1, as depicted in Figure 6.

Proposition 1 (Delayed Downstream Crashing). If $s_i \le u < v \le d_{i-1}$, then $r_i^{-1}(u) \le r_i^{-1}(v)$ in any optimal solution. Moreover, if $r_i^{-1}(u) = r_i^{-1}(v)$ in any optimal solution, then (at least) one of the following conditions must hold: (i) $r_i^{-1}(v) = \bar{r}_i^{-1}$, (ii) $r_i^{-1}(u) = 1$, or (iii) the first derivative of $k_i(r_i^{-1}(u))$ is discontinuous at $r_i^{-1}(u)$.

To enhance readability of this paper, all proofs are presented in Appendix 1. Next, we turn our attention to phase i/3. During this phase, stage $i+1$ is working under uncertainty and requires information, in the form of completed work, from stage $i$ (upstream) as fast as possible. Thus, the earlier that work is completed during phase i/3, the less rework will be required at stage $i+1$. We refer to this phenomenon as upstream upfront crashing and demonstrate in Proposition 2 that the work intensity is nonincreasing in the last phase of each stage, as depicted in Figure 7.

Proposition 2 (Upstream Upfront Crashing). If $s_{i+1} \le u < v \le d_i$, then $r_i^{-1}(u) \ge r_i^{-1}(v)$ in any optimal solution. Moreover, if $r_i^{-1}(u) = r_i^{-1}(v)$ in any optimal solution, then (at least) one of the following conditions must hold: (i) $r_i^{-1}(v) = \bar{r}_i^{-1}$, (ii) $r_i^{-1}(u) = 1$, or (iii) the first derivative of $k_i(r_i^{-1}(u))$ is discontinuous at $r_i^{-1}(u)$.

Finally, we turn our attention to the intermediate phase (i/2). During this phase, only stage $i$ is being processed, and neither of the forces that impact work intensities in the other two phases is active.

Figure 7. Upstream upfront crashing. (During its overlap with stage $i+1$, stage $i$ starts at maximum crashing, $r_i = \bar{r}_i$, and returns to regular intensity, $r_i = 1$, towards its completion at $d_i$.)

As a consequence, we should expect two properties of the optimal crashing policies during phase i/2. First, work intensities should strictly exceed those during the other two phases unless they are at the extreme points or where marginal crashing costs are discontinuous. Second, work intensities during this phase should comply with the established results in the literature, i.e., should (must) be kept constant (for strictly convex cost curves). The following proposition formalizes our expectations.

Proposition 3. Let $u \in [d_{i-1}, s_{i+1}]$. If $v \notin [d_{i-1}, s_{i+1}]$, then $r_i^{-1}(u) \ge r_i^{-1}(v)$ in any optimal solution. Moreover, if $r_i^{-1}(u) = r_i^{-1}(v)$ in any optimal solution, then (at least) one of the following conditions must hold: (i) $r_i^{-1}(v) = \bar{r}_i^{-1}$, (ii) $r_i^{-1}(u) = 1$, or (iii) the first derivative of $k_i(r_i^{-1}(u))$ is discontinuous at $r_i^{-1}(u)$. Else, if $v \in [d_{i-1}, s_{i+1}]$, then $r_i^{-1}(u) = r_i^{-1}(v)$ if $k_i(r_i^{-1})$ is strictly convex; otherwise, an optimal solution exists where $r_i^{-1}(u) = r_i^{-1}(v)$.

We have thus shown that we can limit our attention to remarkably simple crashing policies. In each stage the work intensity initially increases monotonically, and after reaching a plateau of constant intensity, will eventually decrease again. This has several implications. First, implementing strategies where intensities increase and decrease gradually is usually smoother than trying to force rapidly alternating work intensities onto people or machines. Second, many successful projects follow such a pattern naturally. Personnel is built up after ground rules have been determined in each stage and is reduced towards work wrap-up. At Toyota, for example, most design engineers are only temporarily assigned full time to a single project during peak periods, but rotate otherwise between different projects (Ward et al. 1995). Conversely, they point to the pitfalls of the "end-of-term syndrome," where work is delayed as much as possible towards a deadline. Our results indicate that in this case the increased work intensities towards the end of the stage (Phase 3) will adversely impact total development costs. The result thus emphasizes the importance of setting proper incentive structures for stage managers that aim at minimizing system costs, rather than stage costs that managers are typically held responsible for.


Third, even if the probability of rework functions cannot be estimated reliably, the results provide general guidelines to structure the work intensities over the course of the stages. If the probabilities can be estimated, though, then it is still difficult to compute exact levels of work intensities, because this requires explicit functional forms for all $r_i^{-1}(t)$. In the next section, we will therefore restrict our focus to linear crashing costs and show how to solve the problem under this assumption. Because most managerially relevant cost curves can be approximated by piecewise linear functions, the generality of our results suffers little from this additional assumption.

5.2. The Linear Cost Case

In this section, we will discuss the case where crashing costs $k_i$ are linear in the time crashed, and rework costs $c_i$ are linear in the amount of rework $H_i$. We will first show that under these conditions, remarkably simple optimal solution structures obtain, and will subsequently exploit them to reformulate the OCP as the OCP/Lin. In Appendix 2, we provide an efficient solution for the OCP/Lin. We also note that the extension to piecewise linear costs is straightforward, albeit at considerable notational tedium. The observation is nevertheless important, because most managerially relevant cost curves can be approximated by piecewise linear functions. Thus, the methods described here also constitute the basis for solving the general case discussed before. The following proposition illustrates the simplicity of optimal crashing policies in the case of linear costs.

Proposition 4 (Linear Costs). There is an optimal policy $r_i^{-1}(t)$ and $x_i, z_i \ge 0$ associated with it, such that $r_i^{-1}(t) = \bar{r}_i^{-1}$ for $t \in [s_i + x_i, d_i - z_i]$ and $r_i^{-1}(t) = 1$ otherwise.

Thus, under a linear cost regime, each stage switches the level of work intensity at most twice: from regular intensity at the beginning, to maximum intensity during an intermediate period, back to regular intensity towards the end (see Figure 8). With the principal structure of the optimal solution known, it is now sufficient to determine the optimal overlap for each stage, and the optimal timing for the two changes in work intensity.

Figure 8. Optimal crashing policy for piecewise linear costs. (Stages 0 through $n$ overlap Sashimi-style; within stage $i$, the overlap $y_i$, the initial regular-intensity interval $x_i$, the crashed interval at $\bar{r}_i$, and the final regular-intensity interval $z_i$ are marked.)

With $x_i$ and $z_i$ as defined in the corollary, we can thus formulate the Overlapping-Crashing Problem with Linear Costs (OCP/Lin) as follows:

$$\min C = \sum_{i=1}^{n} c_i \cdot H_i + \sum_{i=0}^{n} k_i \cdot (1 - \bar{r}_i) \cdot (T_i + H_i - x_i - z_i), \qquad (12)$$

$$\bar{r}_i \cdot (T_i + H_i) + (1 - \bar{r}_i) \cdot (x_i + z_i) = d_i - s_i \quad \forall i, \qquad (13)$$

$$x_i + z_i \le T_i + H_i \quad \forall i, \qquad (14)$$

$$x_i, z_i \ge 0 \quad \forall i, \qquad (15)$$

and (3) to (6).

The objective function (12) is the sum of rework and crashing costs, whereas constraint (13), similarly to constraint (2), defines the stage durations and thus, in unison with constraint (3), their start and completion times. Constraints (14) bound the durations of regular work intensity by the respective maximum stage durations, and (15) are the nonnegativity constraints. Given the simple optimal solution structure, the expected rework in Equation (10) can now be expressed as

$$H_i = \int_{0}^{x_i} Q_i(t) \cdot \Pi_i(t)\,dt + \frac{1}{\bar{r}_i} \int_{x_i}^{y_i} Q_i(t) \cdot \Pi_i(t)\,dt \qquad (16)$$

with probability of rework function

$$\Pi_i(t) = P_i\!\left(\frac{\max\{y_i - z_{i-1},\, t\} - t}{\bar{r}_{i-1}} + y_i - \max\{y_i - z_{i-1},\, t\}\right) \qquad (17)$$

and impact function

$$Q_i(t) = q_i\!\left(\min\{t,\, x_i\} + \frac{\max\{0,\, t - x_i\}}{\bar{r}_i}\right). \qquad (18)$$
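Under the policy structure of Proposition 4 the intensity profiles are step functions, so (16)–(18) can be checked against the general integral. The following sketch assumes the expected_rework helper, P, and q from the sketch following Equation (11); the switch points below are arbitrary illustrative values, not estimates from the paper.

```python
def step_profile(start_reg, end_reg, length, r_bar):
    """Crashing factor over [0, length]: regular before start_reg and after length - end_reg."""
    return lambda t: 1.0 if (t < start_reg or t > length - end_reg) else r_bar

# Illustrative switch points for the overlap window of stage i (length y_i):
# downstream runs at regular intensity for x_i time units, then crashes;
# upstream crashes until z_{i-1} time units before its completion.
y_i, x_i, z_im1 = 30.0, 10.0, 5.0
r_down = step_profile(x_i, 0.0, y_i, 0.83)     # stage i during phase i/1
r_up = step_profile(0.0, z_im1, y_i, 0.80)     # stage i-1 during its final phase
H_lin = expected_rework(y_i, r_down, r_up, P, q)   # matches (16)-(18) up to discretization error
print(round(H_lin, 2))
```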

The above expressions greatly facilitate solving the OCP, because only three variables per stage must be computed, rather than the infinite vector $r_i^{-1}(t)$. Moreover, with (17) and (18), rework probabilities and impact can simply be expressed in terms of the standard probabilities and impacts. To compute solutions, we define the marginal costs $\lambda(\cdot) = \partial C/\partial(\cdot) \cdot (\partial d_n/\partial(\cdot))^{-1}$. It follows by the Kuhn-Tucker conditions that all free variables in an optimal solution must yield the same marginal costs $\lambda$. After some algebra, this yields for given marginal costs $\lambda$ the following necessary conditions for optimality:

$$\frac{\partial H_i}{\partial y_i} = \frac{\lambda}{\bar{r}_i \cdot (\lambda + k_i) - k_i - c_i}, \qquad (19)$$

$$\frac{\partial H_i}{\partial x_i} = -\frac{(1 - \bar{r}_i) \cdot (\lambda + k_i)}{\bar{r}_i \cdot (\lambda + k_i) - k_i - c_i}, \qquad (20)$$

$$\frac{\partial H_i}{\partial z_{i-1}} = -\frac{(1 - \bar{r}_{i-1}) \cdot (\lambda + k_{i-1})}{\bar{r}_i \cdot (\lambda + k_i) - k_i - c_i}. \qquad (21)$$


Table 2. The decision variables as a function of the Lagrangian multiplier $\lambda$.

Case 1: $-\lambda \le \min\{k_{i-1}, k_i\}$.
Overlap: $q_i \cdot P_i(y_i) = \dfrac{\lambda}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$; start crashing: $x_i = y_i$; end crashing: $z_{i-1} = T_{i-1} + H_{i-1} - x_{i-1}$.

Case 2: $k_i < -\lambda < k_{i-1}$.
Overlap: $q_i \cdot P_i(y_i) = \dfrac{\bar{r}_i(\lambda + k_i) - k_i}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$; start crashing: $q_i \cdot P_i(y_i - x_i) = \dfrac{\bar{r}_i(\lambda + k_i)}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$; end crashing: $z_{i-1} = T_{i-1} + H_{i-1} - x_{i-1}$.

Case 3: $k_{i-1} < -\lambda < k_i$.
Overlap: $q_i \cdot P_i\!\left(\dfrac{y_i - z_{i-1}}{\bar{r}_{i-1}} + z_{i-1}\right) = \dfrac{\lambda}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$; start crashing: $x_i = y_i$; end crashing: $q_i \cdot P_i(z_{i-1}) = \dfrac{-k_{i-1}}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$.

Case 4: $\max\{k_{i-1}, k_i\} \le -\lambda \le k_{i-1} + k_i$.
Overlap: $q_i \cdot P_i\!\left(\dfrac{y_i - z_{i-1}}{\bar{r}_{i-1}} + z_{i-1}\right) = \dfrac{\bar{r}_i(\lambda + k_i) - k_i}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$; start crashing: $q_i \cdot P_i(y_i - x_i) = \dfrac{\bar{r}_i(\lambda + k_i)}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$; end crashing: $q_i \cdot P_i(z_{i-1}) = \dfrac{-(1 - \bar{r}_i)(\lambda + k_i) - k_{i-1}}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$.

Case 5: $-\lambda \ge k_{i-1} + k_i$.
Overlap: $q_i \cdot P_i\!\left(\dfrac{y_i - z_{i-1}}{\bar{r}_{i-1}} + z_{i-1}\right) = \dfrac{\bar{r}_i(\lambda + k_i) - k_i}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$; start crashing: $q_i \cdot P_i\!\left(\dfrac{y_i - x_i - z_{i-1}}{\bar{r}_{i-1}} + z_{i-1}\right) = \dfrac{\bar{r}_i(\lambda + k_i)}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$; end crashing: $q_i \cdot P_i(z_{i-1}) = \dfrac{-\bar{r}_i \cdot k_{i-1}}{\bar{r}_i(\lambda + k_i) - k_i - c_i}$.

In general, finding a solution to this system of equations depends on the properties of the functional forms of (17) and (18), such that uniqueness of solutions is not always warranted. However, constant impact functions are oftentimes suitable approximations (Carrascosa et al. 1998), in which case (18) are constants $q_i$, yielding for the expected rework

$$\begin{aligned}
\frac{H_i}{q_i} = {}& \frac{\bar{r}_{i-1}}{\bar{r}_i} \cdot \left[ h_i\!\left(\frac{\max\{0,\, y_i - x_i - z_{i-1}\}}{\bar{r}_{i-1}} + z_{i-1}\right) - h_i(z_{i-1}) \right] \\
& + h_i(\min\{y_i,\, z_{i-1}\}) + \left(\frac{1}{\bar{r}_i} - 1\right) \cdot h_i(\min\{y_i - x_i,\, z_{i-1}\}) \\
& + \bar{r}_{i-1} \cdot \left[ h_i\!\left(\frac{\max\{0,\, y_i - z_{i-1}\}}{\bar{r}_{i-1}} + z_{i-1}\right) - h_i\!\left(\frac{\max\{0,\, y_i - x_i - z_{i-1}\}}{\bar{r}_{i-1}} + z_{i-1}\right) \right]. \qquad (22)
\end{aligned}$$

Equations (19) to (21) simplify to five cases, as shown in Table 2. In Appendix 2, we present a computationally efficient solution procedure that determines the efficient frontier of time-cost trade-offs through a binary search over the marginal costs $\lambda$, utilizing these results. Moreover, from this scenario we can derive insights on how evolution and sensitivity impact crashing and overlapping.
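The Appendix 2 procedure itself is not reproduced in this section; the Python fragment below only sketches the outer binary search over the marginal cost parameter that it relies on, with the per-value subproblem (solving the Table 2 conditions) left as a stub. The toy stand-in for that subproblem is purely illustrative.

```python
def min_cost_for_deadline(deadline, solve_for_lambda, lam_hi=1e4, tol=1e-4):
    """Outer loop sketch: binary search on the marginal cost parameter.

    solve_for_lambda(lam) is assumed to return (completion_time, cost) of the
    policy satisfying the Table 2 conditions for that value; that inner step is
    the Appendix 2 procedure and is only stubbed here.
    """
    lo, hi, best = 0.0, lam_hi, None
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        finish, cost = solve_for_lambda(lam)
        if finish > deadline:        # deadline violated: accept a higher marginal cost
            lo = lam
        else:                        # feasible: try to pay less per saved day
            best, hi = (finish, cost), lam
    return best

# Toy stand-in for the inner step: one stage of 100 days, crashable to 80 days
# at 5 cost units per day, crashed fully whenever the marginal cost exceeds 5.
toy = lambda lam: (80.0, 100.0) if lam > 5.0 else (100.0, 0.0)
print(min_cost_for_deadline(90.0, toy))    # -> (80.0, 100.0)
```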

6. Insights and Results

In this section, we will use the previous results to derive generalized guidelines for overlapping and crashing under different scenarios, particularly regarding different patterns of information evolution and stage sensitivity to upstream changes. Moreover, in the context of the simple two-stage example presented earlier, we will discuss how different parameter values impact the value of optimal strategies.

6.1. The Impact of Evolution and Sensitivity

Note that the left-hand sides of the entries in Table 2 are the PI-functions, that is, the product of probability and impact. The shape of these functions reflects some of the notions of upstream evolution and downstream sensitivity as defined by Krishnan et al. (1997). In particular, the slope of the curve, that is, the rate at which the probability of rework vanishes, represents the speed at which upstream information evolves. On the other hand, high (low) PI-values, and, in particular, high (low) values for $q_i$ are indicative of high (low) downstream sensitivity. Figure 9 shows generic PI-functions that map onto the four evolution-sensitivity constellations identified by Krishnan et al. (1997). For example, functions SH and SL evolve slowly—that is, certainty regarding the upstream output is only obtained when upstream nears finalization. However, sensitivity to upstream changes is much higher for SH, indicated by higher PI-values. Curves FH and FL also differ in their relative sensitivities, but here information evolves much faster than with SH and SL—that is, the probability of rework vanishes faster in these cases.

Figure 9. Evolution and sensitivity for different probability curves. (Generic PI-functions $q_i P_i(t)$ over the overlap for four constellations: SH—slow evolution, high sensitivity; SL—slow evolution, low sensitivity; FH—fast evolution, high sensitivity; FL—fast evolution, low sensitivity.)


This interpretation, together with the results from Table 2, helps derive general guidelines for overlapping and crashing for the four different scenarios. First, note that the given marginal costs $\lambda$, and in particular the optimal marginal costs, uniquely define the $q_i \cdot P_i$ values in Table 2, implying that the degree of overlapping $y$ depends only on upstream evolution and downstream sensitivity. Similarly, the optimal starting time for crashing $x^*$ is determined by the stage's sensitivity and by the speed of upstream evolution. In contrast, the time to resume normal intensity $z^*$ depends on the stage's evolution and downstream sensitivity.

Moreover, Figure 9 implies that overlapping $(y_i)$ and downstream crashing during the overlap $(y_i - x_i)$ are highest for FL and lowest for SH. In contrast, upstream crashing is more extensive for SH ($z$ is smaller) than for FL. Thus, under fast evolution and low sensitivity, the focus should be primarily on overlapping and downstream crashing, whereas for the other extreme combination, slow evolution and high sensitivity, the focus should be on upstream crashing.

Values for FH and SL will generally be between those for the two extreme constellations, but no particular bias towards overlapping or crashing either stage exists, suggesting a more "balanced" approach. Interestingly, compressing stages with slow evolution and low sensitivity yields benefits primarily if overall process compression is high; otherwise, it is more advisable to compress stages with fast evolution and high sensitivity instead. To see this, note first that $dP_i(y_i)/d(-\lambda)$ and $dP_i(y_i - x_i)/d(-\lambda)$ are strictly positive and that $dP_i(z_{i-1})/d(-\lambda)$ is strictly negative. Therefore, if $-\lambda$ is low, or equivalently, if overall compression is limited, overlap $y_i$ and downstream compression $y_i - x_i$ will be lower for SL than for FH. Similarly (because $P_i(z_{i-1})$ is large for small $-\lambda$), the value for $z$ in Scenario SL will be larger than that for Scenario FH, so that there will be less crashing during the overlap under the SL regime than under FH. The reverse is true when total development time is compressed considerably. Figure 10 summarizes these findings and provides guidelines for management when the probability of rework function cannot be computed reliably, but evolution and sensitivity patterns can be assessed roughly.

Figure 10. Crashing and overlapping for different evolution-sensitivity constellations. (A two-by-two matrix over evolution (slow/fast) and sensitivity (low/high): fast evolution with low sensitivity calls primarily for overlapping and downstream crashing; slow evolution with high sensitivity calls primarily for upstream crashing; the remaining two constellations call for balanced overlapping and up-/downstream crashing, with the slow-evolution, low-sensitivity case noted as mainly relevant under high compression.)

6.2. Results for the Two-Stage Example

To gain additional qualitative insights, we solved the simple two-stage example presented earlier for a variety of different parameters and with additional constraints to compare optimal results to those that better reflect common industry practices. Furthermore, we evaluated the impact of evolution and sensitivity on the performance of suboptimal solutions and how parameter errors impact product development time and cost.

Mixed vs. Pure Strategies. First, we compared the mixed approach of jointly crashing and overlapping with the two pure approaches of crashing only and overlapping only. To better understand the dynamics of the approaches, we first computed the smallest possible duration $(d_{\min})$ for the joint approach by setting crashing and overlapping levels to their maximum possible values. Not surprisingly, $d_{\min}$ decreased considerably, from 131 days for the pure approaches to 107.3 days for the mixed approach. We then solved the OCP for every integer value between 160 days $(T_0 + T_1)$ and 131 days and computed the additional costs of the pure strategies relative to the optimal mixed strategy. On average, pure overlapping was 36% more expensive than the mixed strategy, whereas pure crashing was on average more than four times as expensive as the mixed strategy. However, this comparison is strongly biased towards overlapping, because the linear cost assumption for crashing makes crashing for low process compression relatively unprofitable. For piecewise linear cost functions with initially lower crashing costs, we should therefore expect that pure crashing becomes relatively less costly, whereas pure overlapping will become relatively more costly, because the optimal mixed strategy will employ more crashing earlier on.

Optimal vs. Common Standard Strategies. More interesting than the comparison with the two pure strategies is to benchmark the optimal mixed policy against other mixed policies. Therefore, we looked at three plausible alternatives. Because stable workloads are oftentimes considered desirable, Alternative I keeps crashing at a constant level over the entire duration of each stage. Another typical management tool for development projects is to provide separate budgets for each stage, especially when stages correspond to different organizational units. This is taken into account in Alternative II, where both stages receive a fixed proportion¹ of an available budget. Crashing of the stages is performed optimally according to the results of this paper, but in accordance with the budget constraint. Finally, combining both approaches, Alternative III assumes budget allocation and constant crashing simultaneously.

Figure 11 compares the strategies relative to the optimal solution. The results suggest that in this case penalties for working with constant work intensity (Alternative I) or for budgeting stages (Alternative II) are not overly excessive, averaging 5% over all durations in both cases.


Figure 11. Performance of nonoptimal mixed strategies. (Cost as a percentage of optimal cost, from 100% to about 140%, plotted against development duration from 159 down to 110 days, for Alternative I: constant crashing; Alternative II: budgeted, optimal crashing; and Alternative III: budgeted, constant crashing.)

penalties for crashing are strongly dependent on the overalllevel of process compression and can be as high as 18% formedium levels of overall compression. In comparison, thecurve for Alternative II is much flatter, with a maximumpenalty below 12%. In Alternative III, repercussions of thetwo alternatives are compounded, resulting in an averagecost penalty of 13%. It is also noteworthy that the penaltycan be substantially higher than the sum of the individualpenalties.In general, computing this penalty facilitates a better

understanding of the disadvantages of traditional policies.These disadvantages can then be compared with poten-tial benefits. For example, if changing work intensities arefacilitated by adding more person power, then there is therisk that additional communication and training require-ments actually slow down the process (Brooks 1995).While the model addresses this by imposing an upperbound on the work intensity, overly optimistic estimatesof the upper bound may indeed induce the paradox ofslowing down work by adding person power. Computingcurves such as in Figure 11 help evaluate that trade-off.It is then up to management to decide whether or not tobuffer against the risk of overly aggressive work intensi-ties by adopting a (suboptimal) strategy of constant workintensities.

Impact of Upstream Evolution and Downstream Sensitivity. The overall shape in Figure 11 also prevailed when solving the problem with different parameters. In particular, we varied the values for a and q, the latter having so far been implicitly set to unity. Note that a large (small) absolute value of q reflects high (low) sensitivity, whereas large positive (negative) values of a yield fast (slow) evolution patterns. For all evolution-sensitivity constellations, the three distinct areas evident in Figure 11 could be observed. It is noteworthy, though, that fast evolution tends to move the two humps closer together, indicating a consistently worse performance of suboptimal strategies when evolution is fast. The reason is that for fast evolution, overlapping becomes more and more lucrative, so that in suboptimal solutions more and more crashing is performed during the overlap, making both humps wider until they eventually merge into one continuous ridge for very fast evolution. Low sensitivity, on the other hand, tended to suppress the first hump in the curve, while pronouncing the second hump more distinctly. The reason for the first phenomenon is that medium compression under low sensitivity can be accomplished almost exclusively by overlapping, thus closely following the optimal strategy. Only for high levels of compression does crashing become lucrative, but then crashing is relatively more expensive because of the large overlaps. In summary, this suggests that suboptimal strategies underperform significantly when processes with low downstream sensitivity must be highly compressed, or for medium compression of processes with high downstream sensitivity.

To investigate the impact of evolution and sensitivity on the average cost increase for the three different mixed policies, we computed the costs for fast (a = −0.01) and slow (a = 0.03) evolution over a range of sensitivities from |q| = 0.1 to 1. The results in Figure 12 suggest that, with decreasing sensitivity, the costs of constant crashing approach those of the optimal policy, whereas the budgeted approaches diverge strongly for low sensitivities. The reason for this behavior is that decreasing sensitivity makes overlapping more and more lucrative for downstream, while budgeting prevents downstream from overlapping more. On the other hand, because for low sensitivity the costs of overlapping become minimal, there is very little benefit in optimizing crashing policies during overlapping, so that constant crashing policies are near optimal for very low sensitivities. Interestingly, for slow evolution the extrema occur at an interior point (see Endnote 2). In this case, for very high sensitivities overlapping is employed minimally, so that the impact of suboptimal crashing during the overlap is overall less pronounced than for lower values of sensitivity, where overlapping is more aggressively employed. Similarly, the costs of budgeting increase again relative to the optimal strategy, because more downstream budget is spent on overlapping than in the optimal solution. Together, these drivers lead to the inverse relation between the performance of constant crashing and that of budgeting stages relative to stage sensitivity, as illustrated in Figure 12.


Figure 12. Relative advantage of optimal policies as a function of evolution and sensitivity. (Two panels, fast evolution (a = −0.01) and slow evolution (a = −0.03); cost as a percentage of the optimal cost, 100%–140%, plotted against the impact factor |q| from 0.2 (low sensitivity) to 1.0 (high sensitivity); curves: Constantly Crashed Only, Budgeted and Crashed Optimally, Budgeted and Crashed Constantly.)

It should be noted that the magnitude of the curves is highly dependent on the cost parameters. However, the general shapes of the functions, and thus the general insights, are quite robust to different cost assumptions. Moreover, in the two-stage example the negative impact of budgeting is likely to be overestimated relative to multistage scenarios. The reason is simply that the degrees of freedom for upstream are reduced to crashing only. Conversely, the impact of constant crashing is underestimated in the two-stage example, because crashing during the overlap has adverse effects on downstream only. In multistage environments, constant crashing has adverse impacts on both upstream and downstream for all but the first and the last stages. Again, this chiefly affects the magnitude of the curves but not their overall shapes. By focusing on the two-stage example, we were therefore able to exhibit the impact of evolution and sensitivity more clearly, without sacrificing general insights.

Sensitivity to Parameter Errors. To evaluate the sensitivity to estimation errors, we looked at the case where the parameter estimates were a = 0.03 and q = 0.5 and computed the corresponding optimal policies for three different target completion times D ∈ {145, 130, 115}. We then computed the actual completion times for these policies if the true value of one parameter at a time, or of both parameters concurrently, differed by ±10%, ±20%, or ±50% from those estimates. The average absolute (see Endnote 3) deviation from the target completion times for the resulting 3 × 18 scenarios was only 1.6%. However, even in the worst case (D = 115, a = 0.045, q = 0.75), the resulting completion time of 124.2 days was only 8% above the target duration of 115 days. This suggests that the general structure of the optimal schedule is quite robust in terms of completion times. Deviations from the target tended to increase with the degree of overall compression; group averages for D ∈ {145, 130, 115} were 0.3%, 1.3%, and 2.5%, respectively.

To evaluate the costliness of estimation errors, we computed for all 54 scenarios the optimal costs for the resulting actual completion times and compared them to the true costs. For example, for our worst case the cost of finishing in 124.2 days was $504. Had the true parameters been known at the beginning, the same completion time could have been accomplished at a cost of $386. Thus, in this case total costs for the resulting completion time are 30% above optimal costs. Overall, the average costs for all 54 scenarios were 5.4% above optimal costs. Again, higher compression targets were more sensitive to estimation errors; group averages for D ∈ {145, 130, 115} were 1.5%, 5.5%, and 9.1%, respectively.
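The perturbation grid behind these numbers can be sketched as follows (our illustration, not the authors' code); `plan_policy` and `actual_completion_time` are hypothetical stand-ins for the OCP solver and for a simulator that executes the planned policy under the true parameters.

```python
from itertools import product

A_EST, Q_EST = 0.03, 0.5          # parameter estimates used for planning
TARGETS = (145, 130, 115)          # target completion times D
ERRORS = (0.10, 0.20, 0.50)        # relative estimation errors

def scenarios():
    """Yield (target D, true a, true q): 3 targets x 18 perturbations = 54 cases."""
    for D, e, sign in product(TARGETS, ERRORS, (+1, -1)):
        yield D, A_EST * (1 + sign * e), Q_EST                     # only a misestimated
        yield D, A_EST, Q_EST * (1 + sign * e)                     # only q misestimated
        yield D, A_EST * (1 + sign * e), Q_EST * (1 + sign * e)    # both misestimated

def deviation_report(plan_policy, actual_completion_time):
    """Relative completion-time deviation when planning with the estimates."""
    rows = []
    for D, a_true, q_true in scenarios():
        policy = plan_policy(D, a=A_EST, q=Q_EST)                  # plan with estimates
        d_actual = actual_completion_time(policy, a=a_true, q=q_true)
        rows.append((D, a_true, q_true, abs(d_actual - D) / D))
    return rows
```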

The Value of Updated Information. The worst case was further analyzed to determine how updating information during the development process impacts costs and duration. In particular, we assumed that the process starts with a compression policy based on the estimates a = 0.03 and q = 0.5, and that at some time during the development process the true values a = 0.045 and q = 0.75 become known. From then on, the process continues optimally for the remaining time. Figure 13 shows how costs and duration depend on the timing of discovering the true parameter values.


Figure 13. The value of updated information. (Cost as a percentage of the optimal cost, 100%–130%, and duration as a percentage of the target duration, 100%–109%, plotted against the time when the true parameters are discovered, expressed as a percentage of development time from 0% to 100%.)

For instance, even if the true parameters are discovered when 50% of the development process is already completed, the project can still be completed on target, but at costs that are 18% higher than if the true values had been known at the beginning. No additional costs occur when the true parameters are discovered within the first 10% of development time, but discovering them after 90% project completion no longer yields any benefit in our case. The results indicate that, while the overall robustness of solutions to estimation errors is already fairly high, updating estimates during the development process and adjusting the compression policy for the remaining time can further increase robustness.
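A minimal sketch of this updating experiment (ours, with hypothetical helper routines): the schedule is planned with the estimated parameters, executed until a fraction t of the development time has elapsed, and then re-optimized for the remainder once the true parameters are revealed.

```python
def cost_and_duration_with_update(t, plan_policy, execute_until, reoptimize,
                                  D=115, est=(0.03, 0.5), true=(0.045, 0.75)):
    """Total cost and completion time if the true (a, q) surface at fraction t.

    `plan_policy`, `execute_until`, and `reoptimize` are hypothetical stand-ins
    for the OCP solver and a simulator of the partially executed schedule.
    """
    a_est, q_est = est
    a_true, q_true = true
    initial = plan_policy(D, a=a_est, q=q_est)              # plan with the estimates
    state = execute_until(initial, t, a=a_true, q=q_true)   # progress and cost so far
    rest = reoptimize(state, D, a=a_true, q=q_true)         # optimal completion from here
    return state.cost + rest.cost, rest.finish_time
```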

7. Conclusion

This research has discussed two common tools for expediting product development processes: overlapping and crashing. By exhibiting the interdependencies between these two tools, we have bridged the gap between two separate literature streams and demonstrated that the decisions to overlap and crash should be made jointly to exploit their full potential. For cases where all input information (i.e., cost and rework functions) is obtainable, we introduced an algorithm that computes the efficient frontier of time-cost trade-offs. Knowledge of the frontier serves as a powerful tool for market entry and timing decisions, as well as for budgetary decisions. The efficiency of the algorithm also allows for analyzing the sensitivity of the results to the input parameters. Highly optimistic (pessimistic) assumptions can be employed to bound the efficient frontier from below (above). Also, the reader should note that the underlying problem structure is dynamic and that system feedback evolving over time can be used to resolve the problem with updated information. For multistage problems in particular, the problem can be resolved after the completion of each stage for the remaining stages, with the actual stage completion time as an input parameter rather than a decision variable. Similarly, the problem can be resolved when updated information regarding the probability of rework emerges. As Figure 13 and the corresponding discussion demonstrate, the value of such updated information can be substantial. Thomke and Bell (2001) suggest resolving technical uncertainty through appropriate testing strategies, which essentially leads to improved probability of rework functions at the cost of additional testing. The algorithm proposed here can be employed to evaluate the benefits of reducing the probability of rework and to compare them to the costs of testing.

The testing strategies in Thomke and Bell may also help determine the probability of rework functions as input for our model. So far, no rigorous research exists that addresses the question of how to best estimate probability of rework functions. In our experience, design engineers typically felt quite comfortable providing probability functions. Other researchers (e.g., Carrascosa et al. 1998) report similar observations. In several projects in the aerospace industry, we found that development projects evolved around a comprehensive list of parameters to be determined ("TBDs"). At the beginning, all parameters are considered TBDs, and an essential part of project planning is to establish when TBDs are supposed to be locked in. The pattern in which TBDs vanish is often a good starting point for estimating probability of rework functions. Future research should explore best practices in establishing rework functions and investigate their robustness.

However, the insights derived in this paper help in structuring the overall process even if the probability of rework cannot be assessed properly. In particular, we have shown that the work intensities for each stage should always follow a certain pattern, initially increasing and then decreasing towards the end. Moreover, if costs are (piecewise) linear, then changes are stepwise, and therefore particularly suitable for implementation. Such policies should be implemented even if the necessary data for optimization are unavailable.
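Under piecewise linear crashing costs, the recommended profile can be summarized as a stylized piecewise-constant work intensity (our notation; the interior switch points t_i^1 ≤ t_i^2 are illustrative, not parameters of the paper):

\[
r_i^{-1}(t) =
\begin{cases}
1, & s_i \le t < t_i^{1},\\
\bar{r}_i^{-1}, & t_i^{1} \le t \le t_i^{2},\\
1, & t_i^{2} < t \le d_i,
\end{cases}
\qquad s_i \le t_i^{1} \le t_i^{2} \le d_i .
\]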

Additional questions arise from this insight. First, does this pattern remain optimal if the assumption of Sashimi-style solutions is abandoned? It is straightforward to show that Propositions 1 and 2 remain valid in that case as well. However, in the second phase of each stage, the desire to delay action now opposes that of generating information for downstream, so that erratic patterns of changing work intensities may prevail.

Second, implementing strategies where stages concentrate their work efforts towards the middle may face some resistance, particularly when stages are performed by independently managed teams. It is partly human nature to delay important decisions. For stage managers, this means that they might tend to delay work content towards the end of each stage. Unfortunately, such behavior is also rational, if only in a myopic, stage-focused manner. Equation (11) reveals why. Given the choice of increasing the work intensity at time u or at time v (u < v), the amount of rework is less if the intensity is increased at time v, because P_i(v) ≤ P_i(u). Consequently, the "rational" stage manager who minimizes effort and costs at the stage level will try to reap the benefits of delaying decisions, thus robbing downstream of the advantage of earlier information. The results are suboptimal intensity structures on the process level that are not in accordance with the findings in this paper.


Conventional incentive structures tend to reward stage managers who keep their costs low and thus encourage suboptimality. Finding incentive structures that result in crashing and overlapping structures as suggested in this research is an interesting and challenging problem for future research.

Third, a related problem is that of resource availability during the progress of each stage. Implicitly, it was assumed that resources are readily available whenever needed. This may not be the case if personnel are still involved in a previous project or must be withdrawn to serve on future ones. Consequently, consecutive projects should be looked at concurrently. The insights derived here can serve as the basis for studying such a comprehensive, dynamic view of product development processes.

The results from the previous sections provide further guidelines for compressing development processes. The roadmap in Figure 10 shows general overlapping and crashing patterns for different evolution and sensitivity constellations under low and high overall compression. In particular, it shows that the optimal time to start crashing a stage depends on its sensitivity to changes and on the speed of upstream evolution. Conversely, when to stop crashing a stage depends on the speed of its own evolution and on downstream sensitivity. Thus, even if the probability of rework and impact functions cannot be quantified, the general structure of crashing can be determined based on a simple assessment of the relative values of evolution and sensitivity.

The generalized insights from the two-stage example illustrate that under low sensitivity, optimal crashing patterns provide few additional benefits over constant work intensities, but that stage budgeting may have particularly damaging effects. As Figure 12 shows, the converse is not necessarily true for slow evolution. Moreover, the suboptimality gap tends to be widest (in relative terms) when evolution is fast. These results indicate, therefore, when a more thorough investigation of crashing and overlapping patterns is warranted.

In conclusion, we would like to point out four possible limitations and pitfalls of the approaches presented here. First, we have implicitly assumed that all stages are on the critical path of the project. Development projects that exhibit a network structure are not fully captured by our approach. Berman (1964) introduces the concept of complex junctions, where more than one activity originates and/or terminates, and shows that in any optimal solution the sum of the time-cost slopes of activities leading into a junction must be equal in value and opposite in sign to the sum of the time-cost slopes emanating from the junction. In principle, we could simply adopt this approach, but doing so would require assessing joint probability of rework and impact functions for multiple overlaps. Given the difficulties in determining these, we see limited value in extending our approach to this case. Instead, we encourage managers to apply the general insights derived in this paper to determine overlap and crashing for complex junctions.
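Berman's junction condition can be restated compactly in our notation, with In(j) and Out(j) denoting the activities entering and leaving a complex junction j, and ∂c_a/∂t_a the time-cost slope of activity a:

\[
\sum_{a \in \mathrm{In}(j)} \frac{\partial c_a}{\partial t_a}
\;=\;
-\sum_{a \in \mathrm{Out}(j)} \frac{\partial c_a}{\partial t_a}.
\]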

Second, some evidence suggests that the benefits of overlapping vanish if the pace of technical progress within an industry is high or if innovations are radical (e.g., Cordero 1991, Eisenhardt and Tabrizi 1995, or Terwiesch and Loch 1999). While we do not have enough empirical evidence to back these studies, they are both intuitively appealing to us and in accordance with our own observations. As reported in an earlier paper (Roemer et al. 2000), probability of rework-based approaches worked well for the development of rocket engines, particularly after structuring the process as suggested in Ahmadi et al. (2001) and Ahmadi and Wang (1999). The methods suggested in these papers and in the present one require extensive knowledge of the existing process. In incremental innovations with only moderate technical progress, and in situations where the underlying fundamentals are well understood and tested, these data are typically available and the suggested methods are appropriate. In fast-paced environments or for completely new products, this information is often not available and the approaches suggested here may fail. Fortunately, this is not a major limitation of this research, because the overwhelming majority of product development projects are in the realm of incremental innovations and moderate to low technical progress (Whitney 1990).

Third, our model is deterministic and therefore does not directly address risk. While many facets of risk can be addressed by running sensitivity analyses on the input data and by dynamically applying the OCA as more information evolves over the course of the development process, our approach does not discriminate between overlapping and crashing. However, because the former's principal tenet is to work under uncertainty, overlapping can be considered inherently more risky than crashing (at least as long as crashing is limited to "reasonable" work intensities). Risk-averse project managers may thus prefer costlier solutions with less overlapping (and hence less risk) than our approach suggests. To some extent, simple modifications of our model can remedy this. For example, if risk can be characterized by the maximum level of uncertainty in a project, then the probability of rework can be limited from above in the OCP/Lin, and only minor changes to the OCA are necessary to solve the resulting problem. Similarly, schedule and cost uncertainties can be mitigated by imposing upper bounds on H_i and c_i · H_i, respectively. Repeated solving with different upper limits thus generates a time-cost-risk profile for a project. Contingency risks, on the other hand, remain outside the model's scope. Of course, the model can be applied reactively to the emerging postcontingency scenario, but it is unable to predict or proactively plan for contingencies.
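The risk-limiting modifications mentioned above amount to adding simple upper bounds to the OCP/Lin; in our notation, with Q̄, H̄_i, and B̄_i denoting manager-chosen limits on the admissible probability of rework, the additional downstream work, and the rework budget of stage i:

\[
Q_i(t) \le \bar{Q} \quad \forall t, \qquad
H_i \le \bar{H}_i, \qquad
c_i \cdot H_i \le \bar{B}_i .
\]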

Finally, our model should not serve as an excuse to neglect proper upfront work, that is, determining suitable process structures and product development strategies first. On the contrary, their full potential can only be realized if they are based on solid upfront work. Moreover, by no means should the reader conclude from our model that crashing will always lead to shorter development times.


The "crashing paradox" (Brooks 1995) is real and should not be ignored. Any overly optimistic estimate of the maximum work intensity presents the danger of inducing this paradox. While our model helps determine the potential benefits of changing work intensities, it is up to management to make sure that overall crashing levels remain within reason.

Appendix 1. Proofs of Propositions

Proof of Proposition 1. Define the Lagrangian multiplier $\lambda_w$ as
\[
\lambda_w = \frac{\partial \sum_{j=0}^{n}\bigl(c_j(H_j)+\int_{s_j}^{d_j} k_j(r_j^{-1}(t))\,dt\bigr)}{\partial r_i(w)}
\cdot\left[\frac{\partial \sum_{j=0}^{n}(d_j-s_j-y_j)}{\partial r_i(w)}\right]^{-1}. \tag{23}
\]
Note that the first factor in (23) is the change of the objective function value due to a small change in the work intensity at time $w$. Similarly, the second term is the change of the completion time $d_n$, derived by recursively solving (3). Intuitively, (23) expresses the cost of decreasing the completion time by changing the work intensity at time $w$. Clearly, if that cost is higher at some time, say $u$, than at another time, say $v$, and if both variables are "free", that is, at neither bound, then by (slightly) increasing the work intensity at time $v$ and (slightly) decreasing the work intensity at time $u$, the same completion time can be obtained at lower cost. More formally, Lagrangian multiplier theory requires that $\lambda_u = \lambda_v$ for all free variables. After simplifying (23), we will exploit this fact and show that for free variables the necessary condition can only hold if $r_i^{-1}(u) < r_i^{-1}(v)$ for $u < v$. The first term in the numerator of (23) simplifies to
\[
\frac{\partial \sum_{j=0}^{n} c_j(H_j)}{\partial r_i(w)} = \frac{\partial c_i(H_i)}{\partial r_i(w)} = c_i'(H_i)\,\frac{\partial H_i}{\partial r_i(w)} \tag{24}
\]
with
\[
\frac{\partial H_i}{\partial r_i(w)} = -\frac{dt}{r_i^2(w)}\cdot\Bigl[Q_i(w)\,\phi_i(w) + \int_{w}^{d_{i-1}} Q_i'(t)\,\frac{\phi_i(t)}{r_i(t)}\,dt\Bigr] \equiv -\frac{dt}{r_i^2(w)}\cdot\Psi_i(w), \tag{25}
\]
whereas, if $k_i'(r_i^{-1}(w))$ exists, the second term yields
\[
\frac{\partial \int_{s_j}^{d_j} k_j(r_j^{-1}(t))\,dt}{\partial r_i(w)} = -\frac{dt}{r_i^2(w)}\cdot\Bigl[k_i'(r_i^{-1}(w)) + \frac{r_i'(d_i)}{r_i(d_i)}\,\bigl(1-\Psi_i(w)\bigr)\,k_i'(r_i^{-1}(d_i))\Bigr]. \tag{26}
\]
The denominator in turn becomes
\[
\frac{\partial \sum_{j=0}^{n}(d_j-s_j-y_j)}{\partial r_i(w)} = \frac{\partial (d_i-s_i)}{\partial r_i(w)}. \tag{27}
\]
To evaluate the right-hand side of (27), we take the derivative with respect to $r_i(w)$ on both sides of (2), which yields for the left-hand side of (2)
\[
\frac{\partial \int_{s_i}^{d_i} dt/r_i(t)}{\partial r_i(w)} = -\frac{dt}{r_i^2(w)} + \frac{\partial(d_i-s_i)}{\partial r_i(w)}\cdot\frac{1}{r_i(d_i)} \tag{28}
\]
and for the right-hand side
\[
\frac{\partial (T_i+H_i)}{\partial r_i(w)} = -\frac{\Psi_i(w)}{r_i^2(w)}\,dt, \tag{29}
\]
so that
\[
\frac{\partial(d_i-s_i)}{\partial r_i(w)} = \frac{r_i(d_i)}{r_i^2(w)}\,\bigl(1-\Psi_i(w)\bigr)\,dt. \tag{30}
\]
Inserting these results into (23) then yields, for $\lambda_u = \lambda_v$,
\[
\frac{c_i'(H_i)\,\Psi_i(u) + k_i'(r_i^{-1}(u))}{1-\Psi_i(u)} = \frac{c_i'(H_i)\,\Psi_i(v) + k_i'(r_i^{-1}(v))}{1-\Psi_i(v)}. \tag{31}
\]
Note that the Lagrangian multipliers must be positive in any optimal solution; otherwise, costs and project duration could be reduced concurrently, contradicting optimality. It follows from (31) that $\Psi_i(w) < 1$. Consequently, because $r_i^{-1}(t)$ is strictly positive, $u < v$ implies $\phi_i(u) > \phi_i(v)$, yielding for any nondecreasing impact function
\[
\Psi_i(u)-\Psi_i(v) = Q_i(u)\,\phi_i(u) - Q_i(v)\,\phi_i(v) + \int_{u}^{v} Q_i'(t)\,\frac{\phi_i(t)}{r_i(t)}\,dt
> \phi_i(v)\cdot\Bigl[Q_i(u)-Q_i(v)+\int_{u}^{v} Q_i'(t)\,dt\Bigr] = 0. \tag{32}
\]
Thus, (31) can only hold if $k_i'(r_i^{-1}(u)) < k_i'(r_i^{-1}(v))$, which, by the convexity of $k_i(r_i^{-1})$, implies $r_i^{-1}(u) < r_i^{-1}(v)$. $\square$

Proof of Proposition 2. As in the proof of Proposition 1, $\lambda_u = \lambda_v$ must hold for the Lagrangian multipliers as defined in (23) whenever neither of the first two conditions holds. In this case, however, the derivatives are different, and the first term in the numerator simplifies to
\[
\frac{\partial \sum_{j=0}^{n} c_j(H_j)}{\partial r_i(w)} = \frac{\partial c_{i+1}(H_{i+1})}{\partial r_i(w)} = c_{i+1}'(H_{i+1})\,\frac{\partial H_{i+1}}{\partial r_i(w)} \tag{33}
\]
with
\[
\frac{\partial H_{i+1}}{\partial r_i(w)} = \frac{dt}{r_i^2(w)}\cdot\int_{w}^{d_i} \frac{Q_{i+1}(\tau)\,\phi_{i+1}'(\tau)}{r_{i+1}(\tau)}\,d\tau, \tag{34}
\]
whereas, if $k_i'(r_i^{-1}(w))$ exists, the second term yields
\[
\sum_{j=1}^{i+1}\frac{\partial \int_{s_j}^{d_j} k_j(r_j^{-1}(t))\,dt}{\partial r_i(w)} = k_{i+1}(r_{i+1}^{-1}(d_i))\,\frac{\partial d_{i+1}}{\partial r_i(w)} - \frac{dt}{r_i^2(w)}\Bigl[k_i'(r_i^{-1}(w)) + \frac{r_i'(d_i)}{r_i(d_i)}\,k_i(r_i^{-1}(d_i))\Bigr]. \tag{35}
\]
The denominator in turn becomes
\[
\frac{\partial \sum_{j=0}^{n}(d_j-s_j-y_j)}{\partial r_i(w)} = \frac{\partial d_{i+1}}{\partial r_i(w)} = r_{i+1}(d_{i+1})\cdot\frac{\partial H_{i+1}}{\partial r_i(w)}. \tag{36}
\]
Inserting these results into $\lambda_u = \lambda_v$ finally yields
\[
\frac{k_i'(r_i^{-1}(u)) + \bigl(r_i'(d_i)/r_i(d_i)\bigr)\,k_i(r_i^{-1}(d_i))}{\int_{u}^{y_{i+1}} Q_{i+1}(\tau)\,\phi_{i+1}'(\tau)/r_{i+1}(\tau)\,d\tau}
= \frac{k_i'(r_i^{-1}(v)) + \bigl(r_i'(d_i)/r_i(d_i)\bigr)\,k_i(r_i^{-1}(d_i))}{\int_{v}^{y_{i+1}} Q_{i+1}(\tau)\,\phi_{i+1}'(\tau)/r_{i+1}(\tau)\,d\tau}. \tag{37}
\]
Note that $\phi_{i+1}(t)$ is increasing, and both $Q_{i+1}(\tau)$ and $r_{i+1}(t)$ are strictly positive. Hence, for $u < v$ the denominator on the left-hand side of (37) is strictly larger than that on the right-hand side. Equation (37) can therefore only hold if $k_i'(r_i^{-1}(u)) > k_i'(r_i^{-1}(v))$, which implies, by the convexity of $k_i(r_i^{-1})$, that $r_i^{-1}(u) > r_i^{-1}(v)$. $\square$

Proof of Proposition 3. Let $v < d_{i-1}$. In that case, (31) must still hold (with $\Psi_i(u) = 0$) and the proposition follows. Similarly, if $v > s_{i+1}$, then (37) must hold, where the lower boundary on the integral on the left-hand side is now $0$ instead of $u$, and again the proposition follows directly. Finally, let $v \in [d_{i-1}, s_{i+1}]$. In that case, both (31) and (37) simplify to $k_i'(r_i^{-1}(u)) = k_i'(r_i^{-1}(v))$, and $r_i^{-1}(u) = r_i^{-1}(v)$ follows under strict convexity, whereas otherwise the cost function is linear between $r_i^{-1}(u)$ and $r_i^{-1}(v)$ and both values can be changed to their arithmetic mean without changing completion time or costs. $\square$

Proof of Proposition 4. It suffices to show that there is an optimal solution such that $r_i^{-1}(t) \in \{1, \bar{r}_i^{-1}\}$ for all $t$. The proposition then follows directly from the preceding propositions. Suppose not, and let there be an interval $W$ such that $1 < r_i^{-1}(t) < \bar{r}_i^{-1}$ for all $t \in W$. If this interval is during phase i/1, then (31) is violated for all $u, v \in W$, because $k_i'(r_i^{-1}(u)) = k_i'(r_i^{-1}(v)) = k_{i,j}$. Likewise, if the interval is during phase i/3, then (37) is violated, and such a policy cannot be optimal. Note that during phase i/2, $k_i'(r_i^{-1}(u)) = k_i'(r_i^{-1}(v)) = k_{i,j}$ holds for any $r_i^{-1}(t)$. Thus, optimal solutions with $r_i^{-1}(t) \notin \{1, \bar{r}_i^{-1}\}$ are possible during this interval, but an equivalent solution must exist where $r_i^{-1}(t) = \bar{r}_i^{-1}$ over some continuous interval and $r_i^{-1}(t) = 1$ for the remaining time of the entire stage. $\square$

Appendix 2. Solution Procedure

In this appendix we present a solution procedure (OCA) for the OCP/Lin. To that end, we observe that the marginal costs strictly decrease with $y_i$. For $x_i$ and $z_{i-1}$, costs are strictly increasing as long as $x_i \le y_i$ and $z_{i-1} \le y_i$, respectively. Beyond these points they remain constant at $\lambda(x_i) = -k_i$ and $\lambda(z_{i-1}) = -k_{i-1}$. This property allows us to map marginal costs onto efficient solutions, thus deriving in each iteration of the algorithm a point on the optimal time-cost trade-off curve.

Overlapping Crashing Algorithm (OCA)

Step 0-1 (Initiation). $\underline{\lambda} = H_0 = x_0 = y_0 = y_{n+1} = d_{-1} = 0$.
Step 0-2 (Feasibility Check). $x_i = z_i = 0$, $y_i = \min\{\bar{r}_{i-1}(T_{i-1}+H_{i-1}),\ \bar{r}_i(T_i+H_i)\}$.
  If $d_n > D$: Stop, "The problem is infeasible."
  Else, if $d_n < D$: set $-\bar{\lambda} = M$ (a large number).

Do while $d_n \ne D$:
  Step 1. $\lambda = (\bar{\lambda} + \underline{\lambda})/2$, $I = \{i \mid k_i = -\lambda\}$.
  Step 2. For $i = 1, 2, \dots, n$ do:
    For the current value of $\lambda$, determine $z_{i-1}$, $y_i$, and $x_i$ according to Table 2.
    If $y_i > \min\{d_{i-1}-d_{i-2},\ d_i-s_i\}$, then $y_i = \min\{d_{i-1}-d_{i-2},\ d_i-s_i\}$.
    If $-\lambda \ge k_n$, then $z_n = T_n + H_n - x_n$; else $z_n = 0$.
  Step 3. Determine $d_n$.
    If $d_n < D$, then $\bar{\lambda} = \lambda$.
    Else if $d_n > D$, then $\underline{\lambda} = \lambda$ and $z_i = y_{i+1}$ for all $i \in I$; determine $d_n$.
      If $d_n < D$, then
      \[
      z_i = y_{i+1} + \frac{(D-d_n)\,(T_i+H_i-y_i-y_{i+1})}{\sum_{j\in I}(1-\bar{r}_j)\,(T_j+H_j-y_j-y_{j+1})} \quad \forall\, i \in I.
      \]
Loop

After initiating and checking for feasibility, the main loop performs a binary search over the marginal costs, computing the values of the decision variables from Table 2. After comparing the resulting due date with the required one, the marginal costs are adjusted appropriately and a new iteration begins, until the resulting due date coincides with the required due date. Because the binary search can be performed in $\log M$ steps (with $M$ being a sufficiently large number), and because the values for $x_i$, $y_i$, and $z_i$ can be computed in at most $\log D$ steps, the overall computational effort of the algorithm is $O(n \cdot \log \bar{\lambda} \cdot \log D)$. An important characteristic of the OCA is that it generates an optimal schedule for $d_n$ in each iteration, thus yielding the efficient time-cost frontier for a project rather than a single solution for a given due date $D$.
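The control flow of the OCA can be sketched as follows (ours, not the authors' code); `decisions_from_marginal_cost` stands in for the Table 2 mapping, which is not reproduced here, `completion_time` for the recursion yielding d_n, and the tie-handling over the set I in Step 3 is omitted.

```python
def oca_bisection(D, decisions_from_marginal_cost, completion_time,
                  big_m=1e9, tol=1e-6):
    """Binary search over the marginal cost until the schedule meets due date D.

    `decisions_from_marginal_cost(lam)` -> (x, y, z) per the Table 2 mapping;
    `completion_time(x, y, z)` -> d_n.  Both are hypothetical stand-ins, as are
    the bracket initialisation and the stopping tolerance.
    """
    lam_lo, lam_hi = -big_m, 0.0     # bracket on the (negative) marginal cost
    while True:
        lam = 0.5 * (lam_lo + lam_hi)
        x, y, z = decisions_from_marginal_cost(lam)
        d_n = completion_time(x, y, z)
        if abs(d_n - D) <= tol:
            return lam, (x, y, z)    # one point on the time-cost frontier
        if d_n < D:
            lam_lo = lam             # finishes early: less compression suffices
        else:
            lam_hi = lam             # finishes late: compress more aggressively
```

As in the procedure above, each intermediate iterate corresponds to an optimal schedule for its own completion time, so sweeping the bracket traces out the efficient time-cost frontier rather than a single point.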

Endnotes
1. In this case, we determined the proportions relative to the standard durations and crashing costs of the stages, such that of a given budget B, Stage 1 receives b_1 = B T_1 k_1 / (T_0 k_0 + T_1 k_1) and Stage 0 receives the remainder, b_0 = B − b_1.
2. All three cost curves for fast evolution also have an extreme point, but for values of q > 1. Because this would require that any work performed erroneously causes more time to redo than it initially took, we cut the curves off at q = 1.
3. Note that over- (under-)estimating the parameters leads to shorter (longer) than planned development times.

Acknowledgments
The authors are indebted to two anonymous referees whose invaluable feedback on the paper "Time-Cost Trade-Offs in Overlapped Product Development" (Roemer, Ahmadi, and Wang 2000) planted the seed for this research. The authors also gratefully acknowledge the contributions of the referees to this paper, which led to a much improved final version.

References
Abdel-Hamid, T., S. E. Madnick. 1990. Software Project Dynamics: An Integrated Approach. Prentice-Hall, Englewood Cliffs, NJ.
Ahmadi, R., R. H. Wang. 1999. Managing development risk in product design processes. Oper. Res. 47(2) 235–246.
Ahmadi, R., T. A. Roemer, R. H. Wang. 2001. Structuring product development processes. Eur. J. Oper. Res. 130 539–558.
Baldwin, C. Y., K. B. Clark. 2000. Design Rules. MIT Press, Cambridge, MA.
Berman, E. B. 1964. Resource allocation in a PERT network under continuous activity time-cost functions. Management Sci. 10(4) 734–745.
Bhattacharya, S., V. Krishnan, V. Mahajan. 1998. Managing new product definition in highly dynamic environments. Management Sci. 44 S50–S64.
Boehm, B. W. 1981. Software Engineering Economics. Prentice-Hall, Englewood Cliffs, NJ.
Brooks, F. P. 1995. The Mythical Man Month. Addison-Wesley, Reading, MA.
Carrascosa, M., S. D. Eppinger, D. E. Whitney. 1998. Using the design structure matrix to estimate product development time. Proc. ASME Design Engrg. Tech. Conf., Atlanta, GA.
Clark, K. B., T. Fujimoto. 1989. Overlapping problem solving in product development. K. Ferdows, ed. Managing International Manufacturing. Elsevier Science, Amsterdam, The Netherlands.
Clark, K. B., T. Fujimoto. 1991. Product Development Performance. Harvard Business School Press, Boston, MA.
Cooper, R. G., E. J. Kleinschmidt. 1996. Winning businesses in product development: The critical success factors. Res. Tech. Management 39(4) 18–29.
Cordero, R. 1991. Managing for speed to avoid product obsolescence: A survey of techniques. J. Product Innovation Management 8(4) 283–294.
Eisenhardt, K. M., B. N. Tabrizi. 1995. Accelerating adaptive processes: Product innovation in the global computer industry. Admin. Sci. Quart. 40 84–110.
Eppinger, S. D. 2001. Innovation at the speed of information. Harvard Bus. Rev. 79(January) 3–11.
Eppinger, S. D., D. E. Whitney, R. P. Smith, D. A. Gebala. 1994. A model-based method for organizing tasks in product development. Res. Engrg. Design 6 1–13.
Ettlie, J., S. A. Reifeis. 1987. Integrating design and manufacturing to deploy advanced manufacturing technology. Interfaces 17(6) 63–74.
Falk, J. E., J. L. Horowitz. 1972. Critical path problems with concave cost-time curves. Management Sci. 19 446–455.
Foldes, S., F. Soumis. 1993. PERT and crashing revisited: Mathematical generalizations. Eur. J. Oper. Res. 64 286–294.
Fulkerson, D. R. 1961. A network flow computation for project cost curves. Management Sci. 7(2) 167–178.
Graves, S. B. 1989. The time-cost tradeoff in research and development: A review. Engrg. Costs Production Econom. 16 1–9.
Ha, A. Y., E. L. Porteus. 1995. Optimal timing of reviews in concurrent design for manufacturability. Management Sci. 41(9) 1431–1447.
Hoedemaker, G. M., J. D. Blackburn, L. N. Van Wassenhove. 1999. Limits to concurrency. Decision Sci. 30(1) 1–17.
Iansiti, M., A. MacCormack. 1997. Developing products on Internet time. Harvard Bus. Rev. 75(5) 108–117.
Imai, K., I. Nonaka, H. Takeuchi. 1985. Managing the new product development process: How the Japanese companies learn and unlearn. K. B. Clark, R. H. Hayes, C. Lorenz, eds. The Uneasy Alliance. Harvard Business School Press, Boston, MA.
Kalyanaram, G., V. Krishnan. 1997. Deliberate product definition: Customizing the product definition process. J. Marketing Res. 34(2) 276–285.
Kanda, A., U. R. K. Rao. 1984. A network flow procedure for project crashing with penalty nodes. Eur. J. Oper. Res. 16 174–182.
Kelley, J. E., M. R. Walker. 1959. Critical-path planning and scheduling. Proc. Eastern Joint Comput. Conf., 160–173.
Krishnan, V., K. T. Ulrich. 2001. Product development decisions: A review of the literature. Management Sci. 47(1) 1–21.
Krishnan, V., S. D. Eppinger, D. E. Whitney. 1997. A model-based framework to overlap product development activities. Management Sci. 43(4) 437–451.
Leachman, R. C., S. Kim. 1993. A revised critical path method for networks including both overlap relationships and variable-duration activities. Eur. J. Oper. Res. 64 229–248.
Loch, C. H., C. Terwiesch. 1998. Communication and uncertainty in concurrent engineering. Management Sci. 44(8) 1032–1048.
MacCormack, A., R. Verganti, M. Iansiti. 2001. Developing products on "Internet time": The anatomy of a flexible development process. Management Sci. 47(1) 133–150.
Mansfield, E., J. Rapaport, J. Schnee, S. Wagner, M. Hamburger. 1972. Research and Innovation in the Modern Corporation. Norton, New York.
McCord, K. R., S. D. Eppinger. 1993. Managing the integration problem in concurrent engineering. Working paper #3594-93-MSA, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.
Pisano, G. P. 1997. The Development Factory. Harvard Business School Press, Boston, MA.
Putnam, L. H., W. Myers. 1992. Measures for Excellence: Reliable Software on Time, Within Budget. Yourdon Press, Englewood Cliffs, NJ.
Roemer, T. A., R. Ahmadi, R. H. Wang. 2000. Time-cost trade-offs in overlapped product development. Oper. Res. 48(6) 858–865.
Smith, P. G., D. G. Reinertsen. 1995. Developing Products in Half the Time, 2nd ed. Van Nostrand Reinhold, New York.
Takeuchi, H., I. Nonaka. 1986. The new product development game. Harvard Bus. Rev. (Jan.–Feb.) 137–146.
Terwiesch, C., C. H. Loch. 1999. Measuring the effectiveness of overlapping development activities. Management Sci. 45(4) 455–465.
Thomke, S., D. E. Bell. 2001. Sequential testing in product development. Management Sci. 47(2) 308–323.
Thomke, S., T. Fujimoto. 2000. The effect of "front-loading" problem-solving on product development performance. J. Product Innovation Management 17(2) 128–142.
Ward, A., J. K. Liker, J. J. Cristiano, D. K. Sobek. 1995. The second Toyota paradox: How delaying decisions can make better cars faster. Sloan Management Rev. 36(Spring) 43–61.
Whitney, D. E. 1990. Designing the design process. Res. Engrg. Design 2 3–13.