A negative dual rectangle cancellation algorithm for the linear assignment problem



Computers & Industrial Engineering 65 (2013) 673–678



Mohammad S. Sabbagh a,*, Sayyed R. Mousavi b, Yasin Zamani b

a Department of Industrial and Systems Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran
b Electrical and Computer Engineering Department, Isfahan University of Technology, Isfahan 84156-83111, Iran

* Corresponding author. Tel.: +98 311 391 5521; fax: +98 311 391 5526.
E-mail addresses: [email protected] (M.S. Sabbagh), [email protected] (S.R. Mousavi), [email protected] (Y. Zamani).


Article history:
Received 16 November 2012
Received in revised form 17 May 2013
Accepted 18 May 2013
Available online 23 May 2013

Keywords:
Assignment problem
Hungarian method
Integer programming
Negative dual rectangle
Partial zero cover

Abstract

In this paper, we consider three alternative primal models and their corresponding alternative dual models for the linear assignment problem. We then define the concept of Negative Dual Rectangle (NDR) and suggest an algorithm that solves two of these dual problems by repeatedly finding and cancelling NDRs until it yields an optimal solution to the assignment problem. The algorithm is simple, flexible, efficient, and unified. We also introduce the notion of partial zero cover as an interpretation of an NDR. We then introduce some heuristic methods for finding NDRs. We also state and prove a lemma to establish the optimal use of an NDR. Furthermore, we show that on a new class of benchmark instances that is introduced in this paper the running time of our algorithm is highly superior to a well-known pure shortest path algorithm.


1. Introduction

The classic linear sum assignment problem (LSAP) is the problem of finding an optimal assignment of n persons to n jobs. Given the cost c_ij of assigning person i to job j, we want to find an assignment of persons to jobs, on a one-to-one basis, that minimizes the total cost. The LSAP can also be considered as a network optimization problem on the graph G = (N, A) consisting of the node set N = N1 ∪ N2, in which N1 and N2 respectively represent the set of persons and the set of jobs, and the arc set A corresponds to the allocations. In the network context, this problem is also known as the minimum weight bipartite matching problem (Burkard, Dell'Amico, & Martello, 2009). Perfect matchings in bipartite graphs are particularly interesting because they represent one-to-one correspondences between nodes in N1 and nodes in N2.

In computer vision, the problem of finding correspondences between two sets of elements has many applications, ranging from three-dimensional reconstruction (Dellaert, 2001) to object recognition (Belongie, Malik, & Puzicha, 2002). The LSAP also occurs frequently as a subproblem when solving some hard combinatorial optimization problems such as the quadratic assignment problem (Lawler, 1976), the traveling salesman problem (Karp, 1977), the crew scheduling problem and the vehicle routing problem (Bodin, Golden, Assad, & Ball, 1983), and the multidimensional assignment problem (Krokhmal, Grundel, & Pardalos, 2007; Poore, 1994). Other applications of the LSAP include, but are not limited to, locating and tracing objects in space, scheduling on parallel machines, and earth–satellite systems with the Time Division Multiple Access (TDMA) protocol (Burkard, 1985; Brogan, 1989).

The classical method for solving the LSAP has been the one developed and published by Kuhn in Naval Research Logistics Quarterly (Kuhn, 1955), which was called the "Hungarian method" because the algorithm was largely based on the earlier works of two Hungarian mathematicians, Kőnig and Egerváry. Some of the efficient algorithms for the problem are the class of successive shortest path procedures with O(n³) complexity; see, e.g., Pedersen, Nielsen, and Andersen (2008) and Jonker and Volgenant (1987). Interior point methods, or approximate dual projective (ADP) methods, have also been used to solve large instances of the LSAP (Pardalos & Ramakrishnan, 1993). In particular, Ramakrishnan et al. performed extensive computational experiments with the ADP, which showed a good practical behavior of the method on large-size instances (Ramakrishnan, Karmarkar, & Kamath, 1993). In 1995, Goldberg and Kennedy introduced a computationally effective cost-scaling assignment (CSA) algorithm for the LSAP and, based on extensive computational experiments, reported that it outperformed the interior point methods on all test instances (Goldberg & Kennedy, 1995). For a more comprehensive discussion of the assignment problems, their formulations and applications, the interested reader is referred to, for instance, Pardalos and Pitsoulis (2000), Burkard (2002), Pentico (2007), Pedersen et al. (2008), Dell'Amico and Martello (1997), Dell'Amico and Toth (2000), Burkard and Cela (1999), Burkard, Dell'Amico, and Martello (2009), and references therein.


The new algorithm presented in this paper uses the concept of Negative Dual Rectangle (NDR) to solve the LSAP. The main contributions of this paper are as follows:

• We present an algorithm that is simple, flexible, efficient, and unified.
• We define the concept of Negative Dual Rectangle (NDR) and its interpretations.
• We prove a lemma that enables us to maximize the benefit of an NDR.
• We present some heuristic methods for finding NDRs.
• Our algorithm is markedly faster than the APC on a class of problem instances.

The remainder of the paper is organized as follows. In the next section, we review the primal and dual models. In Section 3, we provide the preliminaries of our algorithm, including the optimality conditions, the NDR definition, some characteristics of an NDR with a lemma regarding its optimal use, and some methods for finding an NDR. The new algorithm is presented in Section 4. In Section 5, we provide the computational results. Finally, conclusions are outlined in Section 6.

2. The primal and dual models

2.1. Primal models

The following Primal-(a) model is the classic mathematical programming formulation of the LSAP. It is very easy to show that the Primal-(b) and Primal-(c) models are alternative linear programming models of the LSAP (Fig. 1).

Fig. 1. Three alternative LSAP primal models, respectively from left to right, Primal-(a), Primal-(b) and Primal-(c).

Here, we define x_ij = 1 if person i is assigned to job j, and 0 otherwise, for i, j = 1, ..., n, and c_ij is the cost of that assignment. The constraints ensure that each person is assigned to exactly one job, and that exactly one job is assigned to each person. The basic mathematical structure of the problem ensures that all the x_ij are either 0 or 1 (Pentico, 2007).
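Since Fig. 1 is not reproduced here, the following LaTeX sketch restates the classic Primal-(a) formulation in its standard textbook form; the alternative Primal-(b) and Primal-(c) models of the figure are not reconstructed.

```latex
% Classic LSAP formulation (Primal-(a)); the LP relaxation suffices because
% the constraint matrix is totally unimodular, so x_ij is automatically 0 or 1.
\begin{align}
\text{minimize}\quad   & \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\, x_{ij} \\
\text{subject to}\quad & \sum_{j=1}^{n} x_{ij} = 1, \qquad i = 1,\dots,n \\
                       & \sum_{i=1}^{n} x_{ij} = 1, \qquad j = 1,\dots,n \\
                       & x_{ij} \ge 0, \qquad i, j = 1,\dots,n
\end{align}
```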

2.2. Dual models

We will use the symbols u_i and v_j to denote the dual variables associated with row i and column j, respectively. In Fig. 2, we see the three matching alternative dual models of the LSAP.

Fig. 2. Three alternative dual models of LSAP, respectively from left to right, Dual-(a), Dual-(b) and Dual-(c).

We use Dual-(b) or Dual-(c) in our algorithm at will.
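Fig. 2 is likewise not reproduced. As a sketch, Dual-(a) is the standard LP dual of Primal-(a); the form shown for Dual-(b) is inferred from the constraint u_i − v_j ≤ c_ij and the net-reductions objective discussed in Section 3.2, and Dual-(c) is its symmetric counterpart obtained by exchanging the roles of rows and columns.

```latex
% Dual-(a): the standard LP dual of Primal-(a) (u and v unrestricted in sign).
\begin{align}
\text{maximize}\quad   & \sum_{i=1}^{n} u_i + \sum_{j=1}^{n} v_j \\
\text{subject to}\quad & u_i + v_j \le c_{ij}, \qquad i, j = 1,\dots,n
\end{align}

% Dual-(b), as inferred from Section 3.2 (Dual-(c) is the row/column symmetric form).
\begin{align}
\text{maximize}\quad   & w = \sum_{i=1}^{n} u_i - \sum_{j=1}^{n} v_j \\
\text{subject to}\quad & u_i - v_j \le c_{ij}, \qquad i, j = 1,\dots,n \\
                       & u_i \ge 0,\; v_j \ge 0, \qquad i, j = 1,\dots,n
\end{align}
```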

3. Preliminaries

3.1. Optimality conditions

We shall assume, without loss of generality, that c_ij ≥ 0 for all i and j. Let C ≥ 0 be the current cost matrix for an assignment problem. We say that C displays the optimality condition for a given feasible solution X if c_ij x_ij = 0 for all i and j. In the following, we use the symbol ⊗ to denote the outer product operator. In our algorithm, we use Theorem 1.

Theorem 1. Let X be a feasible solution to an assignment problem with cost matrix C. Then:

I. X is optimal if and only if there exists some constant vector v such that all positive allocations are made at the cells with the lowest cost in the rows of C̄ = C + e ⊗ vᵀ.

II. X is optimal if and only if there exists some constant vector u such that all positive allocations are made at the cells with the lowest cost in the columns of C̄ = C + u ⊗ eᵀ.

A proof of Theorem 1 can be found on page 193 (Theorem 6.21) of Burkard, Dell'Amico, and Martello (2009), or in Egerváry (1931).

3.2. Dual variable interpretation and negative dual rectangle definition

Here, we provide an interpretation of Dual-(b) and its variables; a symmetric interpretation holds for Dual-(c). Given that u_i − v_j ≤ c_ij implies c̄_ij = (c_ij + v_j) − u_i ≥ 0, we conclude that to go directly from the initial cost matrix to the optimal cost matrix we must do the following:

• Add some appropriate nonnegative numbers (denoted as vector v) to the columns of the initial cost matrix, such that the entries corresponding to the positive allocations in each row become the lowest cost in that row.
• Then subtract from its rows some nonnegative numbers (denoted as vector u); that is, the vector of the minimum value in each row.

In Dual-(b), the first summation (∑_{i=1}^{n} u_i) is the total row reductions and the second summation (∑_{j=1}^{n} v_j) is the total column additions. The value of (∑_{i=1}^{n} u_i − ∑_{j=1}^{n} v_j) is called the net-reductions value. We say a pair of vectors (u, v) of dual values associated with the current cost matrix is a negative dual rectangle (NDR) if ∑_{i=1}^{n} u_i > ∑_{j=1}^{n} v_j and c̄_ij = c_ij − u_i + v_j ≥ 0, u_i ≥ 0, v_j ≥ 0 for i, j = 1, ..., n.

The condition ∑_{i=1}^{n} u_i > ∑_{j=1}^{n} v_j means "more reductions than additions", and thus the adjective "negative" is used in negative dual rectangle. An NDR can be viewed as an admissible transformation. Further information on admissible transformations can be found in Section 6.3 of Burkard, Dell'Amico, and Martello (2009).

According to Theorem 1, when the current cost matrix is not optimal there exists an NDR that improves w (the objective value of Dual-(b) or Dual-(c)) and results in the creation of new zero(s). In the meantime, all entries of the current cost matrix remain nonnegative. For any NDR, computing c̄_ij = c_ij − u_i + v_j for i, j = 1, ..., n is called an NDR cancellation.
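To make the definition concrete, the following Python sketch checks whether a given pair (u, v) is an NDR for a cost matrix and, if so, applies the cancellation. The function names and the small example are illustrative and not taken from the paper.

```python
import numpy as np

def is_ndr(C, u, v):
    """Check the NDR conditions: sum(u) > sum(v), u >= 0, v >= 0,
    and the transformed costs c_ij - u_i + v_j stay nonnegative."""
    C_bar = C - u[:, None] + v[None, :]
    return (u.sum() > v.sum()) and (u >= 0).all() and (v >= 0).all() and (C_bar >= 0).all()

def cancel_ndr(C, u, v):
    """Apply an NDR cancellation: c_bar_ij = c_ij - u_i + v_j.
    The net-reductions value sum(u) - sum(v) is the gain in the dual objective w."""
    assert is_ndr(C, u, v)
    return C - u[:, None] + v[None, :], u.sum() - v.sum()

# Tiny hand-made example: rows 0 and 1 have zeros only in column 0 (a VR).
C = np.array([[0, 2, 3],
              [0, 4, 5],
              [1, 0, 0]])
u = np.array([2, 2, 0])   # reduce rows 0 and 1
v = np.array([2, 0, 0])   # add to column 0 so its zeros stay nonnegative
C_bar, gain = cancel_ndr(C, u, v)
print(C_bar)   # transformed cost matrix, still nonnegative
print(gain)    # net reduction = 2
```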


We can increase the dual objective value in steps by finding and cancelling NDRs. To find an NDR, in the current square cost matrix, locate a strictly rectangular submatrix with a set A of rows and a set B of columns that satisfies one of the following conditions:

(1) If i ∈ A and c_ij = 0 then j ∈ B, and |A| > |B|. We call this a "VR", for Vertical Rectangle, because the number of rows is greater than the number of columns.

or

(2) If j ∈ B and c_ij = 0 then i ∈ A, and |A| < |B|. We call this an "HR", for Horizontal Rectangle, because the number of rows is smaller than the number of columns.

It is clear that the rows and the columns of a VR or HR may or may not be contiguous. A non-optimal cost matrix can have many VRs or HRs. Fig. 3 shows some examples of VRs and HRs.

Here we note that a minimum zero cover (Murty, 1976) with fewer than n lines results in exactly one pair of a VR and an HR: the uncovered rows and the covered columns constitute one rectangle, and the uncovered columns and the covered rows constitute the other. However, there is a great deal of interest in developing computationally efficient heuristics for locating VRs or HRs.
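As a small illustration of this correspondence, the Python sketch below reads the VR and HR off a given zero cover with fewer than n lines (names are illustrative, not from the paper).

```python
def rectangles_from_cover(n, covered_rows, covered_cols):
    """Given a zero cover of an n x n cost matrix using fewer than n lines
    (covered_rows and covered_cols are sets of indices), return the VR and HR
    described in the text: the VR pairs the uncovered rows with the covered
    columns, and the HR pairs the covered rows with the uncovered columns."""
    assert len(covered_rows) + len(covered_cols) < n
    uncovered_rows = set(range(n)) - covered_rows
    uncovered_cols = set(range(n)) - covered_cols
    vr = (uncovered_rows, covered_cols)   # strictly more rows than columns
    hr = (covered_rows, uncovered_cols)   # strictly more columns than rows
    return vr, hr
```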

3.3. Some characteristics of an NDR

3.3.1. Two interpretations for an NDR

We have two interpretations for an NDR (VR or HR). The first one is a partial zero cover that covers a subset of rows or columns of the cost matrix. The second interpretation is a full zero cover, but not necessarily the minimum zero cover, provided that we cover all the rows that are outside a VR with horizontal lines and all the columns that are outside an HR with vertical lines.

Fig. 3. Some examples of VRs and HRs.

3.3.2. The optimal use of an NDR

A VR in the current cost matrix allows us to subtract some positive numbers from its corresponding rows and add some positive numbers to its corresponding columns such that we will have a net reduction and maintain nonnegativity of the cost matrix. Ideally, the numbers we choose to subtract or add must result in the maximum net reduction. The following lemma is applicable to this situation.

Lemma 1 (The optimal use of an NDR). Given a VR in the current cost matrix consisting of m rows indexed by i = 1, 2, ..., m and k columns indexed by j = 1, 2, ..., k, with m > k. Let x_i = minimum(c̄_{i,k+1}, ..., c̄_{i,n}) represent the smallest cost coefficient outside the VR in the ith row of the current cost matrix, for i = 1, 2, ..., m. Then, in order to achieve the maximum net reduction, given that we add one value to all its columns, do the following:

• Assume a_1, ..., a_m represent the sorted values of x_1, ..., x_m in nonincreasing order.
• Let v_1 = v_2 = ... = v_k = a_k.
• Let u_i = minimum(x_i, a_k) for i = 1, 2, ..., m.

That is, add a_k to each column of the VR and subtract from its ith row the amount u_i = minimum(x_i, a_k) for i = 1, 2, ..., m. Therefore, the net reduction is equal to: net reduction(a_k) = (k·a_k + a_{k+1} + ... + a_m) − k·a_k = a_{k+1} + ... + a_m.

Proof. It can be shown that net reduction(a_k) ≥ net reduction(a_p) for p = 1, ..., m. □

It is important to note that if m − k = 1, then the result of Lemma 1 is the same as letting v_1 = v_2 = ... = v_k = a_m and u_i = a_m for i = 1, 2, ..., m. Observing that the transpose of a VR becomes an HR, the rules for HRs become clear. For example, according to Lemma 1, to cancel a VR with one column we do:

i. Reduce each row that belongs to the rectangle by the smallest positive number in that row.
ii. Add to its single column the largest of those subtracted numbers.

These operations generate the maximum net reduction for a VR with one column.
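The following Python sketch implements the cancellation prescribed by Lemma 1 for a VR; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def cancel_vr_optimally(C, vr_rows, vr_cols):
    """Cancel a VR of the cost matrix C following Lemma 1.
    vr_rows (m indices) and vr_cols (k indices), m > k, must form a VR:
    every zero in the VR rows lies in one of the VR columns.
    Returns the transformed matrix and the net reduction a_{k+1} + ... + a_m."""
    m, k = len(vr_rows), len(vr_cols)
    assert m > k
    vr_col_set = set(vr_cols)
    outside_cols = [j for j in range(C.shape[1]) if j not in vr_col_set]
    # x_i: smallest cost outside the VR in each VR row
    x = np.array([C[i, outside_cols].min() for i in vr_rows])
    a = np.sort(x)[::-1]                # a_1 >= a_2 >= ... >= a_m (nonincreasing)
    a_k = a[k - 1]
    u = np.minimum(x, a_k)              # row reductions, aligned with vr_rows
    v = np.full(k, a_k, dtype=float)    # column additions
    C_new = C.astype(float)
    C_new[vr_rows, :] -= u[:, None]     # subtract u_i from each VR row
    C_new[:, vr_cols] += v[None, :]     # add a_k to each VR column
    net_reduction = u.sum() - v.sum()   # equals a_{k+1} + ... + a_m
    return C_new, net_reduction
```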

3.3.3. Avoidance of unnecessary computations

We can avoid unnecessary computations by making 'smart' use of the available NDR information. When we locate an NDR, sometimes it is also possible to exploit its current data and the new information after its cancellation to identify a new NDR easily. For example, suppose we have located a VR with 10 rows and 3 columns. Now assume that, after its cancellation, all the new zeros outside this VR are created in two columns. Then we will have a VR with at least 10 rows and 5 (= 3 + 2) columns.


Fig. 4. Heuristic method 1 example for finding a VR.

Fig. 5. Heuristic method 2 example for finding a VR.

Fig. 6. Heuristic method 3 example for finding a VR.

Table 1
A comparison between average CPU times required for APC versus Heuristics 0, 4, 1, based on 5 random cost matrices for each instance (milliseconds).

Size n    % Closeness reached by Heuristics 0, 4, 1    Average runtime of Heuristics 0, 4, 1    Average runtime of APC method

c_ij ∈ [1, 10^3]
1000      95.73     64.8       115.2
2000      96.92     258        870
3000      98.93     550.2      4508.2
4000      99.8      934        10,519
5000      99.98     1396.8     15,919.4

c_ij ∈ [1, 10^4]
1000      95.15     82.6       128.6
2000      95.28     362.4      667.4
3000      95.16     794.2      1460.2
4000      95.24     1364.8     3112.6
5000      95.41     2161.2     5263.6

c_ij ∈ [1, 10^5]
1000      95.14     87.6       146
2000      95.07     376.4      763.2
3000      94.89     856.2      856.2
4000      95.02     1497       4057.4
5000      94.99     2404.6     6728.6

c_ij ∈ [1, 10^6]
1000      95.01     84.6       157.2
2000      95.03     365.4      801.6
3000      94.67     869.2      2238.6
4000      94.98     1514.4     4230.2
5000      94.82     2453.2     7983.4



3.4. Some heuristic methods for finding an NDR

In the following, we present some heuristic methods for finding a VR (HR). First, we define a lone-zero-row as a row with just one zero and, likewise, a lone-zero-column as a column with just one zero. We also assume that we have performed row and column reductions (call these operations heuristic method 0) and have scanned the current cost matrix once, which requires O(n²) time. While scanning, we record the location of each zero in its row and column, the number of zeros, the number of lone-zeros (lone-zero-columns) of each row, and the number of lone-zeros (lone-zero-rows) of each column. Once we cancel a VR (HR), we update this information.

Some heuristic methods for finding a VR (HR) are as follows.

Heuristic method 1: Finding a single column (row) with two or more lone-zero-rows (lone-zero-columns) is a simple way to identify and cancel a VR (HR).

In Fig. 4, the first matrix is the original matrix, where column 1 has four lone-zero-rows (zeros in the large font); this gives a VR with one column and four rows (rows 2–5). The second matrix is obtained after cancelling this VR.
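A minimal Python sketch of heuristic method 1, combined with the single-column cancellation rules (i)–(ii) stated after Lemma 1; the names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def heuristic_1_find_vr(C):
    """Heuristic method 1: look for a column with two or more lone-zero-rows
    (rows whose only zero lies in that column). Such a column plus those rows
    form a VR with a single column. Returns (rows, column) or None."""
    n = C.shape[0]
    for j in range(n):
        lone_zero_rows = [i for i in range(n)
                          if C[i, j] == 0 and np.count_nonzero(C[i] == 0) == 1]
        if len(lone_zero_rows) >= 2:
            return lone_zero_rows, j
    return None

def cancel_single_column_vr(C, rows, col):
    """Cancel a single-column VR as in rules (i)-(ii): reduce each VR row by its
    smallest positive entry, then add the largest subtracted value to the column."""
    C = C.astype(float)
    subtracted = []
    for i in rows:
        m = C[i][C[i] > 0].min()   # smallest positive entry in row i
        C[i] -= m
        subtracted.append(m)
    C[:, col] += max(subtracted)
    return C
```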

Heuristic method 2: Let us assume there are k columns (rows), 2 ≤ k < n, each with two or more zeros but with exactly one lone-zero-row (lone-zero-column), such that these k columns (rows) cover all zeros of at least another row (column). Then, we will have a VR (HR) with at least k + 1 rows (columns) and k columns (rows).

In Fig. 5, the first matrix is the original matrix, where columns 1 and 3, each with a lone-zero-row (zeros in the large font), cover all zeros of rows 1, 2, and 4. The second matrix is obtained after cancelling the VR.

Heuristic method 3: Let NZ(j) (NZ(i)) represent the number of zeros in the jth column (ith row) of the current cost matrix for j = 1, 2, ..., n (i = 1, 2, ..., n). Let NZ_MAX = maximum(NZ(1), ..., NZ(n)). Assume there are k ≥ 2 columns (rows) with at least p zeros for some 2 ≤ p ≤ NZ_MAX, and suppose these k columns (rows) cover all zeros of m rows (columns). Now, if m > k, we will have a VR (HR) with k columns (rows) and m rows (columns).


Table 2
A comparison between CPU times required for APC versus NDR for new benchmark instances (milliseconds).

n        APC       NDR
100      9         0
200      72        1
300      237       3
400      561       5
500      1097      8
600      1883      13
700      3013      19
800      4460      30
900      6386      48
1000     8739      60
2000     70,191    289



Fig. 6 shows a VR of Heuristic method 3, where columns 2 and 4, each with four zeros (zeros in the large font), cover all zeros of rows 1–3 and 5.

Heuristic method 4: Let k = ⌊n/2⌋ and divide the columns (rows) of the matrix (1, ..., n) into two subsets ({1, ..., k} and {n − k + 1, ..., n}). Pick the subset with the largest number of zeros and suppose these k columns (rows) cover all zeros of m rows (columns). Now, if m > k, we will have a VR (HR) with k columns (rows) and m rows (columns). Do column (row) reductions if some columns (rows) do not contain zeros afterwards.

Fig. 7 shows a VR of Heuristic method 4. Here, the first subset has the most zeros (zeros in the large font) and these two columns cover all zeros of rows 2–5. After cancelling the VR consisting of columns 1–2 and rows 2–5, we will get the middle matrix. Now, because there is no zero in the second column of the middle matrix, a column reduction is required, which results in the rightmost matrix.

Fig. 7. Heuristic method 4 example for finding a VR.
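A corresponding sketch of the detection step of heuristic method 4; cancellation would then follow Lemma 1 (for instance, the cancel_vr_optimally sketch above). Names are illustrative.

```python
import numpy as np

def heuristic_4_find_vr(C):
    """Heuristic method 4 (detection step): split the columns into the first
    floor(n/2) and the last floor(n/2) columns, pick the half containing more
    zeros, and check whether those k columns cover all zeros of m > k rows.
    Returns (rows, cols) describing a VR, or None."""
    n = C.shape[0]
    k = n // 2
    first, last = list(range(k)), list(range(n - k, n))
    zeros_first = np.count_nonzero(C[:, first] == 0)
    zeros_last = np.count_nonzero(C[:, last] == 0)
    cols = first if zeros_first >= zeros_last else last
    outside = [j for j in range(n) if j not in set(cols)]
    # rows whose zeros (if any) all lie inside the chosen columns
    rows = [i for i in range(n) if not (C[i, outside] == 0).any()]
    return (rows, cols) if len(rows) > k else None
```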

4. The new algorithm

In order to solve the LSAP, we can now use the following high-level procedure, while avoiding the unnecessary computations mentioned in Section 3.3.3:


Begin
  (1) Perform row reductions by subtracting the minimum value in each row from all row values; let u_i be the minimum value in row i. Afterwards, every row contains at least one zero entry and all entries are nonnegative. Let w = ∑_{i=1}^{n} u_i = "total row reductions".
  (2) Perform column reductions by subtracting the minimum value in each column from all column values; let v_j be the minimum value in column j. Afterwards, every row and column contains at least one zero entry and all entries are nonnegative. Let w = w + ∑_{j=1}^{n} v_j be the total reductions so far.
  (3) If (Heuristic method 4 or Heuristic method 3 is applicable) do
      Begin
        Cancel the rectangle
        w = w + "net reduction value"
      End
  (4) While (Heuristic method 1 or Heuristic method 2 is applicable) do
      Begin
        Cancel the rectangle
        w = w + "net reduction value"
      End
  (5) While (the current cost matrix is not optimal) do
      Begin
        Find a minimum zero cover and cancel a VR (HR)
        w = w + "net reduction value"
      End
End
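As a concrete illustration of steps (1) and (2), the following Python sketch performs the initial row and column reductions and accumulates the dual objective w; the later steps (3)–(5) are not implemented here, and the names are illustrative.

```python
import numpy as np

def initial_reductions(C):
    """Steps (1) and (2) of the procedure: row reductions followed by column
    reductions. Returns the reduced matrix and w, the total reductions so far."""
    C = C.astype(float)
    u = C.min(axis=1)            # minimum value in each row
    C -= u[:, None]              # row reductions: every row now has a zero
    v = C.min(axis=0)            # minimum value in each (reduced) column
    C -= v[None, :]              # column reductions: every column now has a zero
    w = u.sum() + v.sum()        # "total row reductions" plus column reductions
    return C, w

# Example usage with a small random cost matrix
C = np.random.randint(1, 100, size=(5, 5))
C_reduced, w = initial_reductions(C)
```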

5. Computational results

In this section, to evaluate and compare the empirical performance of the algorithm, we briefly present our computational results on randomly generated dense benchmark instances (Burkard et al., 2009). The tests were performed on a PC with an Intel Pentium 4 at 2.00 GHz and 4 gigabytes of RAM under Windows 7 Ultimate 32-bit. The codes were developed in C# on the .NET Framework 4 using Microsoft Visual Studio 2010. We have generated five random instances for each n (the size of the cost matrix), where n = 1000p, p = 1, ..., 5. The c_ij values were generated according to a discrete uniform distribution in the range [0, K] with K ∈ {10³, 10⁴, 10⁵, 10⁶}. We compare our algorithm with the APC algorithm (Simeone, Toth, Gallo, Maffioli, & Pallottino, 1988). APC, a well-known O(n³) time complexity algorithm, is a pure shortest path algorithm preceded by a simple initialization procedure (Dell'Amico & Toth, 2000). In 1988, APC was written in FORTRAN, which was then translated to C, and both are freely available from the AP web page http://www.assignmentproblems.com/. We used its C translation. For fair comparisons, we used a unique calling program to run both the APC and NDR optimization codes. The calling program receives the cost matrix, prepares the data structures, and runs the optimization codes. The elapsed CPU time, in milliseconds, was measured for the optimization phase only. Both APC and NDR were executed on the same computer using the same data. The results are given in Tables 1 and 2. On dense random benchmark instances, the running time of our algorithm is longer than APC's. Nevertheless, to highlight the importance of the heuristics that we introduced, in Table 1 we make a comparison between the average CPU times required for APC versus Heuristics 0, 4, and 1.

Here, we define some new benchmark instances, extending the instance sets from the LSAP literature, by letting c_ij = i·j + (n − j)(n − j − 1)/2 for i, j = 1, 2, ..., n. These are difficult instances for the APC algorithm. The results are shown in Table 2 and Fig. 8. As Table 2 and Fig. 8 show, on this new class of benchmark instances the running time of our algorithm is markedly superior to that of the APC algorithm.

Fig. 8. Chart of Table 2.
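A minimal sketch of a generator for these instances, assuming the cost formula reads c_ij = i·j + (n − j)(n − j − 1)/2 with 1-based indices (the operator between i and j is our reading of the formula); the function name is illustrative.

```python
import numpy as np

def new_benchmark_instance(n):
    """Generate an n x n cost matrix for the new benchmark class, assuming
    c_ij = i*j + (n - j)*(n - j - 1)/2 with 1-based indices i, j = 1..n."""
    i = np.arange(1, n + 1)[:, None]   # row indices as a column vector
    j = np.arange(1, n + 1)[None, :]   # column indices as a row vector
    return i * j + (n - j) * (n - j - 1) // 2   # (n-j)(n-j-1) is even, so // is exact

# Example: a 5 x 5 instance
print(new_benchmark_instance(5))
```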

6. Conclusions

The contributions of this paper are fourfold. Firstly, we have given two interpretations for an NDR: one is the partial zero cover and the other is a full zero cover, but not necessarily the minimum zero cover. Secondly, we have proved a lemma that enables us to maximize the benefit of an NDR. Thirdly, we have presented some heuristic methods for finding NDRs. Finally, on a new class of benchmark instances, the running time of our algorithm is markedly superior to that of the APC algorithm.

References

Belongie, S., Malik, J., & Puzicha, J. (2002). Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4), 509–522.
Bodin, L. D., Golden, B. L., Assad, A. A., & Ball, M. O. (1983). Routing and scheduling of vehicles and crews. Computers and Operations Research, 10, 65–211.
Brogan, W. L. (1989). Algorithm for ranked assignments with applications to multiobject tracking. Journal of Guidance, Control, and Dynamics, 12(3), 357–364.
Burkard, R. E. (1985). Time-slot assignment for TDMA systems. Computing, 35(2), 99–112.
Burkard, R. E. (2002). Selected topics on assignment problems. Discrete Applied Mathematics, 123(1–3), 257–302.
Burkard, R. E., & Cela, E. (1999). Linear assignment problems and extensions. In D. Z. Du & P. M. Pardalos (Eds.), Handbook of combinatorial optimization, Supplement Volume A (pp. 75–149). MA: Kluwer.
Burkard, R. E., Dell'Amico, M., & Martello, S. (2009). Assignment problems. Philadelphia: SIAM. ISBN: 978-0-898716-63-4.
Dellaert, F. (2001). Monte Carlo EM for data-association and its applications in computer vision. PhD thesis. Carnegie Mellon University.
Dell'Amico, M., & Martello, S. (1997). Linear assignment. In M. Dell'Amico, F. Maffioli, & S. Martello (Eds.), Annotated bibliographies in combinatorial optimization. Chichester, England: John Wiley & Sons Ltd.
Dell'Amico, M., & Toth, P. (2000). Algorithms and codes for dense assignment problems: The state of the art. Discrete Applied Mathematics, 100(1–2), 17–48.
Egerváry, J. (1931). Matrixok kombinatorius tulajdonsagairol (in Hungarian). Matematikai es Fizikai Lapok, 38, 16–28. (English translation by H. W. Kuhn, "On combinatorial properties of matrices", Logistic Papers, 11, paper 4, 1–11 (1955), George Washington University).
Goldberg, A. V., & Kennedy, R. (1995). An efficient cost scaling algorithm for the assignment problem. Mathematical Programming, 71, 153–177.
Jonker, R., & Volgenant, A. (1987). A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38(4), 325–340.
Karp, R. M. (1977). Probabilistic analysis of partitioning algorithms for the traveling salesman problem in the plane. Mathematics of Operations Research, 2, 209–224.
Krokhmal, P., Grundel, D., & Pardalos, P. (2007). Asymptotic behavior of the expected optimal value of the multidimensional assignment problem. Mathematical Programming, 109(2–3), 525–551.
Kuhn, H. W. (1955). The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1–2), 83–97 (original publication).
Lawler, E. L. (1976). Combinatorial optimization: Networks and matroids. New York: Holt, Rinehart and Winston.
Murty, K. (1976). Linear and combinatorial programming (pp. 362–363). New York: John Wiley & Sons.
Pardalos, P. M., & Pitsoulis, L. (Eds.). (2000). Nonlinear assignment problems: Algorithms and applications. Combinatorial Optimization Series. Kluwer Academic Publishers.
Pardalos, P. M., & Ramakrishnan, K. G. (1993). On the expected value of random assignment problems: Experimental results and open questions. Computational Optimization and Applications, 2, 261–271.
Pedersen, C. R., Nielsen, L. R., & Andersen, K. A. (2008). An algorithm for ranking assignments using reoptimization. Computers and Operations Research, 35(11), 3714–3726.
Pentico, D. W. (2007). Assignment problems: A golden anniversary survey. European Journal of Operational Research, 176(2), 774–793.
Poore, A. B. (1994). Multidimensional assignment formulation of data association problems arising from multitarget tracking and multisensor data fusion. Computational Optimization and Applications, 3, 27–57.
Ramakrishnan, K. G., Karmarkar, N. K., & Kamath, A. P. (1993). An approximate dual projective algorithm for solving assignment problems. In D. S. Johnson & C. C. McGeoch (Eds.), Network flows and matching: First DIMACS implementation challenge, DIMACS Series, Vol. 12 (pp. 431–452). Providence, RI: American Mathematical Society.
Simeone, B., Toth, P., Gallo, G., Maffioli, F., & Pallottino, S. (Eds.). (1988). Fortran codes for network optimization. Annals of Operations Research, Vol. 13. Basel: Baltzer.