
Computing 38, 325-340 (1987)

© by Springer-Verlag 1987

A Shortest Augmenting Path Algorithm


for Dense and Sparse Linear Assignment Problems
R. Jonker and A. Volgenant, Amsterdam
Received May 6, 1986

Abstract - Zusammenfassung

A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems. We develop a
shortest augmenting path algorithm for the linear assignment problem. It contains new initialization
routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse
problems computational experiments show this algorithm to be uniformly faster than the best
algorithms from the literature. A Pascal implementation is presented.
AMS Subject Classifications: 90C08, 68E10.
Key words: Linear assignment problem, shortest path methods, Pascal implementation.

Ein Algorithmus mit kürzesten alternierenden Wegen für dichte und dünne Zuordnungsprobleme. Wir
entwickeln einen Algorithmus mit kürzesten alternierenden Wegen für das lineare Zuordnungsproblem.
Er enthält neue Routinen für die Anfangswerte und eine spezielle Implementierung der Kürzesten-
Wege-Methode von Dijkstra. Sowohl für dichte als auch für dünne Probleme zeigen Testläufe, daß unser
Algorithmus gleichmäßig schneller als die besten Algorithmen aus der Literatur ist. Eine Implementierung
in Pascal wird angegeben.

1. Introduction
The linear assignment problem (LAP) is useful as a relaxation for difficult
combinatorial optimization problems like quadratic assignment and traveling
salesman. Furthermore, theoretical developments for the LAP can often be
extended to other problems, such as minimum cost flow and transportation.
The first well known LAP algorithm, Kuhn's Hungarian method [22], was
published in 1955. After 1977 several more or less new algorithms were proposed,
for example by Barr, Glover and Klingman [2], Hung and Rom [18], Bertsekas [3],
and Balinski [1]. None of these authors even mentioned the existence of the shortest
augmenting path algorithm of Tomizawa [29] from 1971. Dorhout [10, 11]
improved it to what was for years the fastest LAP algorithm available. Still earlier, in
1969, Mack [24] developed an algorithm that is a forerunner of Tomizawa's. It is
equivalent to the Hungarian method, but more comprehensible.
After a review of assignment algorithms, we describe in Sections 4, 5 and 6 a new
algorithm LAPJV, based on shortest augmenting paths. A Pascal code is presented
in Section 7. Its average computation times are uniformly lower than those of the
other algorithms, as shown in the final section.
An LAP with costs c[i,j] (i,j = 1 ... n) can be formulated as a linear program:

min Σij c[i,j] · x[i,j]

subject to

Σj x[i,j] = 1 (i = 1 ... n),
Σi x[i,j] = 1 (j = 1 ... n),
x[i,j] ≥ 0 (i,j = 1 ... n).

The dual problem is:

max Σi u[i] + Σj v[j]

subject to

c[i,j] - u[i] - v[j] ≥ 0 (i,j = 1 ... n).

With the dual variables u[i] and v[j] the reduced costs are c[i,j] - u[i] - v[j]
(i,j = 1 ... n). So, the dual problem is to find a reduction of the costs matrix with
maximum sum and non-negative reduced costs.
In the following, indices i and j refer to rows and columns respectively; x[i] is the
column index assigned to row i and y[j] the row index assigned to column j, with
x[i] = 0 for an unassigned row i and y[j] = 0 for an unassigned column j; the dual
variable u[i] corresponds to row i and v[j] to column j. We denote the reduced costs
by cred[i,j] = c[i,j] - u[i] - v[j], and sometimes we refer to the dual variables as
"prices".
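To make the formulation concrete, a tiny instance can be solved by brute force, since a feasible x corresponds to a permutation of the columns. The following Python sketch (ours, not part of the paper) enumerates all permutations of an illustrative 3 x 3 cost matrix:

```python
from itertools import permutations

def lap_brute_force(c):
    """Minimize sum of c[i][p[i]] over all permutations p (the primal LAP)."""
    n = len(c)
    best_cost, best_x = min(
        (sum(c[i][p[i]] for i in range(n)), p) for p in permutations(range(n))
    )
    return list(best_x), best_cost

# a small example cost matrix (illustrative values, ours)
c = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
x_opt, z_opt = lap_brute_force(c)
```

Any dual feasible u, v with Σu + Σv equal to the optimal cost would certify optimality of the corresponding assignment.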

2. A Review of Assignment Algorithms


Methods to solve the LAP can be classified in three categories, based on
algorithms for
a) maximum flow,
b) shortest paths,
c) linear programming.
Most algorithms based on maximum flow are primal-dual methods. For an
introduction to these methods we refer to Papadimitriou and Steiglitz [26]. The
Hungarian algorithm of Kuhn [22] actually served as the algorithm from which the
general primal-dual algorithm was derived. The original method has computational
complexity O(n^4), but later O(n^3) versions were developed (Lawler [23]). Jonker and
Volgenant [20] give some simple, but effective improvements.
Bertsekas [3] also presents a primal-dual algorithm. The method is Hungarian-
type, and the best version even switches to the Hungarian algorithm itself as soon as
the original method becomes less effective. We describe part of it in Section 4.
The methods based on shortest paths are dual algorithms: dual feasibility exists and
primal feasibility has to be reached. This is achieved by considering the LAP as a
minimum cost flow problem, solved by steps that involve finding shortest paths on
an auxiliary graph.

In this group two algorithms, both of complexity O(n^3), stand out: Hung and Rom's
[18] and Tomizawa's [29]. The former is the more ingenious, but the latter the
fastest, as shown in Section 8. Tomizawa augments partial assignments into a
complete solution by primal steps in each of which one shortest augmenting path is
determined. Hung and Rom's initial solution is complete, but may be infeasible.
They determine in each step a shortest path tree, which takes more effort, but may
contain more disjoint augmenting paths. We consider shortest augmenting path
LAP algorithms separately in Section 3.
The so-called Bradford method of Mack [24] is also of interest, especially for its
intuitively appealing presentation. As originally presented, it has computational
complexity O(n^4). The method is equivalent to the Hungarian algorithm; adapting
it to obtain complexity O(n^3) results in an algorithm close to Tomizawa's (Jonker
[19]).
Good results on sparse LAPs are obtained by Carraresi and Sodini [7] with an
algorithm based on the shortest path method of Glover et al. [15, 16].
The linear programming based algorithms in the third category are specialized
versions of the simplex method. The best published results are from Barr, Glover
and Klingman [2]. A major difficulty with these methods is the phenomenon of zero
pivot steps. A drawback is their relatively complex implementation as compared to
the other approaches. Computational experiments (Hung and Rom [18], McGinnis
[25]) show that these algorithms are outperformed by the best primal-dual and dual
methods.
The O(n^3) signature algorithm presented by Balinski [1] and analyzed by Goldfarb
[17] also belongs to this category. It considers feasible dual solutions corresponding
to trees in the bipartite graph of row and column nodes. The algorithm can be
considered a variant of the Hungarian method. Nothing definite is known yet about
its computational performance.

3. Shortest Augmenting Paths Based Algorithms

Linear assignment is a special case of minimum cost flow, for which an algorithm
exists called "Buildup" in Papadimitriou and Steiglitz [26]. Ford and Fulkerson
[14] attribute this method to Jewell (1959) and to Busacker and Gowen (1961). It
uses flow augmentation along paths in an auxiliary network that, depending on the
current flow, can be constructed from the original one.
Tomizawa [29] noted that shifting from the original costs of the assignment
problem to the (non-negative) reduced costs allows the algorithm of Dijkstra [12] to
solve the shortest path problems. With O(n) flow augmentations, this leads to an
O(n^3) computational complexity of the algorithm. The theoretical improvements for
minimum cost flow algorithms of Edmonds and Karp [13] can also be applied to the
assignment problem. The resulting method is equivalent to that of Tomizawa.
In the original version of Tomizawa an augmentation step consists of finding a
shortest augmenting path with both initial row and final column specified.

Following Tabourier [28], Dorhout [10, 11] shows computational advantages in
leaving a choice for the final column.
The following sections are devoted to the steps of shortest augmenting path LAP
algorithms:
Step 1: Initialization.
Step 2: Termination, if all rows are assigned.
Step 3: Augmentation.
        Construct the auxiliary network and determine from an unassigned row i
        to an unassigned column j an alternating path of minimal total reduced
        cost, and use it to augment the solution.
Step 4: Adjust the dual solution to restore complementary slackness.
        Go to step 2.

We discuss initialization strategies in the next section. How to modify shortest path
algorithms for assignment augmentation is the subject of Section 5. Section 6
describes a simple way to adjust row and column prices after shortest path
augmentation.

4. The Initialization Phase

A standard method of initialization in LAP algorithms is column and row reduction.
For each column j a row index i* is determined with minimum c[i,j] (i = 1 ... n), v[j]
is set to c[i*,j] and, if i* is unassigned, j is assigned to i*. Next, one finds for every
unassigned row i the column j* with c[i,j] - v[j] (j = 1 ... n) minimum and assigns
j*, if unassigned, to row i.
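This standard column and row reduction can be sketched in Python as follows (our transliteration, not the paper's Pascal; 0-based indices, with -1 marking "unassigned" instead of the paper's 0):

```python
def column_row_reduction(c):
    """Standard LAP initialization: column reduction, then row reduction."""
    n = len(c)
    x = [-1] * n            # column assigned to row i (-1 = unassigned)
    y = [-1] * n            # row assigned to column j
    v = [0] * n             # column prices (dual variables)
    for j in range(n):      # column reduction: v[j] = column minimum
        i_star = min(range(n), key=lambda i: c[i][j])
        v[j] = c[i_star][j]
        if x[i_star] == -1:
            x[i_star] = j
            y[j] = i_star
    for i in range(n):      # row reduction for still-unassigned rows
        if x[i] == -1:
            j_star = min(range(n), key=lambda j: c[i][j] - v[j])
            if y[j_star] == -1:
                x[i] = j_star
                y[j_star] = i
    return x, y, v

# small illustrative cost matrix (ours)
c = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]
x, y, v = column_row_reduction(c)
```

On this instance the initialization alone already yields a complete (and here optimal) assignment; in general some rows remain unassigned and are handled by the augmentation phase.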
In our assignment algorithm the initialization is primarily aimed at reaching a high
initial reduction of the costs matrix. It consists of three procedures, discussed below:
- reduction of columns,
- reduction transfer from unassigned to assigned rows,
- augmenting reduction of unassigned rows.

The first procedure is standard column reduction. An implementation detail is
considering the columns in reverse order, so that the low indexed columns are most
likely to remain unassigned. As a consequence, if the minimum reduced costs in a row
occur at an unassigned column, this column is automatically selected as the first in
which the minimum occurs.
The second procedure is reduction transfer. Its objective is to further reduce assigned
rows, but it has no direct net effect on the reduction sum. Afterwards a higher
reduction sum may be obtained when unassigned rows are reduced.
Consider a row i assigned to a column, say k. By sufficiently decreasing the price of
column k, row i can be reduced by the current second minimum of the reduced costs.
This additional reduction of assigned rows leads to an increase of some reduced
costs in unassigned rows, so that these may later be reduced further. The effect of the
procedure is twofold, as assigned columns are made more expensive relative to the
unassigned ones. Bertsekas [3] calls this "outpricing" of assigned columns. In
general the shortest paths in the augmentation phase will now reach some
unassigned column earlier.
The straightforward procedure for reduction transfer is given in Fig. 1. For clarity
we have not taken into account that at the start of the procedure all u [i] have value
zero. Furthermore, one can keep track during the column reduction for which rows
reduction transfer will certainly be useless.

procedure REDUCTION TRANSFER;
begin
  for each assigned row i do
  begin
    j1 := x[i]; µ := min {c[i,j] - v[j] | j = 1 ... n, j <> j1};
    v[j1] := v[j1] - (µ - u[i]); u[i] := µ
  end
end.

Fig. 1. Procedure for reduction transfer

Clearly, reduction transfer may also be applied in the course of the augmentation
phase. Computational experiments showed that this does not improve the
performance of the algorithm LAPJV (Section 7).
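In Python the transfer of Fig. 1 might look as follows (our sketch; as in the figure, u starts at zero, and -1 marks an unassigned row; the starting state below is what column reduction produces on a small 3 x 3 example of ours):

```python
def reduction_transfer(c, x, u, v):
    """Transfer reduction from each assigned row i to its column x[i] (cf. Fig. 1)."""
    n = len(c)
    for i in range(n):
        if x[i] != -1:                      # assigned rows only
            j1 = x[i]
            mu = min(c[i][j] - v[j] for j in range(n) if j != j1)
            v[j1] -= mu - u[i]              # make column j1 cheaper ...
            u[i] = mu                       # ... and row i more reduced

# illustrative state after column reduction (row 0 still unassigned)
c = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]
x = [-1, 0, 2]
u, v = [0, 0, 0], [2, 0, 2]
before = sum(u) + sum(v)
reduction_transfer(c, x, u, v)
```

The reduction sum Σu + Σv is unchanged, matching the remark above that the procedure has no direct net effect on it, while the reduced costs stay non-negative.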
Augmenting row reduction is the third initialization procedure. An attempt is made
to find augmenting paths starting in unassigned rows, to which at the same time
reduction is transferred. In the process assigned columns remain so, but rows may
become assigned, unassigned or reassigned.

procedure AUGMENTING ROW REDUCTION;
begin
  LIST := {all unassigned rows};
  for all i ∈ LIST do
  repeat
    u1 := min {c[i,j] - v[j] | j = 1 ... n};
    select j1 with c[i,j1] - v[j1] = u1;
    u2 := min {c[i,j] - v[j] | j = 1 ... n, j <> j1};
    select j2 with c[i,j2] - v[j2] = u2 and j2 <> j1;
    u[i] := u2;
    if u1 < u2 then v[j1] := v[j1] - (u2 - u1)
    else if j1 is assigned then j1 := j2;
    k := y[j1]; if k > 0 then x[k] := 0; x[i] := j1; y[j1] := i; i := k
  until u1 = u2 (* no reduction transfer *) or k = 0 (* augmentation *)
end.

Fig. 2. Procedure for augmenting row reduction

The procedure is given in Fig. 2. In each step of the for-loop an alternating path is
started from an unassigned row, say i. Consider the column j where the minimum
reduced costs in row i occur. If j is unassigned, the alternating path leads to
augmentation of the solution. If not, the path is extended by reassigning column j to
row i, thus unassigning the row k previously assigned to column j. Clearly, row i can
now be reduced by its minimum reduced costs, but if the second minimum is higher,
reduction transfer is possible by increasing the elements in column j. If so, we next
consider row k, as for this row the minimum reduced costs may now occur in a
column different from j. If this is the case, the alternating path is extended as before.
If not, and furthermore the minimum is still unique, the path may even reverse
direction, and be extended by rows and columns visited before. The process is
continued until either an additional assignment is found, or no reduction transfer
takes place.
The two previous reduction procedures both have computational complexity O(n^2).
It can be shown that for augmenting reduction at most O(n^2 · R) steps are taken,
with R the range of the cost coefficients. With each step involving O(n) operations,
the complexity is at most O(n^3 · R). A different argument is as follows. We determine
O(n) alternating paths. Each extension of a path takes O(n) operations. So, an O(n^3)
computational complexity is obtained by simply allowing no more than n path
extensions. In practice the procedure never even approaches this number of
extensions.
Computational experiments show best results when the procedure of augmenting
row reduction is performed twice.
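A Python sketch of the procedure of Fig. 2 (ours, not the paper's code; -1 again marks "unassigned", ties are broken by lowest index, and the starting state below is an illustrative post-column-reduction state):

```python
def augmenting_row_reduction(c, x, y, u, v, free):
    """Alternating paths from unassigned rows with reduction transfer (cf. Fig. 2)."""
    n = len(c)
    for i in list(free):
        while True:
            red = [c[i][j] - v[j] for j in range(n)]
            j1 = min(range(n), key=lambda j: red[j])          # first minimum
            u1 = red[j1]
            j2 = min((j for j in range(n) if j != j1), key=lambda j: red[j])
            u2 = red[j2]                                      # second minimum
            u[i] = u2
            if u1 < u2:
                v[j1] -= u2 - u1                              # reduction transfer
            elif y[j1] != -1:                                 # tie: prefer j2
                j1 = j2
            k = y[j1]
            if k != -1:
                x[k] = -1                                     # unassign row k
            x[i], y[j1] = j1, i
            if u1 == u2 or k == -1:     # no transfer, or augmentation
                break
            i = k                       # continue the path from the bumped row

c = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]
x, y, u, v = [-1, 0, 2], [1, -1, 2], [0, 0, 0], [2, 0, 2]
augmenting_row_reduction(c, x, y, u, v, free=[0])
```

On this instance the single free row reaches an unassigned column immediately, so the sketch augments without any reassignment; larger instances exercise the alternating-path extension.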
The advantages of this initialization strategy over standard column and row
reduction amply compensate the additional computation time. On full density
problems with n = 100 average total time has been decreased by 10% (on cost range
1 - 100) to 18% (on 1 - 1000). This initialization phase takes 60% to 70% of total run
time, whereas simple column and row reduction would take about 20%. We expect
the effect of these initialization routines to be larger on augmentation approaches
less efficient than ours. Table 1 illustrates the advantages. It gives the average
reduction sum (the value of the current dual solution) and the average number of
assignments in the partial primal solution. These figures indicate how much effort
must still be put in the augmentation phase of the algorithm. The increased average
numbers of zero reduced costs coefficients suggest that this method is also useful for
assignment algorithms based on maximal flow.

Table 1. Column and row reduction compared to the initialization in LAPJV
(averages for 25 problems of each type with n = 100)

                                         column and       initialization
                          cost range     row reduction    in LAPJV

reduction sum             1 - 100        87.2             96.6
in % of optimum           1 - 1000       87.2             98.0
number of                 1 - 100        75               90
assignments               1 - 1000       75               95
number of zero reduced    1 - 100        188              205
cost coefficients         1 - 1000       142              162

5. The Augmentation Phase

Augmentation starts by finding an alternating path. This is a sequence of,
alternately, row and column indices, with the first an unassigned row, the last a
column, and the intermediate columns and rows assigned in successive pairs. If the
final column is unassigned, augmentation of a partial solution can take place along
such a path by assigning all rows in the path to their succeeding column, which
results in one more assignment.
Augmentation in shortest path based algorithms (step 3 in Section 3) can best be
described without direct reference to the underlying minimum cost flow problem. It
requires only a simple modification of the shortest path method of Dijkstra [12]. In
Fig. 3 we give two procedures. Dijkstra's algorithm SHPATH1 determines a
shortest path tree rooted in node kk and traceable in the pred-array. The set A[i]
contains all nodes j for which arc (i,j) exists. Procedure SHPATH-AUGMENT is
its modification for the assignment problem, which determines the shortest
augmenting path for one additional assignment in row kk.

procedure SHPATH1(kk);
begin
  TOSCAN := {1 ... n} - {kk}; for j := 1 ... n do d[j] := ∞;
  i := kk; d[kk] := 0; µ := 0;
  repeat
    for all j ∈ A[i] ∩ TOSCAN do
      if µ + c[i,j] < d[j] then begin d[j] := µ + c[i,j]; pred[j] := i end;
    µ := ∞;
    for all j ∈ TOSCAN do if d[j] < µ then begin µ := d[j]; i := j end;
    TOSCAN := TOSCAN - {i}
  until TOSCAN = { }
end.

procedure SHPATH-AUGMENT(kk);
begin
  TOSCAN := {1 ... n}; for j := 1 ... n do d[j] := ∞;
  i := kk; d[kk] := 0; µ := 0;
  repeat
    for all j ∈ A[i] ∩ TOSCAN do
      if µ + cred[i,j] < d[j] then begin d[j] := µ + cred[i,j]; pred[j] := i end;
    µ := ∞;
    for all j ∈ TOSCAN do if d[j] < µ then begin µ := d[j]; µj := j end;
    i := y[µj]; TOSCAN := TOSCAN - {µj}
  until y[µj] = 0;
  <augment along the path from column µj to row kk>
end.

Fig. 3. A shortest path algorithm according to Dijkstra and a modified version for augmentation in
assignment algorithms

This procedure only describes the method. For the actual implementation an
adapted version of Dijkstra's shortest path algorithm as in Fig. 4 is to be preferred.
The shortest path algorithms in Figs. 3 and 4 differ in the use of a set SCAN,
containing all rows that can be scanned for the current minimum d-value (the
variable µ). The set may be updated while scanning, and (relatively expensive)
searching for a new value of the minimum does not take place until SCAN is empty.
Especially for sparse networks this leads to substantially lower computation times.

procedure SHPATH2(kk);
begin
  TOSCAN := {1 ... n} - {kk}; for j := 1 ... n do d[j] := ∞;
  d[kk] := 0; SCAN := {kk}; µ := 0;
  repeat
    select any i ∈ SCAN; SCAN := SCAN - {i};
    for all j ∈ A[i] ∩ TOSCAN do if µ + c[i,j] < d[j] then
    begin
      d[j] := µ + c[i,j]; pred[j] := i;
      if d[j] = µ then begin SCAN := SCAN + {j}; TOSCAN := TOSCAN - {j} end
    end;
    if SCAN = { } then
    begin
      µ := min {d[j] | j ∈ TOSCAN, d[j] > µ};
      for j ∈ TOSCAN do if d[j] = µ then SCAN := SCAN + {j};
      TOSCAN := TOSCAN - SCAN
    end
  until SCAN = { }
end.

Fig. 4. Modified version of Dijkstra's shortest path method, basis for improved augmentation
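To make the SCAN/TOSCAN mechanism concrete, here is a Python sketch of a SHPATH2-style Dijkstra (ours; `arcs` is a hypothetical adjacency structure mapping a node to a list of (target, cost) pairs with non-negative costs, and unreachable nodes keep distance ∞):

```python
def shpath2(n, arcs, kk):
    """Shortest path tree from kk, scanning a bucket SCAN of nodes whose
    distance equals the current minimum mu (cf. Fig. 4)."""
    INF = float('inf')
    d = [INF] * n
    pred = [None] * n
    d[kk] = 0
    mu = 0
    scan = {kk}
    toscan = set(range(n)) - {kk}
    while scan:
        i = scan.pop()                      # d[i] == mu for every i in SCAN
        for j, cij in arcs.get(i, []):
            if j in toscan and mu + cij < d[j]:
                d[j] = mu + cij
                pred[j] = i
                if d[j] == mu:              # zero-cost arc: joins current bucket
                    scan.add(j)
                    toscan.discard(j)
        if not scan:                        # bucket exhausted: find next minimum
            finite = [d[j] for j in toscan if d[j] < INF]
            if not finite:
                break                       # remaining nodes are unreachable
            mu = min(finite)
            scan = {j for j in toscan if d[j] == mu}
            toscan -= scan
    return d, pred

# small illustrative digraph (ours): 0->1 (2), 0->2 (5), 1->2 (1), 1->3 (9), 2->3 (2)
arcs = {0: [(1, 2), (2, 5)], 1: [(2, 1), (3, 9)], 2: [(3, 2)]}
d, pred = shpath2(4, arcs, 0)
```

The expensive search for the next minimum runs only when the bucket empties, which is the point of the SCAN set.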

Augmentation in our LAP algorithm is given by the procedure AUGMENT in
Fig. 5, which is based on SHPATH2. Some minor improvements have been made.
Instead of a list SCAN containing rows, a list of columns facilitates updating of
column prices. Furthermore, we do not keep and update row prices. These can be
determined easily when needed due to complementary slackness. Updating of
column prices takes place as will be discussed in Section 6.
The column sets READY, SCAN and TODO are mutually disjoint, and
READY ∪ SCAN ∪ TODO = {1 ... n}. So in the code one array COL of length n is
sufficient, with the elements of READY kept in front, just before the elements of
SCAN. This set is scanned first-in-first-out, so its elements transfer automatically
to READY. The remaining places in the array are used for TODO.
Just like the initialization, the augmentation phase has computational complexity
O(n^3). So this also holds for the entire algorithm. As for the memory requirements:
the full density version uses a costs matrix and eight arrays of n elements.
Derigs and Metz [8] investigated implementations of Dijkstra's shortest path
algorithm in assignment methods. The fastest implementation turned out to be very
similar to ours. They discovered that in their algorithms it is advantageous to
determine complete shortest path trees, so that more than one augmenting path per
iteration may be found. Our algorithm is faster when in each iteration only one
augmenting path is determined, which is probably due to the extensive initialization
procedures.

procedure AUGMENT;
begin
  for all unassigned i* do
  begin
    for j := 1 ... n do begin d[j] := c[i*,j] - v[j]; pred[j] := i* end;
    READY := { }; SCAN := { }; TODO := {1 ... n};
    repeat
      if SCAN = { } then
      begin
        µ := min {d[j] | j ∈ TODO}; SCAN := {j | d[j] = µ}; TODO := TODO - SCAN;
        for all j ∈ SCAN do if y[j] = 0 then go to augment
      end;
      select any j* ∈ SCAN; i := y[j*]; SCAN := SCAN - {j*}; READY := READY + {j*};
      for all j ∈ TODO do if µ + cred[i,j] < d[j] then
      begin
        d[j] := µ + cred[i,j]; pred[j] := i;
        if d[j] = µ then
          if y[j] = 0 then go to augment
          else begin SCAN := SCAN + {j}; TODO := TODO - {j} end
      end
    until false; (* repeat always ends with go to augment *)
  augment:
    (* price updating *)
    for all k ∈ READY do v[k] := v[k] + d[k] - µ;
    (* augmentation *)
    repeat
      i := pred[j]; y[j] := i; k := j; j := x[i]; x[i] := k
    until i = i*
  end
end.

Fig. 5. The procedure A U G M E N T for augmentation in the algorithm L A P J V
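A compact Python sketch of this augmentation phase (ours: a plain O(n^3) version that follows the logic of Fig. 5 without the col-array bookkeeping and without the initialization routines, starting from zero prices and an empty assignment):

```python
def lap_shortest_augmenting_paths(c):
    """Solve a dense LAP by n shortest augmenting path steps over reduced costs."""
    n = len(c)
    v = [0] * n                 # column prices
    x = [-1] * n                # column assigned to row
    y = [-1] * n                # row assigned to column
    for i_star in range(n):     # one augmentation per free row
        d = [c[i_star][j] - v[j] for j in range(n)]
        pred = [i_star] * n
        done = [False] * n
        while True:
            mu, j1 = min((d[j], j) for j in range(n) if not done[j])
            done[j1] = True
            if y[j1] == -1:     # unassigned column reached: augment
                break
            i = y[j1]
            u1 = c[i][j1] - v[j1] - mu
            for j in range(n):  # relax over row i, as in Fig. 5
                if not done[j] and c[i][j] - v[j] - u1 < d[j]:
                    d[j] = c[i][j] - v[j] - u1
                    pred[j] = i
        for j in range(n):      # price updating (d[j] <= mu for scanned columns)
            if done[j]:
                v[j] += d[j] - mu
        while True:             # augment along the alternating path to i_star
            i = pred[j1]
            y[j1] = i
            x[i], j1 = j1, x[i]
            if i == i_star:
                break
    u = [c[i][x[i]] - v[x[i]] for i in range(n)]
    total = sum(c[i][x[i]] for i in range(n))
    return x, y, u, v, total

c = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]
x, y, u, v, total = lap_shortest_augmenting_paths(c)
```

The returned duals are feasible and their sum equals the primal cost, which is the optimality certificate discussed in Section 1.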

We experimented with augmentation based on the new shortest path methods from
Glover et al. [15, 16]. However, we did not find a faster procedure than one based on
SHPATH2. Carraresi and Sodini [7] report very good results with an algorithm
based on these shortest path methods. It must be noted that this LAP algorithm
performs well only on (very) sparse problems. Karp [21] also improved shortest
path based algorithms, but his modifications are more theoretically interesting than
practically. By using priority queues, he reduced expected running time for the LAP
to O(n^2 · log n).

6. Adjustment of the Dual Solution

After augmentation of a partial assignment the values of the dual variables must be
updated to restore complementary slackness, that is,

c[i,k] - u[i] - v[k] = 0, if x[i] = k (i = 1 ... n),    (1)
c[i,j] - u[i] - v[j] ≥ 0 (i,j = 1 ... n).    (2)

Substituting the values u[i] from (1) in (2) leads to:

c[i,k] - v[k] ≤ c[i,j] - v[j] (j = 1 ... n).

So, all assignments in the (partial) solution must correspond to row minima in the
reduced costs matrix. This simple observation is useful in the update step of shortest
path based algorithms. The price v[k] of every assigned column k (with y[k] = i)
must be adjusted so that

c[i,k] - v[k] = min {c[i,j] - v[j] | j = 1 ... n}.

In the procedure AUGMENT (Fig. 5) this is achieved just before augmentation
actually takes place. Where necessary, column entries in the reduced costs matrix
are increased by decreasing the current column prices v[j] (j = 1 ... n) by µ - d[j] if
d[j] < µ, so that µ is the minimum value in row kk. The corresponding values of u[i]
with i = y[j] must be increased by the same amount. The d-values and µ are
obtained from the modified shortest path procedure, clarifying the role of the d[j]
(j = 1 ... n) in the procedures SHPATH-AUGMENT and AUGMENT.
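As a quick numeric check of this row-minima condition (the values below come from a small worked example of ours, not from the paper): with the matrix c, the 0-based assignment x = [1, 0, 2] and column prices v = [0, -2, 0] are optimal, and every assigned entry attains the row minimum of c[i,j] - v[j]:

```python
c = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]
x = [1, 0, 2]          # column assigned to each row (0-based)
v = [0, -2, 0]         # column prices after the final augmentation

# every assignment corresponds to a row minimum in the reduced costs matrix
row_min = [min(c[i][j] - v[j] for j in range(3)) for i in range(3)]
slack_ok = all(c[i][x[i]] - v[x[i]] == row_min[i] for i in range(3))
```

Setting u[i] to row_min[i] then makes (1) and (2) hold simultaneously.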

7. A Pascal Implementation

In Fig. 6 we present a Pascal code for the algorithm from the previous sections. We
suppose integer valued costs c[i,j] (i,j = 1 ... n), but the code is easily adapted for a
real valued costs matrix. The type "mat" is an integer n x n matrix, and the type
"vec" an integer array of length n.
Listings of the Fortran code for dense LAPs and of the Pascal and Fortran code for
sparse LAPs can be obtained from the authors on request.

function LAPJV(n: integer; c: mat; var x, y, u, v: vec): integer;
{n: problem size;
 c: costs matrix;
 x: columns assigned to rows;
 y: rows assigned to columns;
 u: dual row variables;
 v: dual column variables}
label augment;
const inf = 1000000; {inf is a suitably large number}
var f, h, i, j, k, cnt, f0, i1, j1, j2, u1, u2, min, last, low, up: integer;
    col, d, free, pred: vec;
{col: array of columns, scanned (k = 1 ... low-1),
 labeled and unscanned (k = low ... up-1),
 unlabeled (k = up ... n);
 d: shortest path lengths;
 free: unassigned rows (number f0, index f);
 pred: predecessor-array for shortest path tree;
 i, i1: row indices; j, j1, j2: column indices;
 last: last column in col-array with d[j] < min}
begin
  for i := 1 to n do x[i] := 0;
  for j := n downto 1 do                     {COLUMN REDUCTION}
  begin
    col[j] := j; h := c[1,j]; i1 := 1;
    for i := 2 to n do if c[i,j] < h then begin h := c[i,j]; i1 := i end;
    v[j] := h;
    if x[i1] = 0 then begin x[i1] := j; y[j] := i1 end
    else begin x[i1] := -abs(x[i1]); y[j] := 0 end
  end;
  f := 0;                                    {REDUCTION TRANSFER}
  for i := 1 to n do
    if x[i] = 0 then                         {unassigned row in free-array}
      begin f := f + 1; free[f] := i end
    else if x[i] < 0 then                    {no reduction transfer possible}
      x[i] := -x[i]
    else                                     {reduction transfer from assigned row}
    begin
      j1 := x[i]; min := inf;
      for j := 1 to n do if j <> j1 then
        if c[i,j] - v[j] < min then min := c[i,j] - v[j];
      v[j1] := v[j1] - min
    end;
  cnt := 0;                                  {AUGMENTING ROW REDUCTION}
  repeat
    k := 1; f0 := f; f := 0;
    while k <= f0 do
    begin
      i := free[k]; k := k + 1; u1 := c[i,1] - v[1]; j1 := 1; u2 := inf;
      for j := 2 to n do
      begin
        h := c[i,j] - v[j];
        if h < u2 then
          if h >= u1 then begin u2 := h; j2 := j end
          else begin u2 := u1; u1 := h; j2 := j1; j1 := j end
      end;
      i1 := y[j1];
      if u1 < u2 then v[j1] := v[j1] - u2 + u1
      else if i1 > 0 then begin j1 := j2; i1 := y[j1] end;
      if i1 > 0 then
        if u1 < u2 then begin k := k - 1; free[k] := i1 end
        else begin f := f + 1; free[f] := i1 end;
      x[i] := j1; y[j1] := i
    end;
    cnt := cnt + 1
  until cnt = 2;                             {routine applied twice}
  f0 := f;                                   {AUGMENTATION}
  for f := 1 to f0 do
  begin
    i1 := free[f]; low := 1; up := 1;        {initialize d- and pred-array}
    for j := 1 to n do begin d[j] := c[i1,j] - v[j]; pred[j] := i1 end;
    repeat
      if up = low then                       {find columns with new value for minimum d}
      begin
        last := low - 1; min := d[col[up]]; up := up + 1;
        for k := up to n do
        begin
          j := col[k]; h := d[j];
          if h <= min then
          begin
            if h < min then begin up := low; min := h end;
            col[k] := col[up]; col[up] := j; up := up + 1
          end
        end;
        for h := low to up - 1 do
          begin j := col[h]; if y[j] = 0 then goto augment end
      end; {up = low}
      j1 := col[low]; low := low + 1; i := y[j1];  {scan a row}
      u1 := c[i,j1] - v[j1] - min;
      for k := up to n do
      begin
        j := col[k]; h := c[i,j] - v[j] - u1;
        if h < d[j] then
        begin
          d[j] := h; pred[j] := i;
          if h = min then
            if y[j] = 0 then goto augment
            else begin col[k] := col[up]; col[up] := j; up := up + 1 end
        end
      end
    until false;                             {repeat ends with goto augment}
  augment:
    for k := 1 to last do                    {updating of column prices}
      begin j1 := col[k]; v[j1] := v[j1] + d[j1] - min end;
    repeat                                   {augmentation}
      i := pred[j]; y[j] := i; k := j; j := x[i]; x[i] := k
    until i = i1
  end; {of augmentation}
  h := 0;                                    {DETERMINE ROW PRICES AND OPTIMAL VALUE}
  for i := 1 to n do begin j := x[i]; u[i] := c[i,j] - v[j]; h := h + u[i] + v[j] end;
  lapjv := h
end.

Fig. 6. Pascal function for the linear assignment algorithm LAPJV

8. Computational Results

We compare our algorithm LAPJV with several other methods, both on dense and
on sparse problems. The computational results are for Fortran codes run on a CDC
Cyber 750 (with OPT = 2), but the algorithms were developed in Pascal on personal
computers (Apricot PC and F1, Olivetti M24). Running times on these computers
are typically between 2 and 4 seconds for full dense problems of size 100, using
Borland's Turbo Pascal compiler (version 3.0).

The classical Hungarian code of Silver [27] is an intermediary for a comparison with
the shortest path based algorithm of Hung and Rom [18]. They give the ratio of
their computing times to those of the Silver code translated into Fortran. Dividing
the same ratios for LAPJV by those published by Hung and Rom shows that on
problems of full density our algorithm is about twice as fast (Table 2).

Table 2. Computation times of LAPJV divided by those of Hung and Rom

                   range of cost coefficients
problem size       1 - 100     1 - 1000     1 - 10000

50                 .34         .44          .72
100                .31         .41          .52

We left primal simplex methods out of consideration, as the literature shows that these are outperformed by several other methods. The algorithm of Hung and Rom is "about twice as fast" as that of Barr, Glover and Klingman [2], which is one of the best primal simplex methods; Glover confirmed this in a private communication. Carpaneto and Toth [6] compare the same primal simplex method with their Hungarian code for sparse problems SPASS and with a Tomizawa based method, adapted from a code in Burkard and Derigs [4]. Both algorithms are faster by some margin. Finally, the Hungarian method as implemented by McGinnis [25] is "roughly comparable in solution speed", but in an addendum he improves his method to a much faster one.
The computation times of LAPJV in Table 3 are averages over ten full density problems, compared to those of the algorithms:
- ASSCT: O(n^4) Hungarian method coded by Carpaneto and Toth [5], making extensive use of pointer techniques to locate zero valued reduced costs,
- LSAP: Dorhout's improved version of Tomizawa's algorithm [10, 11], translated into Fortran and published by Burkard and Derigs [4],
- ASSIGN: the algorithm of Bertsekas [3], as made available by the author and adapted for full density cost matrices.

Table 3. Computation times for assignment problems of full density (in ms)

  cost range   problem size    ASSCT    LSAP    ASSIGN    LAPJV
  1 - 100           50            51      32        22       15
                   100           149     168       114       64
                   150           283     535       256      179
                   200           420    1363       520      323
  1 - 1000          50           121      30        24       17
                   100           637     165       123       77
                   150          1447     456       364      225
                   200          2217     850       665      406
  1 - 10000         50           145      31        28       25
                   100          1085     168       134      103
                   150          3562     453       410      259
                   200          6989     919       779      456

As usual with pure Hungarian methods, ASSCT turns out to be very sensitive to the cost range. Only for small cost ranges does the use of pointer techniques make the algorithm competitive. LSAP performs surprisingly poorly on the cost range 1 - 100, with relatively large computation times, as also observed by Derigs and Metz [9]. LAPJV is clearly faster than ASSIGN, and less sensitive to the range of the cost coefficients.
For a comparison on sparse problems we adapted the data structure of LAPJV, yielding LAPJVsp. Average computation times (over ten problems) are compared in Table 4 with those of two algorithms for sparse LAPs:
- SPASS: Lawler's O(n^3) Hungarian method [23] coded by Carpaneto and Toth [6],
- ASSIGN: again Bertsekas' algorithm [3] as provided to us (ASSIGN requires too much memory for LAPs with n = 400 and 20% density of the cost matrix).
An indirect comparison may be made with the code SPTM of Carraresi and Sodini [7] for sparse LAPs. The SPTM computation times in Table 4 have been obtained by multiplying the SPASS times by the ratios calculated from [7]. ASSIGN and LAPJVsp clearly outperform the Hungarian method. The margin in speed of LAPJVsp over ASSIGN is about the same as on full density problems.

Table 4. Computation times for sparse assignment problems (in ms) ("." indicates not run or not known)

  density / cost range   problem size   SPASS   ASSIGN    SPTM   LAPJVsp
  5% / 1 - 100               100           65       38       .        19
                             200          211      110      67        62
                             400          713      451     335       253
  5% / 1 - 1000              100           82       50       .        25
                             200          361      174     113        81
                             400         1553      688     356       333
  20% / 1 - 100              100           71       54       .        26
                             200          304      290     234       154
                             400         1046        .    1029       657
  20% / 1 - 1000             100          119       69       .        33
                             200          576      384     253       188
                             400         2123        .    1039       788
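The paper does not spell out the sparse data structure used in LAPJVsp. A common choice in sparse LAP codes is a compressed-row store of the cost matrix, so that scanning a row touches only its admissible arcs; the sketch below (Python, not the authors' Pascal, and purely illustrative) shows the idea:

```python
# A compressed-row (CSR-like) store for a sparse assignment cost matrix:
# the admissible columns and costs of row i sit in
# kk[first[i] : first[i+1]] and cc[first[i] : first[i+1]].
# Illustrative sketch only; not the published LAPJVsp data structure.

def build_sparse(dense, forbidden):
    """Pack a dense matrix, skipping entries equal to `forbidden`."""
    first, kk, cc = [0], [], []
    for row in dense:
        for j, c in enumerate(row):
            if c != forbidden:
                kk.append(j)   # column index of this arc
                cc.append(c)   # its cost
        first.append(len(kk))  # row i's arcs end where row i+1's begin
    return first, kk, cc

def row_arcs(first, kk, cc, i):
    """All (column, cost) pairs stored for row i."""
    return [(kk[t], cc[t]) for t in range(first[i], first[i + 1])]

INF = 10**9  # stands in for a forbidden assignment
first, kk, cc = build_sparse([[3, INF, 5], [INF, 2, INF]], INF)
print(row_arcs(first, kk, cc, 0))  # [(0, 3), (2, 5)]
```

With this layout the Dijkstra-like column scans of the shortest augmenting path method cost time proportional to the number of stored arcs per row rather than to n.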

We may also compare results with the shortest augmenting path algorithms for sparse LAPs of Derigs and Metz [8]. Their best code SAPM3 is faster than SPASS: 40% to 50% on problems with n = 200, about 5% density, and on cost ranges 1 - 100 and 1 - 1000. This is substantially slower than LAPJVsp. Consequently, the code LAPJVsp will provide a better basis for an in-core/out-of-core algorithm for full density LAPs as proposed by Derigs and Metz [9]. Such a code solves a sparse version of full density problems that cannot be entirely contained in memory. It contains a procedure to check whether the assignments not considered price out correctly, and a reoptimization procedure for use if they do not. An in-core/out-of-core code can also be used as an all in-core code. The advantage of solving only sparse problems has to be weighed against the effort to construct the sparse problems. Unfortunately this task is computationally non-trivial, leading to about the same total computation times as for LAPJV.
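The pricing-out check rests on LAP duality: given row duals u and column duals v from the sparse solve, the sparse optimum is optimal for the full problem exactly when every omitted coefficient has nonnegative reduced cost c[i][j] - u[i] - v[j]. A hedged sketch (Python; the function name and interface are illustrative, not from the Derigs-Metz code):

```python
def prices_out(cost, u, v, kept):
    """Return the arcs outside the sparse subproblem whose reduced cost
    c[i][j] - u[i] - v[j] is negative. `kept[i]` is the set of columns
    included in the sparse problem for row i. An empty result means the
    sparse optimum is optimal for the full problem; otherwise the
    violating arcs would be added and the solution reoptimized."""
    violations = []
    n = len(cost)
    for i in range(n):
        for j in range(n):
            if j not in kept[i] and cost[i][j] - u[i] - v[j] < 0:
                violations.append((i, j))
    return violations

# Tiny illustration with made-up duals: the sparse problem kept only the
# diagonal, and both omitted arcs price out correctly.
cost = [[4, 9], [6, 3]]
u, v = [4, 3], [0, 0]
kept = [{0}, {1}]
print(prices_out(cost, u, v, kept))  # []
```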

9. Conclusions

The computational results show that the average computation times of the algorithm LAPJV are uniformly lower than those of the best other algorithms. The code is of moderate size, and the memory requirements are small. The algorithm is suited for both dense and sparse assignment problems, and its sensitivity to the cost range is relatively low.

References

[1] Balinski, M. L.: Signature methods for the assignment problem. Operations Research 33, 527-536 (1985).
[2] Barr, R., Glover, F., Klingman, D.: The alternating path basis algorithm for assignment problems. Mathematical Programming 13, 1-13 (1977).
[3] Bertsekas, D. P.: A new algorithm for the linear assignment problem. Mathematical Programming 21, 152-171 (1981).
[4] Burkard, R. E., Derigs, U.: Assignment and Matching Problems: Solution Methods with Fortran Programs, pp. 1-11. Berlin-Heidelberg-New York: Springer 1980.
[5] Carpaneto, G., Toth, P.: Algorithm 548 (solution of the assignment problem). ACM Transactions on Mathematical Software 6, 104-111 (1980).
[6] Carpaneto, G., Toth, P.: Algorithm 50: algorithm for the solution of the assignment problem for sparse matrices. Computing 31, 83-94 (1983).
[7] Carraresi, P., Sodini, C.: An efficient algorithm for the bipartite matching problem. European Journal of Operational Research 23, 86-93 (1986).
[8] Derigs, U., Metz, A.: An efficient labeling technique for solving sparse assignment problems. Computing 36, 301-311 (1986).
[9] Derigs, U., Metz, A.: An in-core/out-of-core method for solving large scale assignment problems. Zeitschrift für Operations Research 30, 181-195 (1986).
[10] Dorhout, B.: Het Lineaire Toewijzingsprobleem: Vergelijking van Algorithmen. Rapport BN21/73, Mathematisch Centrum, Amsterdam (1973).
[11] Dorhout, B.: Experiments with some algorithms for the linear assignment problem. Report BW 39, Mathematisch Centrum, Amsterdam (1975).
[12] Dijkstra, E. W.: A note on two problems in connexion with graphs. Numerische Mathematik 1, 269-271 (1959).
[13] Edmonds, J., Karp, R. M.: Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM 19, 248-264 (1972).
[14] Ford jr., L. R., Fulkerson, D. R.: Flows in Networks. Princeton: Princeton University Press 1962.
[15] Glover, F., Klingman, D., Phillips, N.: A new polynomially bounded shortest path algorithm. Operations Research 33, 65-73 (1985).
[16] Glover, F., Klingman, D., Phillips, N., Schneider, R.: New polynomial shortest path algorithms and their computational attributes. Management Science 31, 1106-1128 (1985).
[17] Goldfarb, D.: Efficient dual simplex algorithms for the assignment problem. Mathematical Programming 33, 187-203 (1985).
[18] Hung, M. S., Rom, W. O.: Solving the assignment problem by relaxation. Operations Research 28, 969-982 (1980).
[19] Jonker, R.: Traveling salesman and assignment algorithms: design and implementation. Faculty of Actuarial Science and Econometrics, University of Amsterdam (1986).
[20] Jonker, R., Volgenant, A.: Improving the Hungarian assignment algorithm. Operations Research Letters 5, 171-175 (1986).
[21] Karp, R. M.: An algorithm to solve the m x n assignment problem in expected time O(mn log n). Networks 10, 143-152 (1980).
[22] Kuhn, H. W.: The Hungarian method for the assignment problem. Naval Research Logistics Quarterly 2, 83-97 (1955).
[23] Lawler, E. L.: Combinatorial Optimization: Networks and Matroids. New York: Holt, Rinehart & Winston 1976.
[24] Mack, C.: The Bradford method for the assignment problem. New Journal of Statistics and Operational Research 1, 17-29 (1969).
[25] McGinnis, L. F.: Implementation and testing of a primal-dual algorithm for the assignment problem. Operations Research 31, 277-291 (1983).
[26] Papadimitriou, C. H., Steiglitz, K.: Combinatorial Optimization: Algorithms and Complexity. Englewood Cliffs, N.J.: Prentice-Hall 1982.
[27] Silver, R.: An algorithm for the assignment problem. Communications of the ACM 3, 605-606 (1960).
[28] Tabourier, Y.: Un Algorithme pour le Problème d'Affectation. R.A.I.R.O. Recherche Opérationnelle/Operations Research 6, 3-15 (1972).
[29] Tomizawa, N.: On some techniques useful for the solution of transportation problems. Networks 1, 173-194 (1971).

Roy Jonker* and
Ton Volgenant
Faculty of Actuarial Sciences
and Econometrics
University of Amsterdam
Jodenbreestraat 23
1011 NH Amsterdam
The Netherlands

* Current address:
Koninklijke/Shell-Laboratorium
Amsterdam
The Netherlands
