totally unimodular matrices - Devi Ahilya …


Introduction

This dissertation is a reading of chapters 16 (Introduction to Integer Linear Programming) and 19 (Totally Unimodular Matrices: fundamental properties and examples) of the book Theory of Linear and Integer Programming, Alexander Schrijver, John Wiley & Sons, © 1986.

Chapter one is a collection of basic definitions (polyhedron, polyhedral cone, polytope, etc.) and the statement of the decomposition theorem for polyhedra.

Chapter two is "Introduction to Integer Linear Programming". A finite set of vectors a1, …, at is a Hilbert basis if each integral vector b in cone{a1, …, at} is a nonnegative integral combination of a1, …, at. We shall prove the Hilbert basis theorem: each rational polyhedral cone C is generated by an integral Hilbert basis. Further, an analogue of Caratheodory's theorem is proved: if a system a1x ≤ β1, …, amx ≤ βm has no integral solution, then there are 2^n or fewer constraints among these inequalities which already have no integral solution.

Chapter three contains some basic results on totally unimodular matrices. The main theorem is due to Hoffman and Kruskal: an integral matrix A is totally unimodular if and only if for each integral vector b the polyhedron {x | x ≥ 0; Ax ≤ b} is integral. Next, seven equivalent characterizations of total unimodularity are proved; these are due to Hoffman & Kruskal, Ghouila-Houri, Camion and R. E. Gomory. Basic examples of totally unimodular matrices are incidence matrices of bipartite graphs and of directed graphs, and network matrices. We prove the König-Egerváry theorem for bipartite graphs.


Chapter 1

Preliminaries

Definition 1.1: (Polyhedron) A polyhedron P ⊆ ℝ^n is a set of points that satisfies a finite number of linear inequalities, i.e., P = {x ∈ ℝ^n | Ax ≤ b}, where (A, b) is an m × (n + 1) matrix.

Definition 1.2: (Polyhedral Cone) A cone C is polyhedral if C = {x | Ax ≤ 0} for some

matrix A, i.e., C is the intersection of finitely many linear half spaces.

Definition 1.3: (Rational Polyhedral Cone) A cone C is rational polyhedral if

C = {x |Ax ≤ 0} for some rational matrix A.

Definition 1.4: (Characteristic Cone) The characteristic cone of P, denoted by char.cone P, is the polyhedral cone

char.cone P := {y | x + y ∈ P for all x in P} = {y | Ay ≤ 0} (1)

The nonzero vectors in char.cone P are called the infinite directions of P.

Definition 1.5: (Polytope) A bounded polyhedron is called a polytope.

Definition 1.6: (Pointed Cone) The linearity space of P is {y | Ay = 0}, which equals char.cone P ∩ (−char.cone P). Clearly it is a linear space, being the kernel of A. If the dimension of this space is zero, then P is called pointed.

Definition 1.7: (Characteristic Vector) Let S be a finite set. If T ⊆ S, the characteristic vector of T is the {0, 1}-vector in ℝ^S, denoted by χ_T, which satisfies χ_T(s) = 1 if s ∈ T and χ_T(s) = 0 if s ∈ S\T.

Theorem 1.8: (Farkas’-Minkowski-Weyl theorem) A convex cone is polyhedral if

and only if it is finitely generated.

Theorem 1.9: (Decomposition theorem for Polyhedra) P is a polyhedron in ℝ^n if and only if P = Q + C for some polytope Q and some polyhedral cone C.

Page 3: totally unimodular matrices - Devi Ahilya … unimodular...Basic examples of totally unimodular matrices are incidence matrices of bipartite graphs & Directed graphs and Network matrices.

3

Theorem 1.10: (Farkas' Lemma) Let A be a real m × n matrix and let c be a real nonzero n-vector. Then either the primal system

Ax ≤ 0 and c^T x > 0

has a solution x ∈ ℝ^n, or the dual system

A^T y = c and y ≥ 0

has a solution y ∈ ℝ^m, but never both.
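The alternative can be illustrated numerically on a small hand-picked instance (the matrix, vectors and grid below are ours, not the text's): for A the 2 × 2 identity and c = (1, 1), the dual system is solvable, so the primal system must not be.

```python
from itertools import product

# Hand-picked instance: A = 2x2 identity matrix, c = (1, 1).
A = [[1, 0], [0, 1]]
c = [1, 1]

def primal_has_solution(A, c, grid=range(-3, 4)):
    """Search a small integer grid for x with Ax <= 0 and c.x > 0."""
    for x in product(grid, repeat=len(c)):
        Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        if all(v <= 0 for v in Ax) and sum(ci * xi for ci, xi in zip(c, x)) > 0:
            return True
    return False

# The dual system A^T y = c, y >= 0 is solved by y = (1, 1):
y = [1, 1]
ATy = [sum(A[i][j] * y[i] for i in range(2)) for j in range(2)]

# hence, by Farkas' lemma, the primal system Ax <= 0, c.x > 0 has no
# solution at all (here Ax <= 0 forces x <= 0, so c.x <= 0); the grid
# search merely illustrates this.
primal_solvable = primal_has_solution(A, c)
```

The grid search inspects only finitely many points, so it illustrates rather than proves the primal infeasibility; the lemma guarantees it for all real x.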


Chapter 2

Integer Linear Programming

Definition 2.1: (Integer Linear Programming) Given a rational matrix A and rational vectors b and c, determine

max {cx | Ax ≤ b; x integral} (1)

Another definition is

Definition 2.2: Given a rational matrix A and rational vectors b and c, determine

max {cx | x ≥ 0; Ax = b; x integral} (2)

Remark 2.3: It is easy to see that we can pass from one formulation to the other.

Note 2.4: We have the duality relation:

max {cx | Ax ≤ b; x integral} ≤ min {yb | y ≥ 0, yA = c; y integral} (3)

[Since Ax ≤ b implies yAx ≤ yb for y ≥ 0, and cx = yAx when yA = c, so cx ≤ yb.]

We can have strict inequality. For example, take A = (2), b = (1), c = (1). The primal problem is

max {x | 2x ≤ 1; x integral}

Clearly the maximum is 0 [x ∈ {0, −1, −2, …}]. The dual is

min {y | y ≥ 0, 2y = 1; y integral}

which is infeasible, while the optima of the corresponding LP-relaxations are both 1/2.

Note 2.5: We may write an analogous statement to Farkas' lemma for integral solutions:


The rational system Ax = b has a nonnegative integral solution x if and only if yb is a nonnegative integer whenever yA is a nonnegative integral vector.

But this statement is not true. For example, take A = (2, 3), b = (1). The rational system is

2x1 + 3x2 = 1.

If yA = (2y, 3y) is a nonnegative integral vector, then y = 3y − 2y is a nonnegative integer, and hence so is yb = y. Yet 2x1 + 3x2 = 1 has no nonnegative integral solution, so the "if" direction fails.
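The counterexample can be checked mechanically; in the sketch below (variable names are ours) the integrality condition is sampled over small rationals y, while the failure of solvability is a finite search.

```python
from fractions import Fraction
from itertools import product

# For A = (2, 3), b = (1): whenever yA = (2y, 3y) is a nonnegative integral
# vector, y = 3y - 2y is a nonnegative integer, hence yb = y is one too.
ys = [Fraction(p, q) for p in range(-6, 7) for q in range(1, 7)]
necessity = all(y.denominator == 1
                for y in ys
                if (2 * y).denominator == 1 and (3 * y).denominator == 1
                and y >= 0)

# ...yet 2*x1 + 3*x2 = 1 has no solution in nonnegative integers:
has_nonneg_integral_solution = any(2 * x1 + 3 * x2 == 1
                                   for x1, x2 in product(range(5), repeat=2))
```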

Note 2.6: We restrict ourselves to integer linear programming with rational input data. Otherwise there may not be an optimum for the given problem. For example,

sup {ξ + η√2 | ξ + η√2 ≤ 1/2; ξ, η integers} = 1/2,

but no ξ, η attain the supremum (since √2 is irrational).
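A rough floating-point illustration of this (the ranges and names are ours): over a finite range of η, the largest value of ξ + η√2 not exceeding 1/2 stays strictly below 1/2 but improves as the range grows.

```python
import math

def best_value(N):
    """Largest xi + eta*sqrt(2) <= 1/2 over integers |eta| <= N, |xi| <= 200."""
    best = -math.inf
    for eta in range(-N, N + 1):
        for xi in range(-200, 201):
            v = xi + eta * math.sqrt(2)
            if v <= 0.5:
                best = max(best, v)
    return best

b10, b100 = best_value(10), best_value(100)
# b10 = -8 + 6*sqrt(2) ~ 0.4853; enlarging the range gives a strictly
# better value (e.g. -49 + 35*sqrt(2) ~ 0.4975), yet never 1/2 itself.
```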

Definition 2.7: The LP-relaxation of the integer linear programming problem of definition 2.1 is the following LP problem:

max {cx | Ax ≤ b} (5)

Clearly, the LP-relaxation gives an upper bound for the corresponding integer linear programming problem.

Definition 2.8: (The Integer Hull of a Polyhedron) For any polyhedron P, the integer

hull PI of P is

PI = the convex hull of the integral vectors in P. (6)

Note 2.9: The ILP problem (1) is equivalent to determining

max {cx | x ∈ PI} for P = {x | Ax ≤ b}.

Remark 2.10: For any rational polyhedral cone C,

CI = C (7)


(as C is generated by rational, and hence by integral, vectors).

Theorem 2.11: [Meyer] For any rational polyhedron P, the set PI is again a polyhedron. If PI is nonempty, then char.cone(P) = char.cone(PI).

Proof: Consider a rational polyhedron P with a decomposition

P = Q + C,

where Q is a polytope and C is the characteristic cone of P. As C = CI, let C be generated by the integral vectors y1, …, ys, and let B be the polytope defined by

B = {μ1y1 + … + μsys | 0 ≤ μi ≤ 1 for i = 1, …, s} (8)

It is enough to show that

PI = (Q + B)I + C.

Note that (Q + B)I is a polytope, as both Q and B are polytopes. Observe that

(Q + B)I + C ⊆ PI + C = PI + CI (Remark 2.10) ⊆ (P + C)I = PI,

i.e., (Q + B)I + C ⊆ PI.

Now to show the reverse inclusion, take any integral vector p in P (such vectors generate PI). Then p = q + c for some q ∈ Q and c ∈ C. We have

c = ∑ μiyi (μi ≥ 0)
= ∑ ⌊μi⌋yi + ∑ (μi − ⌊μi⌋)yi.

Denote the first term by c′ and the second by b (= c − c′). Clearly c′ is an integral vector in C and b ∈ B. Hence

p = q + c′ + b = (q + b) + c′,

and q + b = p − c′ is an integral vector, as p and c′ are integral. Thus q + b ∈ (Q + B)I and

p ∈ (Q + B)I + C,

i.e., PI = (Q + B)I + C. ∎

Remark 2.12: The above theorem implies that any integer linear programming problem can be written as max {cx | x ∈ Q} for some polyhedron Q, which is again a linear programming problem. This means we can represent PI by linear inequalities, but generally this is a difficult task. Theorem 2.11 can be extended to: for each rational matrix A there exists an integral matrix M such that for each column vector b there exists a column vector d such that

{x | Ax ≤ b}I = {x | Mx ≤ d} (9)

So the coefficients of the inequalities defining PI can be described in terms of the coefficients of the inequalities defining P.

Definition 2.13: (Integral Polyhedron) A rational polyhedron with the property P = PI is called an integral polyhedron.

Remark 2.14: It is easy to see that for a rational polyhedron P the following are equivalent:

(i) P is integral, i.e., P = PI, i.e., P is the convex hull of the integral vectors in P.
(ii) Each face of P contains integral vectors.
(iii) Each minimal face of P contains integral vectors.
(iv) max {cx | x ∈ P} is attained by an integral vector for each c for which the maximum is finite.

Definition 2.15: (Hilbert Basis) A finite set of vectors a1, …, at is a Hilbert basis if each integral vector b in cone{a1, …, at} is a nonnegative integral combination of a1, …, at.

Note 2.16: When each vector in a Hilbert basis is integral, it is called an integral Hilbert basis.

Theorem 2.17: [Gordan] Each rational polyhedral cone C is generated by an integral Hilbert basis. [van der Corput] If C is pointed, there is a unique minimal integral Hilbert basis generating C (minimal relative to taking subsets).

Proof: Let C be a rational polyhedral cone, generated by b1, …, bk say [Theorem 1.8]. Without loss of generality we can assume that b1, …, bk are integral vectors. Let a1, …, at be all the integral vectors in the polytope

{λ1b1 + … + λkbk | 0 ≤ λi ≤ 1, i = 1, …, k} (10)

We claim that a1, …, at form an integral Hilbert basis. In particular, those b1, …, bk among a1, …, at will generate C. Let b be any integral point in C. We have

b = μ1b1 + … + μkbk, μi ≥ 0. (11)

Then

b = ⌊μ1⌋b1 + … + ⌊μk⌋bk + [(μ1 − ⌊μ1⌋)b1 + … + (μk − ⌊μk⌋)bk], (12)

so

b − (⌊μ1⌋b1 + … + ⌊μk⌋bk) = (μ1 − ⌊μ1⌋)b1 + … + (μk − ⌊μk⌋)bk. (13)

The left-hand side vector, as it is integral, occurs among a1, …, at: the right-hand side of (13) clearly belongs to the polytope (10), because 0 ≤ μi − ⌊μi⌋ < 1.


Since b1, …, bk occur among a1, …, at, it follows that b is a nonnegative integral combination of a1, …, at. So a1, …, at form a Hilbert basis.

Now assume that the cone C is pointed. Define

H = {a ∈ C | a ≠ 0, a integral, a is not the sum of two other nonzero integral vectors in C}. (14)

It is clear that any integral Hilbert basis generating C must contain H. In particular, H is finite, as H is contained in (10).

We claim that H itself is a Hilbert basis generating C. Let b be a vector such that bx > 0 for all x ∈ C − {0} (such b exists, as C is pointed). Suppose not every integral vector in C is a nonnegative integral combination of vectors in H, and let c be such a vector with bc as small as possible (such a minimizer exists, since the integral vectors c′ in C with bc′ ≤ bc form a finite set). Then c is not in H, hence

c = c1 + c2

for certain nonzero integral vectors c1 and c2 in C. Then

0 < bc1 < bc and 0 < bc2 < bc.

Hence both c1 and c2 are nonnegative integral combinations of vectors in H, and therefore c is one as well. Therefore H is a Hilbert basis. As H is contained in any integral Hilbert basis generating C, it is minimal. ∎
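The construction in the proof can be run mechanically on a small hand-picked pointed cone, C = cone{(1, 0), (1, 2)} (this example and all names are ours, not the text's): the candidates come from the parallelepiped as in (10), and H is filtered as in (14).

```python
from itertools import product

def in_cone(p):
    x, y = p
    return 0 <= y <= 2 * x        # cone{(1,0),(1,2)} = {(x,y): 0 <= y <= 2x}

# Integral points of {l1*(1,0) + l2*(1,2) : 0 <= l1, l2 <= 1}: here
# l2 = y/2 and l1 = x - y/2, so the conditions are 0 <= y <= 2, 0 <= 2x-y <= 2.
candidates = [(x, y) for x in range(3) for y in range(3)
              if y <= 2 and 0 <= 2 * x - y <= 2 and (x, y) != (0, 0)]

def is_sum_of_two(p):
    """Is p the sum of two nonzero integral vectors of the cone?"""
    px, py = p
    return any((ux, uy) != (0, 0) and (px - ux, py - uy) != (0, 0)
               and in_cone((ux, uy)) and in_cone((px - ux, py - uy))
               for ux in range(px + 1) for uy in range(py + 1))

H = sorted(p for p in candidates if not is_sum_of_two(p))

# sanity check: every small integral point of the cone is a nonnegative
# integral combination of the vectors in H
def is_combo(p):
    return any((sum(k * h[0] for k, h in zip(ks, H)),
                sum(k * h[1] for k, h in zip(ks, H))) == p
               for ks in product(range(5), repeat=len(H)))

hilbert_ok = all(is_combo((x, y)) for x in range(5) for y in range(5)
                 if in_cone((x, y)))
```

For this cone the minimal Hilbert basis comes out as {(1, 0), (1, 1), (1, 2)}; note that (1, 1) is needed even though the cone is generated by (1, 0) and (1, 2) alone.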

Remarks 2.18:

(i) Combining the methods of theorems 2.11 and 2.17: for any rational polyhedron P there exist integral vectors x1, …, xt, y1, …, ys such that

{x | x ∈ P, x integral} = {λ1x1 + … + λtxt + μ1y1 + … + μsys | λ1, …, λt, μ1, …, μs nonnegative integers with ∑ λi = 1} (15)

(ii) If the cone is not pointed, there is no unique minimal integral Hilbert basis generating the cone.


(iii) If a vector c belongs to a minimal integral Hilbert basis generating a pointed

cone then the components of c are relatively prime integers.

Theorem 2.19: [A theorem of Doignon] Let a system

a1x ≤ β1, …, amx ≤ βm (16)

of linear inequalities in n variables be given. If (16) has no integral solution, then there are 2^n or fewer constraints among (16) which already have no integral solution.

Proof: Suppose (16) has no integral solution. We may assume that if we delete any one of the constraints in (16), the remaining system has an integral solution. This means there exist integral vectors x1, …, xm so that, for j = 1, …, m,

ajxj > βj,
aixj ≤ βi (i ≠ j).

We must show m ≤ 2^n. So assume m > 2^n. Let

Z = ℤ^n ∩ conv.hull{x1, …, xm} (17)

and note that Z is a finite set containing x1, …, xm. Choose γ1, …, γm so that:

(i) γj ≥ min {ajz | z ∈ Z; ajz > βj};
(ii) the system a1x < γ1, …, amx < γm has no solution in Z;
(iii) γ1 + … + γm is as large as possible. (18)

We claim that such γ1, …, γm exist. This is proved by showing that the set of (γ1, …, γm) satisfying (18)(i) and (ii) is nonempty, bounded and closed. First,

xj ∈ {z ∈ Z | ajz > βj},

so the minimum in (i) is well defined. If we take

γj = min {ajz | z ∈ Z; ajz > βj},

then (i) holds and γj > βj; moreover, each z ∈ Z violates some constraint of (16), say ajz > βj, whence ajz ≥ γj, so the system in (ii) has no solution in Z. This shows that the set is nonempty. Next, if γj > ajxj for some j, then, since aixj ≤ βi < γi (i ≠ j), the vector xj would solve the system in (ii); therefore

γj ≤ ajxj for each j,

i.e., the set of (γ1, …, γm) satisfying (18)(i) and (ii) is bounded. Finally, the set of (γ1, …, γm) for which the system in (ii) does have a solution in Z is the union, over the finitely many z ∈ Z, of the open sets {(γ1, …, γm) | ajz < γj for all j}; hence it is open, and therefore the set satisfying (i) and (ii) is closed.

Since γ1 + … + γm is as large as possible, for each j = 1, …, m there exists yj ∈ Z so that

ajyj = γj and aiyj < γi (i ≠ j) (19)

(otherwise γj could be increased without violating (ii), as Z is finite). As m > 2^n and there are only 2^n parity classes, there exist k ≠ l so that yk ≡ yl (mod 2), i.e., yk and yl agree componentwise in parity. [For n = 1 this uses m > 2; for n = 2, m > 4.] Thus ½(yk + yl) is an integral vector in conv.hull{x1, …, xm}, hence it belongs to Z, and in view of (19) it satisfies aj(½(yk + yl)) = ½(ajyk + ajyl) < γj for every j, i.e., it satisfies the system in (ii). This contradicts (ii). Therefore m ≤ 2^n. ∎

Corollary 2.20: [Scarf] Let Ax ≤ b be a system of linear inequalities in n variables, and let c ∈ ℝ^n. If max {cx | Ax ≤ b; x integral} is finite, then

max {cx | Ax ≤ b; x integral} = max {cx | A′x ≤ b′; x integral} (20)

for some subsystem A′x ≤ b′ of Ax ≤ b with at most 2^n − 1 inequalities.

Proof: Let

μ = max {cx | Ax ≤ b; x integral}.

For each t ∈ ℕ, the system

Ax ≤ b, cx ≥ μ + 1/t (21)

has no integral solution. Therefore, by theorem 2.19, for each t ∈ ℕ there is a subsystem of (21) of at most 2^n constraints having no integral solution. Since Ax ≤ b does have an integral solution (as μ is finite), each such subsystem must contain the constraint cx ≥ μ + 1/t. Hence there is a subsystem A′x ≤ b′ of Ax ≤ b with at most 2^n − 1 constraints such that the system A′x ≤ b′, cx ≥ μ + 1/t has no integral solution for infinitely many values of t ∈ ℕ. Therefore A′x ≤ b′, cx > μ has no integral solution. This gives (20). ∎

Note 2.21: The bound 2^n in theorem 2.19 is best possible. This is shown by the system

∑_{i∈I} xi − ∑_{i∉I} xi ≤ |I| − 1 (I ⊆ {1, …, n}) (22)

of 2^n constraints in the n variables x1, …, xn. Observe that for n = 1 the system is

−x1 ≤ −1 (I = ∅)
x1 ≤ 0 (I = {1})

which is clearly infeasible. Next, for n = 2, the system is

−x1 − x2 ≤ −1 (I = ∅)
x1 − x2 ≤ 0 (I = {1})
x2 − x1 ≤ 0 (I = {2})
x1 + x2 ≤ 1 (I = {1, 2})

In particular any solution must satisfy

x1 + x2 = 1
x1 − x2 = 0

and this clearly has no integral solution (it forces x1 = 1/2, x2 = 1/2).

Now take n + 1 variables. Observe that 2^(n+1) = 2^n + 2^n, so we can arrange the 2^(n+1) inequalities of the system (22) for n + 1 variables in the pairs

∑_{i∈I} xi − ∑_{i∈{1,…,n}\I} xi − x_{n+1} ≤ |I| − 1
∑_{i∈I} xi − ∑_{i∈{1,…,n}\I} xi + x_{n+1} ≤ |I ∪ {n+1}| − 1 = |I| (I ⊆ {1, …, n})

If this system had an integral solution then, adding each pair, we would get a solution of the system

∑_{i∈I} xi − ∑_{i∉I} xi < |I| (I ⊆ {1, …, n}),

which for integral x means

∑_{i∈I} xi − ∑_{i∉I} xi ≤ |I| − 1 (I ⊆ {1, …, n}),

and this has no integral solution by the induction hypothesis.
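The n = 2 computations above can be verified by brute force within a small search box (a sketch; the box size is our assumption, though the argument in the text shows no integral solution exists at all):

```python
from itertools import product

# The four constraints of (22) for n = 2, as (coefficients, rhs) pairs
# meaning a.x <= rhs:
constraints = [((-1, -1), -1),   # I = {}     : -x1 - x2 <= -1
               (( 1, -1),  0),   # I = {1}    :  x1 - x2 <= 0
               ((-1,  1),  0),   # I = {2}    : -x1 + x2 <= 0
               (( 1,  1),  1)]   # I = {1,2}  :  x1 + x2 <= 1

def has_integral_solution(cons, box=range(-3, 4)):
    return any(all(a1 * x1 + a2 * x2 <= r for (a1, a2), r in cons)
               for x1, x2 in product(box, repeat=2))

# the full system is infeasible, yet dropping any single constraint
# makes it feasible -- so none of the 4 = 2^2 constraints is redundant
full_infeasible = not has_integral_solution(constraints)
every_proper_subsystem_feasible = all(
    has_integral_solution(constraints[:j] + constraints[j + 1:])
    for j in range(4))
```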


Chapter 3

Totally Unimodular Matrices

Definition 3.1: (Totally Unimodular Matrix) A matrix A is totally unimodular if each subdeterminant of A is 0, +1 or −1.

Note 3.2: In particular, each entry of a totally unimodular matrix is 0, +1 or −1.
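Definition 3.1 can be checked directly, though exponentially, by enumerating all square submatrices. A sketch (the example matrices are hand-picked: a bipartite incidence matrix, which is totally unimodular, and a ±1 matrix with determinant 2, which is not):

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    m, n = len(A), len(A[0])
    return all(det([[A[i][j] for j in cols] for i in rows]) in (-1, 0, 1)
               for k in range(1, min(m, n) + 1)
               for rows in combinations(range(m), k)
               for cols in combinations(range(n), k))

# Incidence matrix of a bipartite graph with parts {u1, u2}, {v1, v2} and
# edges u1v1, u1v2, u2v2 (rows: u1, u2, v1, v2):
bip = [[1, 1, 0],
       [0, 0, 1],
       [1, 0, 0],
       [0, 1, 1]]
# A {+1, -1}-matrix with determinant 2, hence not totally unimodular:
bad = [[1, 1],
       [-1, 1]]
```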

Remark 3.3: It is easy to see that if A is totally unimodular, then all of the following matrices are totally unimodular:

A^T, −A, [A I], [A −A], [A; −A], [I; −I; A; −A]

(where ";" separates stacked blocks of rows, so [A; −A] is A on top of −A). Further, if A is a nonsingular square totally unimodular matrix, then A^{−1} is also totally unimodular.

A relation between total unimodularity and integer linear programming is given by the following result.

Theorem 3.4: Let A be a totally unimodular matrix and let b be an integral vector. Then the polyhedron P = {x | Ax ≤ b} is integral.

Proof: Consider a minimal face F of P,

F = {x | A′x = b′},

where A′x ≤ b′ is a subsystem of Ax ≤ b with A′ having full row rank. We may permute coordinates so that A′ = [U V], where U is a nonsingular matrix; as A is totally unimodular, det U = ±1. A basic solution of A′x = b′ is

x = (U^{−1}b′; 0) (1)

Since U is totally unimodular, U^{−1} is too; hence each entry of U^{−1} is 0 or ±1, and U^{−1}b′ is integral. Thus x is integral, so every minimal face contains an integral vector, and P is an integral polyhedron. ∎

Note 3.5: The following corollary makes clear that each linear program with integral data and a totally unimodular constraint matrix has an integral optimum solution.

Corollary 3.6: Let A be a totally unimodular matrix and let b and c be integral vectors. Then both problems in the LP-duality equation

max {cx | Ax ≤ b} = min {yb | y ≥ 0, yA = c} (2)

have integral optimum solutions.

Proof: By the above theorem the polyhedron {x | Ax ≤ b} is integral, and hence max {cx | Ax ≤ b} is attained by an integral vector. Further, as A is totally unimodular, so is

[−I; A^T; −A^T] (3)

which is the constraint matrix of the minimization problem (its constraints being −y ≤ 0, A^T y ≤ c, −A^T y ≤ −c). We again use the above theorem to conclude that min {yb | y ≥ 0, yA = c} is attained by an integral vector. ∎
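Theorem 3.4 can be illustrated by a small vertex enumeration (the matrix A, the vector b and the enumeration scheme below are ours): for a totally unimodular A and an integral b, each vertex of {x | x ≥ 0, Ax ≤ b} solves a pair of linearly independent tight constraints, and all of them come out integral.

```python
from fractions import Fraction
from itertools import combinations

# A is totally unimodular and b is integral, so theorem 3.4 predicts an
# integral polyhedron {x | x >= 0, Ax <= b}.
A = [[1, 1],
     [0, 1]]
b = [2, 1]
# all constraints g.x <= h, with x >= 0 rewritten as -x_i <= 0
rows = [((-1, 0), 0), ((0, -1), 0),
        ((1, 1), b[0]), ((0, 1), b[1])]

def solve2(c1, c2):
    """Exact solution of g1.x = h1, g2.x = h2 (Cramer), or None if singular."""
    (g1, h1), (g2, h2) = c1, c2
    d = g1[0] * g2[1] - g1[1] * g2[0]
    if d == 0:
        return None
    return (Fraction(h1 * g2[1] - g1[1] * h2, d),
            Fraction(g1[0] * h2 - h1 * g2[0], d))

vertices = set()
for c1, c2 in combinations(rows, 2):
    x = solve2(c1, c2)
    if x is not None and all(g[0] * x[0] + g[1] * x[1] <= h for g, h in rows):
        vertices.add(x)
```

For this instance the feasible intersection points are (0, 0), (0, 1), (2, 0) and (1, 1), all integral, as the theorem predicts.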

Remark 3.7: The Hoffman-Kruskal theorem below characterizes total unimodularity in a way similar to the above result.

Definition 3.8: Let A be any m × n matrix of full row rank. A is called unimodular if A is integral and each basis of A (i.e., each nonsingular m × m submatrix) has determinant ±1.

Proposition 3.9: The matrix A ∈ ℤ^{m×n} is totally unimodular if and only if [I A] is unimodular.

Proof: [⇒] Let A be totally unimodular and consider a basis of [I A]. If the basis consists only of columns of I, its determinant is ±1. Otherwise the basis can be rearranged (permuting rows and columns if necessary) into the form

[I B1; 0 B2] (4)

Note that det B2 ≠ 0, as the columns form a basis. But A is totally unimodular, so det B2 = ±1, and hence the determinant of the basis is ±1.

[⇐] Let [I A] be unimodular and consider a square submatrix B of A, say of order k. If B is singular, then det B = 0. Otherwise we can complete the k columns of A containing B to a basis of [I A], using the m − k unit basis columns whose 1's lie in the rows outside those of B. Rearranging rows and columns, this basis takes the form

B1 = [I B′; 0 B]

and, by the unimodularity of [I A], det B = ± det B1 = ±1. ∎

Theorem 3.10: Let A be an integral matrix of full row rank. Then the polyhedron {x | x ≥ 0; Ax = b} is integral for each integral vector b if and only if A is unimodular.

Proof: [⇐] Let A be an m × n matrix and first suppose that A is unimodular. Let b be an integral vector, and let x be a vertex of the polyhedron {x | x ≥ 0; Ax = b}. Then there are n linearly independent constraints satisfied by x with equality, and therefore the columns of A corresponding to the nonzero components of x are linearly independent. We can extend these columns to a basis B of A. Then x, restricted to the coordinates corresponding to B, equals B^{−1}b, which is integral, as det B = ±1. Since x is zero outside B, it follows that x is integral.

[⇒] Suppose that {x | x ≥ 0; Ax = b} is integral for each integral vector b. Let B be a basis of A. To prove that det B = ±1, it suffices to show that B^{−1}t is integral for each integral vector t. Given an integral vector t, there exists an integral vector y such that

z = y + B^{−1}t ≥ 0.

Then b := Bz = By + t is integral. Now extend z with zero components to a vector z′ ∈ ℝ^n. Then

Az′ = Bz = b

and z′ is a vertex of the polyhedron {x | x ≥ 0; Ax = b} (as it is in the polyhedron and satisfies n linearly independent constraints with equality). Therefore z′ is integral, so z is integral, and z − y = B^{−1}t is integral. ∎

Corollary 3.11: (Hoffman and Kruskal's theorem) Let A be an integral matrix. Then A is totally unimodular if and only if for each integral vector b the polyhedron {x | x ≥ 0; Ax ≤ b} is integral.

Proof: Note that, for any integral vector b, the vertices of the polyhedron {x | x ≥ 0; Ax ≤ b} are integral if and only if the vertices of the polyhedron {z | z ≥ 0; [I A]z = b} are integral (transform Ax ≤ b into y + Ax = b with y ≥ 0, and put z = (y, x)). By proposition 3.9, A is totally unimodular if and only if [I A] is unimodular. Hence theorem 3.10 proves the corollary. ∎

Remark 3.12: An integral matrix A is totally unimodular if and only if for all integral vectors a, b, c, d the vertices of the polytope {x | c ≤ x ≤ d; a ≤ Ax ≤ b} are integral. Observe that the constraints can be written as x ≤ d, −x ≤ −c, Ax ≤ b, −Ax ≤ −a; hence the corresponding constraint matrix has the form

[I; −I; A; −A] (5)

(blocks of rows stacked).

Note 3.13: It is clear from Hoffman and Kruskal's theorem that an integral matrix A is totally unimodular if and only if, for all integral vectors b and c, one (equivalently, each) of the following polyhedra has all vertices integral:

{x | x ≥ c; Ax ≤ b}, {x | x ≤ c; Ax ≤ b}, {x | x ≥ c; Ax ≥ b}, {x | x ≤ c; Ax ≥ b} (6)

Corollary 3.14: An integral matrix A is totally unimodular if and only if for all integral vectors b and c both sides of the linear programming duality equation

max {cx | x ≥ 0, Ax ≤ b} = min {yb | y ≥ 0, yA ≥ c} (7)

are achieved by integral vectors x and y (if they are finite).

Proof: Clear from corollary 3.11 and note 3.13, noting that A is totally unimodular if and only if A^T is totally unimodular. ∎

Theorem 3.15: Let A be a matrix with entries 0, +1 or -1. Then the following are

equivalent:

(i) A is totally unimodular, i.e. each square submatrix of A has determinant 0, +1,

or -1.

(ii) [Hoffman & Kruskal] For each integral vector b the polyhedron {x | x ≥ 0, Ax ≤ b} has only integral vertices.

(iii) [Hoffman & Kruskal] For all integral vectors a, b, c, d the polyhedron {x | c ≤ x ≤ d, a ≤ Ax ≤ b} has only integral vertices.

(iv) [Ghouila-Houri] Each collection of columns of A can be split into two parts

so that the sum of the columns in one part minus the sum of the columns in

the other part is a vector with entries only 0, +1, or -1.

(v) [Camion] Each nonsingular submatrix of A has a row with an odd number of

nonzero components.

(vi) [Camion] The sum of the entries in any square submatrix with even row and

column sums is divisible by four.

(vii) [R.E.Gomory] No square submatrix of A has determinant +2 or -2.

Proof: We shall prove the equivalences in the following way.


(i) ⇔ (ii) ⇔ (iii) ⇒ (iv) ⇒ (v) ⇒ (vii) ⇒ (i), and (iv) ⇒ (vi) ⇒ (vii).

The equivalence of (i), (ii) and (iii) is Hoffman and Kruskal's theorem (corollary 3.11, remark 3.12 and note 3.13).

(iii) ⇒ (iv):

Let A be totally unimodular, and choose a collection of columns of A. Consider the polyhedron

P = {x | 0 ≤ x ≤ d; ⌊½Ad⌋ ≤ Ax ≤ ⌈½Ad⌉} (8)

where d is the characteristic vector of the chosen collection of columns, and ⌊·⌋ and ⌈·⌉ denote component-wise lower and upper integer parts of vectors. Since P is nonempty (as ½d ∈ P), P has at least one vertex x, which by (iii) is a {0, 1}-vector. Then

y = d − 2x

has components 0, +1 and −1 only, and y ≡ d (mod 2). [Observe: y = 0 ⇒ d = 0, x = 0; y = 1 ⇒ d = 1, x = 0; y = −1 ⇒ d = 1, x = 1.] Hence Ay = Ad − 2Ax has components 0, +1 and −1 only, so y yields a partition of the chosen columns as required (the columns with y = +1 in one part, those with y = −1 in the other).

(iv) ⇒ (v): Let B be a nonsingular square submatrix of A and suppose, to the contrary, that each row of B has an even number of nonzero entries. By (iv) there exists a {+1, −1}-vector x such that Bx is a {0, +1, −1}-vector. Since each row of B has an even number of nonzero (±1) entries, each component of Bx is even; hence Bx = 0. As x ≠ 0, B is singular, a contradiction.

(iv) ⇒ (vi): Let B be a square submatrix of A with each row sum and each column sum even. By (iv) the columns of B can be split into two classes B1 and B2 so that the sum of the columns in B1 equals the sum of the columns in B2 (their difference is a {0, ±1}-vector with even components, as each row sum of B is even, hence it is the zero vector). Let σ(Bi) denote the sum of the entries in Bi. Then σ(B1) = σ(B2), and each σ(Bi) is even, as each column sum of B is even. Hence σ(B1) + σ(B2) = 2σ(B1) is divisible by four.

(i) ⇒ (vii): Obvious.

(vii) ⇒ (i): Suppose no square submatrix of A has determinant ±2. To show that each square submatrix of A has determinant 0 or ±1, it is sufficient to show that each square {0, ±1}-matrix B with |det B| ≥ 2 has a square submatrix with determinant ±2. Let the order of B be n. Consider the matrix

C = [B I].

Let C′ arise from C by adding or subtracting rows to or from other rows, and by multiplying columns by −1, such that

(i) C′ is a {0, ±1}-matrix,
(ii) C′ contains among its columns the n unit basis column vectors, and
(iii) C′ contains among its first n columns as many unit basis column vectors as possible.


Let k be the number of unit basis vectors among the first n columns of C′. We may suppose without loss of generality that the first n columns of C′ form a square matrix

B′ = [Ik D; 0 E] (10)

of order n, and that the remaining n − k unit basis column vectors occur among the last n columns of C′; here Ik denotes the identity matrix of order k (and I_{n−k} the identity of order n − k occurring in the last columns). Since the first n columns of C, and hence also of C′, form a matrix with determinant not equal to ±1 (as |det B| ≥ 2), we have k < n.

So there is a ±1 entry, without loss of generality in position (i, j) of C′, with k + 1 ≤ i, j ≤ n. By our assumption (iii) we cannot transform column j into a unit basis vector by elementary row operations without violating condition (i). Hence there is a pair i′, j′ such that the 2 × 2 submatrix of C′ with row indices i, i′ and column indices j, j′ is, up to signs, of the form

[1 −1; 1 1] or [1 −1; −1 −1],

each of which has determinant ±2. Now the submatrix of C′ formed by the columns j, j′ and the unit basis column vectors other than the i-th and i′-th unit vectors has determinant ±2. So the corresponding columns of C also form a matrix with determinant ±2, as the row operations and column sign changes preserve determinants up to sign. This implies that B has a submatrix with determinant ±2.

�v! f �vii! Suppose �v! is true for A. i.e., each nonsingular submatrix of A has a row with an odd

number of nonzero components. If �vii! is not true then det A = e 2 and each square

proper submatrix of A has determinant 0 or e 1 [since �i! m �vii!]. Now,

Since, det A 0�mod 2! the columns of A are linearly dependent over B2. As �i! & �vii!

are equivalent, det A = 0 over implies det A 0�mod 2! i.e., for each proper submatrix

of A, linear dependent over coincides with linear dependence over B2.

Page 22: totally unimodular matrices - Devi Ahilya … unimodular...Basic examples of totally unimodular matrices are incidence matrices of bipartite graphs & Directed graphs and Network matrices.

22

As det A = ±2, the columns of A are linearly independent over the rationals; in particular, every proper subset of the columns is independent over the rationals and hence also over GF(2). But det A ≡ 0 (mod 2), so the columns of A are linearly dependent over GF(2). The dependency must therefore involve all the columns, i.e., the sum of all the columns of A is a vector having even components only. Each row of A then contains an even number of nonzero entries, which clearly contradicts (v).

(vi) ⇒ (vii): Similarly, as above, we can show that the sum of the rows of A has even components only. Let B arise from A by deleting the first row of A. Note that we have proved (i) ⇔ (ii) ⇔ (iii) ⇒ (iv) ⇒ (v) ⇒ (vii) ⇔ (i); hence (vii) ⇒ (iv). Therefore there exists a {+1, −1} vector x such that Bx is a {0, +1, −1} vector. But the matrix B has even row sums (as A has, by the above). This gives Bx = 0. So

    Ax = (α, 0, … , 0)ᵀ    (9)

for some integer α. Now

    A · [x  e₂ … eₙ] = [Ax  A′],

where A′ arises from A by deleting the first column of A and e₂, …, eₙ denote the unit vectors. Observe that the determinant of the second matrix on the L.H.S. is ±1 (x is a {+1, −1} vector), so the L.H.S. has determinant ±2. Expanding the R.H.S. along its first column, its determinant is ±α times a minor of A; this minor cannot be zero (as the determinant of the L.H.S. is ±2) and, being the determinant of a proper submatrix of A, it is ±1. Hence |α| = 2.

Further, as x is a {+1, −1} vector, 1 − x has even components only. Also 1ᵀA, the sum of the rows of A, has even components only. Therefore

    1ᵀA(1 − x) ≡ 0 (mod 4).

Next, from (9),

    1ᵀAx = α ≡ 2 (mod 4)    (since |α| = 2),

and hence

    1ᵀA·1 = 1ᵀA(1 − x) + 1ᵀAx ≡ 2 (mod 4),

which contradicts (vi). ∎
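The characterizations above can be explored concretely on small matrices. The following sketch (an illustration of the definition, not taken from the text) tests total unimodularity by enumerating every square submatrix and computing its determinant. The incidence matrix of an odd cycle is a natural non-example, since its full determinant is ±2.

```python
from itertools import combinations

def det_int(M):
    """Integer determinant by Laplace expansion (adequate for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det_int([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_totally_unimodular(A):
    """A is TU iff every square submatrix has determinant in {-1, 0, +1}."""
    m, n = len(A), len(A[0])
    return all(det_int([[A[i][j] for j in cols] for i in rows]) in (-1, 0, 1)
               for t in range(1, min(m, n) + 1)
               for rows in combinations(range(m), t)
               for cols in combinations(range(n), t))

# incidence matrix of a path (bipartite): totally unimodular
path = [[1, 0],
        [1, 1],
        [0, 1]]

# incidence matrix of a triangle (odd cycle): the full 3x3 determinant is -2
triangle = [[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]]
```

The exponential enumeration is purely didactic; condition (vii) shows it would already suffice to search for a square submatrix of determinant ±2.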


Theorem 3.16 [Baum and Trotter]: An integral matrix A is totally unimodular if and only if for all integral vectors b and y and for each natural number k ≥ 1 with y ≥ 0 and Ay ≤ kb, there are integral vectors x1, … , xk in {x | x ≥ 0; Ax ≤ b} such that y = x1 + … + xk.

Proof: (⇐) To show that A is totally unimodular, it is enough to show that for each integral vector b all vertices of the polyhedron P = {x | x ≥ 0, Ax ≤ b} are integral (theorem 3.15(ii) above). Suppose x0 is a non-integral vertex of P. Let k be the l.c.m. of the denominators occurring in x0. Then y = kx0 satisfies y ≥ 0, Ay ≤ kb. Therefore y = x1 + … + xk for certain integral vectors x1, … , xk in P. Hence

    x0 = (x1 + … + xk)/k

is a convex combination of integral vectors in P, contradicting the fact that x0 is a non-integral vertex of P.

(⇒) Let A be totally unimodular. Choose integral vectors b and y and a natural number k ≥ 1 such that y ≥ 0 and Ay ≤ kb. We show, by induction on k, that there are integral vectors x1, … , xk in {x | x ≥ 0, Ax ≤ b} with y = x1 + … + xk. The case k = 1 is trivial (y = x1). We know that the polyhedron

    {x | 0 ≤ x ≤ y; Ay − kb + b ≤ Ax ≤ b}    (12)

is nonempty, as k⁻¹y lies in it.

Since A is totally unimodular, (12) has an integral vertex [theorem 3.15]; call it xk. Let

    y′ = y − xk.

Then y′ is integral, y′ ≥ 0 (as 0 ≤ xk ≤ y), and

    Ay′ = A(y − xk) = Ay − Axk ≤ Ay − (Ay − kb + b) = (k − 1)b.


Hence, by induction,

    y′ = x1 + … + xk−1

for integral vectors x1, … , xk−1 in {x | x ≥ 0; Ax ≤ b}. So y = x1 + … + xk is a decomposition as required. ∎
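Theorem 3.16 can be verified by brute force on a tiny instance. The sketch below is my own hypothetical example: A is an interval (consecutive-ones) matrix, hence totally unimodular; the code enumerates the integer points of {x | 0 ≤ x ≤ y, Ax ≤ b} and searches for a decomposition y = x1 + x2 with k = 2.

```python
from itertools import product

# a small totally unimodular (interval) matrix -- hypothetical example
A = [[1, 1, 0],
     [0, 1, 1]]
b = [2, 2]
k = 2
y = [1, 3, 1]                     # y >= 0 and A y = [4, 4] <= k*b

def Ax(A, x):
    """Matrix-vector product over the integers."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# integer points of {x | 0 <= x <= y, Ax <= b}
points = [x for x in product(*(range(yi + 1) for yi in y))
          if all(s <= bi for s, bi in zip(Ax(A, x), b))]

# decompositions y = x1 + x2 with x1, x2 integer points of the polyhedron
decomps = [(x1, x2) for x1 in points for x2 in points
           if all(u + v == yi for u, v, yi in zip(x1, x2, y))]
```

The theorem guarantees that `decomps` is nonempty whenever A is totally unimodular; here, for instance, x1 = (1, 1, 1) and x2 = (0, 2, 0) work.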

Theorem 3.17: Let A be a matrix of full row rank. Then the following are equivalent:

(i) for each basis B of A, the matrix B⁻¹A is integral;
(ii) for each basis B of A, the matrix B⁻¹A is totally unimodular;
(iii) there exists a basis B of A for which B⁻¹A is totally unimodular.

Proof: We can arrange the columns of A so that A = [B R], where B is a nonsingular matrix. Then B⁻¹A = [I C] with C = B⁻¹R. Now observe that (i), (ii) and (iii) are invariant under premultiplying A by a nonsingular matrix. Hence we may assume that A = [I C] for some matrix C. Now, by proposition 3.9, each of (i), (ii) and (iii) is equivalent to each basis of [I C] being unimodular. ∎
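Condition (i) of theorem 3.17 can be observed numerically. In this sketch (a hypothetical example, with A = [I C] totally unimodular), every basis B of A is inverted exactly with `fractions.Fraction`, and B⁻¹A comes out integral each time.

```python
from fractions import Fraction
from itertools import combinations

# a totally unimodular matrix of full row rank, of the form [I C] -- hypothetical example
A = [[1, 0, 1, 0],
     [0, 1, 1, 1]]

def inv2(B):
    """Exact inverse of a 2x2 integer matrix via the adjugate formula."""
    d = Fraction(B[0][0] * B[1][1] - B[0][1] * B[1][0])
    return [[ B[1][1] / d, -B[0][1] / d],
            [-B[1][0] / d,  B[0][0] / d]]

def matmul(X, Y):
    """Plain matrix product (works with Fraction entries)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

results = []
for cols in combinations(range(4), 2):          # every 2-column submatrix of A
    B = [[A[i][j] for j in cols] for i in range(2)]
    if B[0][0] * B[1][1] - B[0][1] * B[1][0] == 0:
        continue                                 # singular: not a basis
    BA = matmul(inv2(B), A)
    results.append(all(x == int(x) for row in BA for x in row))
```

For this A five of the six column pairs are bases, and every B⁻¹A is integral, as (i) predicts.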

Theorem 3.18 (Chandrasekaran): A matrix A is totally unimodular if and only if for each nonsingular submatrix B of A and for each nonzero {0, 1} vector y, the g.c.d. of the entries in yB is 1.

Proof: (⇒) Let B be a nonsingular submatrix of the totally unimodular matrix A. By theorem 3.15(i) above, B⁻¹ is integral. Let k be the g.c.d. of the components of yB; then k⁻¹yB is integral. Thus

    k⁻¹y = (k⁻¹yB)B⁻¹

is integral. As y is a nonzero {0, 1}-vector, this forces k = 1.


(⇐) Let B be a nonsingular submatrix of A. Then the components of 1B have g.c.d. 1, so not all of them are even; hence one of the columns of B must have an odd number of nonzero entries. Thus condition (v) of theorem 3.15 is satisfied (for the transpose of A, and total unimodularity is preserved under transposition). ∎
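Chandrasekaran's criterion lends itself to the same kind of brute-force experiment. The sketch below (illustrative only, not from the text) runs over every nonsingular submatrix B and every nonzero {0, 1} vector y, testing that the entries of yB have g.c.d. 1; a matrix containing a ±2 subdeterminant fails the test.

```python
from itertools import combinations, product
from functools import reduce
from math import gcd

def det_int(M):
    """Integer determinant by Laplace expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det_int([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def chandrasekaran_tu(A):
    """Theorem 3.18: A is TU iff for every nonsingular submatrix B and every
    nonzero {0,1} vector y, the gcd of the entries of yB equals 1."""
    m, n = len(A), len(A[0])
    for t in range(1, min(m, n) + 1):
        for rows in combinations(range(m), t):
            for cols in combinations(range(n), t):
                B = [[A[i][j] for j in cols] for i in rows]
                if det_int(B) == 0:
                    continue
                for y in product((0, 1), repeat=t):
                    if not any(y):
                        continue
                    yB = [sum(y[i] * B[i][j] for i in range(t)) for j in range(t)]
                    if reduce(gcd, (abs(v) for v in yB)) != 1:
                        return False
    return True
```

For the matrix [[1, 1], [-1, 1]] (determinant 2), the vector y = (1, 1) gives yB = (0, 2) with g.c.d. 2, so the criterion correctly rejects it.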

Remark 3.19 [Tamir]: The proof shows that A is totally unimodular if and only if for each nonsingular submatrix B of A the g.c.d. of the entries in 1B is 1.

Remark 3.20: A nonsingular matrix B is unimodular if and only if g.c.d.(yB) = g.c.d.(y) for each integral vector y.

Remark 3.21: With Hoffman and Kruskal's theorem, the total unimodularity of the incidence matrix of a bipartite graph implies several theorems, like Konig's theorems for matchings and coverings in bipartite graphs and the Birkhoff-von Neumann theorem on doubly stochastic matrices.

The Basic examples

Example 3.22 (Bipartite graphs): Let G = (V, E) be an undirected graph, and let M be the V × E incidence matrix of G (i.e., M is the {0, 1}-matrix with rows and columns indexed by the vertices and edges of G, respectively, where Mv,e = 1 if and only if v ∈ e). It is easy to see that G is bipartite if and only if the rows of M can be split into two classes so that each column contains a 1 in each of these classes. Hence, by Ghouila-Houri's characterization (iv) in theorem 3.15,

    M is totally unimodular if and only if G is bipartite.    (13)

Let M be the V × E incidence matrix of the bipartite graph G = (V, E). Then by (13) and corollary 3.14 we have

    max {y1 | y ≥ 0, yM ≤ 1, y integral} = min {1x | x ≥ 0; Mx ≥ 1; x integral}.

This is equivalent to Konig's covering theorem: the maximum cardinality of a coclique in a bipartite graph is equal to the minimum number of edges needed to cover all the vertices (we assume that the graph has no isolated vertices). Similarly,

    max {1x | x ≥ 0; Mx ≤ 1; x integral} = min {y1 | y ≥ 0, yM ≥ 1, y integral}.


This is equivalent to the Konig-Egervary theorem: the maximum cardinality of a matching in a bipartite graph is equal to the minimum cardinality of a set of vertices intersecting each edge.

More generally, for each w: E → ℤ₊,

    max {wx | x ≥ 0, Mx ≤ 1, x integral} = min {y1 | y ≥ 0; yM ≥ w; y integral}.

If we consider w as a profit function, the above equation gives a min-max formula for the optimal assignment problem.

Further, again by theorem 3.15(ii), it follows that the polytopes {x | x ≥ 0, Mx ≤ 1} and {x | x ≥ 0, Mx = 1} are integral. Therefore, a function x: E → ℝ₊ is a convex combination of incidence vectors of matchings (respectively, perfect matchings) in G if and only if

    ∑_{e ∋ v} x(e) ≤ 1    (respectively, ∑_{e ∋ v} x(e) = 1)

for each vertex v.

In particular, if G is the complete bipartite graph Kn,n, then the last result is equivalent to the theorem of Birkhoff and von Neumann: a doubly stochastic matrix is a convex combination of permutation matrices. ∎
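The Konig-Egervary equality can be watched directly on a small instance. The graph below is a hypothetical example; both sides of the min-max relation are computed by exhaustive search.

```python
from itertools import combinations

# a small bipartite graph: vertices 0, 1 on one side, 2, 3 on the other
edges = [(0, 2), (0, 3), (1, 2)]
vertices = {v for e in edges for v in e}

def is_matching(S):
    """No two chosen edges share a vertex."""
    used = [v for e in S for v in e]
    return len(used) == len(set(used))

def is_cover(C):
    """Every edge has at least one endpoint in C."""
    return all(u in C or v in C for u, v in edges)

max_matching = max(len(S) for r in range(len(edges) + 1)
                   for S in combinations(edges, r) if is_matching(S))
min_cover = min(len(C) for r in range(len(vertices) + 1)
                for C in combinations(sorted(vertices), r) if is_cover(C))
```

Here the matching {(0, 3), (1, 2)} and the cover {0, 2} both have size 2, so the two optima coincide, as the theorem asserts for every bipartite graph.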

Example 3.23 (Directed graphs): Let D = (V, A) be a directed graph, and let M be the V × A incidence matrix of D defined by

    Mv,a = +1 if a enters v,
         = −1 if a leaves v,
         =  0 otherwise.

Then M is totally unimodular, by the following theorem.

Theorem 3.24 [Poincare]: A {0, +1, −1} matrix with exactly one +1 and exactly one −1 in each column is totally unimodular.


Proof [Veblen & Franklin]: The proof uses induction on the order t of a submatrix N of M. The case t = 1 is trivial. Let t > 1. There are three cases.

Case (i): N has a column with only zeros. Then clearly det N = 0.

Case (ii): Suppose N has a column with exactly one nonzero entry. Then, if necessary after permuting rows and columns, we can write

    N = [ ±1  bᵀ ]
        [ 0   N′ ]

for some vector b and matrix N′. Then, by the induction hypothesis,

    det N′ ∈ {0, +1, −1},

and hence det N = (±1) · det N′ ∈ {0, +1, −1}.

Case (iii): Finally, suppose each column of N contains exactly two nonzero entries. Then each column of N contains one +1 and one −1, while all other entries are zero. So the rows of N add up to the zero vector; the rows are linearly dependent, and therefore det N = 0. ∎

Note 3.25: The theorem also follows from Ghouila-Houri’s characterization (iv) in

theorem 3.15.
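Theorem 3.24 can be sanity-checked on a concrete digraph. The following sketch (a hypothetical four-arc example) builds the V × A incidence matrix of Example 3.23 and verifies that every minor lies in {−1, 0, +1}.

```python
from itertools import combinations

# arcs of a small digraph: 0->1, 1->2, 2->0, 0->2
arcs = [(0, 1), (1, 2), (2, 0), (0, 2)]
n = 3

# incidence matrix: M[v][a] = +1 if arc a enters v, -1 if it leaves v
M = [[0] * len(arcs) for _ in range(n)]
for a, (u, v) in enumerate(arcs):
    M[u][a] = -1   # arc leaves u
    M[v][a] = +1   # arc enters v

def det_int(B):
    """Integer determinant by Laplace expansion (fine for tiny matrices)."""
    if len(B) == 1:
        return B[0][0]
    return sum((-1) ** j * B[0][j] *
               det_int([row[:j] + row[j + 1:] for row in B[1:]])
               for j in range(len(B)))

# determinants of all square submatrices of M
minors = {det_int([[M[i][j] for j in cols] for i in rows])
          for t in range(1, n + 1)
          for rows in combinations(range(n), t)
          for cols in combinations(range(len(arcs)), t)}
```

As Poincare's theorem predicts, the set of minors never reaches ±2, whatever digraph is used to build M.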


REFERENCE

1. Theory of Linear and Integer Programming, Alexander Schrijver, John Wiley & Sons © 1986.