DETERMINANTS OF MATRICES OVER LATTICES

by

Daniel Sprigg Chesley, IV

Thesis submitted to the Graduate Faculty of the

Virginia Polytechnic Institute

in candidacy for the degree of

MASTER OF SCIENCE

in

Mathematics

APPROVED: Chairman, Jean H. Bevis

Leon W. Rutland Carl E. Hall

August 1967

Blacksburg, Virginia


TABLE OF CONTENTS

CHAPTER

I.    INTRODUCTION
II.   PRELIMINARIES
      2.1  Basic Lattice Theory
      2.2  Matrices: Fundamental Concepts and Operations
      2.3  Permutations
III.  THE FIRST DETERMINANT
IV.   THE SECOND DETERMINANT
V.    THE THIRD DETERMINANT
VI.   BIBLIOGRAPHY
VII.  ACKNOWLEDGEMENTS
VIII. VITA

I. INTRODUCTION

In recent years, there has been an ever increasing amount of work done in Lattice Theory. In the beginning of his investigation, the author found much work done in the field of Boolean algebras, taking notice of three different definitions for the determinant of a Boolean matrix. It became his task to examine these definitions given in Wedderburn [12], Rutherford [8] and [9], and Sokolov [10] in the light of arbitrary lattices and determine which properties and relations were reminiscent of the determinant or permanent of elementary algebra.

Due to the recent developments in Lattice Theory and the desire to present a self-contained paper, the author has included a preliminary chapter on the background necessary for the main discussions. Each determinant has been given a separate chapter, but in many cases there are corresponding definitions and properties. In each determinant there are properties concerning: the number of elements of a matrix in the expansion of its determinant; the determinant of a matrix and its transpose; a principle of duality for rows and columns; the interchange of a row or column; the determinant of a matrix formed from another by a row or column meet of an element; and evaluations of certain special matrices.

The First Determinant as we defined it has added properties concerning the join of a row or column with certain elements, an expansion by row or column, and counterexamples to other related properties of determinants of elementary algebra. Our Second Determinant also has an interesting lemma on its relation to the First Determinant. In our discussion on the Third Determinant, we have defined a new matrix and its determinant in terms of the First with an added property concerning rows and columns. We concluded this chapter with a lemma on the relation of the last two determinants and a sufficient condition for a matrix to have an inverse.

II. PRELIMINARIES

The first section of this chapter contains the basic definitions and remarks needed to give the reader an understanding of the theory behind the lattices that will be used in this paper, while the second section is devoted to pertinent matrix theory. Finally, the third section develops the subject of permutations to an extent that will be sufficient for our purposes.

2.1 Basic Lattice Theory

The elementary concepts employed in the first part of this section can be found in Szász [11], Rutherford [8], and Birkhoff [3], while the latter part was formulated from the notes of Bevis [2].

Definition 2.1.1. A partially ordered set is an algebraic system in which a binary relation x ≤ y is defined, which satisfies the following postulates.

P1  For all x, x ≤ x. (reflexive property)
P2  If x ≤ y and y ≤ x, then x = y. (antisymmetric property)
P3  If x ≤ y and y ≤ z, then x ≤ z. (transitive property)

Definition 2.1.2. A lattice L is a partially ordered set in which every pair of elements {x, y} of L has a least upper bound or join, denoted by x ∨ y, and a greatest lower bound or meet, denoted by x ∧ y.

Remark 2.1.3. For x, y, and z in any lattice L, the following identities hold:

(1) x ∨ y = y ∨ x ; x ∧ y = y ∧ x, (commutative laws)
(2) (x ∨ y) ∨ z = x ∨ (y ∨ z) ; (x ∧ y) ∧ z = x ∧ (y ∧ z), (associative laws)
(3) x ∨ (x ∧ y) = x ; x ∧ (x ∨ y) = x, (absorptive laws)
(4) x ∨ x = x ; x ∧ x = x. (idempotent laws)

Lemma 2.1.4. If x, y ∈ L, then

(1) x ≤ y if and only if x ∧ y = x;
(2) x ≤ y if and only if x ∨ y = y.

Definition 2.1.5. If a lattice L has an element o such that any element x of L satisfies the inequality o ≤ x, then o is called the least element of L. If a lattice L has an element 1 such that any element x of L satisfies the inequality 1 ≥ x, then 1 is called the greatest element of L. These elements will be called the bound elements of L, and we will say that L is bounded provided it has a least and greatest element.

Remark 2.1.6. If a partially ordered set does not already have a least element and a greatest element, it may be equipped with them; therefore, we will assume throughout the rest of this paper that L is bounded.

Remark 2.1.7. By the definition of the ordering of lattices, the least element o and the greatest element 1 of the lattice L satisfy the identities:

(1) o ∧ x = o ; o ∨ x = x,
(2) 1 ∧ x = x ; 1 ∨ x = 1, for all x ∈ L.

Definition 2.1.8. An involution poset is a partially ordered set L, together with a mapping or unary operation ′ : L → L, called an involution, such that:

(1) x ≤ y implies y′ ≤ x′, for all x, y ∈ L;
(2) x″ = (x′)′ = x for all x ∈ L.

We see by (2) of Definition 2.1.8 that the involution is an onto mapping, and also by (1) that a generalized De Morgan law holds in the involution poset, that is,

(1) (x_1 ∨ x_2 ∨ ... ∨ x_n)′ = x_1′ ∧ x_2′ ∧ ... ∧ x_n′ and
(2) (x_1 ∧ x_2 ∧ ... ∧ x_n)′ = x_1′ ∨ x_2′ ∨ ... ∨ x_n′.

Definition 2.1.9. By a complement of an element x of a lattice L with o and 1 is meant an element y ∈ L such that x ∧ y = o and x ∨ y = 1. A lattice in which every element has at least one complement, which may not be unique, is called a complemented lattice.

Definition 2.1.10. If ′ : L → L is an involution and if x′ is a complement of x for each x in L, then ′ is called an orthocomplementation and L is an orthocomplemented lattice.

Definition 2.1.11. A lattice L will be called distributive if and only if it satisfies

(1) x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) and
(2) x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) for x, y, z ∈ L.

Definition 2.1.12. A lattice L is called modular if and only if its elements satisfy the modular identity:

if x ≤ z, then x ∨ (y ∧ z) = (x ∨ y) ∧ z for x, y, z ∈ L.

Definition 2.1.13. A lattice L is orthomodular provided it is orthocomplemented and satisfies the orthomodular identity:

if x ≤ y, then x ∨ (x′ ∧ y) = y for x, y ∈ L.

We now continue with some useful results that will be pertinent in the development of the determinant.

Definition 2.1.14. By the direct product of the lattices L_1, ..., L_n, denoted by L_1 ⊗ ... ⊗ L_n, we mean the algebra defined on the product set L_1 × ... × L_n, in which

    (x_1, ..., x_n) ∨ (y_1, ..., y_n) = (x_1 ∨ y_1, ..., x_n ∨ y_n) and
    (x_1, ..., x_n) ∧ (y_1, ..., y_n) = (x_1 ∧ y_1, ..., x_n ∧ y_n).

Furthermore, the ordering of the lattice L_1 ⊗ ... ⊗ L_n is described by the formula (x_1, ..., x_n) ≤ (y_1, ..., y_n) if and only if x_i ≤ y_i for all i = 1, ..., n. We shall denote the direct product of L with itself n times by ⊗_n L.

Remark 2.1.15. If L is an orthomodular lattice, then ⊗_n L is also an orthomodular lattice.

Definition 2.1.16. In an orthocomplemented lattice L, we say that x commutes with y, and write x C y, if (x ∨ y′) ∧ y = x ∧ y.

Remark 2.1.17. If L is an orthomodular lattice, then C is a symmetric relation, that is, x C y if and only if y C x.

Foulis-Holland Theorem 2.1.18. In an orthomodular lattice, if two of the three relations x C y, x C z, y C z hold, then

(1) (x ∨ y) ∧ z = (x ∧ z) ∨ (y ∧ z) and
(2) (x ∧ y) ∨ z = (x ∨ z) ∧ (y ∨ z).

Definition 2.1.19. Let L be an orthomodular lattice. If N ⊆ L, then C(N) = {x ∈ L : x C y for all y ∈ N}. C(N) is called the centralizer of N. If N = L, C(L) is called the center of L.

Remark 2.1.20. Let L be an orthomodular lattice; then C(L) is closed under the operations of meet, join, and orthocomplementation.

Remark 2.1.21. If L = L_1 ⊗ ... ⊗ L_n = {(x_1, ..., x_n) : x_i ∈ L_i for all i}, then (x_1, ..., x_n) ∈ C(L) if and only if x_i ∈ C(L_i) for all i = 1, ..., n.

Example 2.1.22. An interesting example that can be used to show many of the results obtained in the forthcoming chapters is the lattice of subspaces of Euclidean 2-space, whose Hasse diagram is the following.

[Hasse diagram: the least element o at the bottom, the one-dimensional subspaces as atoms in the middle, and the greatest element 1 at the top.]

Any finite portion of a lattice can be represented by a Hasse diagram in which distinct elements of L are represented by distinct circles and in which each relation x ≥ y is represented by a line or lines which descend steadily from x to y. The above example is an orthocomplemented modular lattice, and hence, it is an orthomodular lattice.
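For readers who want to experiment with the results of the later chapters, the following sketch (not part of the thesis) models a finite slice of such a lattice in Python: the bound elements o and 1 together with four atoms. All of the names used here (ELEMS, meet, join, comp, leq) are assumptions introduced only for illustration, and later sketches repeat these helpers so that each block can be run on its own.

```python
# A minimal finite model of an orthocomplemented modular lattice:
# the least element "o", four atoms a, a', b, b', and the greatest element "1".
ELEMS = ["o", "a", "a'", "b", "b'", "1"]

def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"                      # two distinct atoms meet in the least element

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"                      # two distinct atoms join to the greatest element

def comp(x):                        # the orthocomplementation x -> x'
    return {"o": "1", "1": "o", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}[x]

def leq(x, y):                      # x <= y iff x ^ y = x (Lemma 2.1.4)
    return meet(x, y) == x

# Spot-check the orthomodular identity of Definition 2.1.13:
# x <= y implies x v (x' ^ y) = y.
assert all(join(x, meet(comp(x), y)) == y
           for x in ELEMS for y in ELEMS if leq(x, y))
```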

2.2 Matrices: Fundamental Concepts and Operations

The fundamental definitions and concepts concerning matrices can be found in Eves [4] and Marcus and Minc [7]. Our development of matrices over lattices is similar to the work done on Boolean matrices found in Luce [6], Rutherford [8], Birkhoff [3], and Flegg [5].

Definition 2.2.1. A matrix A of order m × n is a rectangular array of mn elements from a given set, arranged in m rows and n columns:

    A =  a_11 a_12 ... a_1n
         a_21 a_22 ... a_2n
         .................
         a_m1 a_m2 ... a_mn

where A_ij = a_ij denotes the element in the ith row and jth column, and i = 1, ..., m; j = 1, ..., n. If m = n, A is called an n-square matrix. If the matrix A is square of order n, the elements a_11, a_22, ..., a_nn are said to constitute the principal diagonal of A.

A matrix of order n × 1 is called a column matrix, and a matrix of order 1 × n is called a row matrix.

We will interchange the use of A_ij and a_ij to denote the ijth element of the matrix A, whichever is most convenient in the context.

Definition 2.2.2. Let M_n(L) be the set of n-square matrices A, B, ... whose elements a_ij, b_ij, ..., i, j = 1, ..., n, belong to the lattice L.

We now consider some basic operations and relations of members of M_n(L).

Definition 2.2.3. The two matrices A = (a_ij) and B = (b_ij) are said to be equal if and only if a_ij = b_ij for all i, j = 1, ..., n.

Definition 2.2.4. The join of two matrices A = (a_ij) and B = (b_ij) is defined to be the matrix C = (c_ij), where c_ij = a_ij ∨ b_ij.

Definition 2.2.5. The meet of two matrices A = (a_ij) and B = (b_ij) is defined to be the matrix C = (c_ij), where c_ij = a_ij ∧ b_ij.

Definition 2.2.6. Let A = (a_ij) and c ∈ L; then we define a scalar meet and join as follows:

(1) c ∧ A = C = (c_ij), where c_ij = c ∧ a_ij for all i, j;
(2) c ∨ A = C = (c_ij), where c_ij = c ∨ a_ij for all i, j.

Definition 2.2.7. By matrix inclusion, we mean A ≤ B if and only if a_ij ≤ b_ij for all i, j = 1, ..., n.

An immediate consequence of the above is the following lemma.

Lemma 2.2.8. M_n(L) is lattice isomorphic to ⊗_{n²} L (the direct product of n² copies of L).
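As a small illustration of Definitions 2.2.4 through 2.2.7 (again a sketch with assumed helper names, not the thesis' notation), matrices over the finite lattice model can be represented as lists of rows and the operations taken elementwise:

```python
def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"

def mat_join(A, B):                 # Definition 2.2.4: C = (a_ij v b_ij)
    return [[join(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_meet(A, B):                 # Definition 2.2.5: C = (a_ij ^ b_ij)
    return [[meet(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_meet(c, A):              # Definition 2.2.6 (1)
    return [[meet(c, a) for a in row] for row in A]

def included(A, B):                 # Definition 2.2.7: A <= B elementwise
    return all(meet(a, b) == a for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [["a", "1"], ["o", "b"]]
B = [["1", "1"], ["a'", "b"]]
print(mat_join(A, B))               # [['1', '1'], ["a'", 'b']]
print(included(A, mat_join(A, B)))  # True: A <= A v B
```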

We now define and exhibit certain matrices that will be of importance in our later developments of the determinant.

Definition 2.2.9. A matrix each of whose elements is one is called the universal matrix, and will be denoted by

    I =  1 1 ... 1
         1 1 ... 1
         ..........
         1 1 ... 1

Definition 2.2.10. A matrix each of whose elements is zero is called the zero matrix, and will be denoted by

    O =  o o ... o
         o o ... o
         ..........
         o o ... o

Definition 2.2.11. A matrix whose elements along the principal diagonal are ones, with zeroes elsewhere, is called the unit matrix, and will be denoted by

    E =  1 o ... o
         o 1 ... o
         ..........
         o o ... 1

Definition 2.2.12. A square matrix A = (a_ij) such that a_ij = o if i ≠ j and a_ij = c ∈ L if i = j is called a scalar matrix, and

    A = c ∧ E =  c o ... o
                 o c ... o
                 ..........
                 o o ... c

Definition 2.2.13. A square matrix A = (a_ij) such that a_ij = o if i ≠ j is called a diagonal matrix, and the principal diagonal elements may be any elements of L.

Definition 2.2.14. In a square matrix U_t = (a_ij), if a_ij = o for i > j, so that

    U_t =  a_11 a_12 ... a_1n
           o    a_22 ... a_2n
           .................
           o    o    ... a_nn

then U_t is called an upper triangular matrix. A square matrix U = (a_ij) such that a_ij = o if i ≥ j is called an upper matrix, and

    U =  o a_12 ... a_1n
         o o    ... a_2n
         ...............
         o o    ... o

Similarly, if a_ij = o for i < j, then Uᵗ is called a lower triangular matrix, and if a_ij = o for i ≤ j, then U is called a lower matrix.

Definition 2.2.15. Associated with the matrix A = (a_ij) is the complement of A, defined by A′ = (a_ij′), where a_ij′ is the complement of a_ij for all i, j.

Definition 2.2.16. Associated with the matrix A = (a_ij) is the matrix Aᵀ, the transpose of A, whose ijth entry is the jith entry of A. Thus (Aᵀ)_ij = a_ji.

Now we proceed to define matrix multiplication in order to be able to define the inverse of a matrix. Then we close this section with the statement of a lemma concerning necessary and sufficient conditions for matrices over a lattice to have an inverse. This lemma and its proof were originally done for Boolean matrices by Luce [6] and have recently been shown to be true for matrices over arbitrary lattices with o and 1.

Definition 2.2.17. If A and B are members of M_n(L), then the matrix product of A and B is the matrix C, where c_ij = ∨_{k=1}^{n} (a_ik ∧ b_kj).

Remark 2.2.18. In general, matrix multiplication is neither associative nor commutative.

Definition 2.2.19. Let A ∈ M_n(L). An inverse of A, if it exists, denoted by A⁻¹, is a square matrix such that AA⁻¹ = A⁻¹A = E.

Definition 2.2.20. The matrix A is said to be orthogonal if A⁻¹ = Aᵀ.

Lemma 2.2.21. For a matrix A ∈ M_n(L):

(1) AAᵀ = E if and only if ∨_{k=1}^{n} a_ik = 1 for all i, and a_ik ∧ a_jk = o for all i, j, k with i ≠ j;
(2) AᵀA = E if and only if ∨_{k=1}^{n} a_kj = 1 for all j, and a_ki ∧ a_kj = o for all i, j, k with i ≠ j.
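A sketch of the matrix product of Definition 2.2.17 and of the conditions in Lemma 2.2.21, in the finite lattice model; the matrix chosen below is only an illustrative example satisfying both conditions.

```python
from functools import reduce

def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"

def mat_prod(A, B):                 # c_ij = V_k (a_ik ^ b_kj), Definition 2.2.17
    n = len(A)
    return [[reduce(join, (meet(A[i][k], B[k][j]) for k in range(n)))
             for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

E = [["1", "o"], ["o", "1"]]        # the unit matrix of Definition 2.2.11

# The rows of A below join to 1 and distinct rows are elementwise disjoint,
# so Lemma 2.2.21 predicts A A^T = A^T A = E.
A = [["a", "a'"], ["a'", "a"]]
assert mat_prod(A, transpose(A)) == E
assert mat_prod(transpose(A), A) == E
```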

2.3 Permutations

The following discussion is similar to those found in Eves [4] and Marcus and Minc [7]. We need to develop the idea of permutations in order to have a better understanding of the definitions of our determinants.

Definition 2.3.1. A permutation on n objects, labeled 1, ..., n, is a one-one mapping of the set {1, ..., n} onto itself. We shall denote the image of i under a permutation σ by σ(i).

Remark 2.3.2. We shall denote the set of all permutations of 1, ..., n by P_n.

Lemma 2.3.3. There are n! distinct permutations in P_n.

Definition 2.3.4. A situation in a permutation (σ(1), σ(2), ..., σ(n)) of 1, 2, ..., n in which σ(r) precedes σ(s) and σ(r) > σ(s) is called an inversion. The permutation is said to be even or odd according as it possesses an even or odd number of inversions.

Remark 2.3.5. We shall denote the set of all even permutations of 1, ..., n by P_n⁺ and the set of all odd permutations of 1, ..., n by P_n⁻.

Definition 2.3.6. The operation of interchanging any two distinct elements of a permutation (σ(1), σ(2), ..., σ(n)) of 1, ..., n is called a transposition.

Lemma 2.3.7. A transposition converts an even (odd) permutation of 1, ..., n into an odd (even) permutation of 1, ..., n.

Lemma 2.3.8. Of the n! permutations of 1, ..., n, where n > 1, exactly half are odd and half are even.
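The permutation facts above are easy to check by brute force; the short sketch below (assumed helper names) enumerates P_n with itertools and counts inversions as in Definition 2.3.4.

```python
from itertools import permutations
from math import factorial

def inversions(sigma):
    # pairs r < s with sigma(r) > sigma(s), Definition 2.3.4
    return sum(1 for r in range(len(sigma)) for s in range(r + 1, len(sigma))
               if sigma[r] > sigma[s])

def is_even(sigma):
    return inversions(sigma) % 2 == 0

n = 4
P_n = list(permutations(range(1, n + 1)))
assert len(P_n) == factorial(n)                      # Lemma 2.3.3
even = [s for s in P_n if is_even(s)]
odd  = [s for s in P_n if not is_even(s)]
assert len(even) == len(odd) == factorial(n) // 2    # Lemma 2.3.8
```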

III. THE FIRST DETERMINANT

We now proceed to develop the definitions and properties of a certain scalar-valued function of the matrices in M_n(L), which we will call the First Determinant. A similar definition and brief discussion done for the determinant of a Boolean matrix can be found in Rutherford [8] and [9], while Flegg [5] mentions a definition and some properties of a determinant of a switching matrix done in Boolean algebra. The statements of some definitions and theorems are similar to those in Marcus and Minc [7] and Eves [4].

Definition 3.1. Let A ∈ M_n(L). If σ ∈ P_n, then the diagonal corresponding to σ is the n-tuple of elements from L, (a_{1σ(1)}, a_{2σ(2)}, ..., a_{nσ(n)}), and, for any given permutation σ,

    ∧_{i=1}^{n} a_{iσ(i)} = a_{1σ(1)} ∧ a_{2σ(2)} ∧ ... ∧ a_{nσ(n)}

is called a joinand of A.

Definition 3.2. Let A ∈ M_n(L). We define the first determinant of A as the join over all σ ∈ P_n of the joinands of A, and we denote it by |A|₁. Thus

    |A|₁ = ∨_{σ∈P_n} (∧_{i=1}^{n} a_{iσ(i)}).

Remark 3.3. There are n! joinands in the expansion of the first determinant of A ∈ M_n(L).

Lemma 3.4. Each joinand of |A|₁ contains one and only one element from each row and column of A.

Proof: Immediate consequence of the definition of the first determinant.
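Definition 3.2 translates directly into a brute-force computation; the following sketch (the name det1 and the lattice helpers are assumptions, not the thesis' notation) evaluates |A|₁ in the finite lattice model of Section 2.1.

```python
from itertools import permutations
from functools import reduce

def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"

def det1(A):
    """|A|_1: join over all permutations s of a_{1 s(1)} ^ ... ^ a_{n s(n)}."""
    n = len(A)
    joinands = (reduce(meet, (A[i][s[i]] for i in range(n)))
                for s in permutations(range(n)))
    return reduce(join, joinands)

E = [["1", "o"], ["o", "1"]]
print(det1(E))                          # '1'  (Lemma 3.17 (3))
print(det1([["a", "1"], ["a", "1"]]))   # 'a'  (identical rows, cf. Remark 3.10 (1))
```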

Theorem 3.5. If A ∈ M_n(L), then |Aᵀ|₁ = |A|₁.

Proof: Let A = (a_ij); then (Aᵀ)_ij = a_ji and

    |Aᵀ|₁ = ∨_{σ∈P_n} (∧_{i=1}^{n} (Aᵀ)_{iσ(i)}) = ∨_{σ∈P_n} (∧_{i=1}^{n} a_{σ(i)i}).

Observe that, by the previous lemma, a joinand of |Aᵀ|₁ is ∧_{i=1}^{n} a_{σ(i)i}. If we write this joinand in the form ∧_{i=1}^{n} a_{iφ(i)}, by rearranging the elements so that the first suffixes come into natural order, and denote by φ the permutation (φ(1), φ(2), ..., φ(n)), then, as σ ranges over P_n, so does φ, and we have

    |Aᵀ|₁ = ∨_{φ∈P_n} (∧_{i=1}^{n} a_{iφ(i)}) = |A|₁.

The next corollary follows immediately from the above theorem and gives us a principle of duality.

Corollary 3.6. Any theorem concerning the rows, columns, and value of a first determinant remains valid if the words "row" and "column" are everywhere interchanged in the statement of the theorem.

Remark 3.7. Due to Corollary 3.6, the proofs concerning rows or columns of the first determinant will be done either for the rows or the columns, but not both.

Notation 3.7. Let A ∈ M_n(L); we shall denote the matrix A whose ith and jth rows have been interchanged by A[i:j].

Theorem 3.8. Let A ∈ M_n(L); if matrix B is obtained from A by the interchange of two rows (columns), then |B|₁ = |A|₁.

Proof: Let B = A[r:s]. Then, in passing from the joinands of |A|₁ to the corresponding joinands of |B|₁, the natural order of the column suffixes is not altered, but the row suffixes receive one transposition; that is, the joinands are only rearranged. Hence |A|₁ = |A[r:s]|₁ = |B|₁.

Theorem 3.9. Let A ∈ M_n(L), where L is an orthomodular lattice; if the matrix B is formed from A by the meet of each element of a row (column) of A with an element c ∈ C(L), then |B|₁ = c ∧ |A|₁.

Proof: By Lemma 3.4, each joinand of |B|₁ contains one and only one element from each row and column of B. Suppose B is obtained from A by the scalar meet of the jth row of A by c. Then

    |B|₁ = ∨_{σ∈P_n} (a_{1σ(1)} ∧ ... ∧ (c ∧ a_{jσ(j)}) ∧ ... ∧ a_{nσ(n)})
         = c ∧ ∨_{σ∈P_n} (a_{1σ(1)} ∧ ... ∧ a_{nσ(n)})     (since c ∈ C(L))
         = c ∧ |A|₁.

Remark 3.10. We now give statements and examples of a few properties which do not hold, in general, for the first determinant of matrices over arbitrary lattices.

(1) Two rows (columns) of a matrix A may be identical, but this does not imply that |A|₁ = o.

Proof: Consider A ∈ M₂(L) for L of Example 2.1.22; then for

    A =  a 1
         a 1

|A|₁ = (a ∧ 1) ∨ (1 ∧ a) = a ∨ a = a.

(2) If one row (column) of a matrix A is joined to another row (column) of A to form B, then |B|₁ does not necessarily equal |A|₁.

Proof: Consider A ∈ M₂(L) for L of Example 2.1.22; then if

    A =  1 a          B =  1      a
         o o               1 ∨ o  a ∨ o

then |A|₁ = o and |B|₁ = (1 ∧ a) ∨ (a ∧ 1) = a.

(3) If the meet of an element of L with one row of a matrix A is joined to another row of A, resulting in the matrix B, then |B|₁ does not necessarily equal |A|₁.

Proof: Follows from (2).

(4) We do not have the property that the first determinant of the product of two matrices is equal to the meet of their first determinants.

Proof: Consider A, B ∈ M₂(L) for L of Example 2.1.22, where a, b, c, d are distinct one-dimensional subspaces; then if

    A =  a b          B =  c d
         b a               d c

then AB = O, so |AB|₁ = o, but |A|₁ ∧ |B|₁ = (a ∨ b) ∧ (c ∨ d) = 1.
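The failures described in Remark 3.10 can be checked mechanically; the sketch below verifies a counterexample of the kind used in (4) in the finite lattice model (the particular matrices are illustrative choices, not necessarily the author's originals).

```python
from itertools import permutations
from functools import reduce

def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"

def det1(A):
    n = len(A)
    return reduce(join, (reduce(meet, (A[i][s[i]] for i in range(n)))
                         for s in permutations(range(n))))

def mat_prod(A, B):
    n = len(A)
    return [[reduce(join, (meet(A[i][k], B[k][j]) for k in range(n)))
             for j in range(n)] for i in range(n)]

# Property (4): |AB|_1 need not equal |A|_1 ^ |B|_1.
# Here the distinct atoms a, b, a', b' play the roles of a, b, c, d.
A = [["a", "b"], ["b", "a"]]
B = [["a'", "b'"], ["b'", "a'"]]
print(det1(mat_prod(A, B)))     # 'o'  (in fact AB is the zero matrix here)
print(meet(det1(A), det1(B)))   # '1'
```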

Definition 3.11. Given A ∈ M_n(L) and C = (c_1, ..., c_n) with c_i ∈ L, we define A(k:C) to be the matrix A with the kth row replaced by C, and A(C:j) to be the matrix A with the jth column replaced by C.

Theorem 3.12. Let A ∈ M_n(L), where L is an orthomodular lattice; if the matrix B is formed from A by the join of any row k (column j) of A with C = (c_1, ..., c_n), where c_i ∈ C(L), then |B|₁ = |A(k:C)|₁ ∨ |A|₁ (|B|₁ = |A(C:j)|₁ ∨ |A|₁).

Proof: By Lemma 3.4, each joinand of |B|₁ contains one and only one element from each row and column of B. Suppose B is obtained from A by the join of the kth row of A by C. Then

    |B|₁ = ∨_{σ∈P_n} (a_{1σ(1)} ∧ ... ∧ (a_{kσ(k)} ∨ c_{σ(k)}) ∧ ... ∧ a_{nσ(n)})
         = ∨_{σ∈P_n} (a_{1σ(1)} ∧ ... ∧ c_{σ(k)} ∧ ... ∧ a_{nσ(n)}) ∨ ∨_{σ∈P_n} (a_{1σ(1)} ∧ ... ∧ a_{kσ(k)} ∧ ... ∧ a_{nσ(n)})
         = |A(k:C)|₁ ∨ |A|₁

(since c_i ∈ C(L) and by use of Theorem 2.1.18).

Definition 3.13. If we delete the ith row and jth column from A, the first determinant of the resulting (n-1)-square matrix is called the minor of the element a_ij and is denoted by |A(i)(j)|₁.

Theorem 3.14. Let A ∈ M_n(L), where L is an orthomodular lattice. If, for any row i (column j), a_ij commutes with the joinands of |A(i)(j)|₁ for all j (for all i), then the first determinant |A|₁ can be expanded by row i (column j), where the expansion by the ith row yields

    |A|₁ = ∨_{j=1}^{n} (a_ij ∧ |A(i)(j)|₁)

(expansion by the jth column yields |A|₁ = ∨_{i=1}^{n} (a_ij ∧ |A(i)(j)|₁)).

Proof: Choose an arbitrary row k; then, by Lemma 3.4, an element a_kj of A can occur at most one time in any joinand of |A|₁ for each j. Collect all the joinands of |A|₁ that contain the element a_kj, for each j. Since a_kj commutes with the joinands of |A(k)(j)|₁ for all j, the join of the collected terms may be written, using Theorem 2.1.18, as (a_k1 ∧ d_k1) ∨ (a_k2 ∧ d_k2) ∨ ... ∨ (a_kn ∧ d_kn), where d_kj (j = 1, ..., n) represents the join of the joinands containing a_kj with a_kj deleted, that is, the joinands involving no element of row k or column j; but these are precisely the joinands of |A(k)(j)|₁, therefore d_kj = |A(k)(j)|₁ and hence

    |A|₁ = (a_k1 ∧ d_k1) ∨ ... ∨ (a_kn ∧ d_kn) = ∨_{j=1}^{n} (a_kj ∧ d_kj) = ∨_{j=1}^{n} (a_kj ∧ |A(k)(j)|₁),

where k is arbitrary, so the theorem follows for all rows.
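The row expansion of Theorem 3.14 can also be checked in the finite lattice model; in the sketch below the first row of A lies in C(L), which in this model consists only of o and 1, so its entries commute with the joinands of the minors and Corollary 3.15 applies. The helpers det1 and minor are assumed names.

```python
from itertools import permutations
from functools import reduce

def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"

def det1(A):
    n = len(A)
    return reduce(join, (reduce(meet, (A[i][s[i]] for i in range(n)))
                         for s in permutations(range(n))))

def minor(A, i, j):                 # delete row i and column j, Definition 3.13
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

A = [["1", "o", "1"],
     ["a", "b", "a'"],
     ["b'", "a", "b"]]
expansion = reduce(join, (meet(A[0][j], det1(minor(A, 0, j))) for j in range(3)))
assert expansion == det1(A)         # expansion by the first row agrees with |A|_1
print(det1(A))                      # '1'
```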

Corollary 3.15. Let A ∈ M_n(L), where L is an orthomodular lattice. If, for any row i (column j), a_ij ∈ C(L) for all j (for all i), then |A|₁ has an expansion by minors of the ith row (jth column).

Proof: Follows from the above theorem and the definition of C(L).

Corollary 3.16. If all the elements of a row (column) of the matrix A are zero, or if all the minors in |A|₁ of the elements of any row (column) of A are zero, then |A|₁ = o.

Proof: The joinands of a minor that is zero are zero, and thus are in C(L). Hence, the result follows from the expansion formula.

Lemma 3.17. The following are evaluations of the first determinant of certain special matrices:

(1) |I|₁ = 1.
(2) |O|₁ = o.
(3) |E|₁ = 1.
(4) The first determinant of a scalar matrix for c ∈ L is |c ∧ E|₁ = c.
(5) The first determinant of a diagonal matrix A = (a_ij) is |A|₁ = a_11 ∧ a_22 ∧ ... ∧ a_nn.
(6) Let Uᵗ and U_t be defined as in Definition 2.2.14; then |Uᵗ|₁ = |U_t|₁ = a_11 ∧ a_22 ∧ ... ∧ a_nn.
(7) Let U be an upper matrix; then |U|₁ = o.
(8) Let U be a lower matrix; then |U|₁ = o.

Remark 3.18. Since |A|₁ is a monotone function of its elements, A ≤ B implies |A|₁ ≤ |B|₁.

IV. THE SECOND DETERMINANT

During the author's review of the literature, he found a second definition for the determinant of a square matrix whose elements are in a Boolean algebra in Sokolov [10]. The paper gives the definition and states some properties of the determinant. It is the purpose of this fourth chapter to give a similar definition for a determinant of matrices over orthocomplemented lattices and to consider briefly a few properties and relations of this determinant, which we shall call the Second Determinant.

Definition 4.1. For any given even permutation σ ∈ P_n⁺, ∧_{k=1}^{n} a_{kσ(k)} is called an even joinand, and for any given odd permutation σ ∈ P_n⁻, ∧_{k=1}^{n} a_{kσ(k)} is called an odd joinand.

Definition 4.2. Let A ∈ M_n(L), where L is an orthocomplemented lattice. Let a_A denote the join of the even joinands of A and b_A the join of the odd joinands of A. We define the second determinant of A to be the symmetric difference of a_A and b_A:

    |A|₂ = a_A Δ b_A = (a_A ∧ b_A′) ∨ (b_A ∧ a_A′)
         = [(∨_{σ∈P_n⁺} (a_{1σ(1)} ∧ ... ∧ a_{nσ(n)})) ∧ (∧_{σ∈P_n⁻} (a_{1σ(1)}′ ∨ ... ∨ a_{nσ(n)}′))]
           ∨ [(∨_{σ∈P_n⁻} (a_{1σ(1)} ∧ ... ∧ a_{nσ(n)})) ∧ (∧_{σ∈P_n⁺} (a_{1σ(1)}′ ∨ ... ∨ a_{nσ(n)}′))].
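A computational sketch of Definition 4.2 in the finite lattice model follows; det2 and the parity helper are assumed names used only for illustration.

```python
from itertools import permutations
from functools import reduce

def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"

def comp(x):
    return {"o": "1", "1": "o", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}[x]

def is_even(s):
    return sum(1 for r in range(len(s)) for t in range(r + 1, len(s))
               if s[r] > s[t]) % 2 == 0

def det2(A):
    n = len(A)
    perms = list(permutations(range(n)))
    joinand = lambda s: reduce(meet, (A[i][s[i]] for i in range(n)))
    a_A = reduce(join, (joinand(s) for s in perms if is_even(s)))      # even joinands
    b_A = reduce(join, (joinand(s) for s in perms if not is_even(s)))  # odd joinands
    return join(meet(a_A, comp(b_A)), meet(b_A, comp(a_A)))            # symmetric difference

E = [["1", "o"], ["o", "1"]]
print(det2(E))                          # '1'  (Lemma 4.9 (3))
print(det2([["1", "1"], ["1", "1"]]))   # 'o'  (Lemma 4.9 (1))
```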

Lemma 4.3. Each even and odd joinand, and their orthocomplements, in the expansion of |A|₂ contains one and only one element from each row and column of A.

Proof: Follows immediately from the definition and the symmetric difference of a_A and b_A.

Theorem 4.4. If A ∈ M_n(L), where L is an orthocomplemented lattice, then |Aᵀ|₂ = |A|₂.

Proof: By the previous lemma, each even (odd) joinand of |A|₂ contains one and only one element from each row and column of A; likewise for their orthocomplements, and so each even (odd) joinand and their orthocomplements in the expansion of |Aᵀ|₂ contains one and only one element from each column and row of A. Hence, each even (odd) joinand and its orthocomplement in the expansion of |A|₂ is an even (odd) joinand and orthocomplement in the expansion of |Aᵀ|₂, and vice versa; therefore, all the joinands are identical and |Aᵀ|₂ = |A|₂.

Corollary 4.5. For every theorem concerning the rows of the second determinant there is a corresponding theorem concerning the columns, and vice versa.

Theorem 4.6. Let A ∈ M_n(L), where L is an orthocomplemented lattice; if matrix B is obtained from A by the interchange of two rows (columns), then |B|₂ = |A|₂.

Proof: If B = A[i:j], the natural order of the column suffixes is not altered and the row suffixes receive one transposition. By Lemma 2.3.7, the number of even and odd permutations remains the same; hence, if |A|₂ = a_A Δ b_A, where a_A and b_A are as in Definition 4.2, then b_A (a_A) is the join of the even (odd) joinands for B. Therefore |B|₂ = b_A Δ a_A = a_A Δ b_A = |A|₂.

Theorem 4.7. Let A ∈ M_n(L), where L is an orthomodular lattice; if the matrix B is formed from A by the meet of each element of a row (column) of A with an element c ∈ C(L), then |B|₂ = c ∧ |A|₂.

Proof: Let a_A and b_A be as defined in Definition 4.2; then if c ∈ C(L) meets with each element of a row of A to form B, we have a_B = c ∧ a_A and b_B = c ∧ b_A by properties of C(L). Hence

    |B|₂ = a_B Δ b_B = (c ∧ a_A) Δ (c ∧ b_A)
         = [(c ∧ a_A) ∧ (c′ ∨ b_A′)] ∨ [(c ∧ b_A) ∧ (c′ ∨ a_A′)]
         = [((c ∧ a_A) ∧ c′) ∨ ((c ∧ a_A) ∧ b_A′)] ∨ [((c ∧ b_A) ∧ c′) ∨ ((c ∧ b_A) ∧ a_A′)]
         = c ∧ [(a_A ∧ b_A′) ∨ (b_A ∧ a_A′)] = c ∧ |A|₂.

Corollary 4.8. If all the elements of a row (column) of the matrix A ∈ M_n(L) are zero, then |A|₂ = o.

Proof: o ∈ C(L); therefore, in the previous theorem we would have o ∧ |A|₂ = o.

Lemma 4.9. The following are evaluations of the second determinant of certain special matrices:

(1) |I|₂ = o.
(2) |O|₂ = o.
(3) |E|₂ = 1.
(4) The second determinant of a scalar matrix for c ∈ L is |c ∧ E|₂ = c.
(5) The second determinant of a diagonal matrix A = (a_ij) is |A|₂ = a_11 ∧ a_22 ∧ ... ∧ a_nn.
(6) Let Uᵗ and U_t be defined as in Definition 2.2.14; then |Uᵗ|₂ = |U_t|₂ = a_11 ∧ a_22 ∧ ... ∧ a_nn.
(7) Let U be an upper (lower) matrix; then |U|₂ = o.

Proof of (7): Every joinand of an upper (lower) matrix U contains a zero element, so the joins a_U and b_U of its even and odd joinands are both o, and |U|₂ = (a_U ∧ b_U′) ∨ (b_U ∧ a_U′) ≤ a_U ∨ b_U = o.

V. THE THIRD DETERMINANT

In 1934, J. H. M. Wedderburn [12] introduced a quite different definition for the determinant of a Boolean matrix and stated a necessary and sufficient condition for one such matrix to possess an inverse. In 1963, D. E. Rutherford [9] recalled Wedderburn's definition and noted a few properties that were desirable, while stating that this definition did not permit row or column expansion of the matrix according to the familiar formula. Rutherford then showed a relationship between his remarks on inverses and that of Wedderburn's with regard to the determinant.

We now consider Wedderburn's definition in the light of arbitrary lattices and give a short development of the properties and relations of this determinant, which we shall call the Third Determinant. Then we will prove a lemma concerning inverses and give a counterexample to its converse.

Definition 5.1. Let A ∈ M_n(L), where L is an orthocomplemented lattice. We define Ā = (ā_ij), where

    ā_ij = a_ij ∧ (∧_{k≠j} a_ik′),

and Ã = (ã_ij), where

    ã_ij = a_ij ∧ (∧_{k≠i} a_kj′).

Lemma 5.2. Let A ∈ M_n(L), where L is an orthocomplemented lattice; then |Ā|₁ = |Ã|₁.

Proof: Immediate from Definitions 3.2 and 5.1, since

    |Ā|₁ = ∨_{σ∈P_n} (a_{1σ(1)} ∧ ... ∧ a_{nσ(n)} ∧ q(σ)) = |Ã|₁,

where q(σ) is the meet of the complements of the elements of A which are not of the form a_{iσ(i)}, i = 1, ..., n.

Definition 5.3. Let A ∈ M_n(L), where L is an orthocomplemented lattice. We define the third determinant of A as |A|₃ = |Ā|₁ = |Ã|₁.

Definition 5.4. For any given permutation σ,

    ∧_{i=1}^{n} ā_{iσ(i)} = a_{1σ(1)} ∧ ... ∧ a_{nσ(n)} ∧ q(σ) = ∧_{i=1}^{n} ã_{iσ(i)}

are called the joinands of Ā and Ã.

Lemma 5.5. Each joinand of |A|₃ is the meet of one element from each row and column of A and the complements of all the other elements in A.

Proof: Follows immediately from Definition 5.3.
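A sketch of Definitions 5.1 and 5.3 in the finite lattice model; bar_matrix and det3 are assumed names standing in for the barred matrix and |A|₃ of the text.

```python
from itertools import permutations
from functools import reduce

def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"

def comp(x):
    return {"o": "1", "1": "o", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}[x]

def det1(A):
    n = len(A)
    return reduce(join, (reduce(meet, (A[i][s[i]] for i in range(n)))
                         for s in permutations(range(n))))

def bar_matrix(A):
    # entry (i, j) is a_ij met with the complements of the other entries of row i
    n = len(A)
    return [[reduce(meet, [A[i][j]] + [comp(A[i][k]) for k in range(n) if k != j])
             for j in range(n)] for i in range(n)]

def det3(A):
    return det1(bar_matrix(A))          # |A|_3 = |A-bar|_1

E = [["1", "o"], ["o", "1"]]
print(det3(E))                          # '1'  (Lemma 5.15 (3))
print(det3([["1", "1"], ["1", "1"]]))   # 'o'  (Lemma 5.15 (1))
```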

Lemma 5.6. Let A ∈ M_n(L), where L is an orthocomplemented lattice; then the matrix Ā formed from Aᵀ is the transpose of Ã.

Proof: The (j,i) entry of the matrix Ā formed from Aᵀ is

    (Aᵀ)_ji ∧ (∧_{k≠i} (Aᵀ)_jk′) = a_ij ∧ (∧_{k≠i} a_kj′) = ã_ij = ((Ã)ᵀ)_ji.

Theo:t".em_ 5.7. Let A. e Mn (L), whete L is an orth~complementeq lattice, . . .... · .. T ·.

then 1Al3 = IA 13 •

-Pr.oaf: 1Al3 = l!l 1 =,I {!)Tll = l<AT)l 1 = IATl3• ,

Remark 5~8 •. Due to Theorei:n 5.7, we have a corresponding principle of . .· :· . .

duality for the third d~terminant of A as in Corollary 3. 6.

Theorem 5.9. Let A ∈ M_n(L), where L is an orthocomplemented lattice. If matrix B is obtained from A by the interchange of two rows (columns), then |B|₃ = |A|₃.

Proof: Let B = A[r:s]. Forming the matrix of Definition 5.1 commutes with the interchange of two rows, so B̄ = Ā[r:s] and, by Theorem 3.8, |B|₃ = |Ā[r:s]|₁ = |Ā|₁ = |A|₃.

Theorem 5.10. Let A ∈ M_n(L) for n > 1 and L an orthocomplemented lattice. If one column (row) is less than or equal to another column (row), then Ā (Ã) has a column (row) of zeroes, and |A|₃ = o.

Proof: Without loss of generality, suppose column one is less than or equal to column two, that is, a_i1 ≤ a_i2 for i = 1, ..., n. By Definition 5.1, the elements in the first column of Ā are

    ā_i1 = a_i1 ∧ (a_i2′ ∧ ∧_{k≠1,2} a_ik′) = (a_i1 ∧ a_i2′) ∧ (∧_{k≠1,2} a_ik′) = o

for i = 1, ..., n, since a_i1 ∧ a_i2′ ≤ a_i2 ∧ a_i2′ = o.

The following two corollaries for A ∈ M_n(L), where L is an orthocomplemented lattice, are immediate consequences of the previous theorem.

Corollary 5.11. If two rows (columns) of A are identical, then |A|₃ = o.

Corollary 5.12. If all the elements of a row (column) of A are zero, then |A|₃ = o.

Theorem 5.13. Let A ∈ M_n(L), where L is an orthomodular lattice; if the matrix B is formed from A by the meet of each element of a row (column) of A with an element c ∈ C(L), then |B|₃ = c ∧ |A|₃.

Proof: Suppose B is obtained from A by the meet of c with the kth row of A; then we first need to show that b̄_kj = c ∧ ā_kj. Now

    b̄_kj = (a_kj ∧ c) ∧ ∧_{i≠j} (c ∧ a_ki)′ = (a_kj ∧ c) ∧ ∧_{i≠j} (c′ ∨ a_ki′)
          = [(c′ ∧ (a_kj ∧ c)) ∨ ((c ∧ a_kj) ∧ ∧_{i≠j} a_ki′)]
          = c ∧ a_kj ∧ (∧_{i≠j} a_ki′) = c ∧ ā_kj.

Since b̄_ij = ā_ij for i ≠ k, the proof follows from Theorem 3.9.

Theorem 5.14. Let A ∈ M_n(L), where L is an orthomodular lattice; if the matrix B is formed from A by the join of each element of a row (column) of A with an element c ∈ C(L), then |B|₃ = c′ ∧ |A|₃.

Proof: Suppose B is obtained from A by the join of the kth row of A by c. We will show that b̄_kj = c′ ∧ ā_kj. Now

    b̄_kj = (a_kj ∨ c) ∧ ∧_{i≠j} (a_ki ∨ c)′ = (a_kj ∨ c) ∧ ∧_{i≠j} (c′ ∧ a_ki′) = (a_kj ∨ c) ∧ c′ ∧ (∧_{i≠j} a_ki′)
          = [(a_kj ∧ c′) ∨ (c ∧ c′)] ∧ (∧_{i≠j} a_ki′) = c′ ∧ a_kj ∧ (∧_{i≠j} a_ki′) = c′ ∧ ā_kj.

Since b̄_ij = ā_ij for i ≠ k, the result follows from Theorem 3.9.

Lemma 5.15. The following are evaluations of the third determinant of certain special matrices:

(1) |I|₃ = o.
(2) |O|₃ = o.
(3) |E|₃ = 1.
(4) The third determinant of a scalar matrix for c ∈ L is |c ∧ E|₃ = c.
(5) The third determinant of a diagonal matrix A = (a_ij) is |A|₃ = a_11 ∧ a_22 ∧ ... ∧ a_nn.
(6) Let Uᵗ and U_t be defined as in Definition 2.2.14; then |U_t|₃ = (a_11 ∧ ... ∧ a_nn) ∧ (∧_{i<j} a_ij′) and |Uᵗ|₃ = (a_11 ∧ ... ∧ a_nn) ∧ (∧_{i>j} a_ij′).
(7) Let U be an upper (lower) matrix; then |U|₃ = o.

Remark 5.16. According to the definitions, ā_ij ≤ a_ij, so from the monotone property it is evident that |A|₃ ≤ |A|₁.

Lemma 5.17. Let A ∈ M_n(L); then |A|₃ ≤ |A|₂.

Proof: By definition,

    |A|₃ = ∨_{σ∈P_n⁺} (a_{1σ(1)} ∧ ... ∧ a_{nσ(n)} ∧ q(σ)) ∨ ∨_{σ∈P_n⁻} (a_{1σ(1)} ∧ ... ∧ a_{nσ(n)} ∧ q(σ)),

where q(σ) = ∧ a_ij′, the meet being taken over the elements not of the form a_{iσ(i)}. Also,

    b_A′ = ∧_{σ∈P_n⁻} (a_{1σ(1)}′ ∨ ... ∨ a_{nσ(n)}′) and a_A′ = ∧_{σ∈P_n⁺} (a_{1σ(1)}′ ∨ ... ∨ a_{nσ(n)}′).

If σ is even, q(σ) ≤ b_A′, and if σ is odd, q(σ) ≤ a_A′. Hence

    |A|₃ ≤ ∨_{σ∈P_n⁺} (a_{1σ(1)} ∧ ... ∧ a_{nσ(n)} ∧ b_A′) ∨ ∨_{σ∈P_n⁻} (a_{1σ(1)} ∧ ... ∧ a_{nσ(n)} ∧ a_A′)
         ≤ (a_A ∧ b_A′) ∨ (b_A ∧ a_A′) = |A|₂.

Lemma 5.18. Let A ∈ M_n(L), where L is an orthocomplemented lattice. If |A|₃ = 1, then AAᵀ = AᵀA = E.

Proof: Since |A|₃ ≤ |A|₁, we have 1 ≤ |A|₁ ≤ ∨_{k=1}^{n} a_ik for all i, and likewise |A|₁ ≤ ∨_{k=1}^{n} a_kj for all j; hence ∨_{k=1}^{n} a_ik = 1 = ∨_{k=1}^{n} a_kj for all i and j.

Now fix i ≠ j and a column k, and consider any joinand a_{1σ(1)} ∧ ... ∧ a_{nσ(n)} ∧ q(σ) of |A|₃. If σ(i) ≠ k, the joinand is ≤ ā_{iσ(i)} = a_{iσ(i)} ∧ (∧_{m≠σ(i)} a_im′) ≤ a_ik′, whereas if σ(i) = k, the joinand is ≤ ã_ik = a_ik ∧ (∧_{m≠i} a_mk′) ≤ a_jk′. Thus 1 = |A|₃ ≤ a_ik′ ∨ a_jk′ = (a_ik ∧ a_jk)′, which implies that o = a_ik ∧ a_jk for all k and j ≠ i; therefore, by Lemma 2.2.21, AAᵀ = E.

Similarly, fix a row k and i ≠ j. If σ(k) ≠ i, any joinand of |A|₃ is ≤ ā_{kσ(k)} ≤ a_ki′, whereas if σ(k) = i, the joinand is ≤ ā_ki ≤ a_kj′ for any j ≠ i. Thus 1 = |A|₃ ≤ a_ki′ ∨ a_kj′ = (a_ki ∧ a_kj)′, so o = a_ki ∧ a_kj for all k and j ≠ i; therefore, by the same lemma, AᵀA = E.

Remark 5.19. We now give a counterexample to the converse of Lemma 5.18. Consider A ∈ M₂(L) for L of Example 2.1.22, where

    A =  a b
         c d

and a, b, c, d are distinct one-dimensional subspaces. Now AAᵀ = AᵀA = E, but |A|₃ = o.
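The counterexample can be checked directly in the finite lattice model; in the sketch below the four entries are taken to be the four distinct atoms, which is one illustrative choice, and the helper names are assumptions.

```python
from itertools import permutations
from functools import reduce

def meet(x, y):
    if x == y:   return x
    if x == "1": return y
    if y == "1": return x
    return "o"

def join(x, y):
    if x == y:   return x
    if x == "o": return y
    if y == "o": return x
    return "1"

def comp(x):
    return {"o": "1", "1": "o", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}[x]

def det1(A):
    n = len(A)
    return reduce(join, (reduce(meet, (A[i][s[i]] for i in range(n)))
                         for s in permutations(range(n))))

def det3(A):
    n = len(A)
    bar = [[reduce(meet, [A[i][j]] + [comp(A[i][k]) for k in range(n) if k != j])
            for j in range(n)] for i in range(n)]
    return det1(bar)

def mat_prod(A, B):
    n = len(A)
    return [[reduce(join, (meet(A[i][k], B[k][j]) for k in range(n)))
             for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(c) for c in zip(*A)]

E = [["1", "o"], ["o", "1"]]
A = [["a", "b"], ["a'", "b'"]]      # the matrix of Remark 5.19 with c = a', d = b'
print(mat_prod(A, transpose(A)) == E and mat_prod(transpose(A), A) == E)  # True
print(det3(A))                       # 'o', so the converse of Lemma 5.18 fails
```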

VI. BIBLIOGRAPHY

[1] Bevis, Jean H., "Matrices over orthomodular lattices", unpublished article written at Virginia Polytechnic Institute during summer of 1967.

[2] Bevis, Jean H., "Trends in lattice theory", notes taken in a seminar given at Virginia Polytechnic Institute during fall of 1967.

[3] Birkhoff, Garrett, Lattice Theory, Amer. Math. Soc. Colloquium Publications, Vol. 25, rev. ed., New York, 1948.

[4] Eves, Howard W., Elementary Matrix Theory, Allyn and Bacon, Inc., Boston, 1966.

[5] Flegg, H. Graham, Boolean Algebra and its Application, John Wiley & Sons, Inc., New York, 1964.

[6] Luce, R. Duncan, "A Note on Boolean Matrix Theory", Proc. Amer. Math. Soc. 3 (1952), pp. 382-388.

[7] Marcus, Marvin and Minc, Henryk, Introduction to Linear Algebra, The Macmillan Company, New York, 1965.

[8] Rutherford, D. E., Introduction to Lattice Theory, Hafner Publishing Company, New York, 1965.

[9] Rutherford, D. E., "Inverses of Boolean Matrices", Proc. Glasgow Math. Assoc. 6 (1963), pp. 49-53.

[10] Sokolov, O. B., "The application of Boolean determinants to the analysis of logical multipoles", Kazan State Univ. Sci. Survey Conf. (1962), pp. 109-111.

[11] Szász, Gábor, Introduction to Lattice Theory, Academic Press, New York and London, 1963.

[12] Wedderburn, J. H. M., "Boolean Linear Associative Algebra", Annals of Mathematics, Vol. 35, No. 1 (January, 1934), pp. 185-186.

VII. ACKNOWLEDGEMENTS

The author wishes to express his sincere appreciation to [name redacted] of the Department of Mathematics of the Virginia Polytechnic Institute, whose assistance and guidance made this thesis possible. Also, the author is extremely grateful to all the members of the Department of Mathematics for their encouragement and help, either directly or indirectly, in the author's maturity in Mathematics, especially [names redacted], who also served on his committee.

Finally, particular thanks goes to the author's parents for their unfailing guidance, support, and encouragement.

VIII. VITA

The vita has been removed from the scanned document.


DETERMINANTS OF MATRICES OVER LATTICES

by

Daniel Sprigg Chesley, IV

ABSTRACT

Three different definitions for the determinant of a matrix over arbitrary lattices have been developed to determine which properties and relations were reminiscent of the determinant or permanent of elementary algebra. In each determinant there are properties concerning: the elements of the matrix in the expansion of its determinant; the determinant of a matrix and its transpose; a principle of duality for rows and columns; the interchange of rows and columns; the determinant of a matrix formed from another by a row or column meet of certain elements; and evaluations of certain special matrices. An expansion by row or column is given for one determinant and a lemma on inverses is proven in light of another. A preliminary section on Lattice Theory is also included.