
MATRICES AND GRAPHS IN GEOMETRY

The main topic of this book is simplex geometry, a generalization of the geometry of the triangle and the tetrahedron. The appropriate tool for its study is matrix theory, but applications usually involve solving huge systems of linear equations or eigenvalue problems, and geometry can help in visualizing the behavior of the problem. In many cases, solving such systems may depend more on the distribution of nonzero coefficients than on their values, so graph theory is also useful. The author has discovered a method that, in many (symmetric) cases, helps to split huge systems into smaller parts.

Many readers will welcome this book, from undergraduates to specialists in mathematics, as well as nonspecialists who only use mathematics occasionally, and anyone who enjoys geometric theorems. It acquaints readers with basic matrix theory, graph theory, and elementary Euclidean geometry so that they too can appreciate the underlying connections between these various areas of mathematics and computer science.


Encyclopedia of Mathematics and its Applications

Matrices and Graphs in Geometry

MIROSLAV FIEDLER
Academy of Sciences of the Czech Republic, Prague


CAMBRIDGE UNIVERSITY PRESS

Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Dubai, Tokyo, Mexico City

Cambridge University Press

The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org

Information on this title: www.cambridge.org/9780521461931

© Cambridge University Press 2011

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2011

Printed in the United Kingdom at the University Press, Cambridge

A catalogue record for this publication is available from the British Library

Library of Congress Cataloguing in Publication data

Fiedler, Miroslav, 1926–
Matrices and Graphs in Geometry / Miroslav Fiedler.
p. cm. – (Encyclopedia of Mathematics and its Applications; 139)
Includes bibliographical references and index.
ISBN 978-0-521-46193-1
1. Geometry. 2. Matrices. 3. Graphic methods. I. Title. II. Series.
QA447.F45 2011
516–dc22    2010046601

ISBN 978-0-521-46193-1 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


Contents

Preface page vii

1 A matricial approach to Euclidean geometry 1
1.1 Euclidean point space 1
1.2 n-simplex 4
1.3 Some properties of the angles in a simplex 12
1.4 Matrices assigned to a simplex 15

2 Simplex geometry 24
2.1 Geometric interpretations 24
2.2 Distinguished objects of a simplex 31

3 Qualitative properties of the angles in a simplex 46
3.1 Signed graph of a simplex 46
3.2 Signed graphs of the faces of a simplex 50
3.3 Hyperacute simplexes 52
3.4 Position of the circumcenter of a simplex 53

4 Special simplexes 64
4.1 Right simplexes 64
4.2 Orthocentric simplexes 72
4.3 Cyclic simplexes 94
4.4 Simplexes with a principal point 108
4.5 The regular n-simplex 112

5 Further geometric objects 114
5.1 Inverse simplex 114
5.2 Simplicial cones 119
5.3 Regular simplicial cones 131
5.4 Spherical simplexes 132
5.5 Finite sets of points 135
5.6 Degenerate simplexes 142

6 Applications 145
6.1 An application to graph theory 145
6.2 Simplex of a graph 147
6.3 Geometric inequalities 152
6.4 Extended graphs of tetrahedrons 153
6.5 Resistive electrical networks 156

Appendix 159
A.1 Matrices 159
A.2 Graphs and matrices 175
A.3 Nonnegative matrices, M- and P-matrices 179
A.4 Hankel matrices 182
A.5 Projective geometry 182

References 193
Index 195


Preface

This book comprises, in addition to auxiliary material, the research on which I have worked for over 50 years. Some of the results appear here for the first time. The impetus for writing the book came from the late Victor Klee, after my talk in Minneapolis in 1991. The main subject is simplex geometry, a topic which has fascinated me since my student days, caused, in fact, by the richness of triangle and tetrahedron geometry on one side and matrix theory on the other side. A large part of the content is concerned with qualitative properties of a simplex. This can be understood as studying relations not only of equalities but also of inequalities. It seems that this direction is starting to have important consequences in practical (and important) applications, such as finite element methods.

Another feature of the book is using terminology and sometimes even more specific topics from graph theory. In fact, the interplay between Euclidean geometry, matrices, graphs, and even applications in some parts of electrical networks theory, can be considered as the basic feature of the book.

In the first chapter, the matricial methods are introduced and used for building the geometry of a simplex; the generalization of the triangle and tetrahedron to higher dimensions is also discussed. The geometric interpretations and a detailed description of basic relationships and of distinguished points in an n-simplex are given in the second chapter.

The third chapter contains a complete characterization of possible distributions of acute, right, and obtuse dihedral angles in a simplex. Also, hyperacute simplexes, having no obtuse dihedral interior angle, are studied. The idea of qualitative properties is extended to the position of the circumcenter of the simplex and its connection to the qualities of dihedral angles.

As can be expected, most well-known properties of the triangle allow a generalization only for special kinds of simplexes. Characterizations and deeper properties of such simplexes – right, orthocentric, cyclic, and others – are studied in the fourth chapter.


In my opinion, the methods deserve to be used not only for simplexes, but also for other geometric objects. These topics are presented in somewhat more concentrated form in Chapter 5. Let me just list them: finite sets of points, inverse simplex, simplicial cones, spherical cones, and degenerate simplexes.

The short last chapter contains some applications of the previous results. The most unusual is the remarkably close relationship of hyperacute simplexes with resistive electrical networks.

The necessary background from matrix theory, graph theory, and projective geometry is provided in the Appendix.

Miroslav Fiedler


1

A matricial approach to Euclidean geometry

1.1 Euclidean point space

We assume that the reader is familiar with the usual notion of the Euclidean vector space, i.e. a real vector space endowed with an inner product satisfying the usual conditions (cf. Appendix).

We shall be considering the point Euclidean n-space En, which contains two kinds of objects: points and vectors. The usual operations – addition and multiplication by scalars – for vectors are here completed by analogous operations for points, with the following restriction.

A linear combination of points and vectors is allowed only in two cases:

(i) the sum of the coefficients of the points is one and the result is a point;
(ii) the sum of the coefficients of the points is zero and the result is a vector.

Thus, if A and B are points, then 1·B + (−1)·A, or simply B − A, is a vector (which can be considered as starting at A and ending at B). The point ½A + ½B is the midpoint of the segment AB, etc.

The points A0, . . . , Ap are called linearly independent if α0 = . . . = αp = 0 is the only way in which to express the zero vector as ∑_{i=0}^{p} αiAi with ∑_{i=0}^{p} αi = 0.

The dimension of a point Euclidean space is, by definition, the dimension of the underlying Euclidean vector space. It is equal to n if there are in the space n + 1 linearly independent points, whereas any n + 2 points in the space are linearly dependent.

In the usual way, we can then define linear (point) subspaces of the point Euclidean space, halfspaces, convexity, etc. A ray, or halfline, is, for some distinct points A, B, the set of all points of the form A + λ(B − A), λ ≥ 0. As usual, we define the (Euclidean) distance ρ(A, B) between the points A, B in En as the length √〈B − A, B − A〉 of the corresponding vector B − A. Here, as throughout the book, we denote by 〈p, q〉 the inner product of the vectors p, q in the corresponding Euclidean space.


To study geometric objects in Euclidean spaces, we shall often use positive definite and positive semidefinite matrices, or the corresponding quadratic forms. Their detailed properties will be given in the Appendix. The following basic theorem will enable us to mutually intertwine the geometric objects and matrices.

Theorem 1.1.1 Let p1, . . . , pn be an ordered system of vectors in some Euclidean r-dimensional (but not (r − 1)-dimensional) vector space. Then the Gram matrix G(p1, . . . , pn) = [gik], where gik = 〈pi, pk〉, is positive semidefinite of rank r.

Conversely, let A = [aik] be a positive semidefinite n × n matrix of rank r. Then there exist in any m-dimensional Euclidean vector space, for m ≥ r, n vectors p1, . . . , pn such that

\[ \langle p_i, p_k \rangle = a_{ik}, \qquad \text{for all } i, k = 1, \ldots, n. \]

In addition, every linear dependence relation among the vectors p1, p2, . . . , pn implies the same linear dependence relation among the rows (and thus also columns) of the Gram matrix G(p1, . . . , pn), and vice versa.

The proof is in the Appendix, Theorems A.1.44 and A.1.45.

Another important theorem concerns so-called biorthogonal bases in the Euclidean vector space En. The proof will also be given in the Appendix, Theorem A.1.47.

Theorem 1.1.2 Let p1, . . . , pn be an ordered system of linearly independent vectors in En. Then there exists a unique system of vectors q1, . . . , qn in En such that for all i, k = 1, . . . , n,

\[ \langle p_i, q_k \rangle = \delta_{ik} \]

(δik is the Kronecker delta, meaning zero if i ≠ k and one if i = k). The vectors q1, . . . , qn are again linearly independent and the Gram matrices of the systems p1, . . . , pn and q1, . . . , qn are inverse to each other.

In other words, if G(p), G(q) are the Gram matrices of the vectors pi, qj, then the matrix

\[
\begin{bmatrix} G(p) & I \\ I & G(q) \end{bmatrix}
\]

has rank n.

Remark 1.1.3 The bases p1, . . . , pn and q1, . . . , qn are called biorthogonal bases in En.
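As an illustration of Theorem 1.1.2, the following small NumPy sketch (not from the book; the vectors p_i are arbitrary choices) computes the biorthogonal basis from the inverse of the Gram matrix and checks that the two Gram matrices are mutually inverse.

```python
# Minimal sketch of Theorem 1.1.2, assuming nothing beyond NumPy:
# given linearly independent vectors p_1, ..., p_n (rows of P), the dual
# (biorthogonal) basis q_1, ..., q_n is obtained as the rows of G(p)^{-1} P.
import numpy as np

P = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])          # rows: p_1, p_2, p_3 in E_3
Gp = P @ P.T                             # Gram matrix G(p), entries <p_i, p_k>
Q = np.linalg.solve(Gp, P)               # rows: dual basis q_1, q_2, q_3
Gq = Q @ Q.T                             # Gram matrix G(q)

print(np.allclose(P @ Q.T, np.eye(3)))   # <p_i, q_k> = delta_ik
print(np.allclose(Gp @ Gq, np.eye(3)))   # G(p) and G(q) are inverse to each other
```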

We shall be using, at least in the first chapter, the usual orthonormal coordinate system in En, which assigns to every point an n-tuple (usually real, but in some cases even complex) of coordinates. We also recall that the linear independence of the m points A = (a1, . . . , an), B = (b1, . . . , bn), . . . , C = (c1, . . . , cn) is characterized by the fact that the matrix

\[
\begin{bmatrix}
a_1 & \cdots & a_n & 1 \\
b_1 & \cdots & b_n & 1 \\
\cdots & \cdots & \cdots & \cdots \\
c_1 & \cdots & c_n & 1
\end{bmatrix}
\tag{1.1}
\]

has rank m. In the case that we include the linear independence of some vector, say u = (u1, . . . , un), the corresponding row in (1.1) will be u1, . . . , un, 0.

It then follows analogously that the linear hull of n linearly independent points and/or vectors, called a hyperplane, is determined by the relation

\[
\det
\begin{bmatrix}
x_1 & \cdots & x_n & 1 \\
a_1 & \cdots & a_n & 1 \\
b_1 & \cdots & b_n & 1 \\
\cdots & \cdots & \cdots & \cdots \\
c_1 & \cdots & c_n & 1
\end{bmatrix}
= 0.
\]

This means that the point X = (x1, . . . , xn) is a point of this hyperplane if and only if it satisfies an equation of the form

\[
\sum_{i=1}^{n} \alpha_i x_i + \alpha_0 = 0;
\]

here, the n-tuple (α1, . . . , αn) cannot be a zero n-tuple because of the linear independence of the given points and/or vectors. The corresponding (thus nonzero) vector v = [α1, . . . , αn]T (in matrix notation) is called the normal vector to this hyperplane. It is easily seen to be orthogonal to every vector determined by two points of the hyperplane.

Two hyperplanes are parallel if and only if their normal vectors are linearly dependent. They are perpendicular (in other words, intersect orthogonally) if the corresponding normal vectors are orthogonal. The perpendicularity is described by the formula

\[
\sum_{i=1}^{n} \alpha_i \beta_i = 0 \tag{1.2}
\]

if

\[
\sum_{i=1}^{n} \alpha_i x_i + \alpha_0 = 0, \qquad \sum_{i=1}^{n} \beta_i x_i + \beta_0 = 0
\]

are equations of the two hyperplanes.

In the following chapters, it will be advantageous to use the barycentric coordinates with respect to the basic simplex.


1.2 n-simplex

An n-simplex in En is usually defined as the convex hull of n + 1 linearly independent points, so-called vertices, of En. (Thus a 2-simplex is a triangle, a 3-simplex a tetrahedron, etc.)

Theorem 1.2.1 Let A1, . . . , An+1 be vertices of an n-simplex in En. Then every point X in En can be expressed in the form

\[
X = \sum_{i=1}^{n+1} x_i A_i, \qquad \sum_{i=1}^{n+1} x_i = 1, \tag{1.3}
\]

where the xi's are real numbers, and this expression is determined uniquely.

Also, every vector u in En can be expressed in the form

\[
u = \sum_{i=1}^{n+1} u_i A_i, \qquad \sum u_i = 0, \tag{1.4}
\]

where the ui's are real numbers, and this expression is determined uniquely.

Proof. The vectors pi = Ai − An+1, i = 1, . . . , n, are clearly linearly independent and thus form a basis of the corresponding vector space. Hence, if X is a point in En, the vector X − An+1 is a linear combination of the vectors pi: X − An+1 = ∑_{i=1}^{n} xi pi = ∑_{i=1}^{n} xi (Ai − An+1). If we denote xn+1 = 1 − ∑_{i=1}^{n} xi, we obtain the expression in the theorem.

Suppose there is also an expression

\[
X = \sum_{i=1}^{n+1} y_i A_i, \qquad \sum_{i=1}^{n+1} y_i = 1,
\]

for some numbers yi. Then for ci = xi − yi it would follow that

\[
\sum_{i=1}^{n+1} c_i A_i = 0, \qquad \sum c_i = 0,
\]

which implies ∑_{i=1}^{n} ci pi = 0. Thus ci = 0, i = 1, . . . , n + 1, so that both expressions coincide.

If now u is a vector in En, then u can be written in the form

\[
u = \sum_{i=1}^{n} u_i p_i = \sum_{i=1}^{n+1} u_i A_i,
\]

if un+1 is defined as −∑_{i=1}^{n} ui. This shows the existence of the required expression. The uniqueness follows similarly as in the first case. □
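The barycentric coordinates of Theorem 1.2.1 can be computed by solving a small linear system. The following sketch (illustrative only, assuming NumPy; the triangle and the point are arbitrary choices) does this for a 2-simplex.

```python
# Barycentric coordinates of a point X w.r.t. the simplex A_1, ..., A_{n+1}:
# they solve  sum_i x_i A_i = X  together with  sum_i x_i = 1.
import numpy as np

A = np.array([[0.0, 0.0],        # A_1
              [1.0, 0.0],        # A_2
              [0.0, 1.0]])       # A_3: vertices of a 2-simplex in E_2
X = np.array([0.2, 0.3])

lhs = np.vstack([A.T, np.ones(3)])       # coordinate equations + normalization
rhs = np.append(X, 1.0)
x = np.linalg.solve(lhs, rhs)

print(x)                                  # barycentric coordinates of X
print(np.allclose(A.T @ x, X), np.isclose(x.sum(), 1.0))
```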


The numbers x1, . . . , xn+1 in (1.3) are called barycentric coordinates of the point X (with respect to the n-simplex with the vertices A1, . . . , An+1). The numbers u1, . . . , un+1 in (1.4) are analogously called barycentric coordinates of the vector u.

It is advantageous to introduce the more general notion of homogeneous barycentric coordinates (with respect to a simplex). With their use, it is possible to study the geometric objects in the space Ēn, i.e. the Euclidean space En completed by improper points.

Indeed, suppose that (x1, . . . , xn+1) is an ordered (n + 1)-tuple of real numbers, not all equal to zero. Distinguish two cases:

(a) ∑_{i=1}^{n+1} xi ≠ 0; then we assign to this (n + 1)-tuple a proper point X in En having (the usual nonhomogeneous) barycentric coordinates with respect to the given simplex

\[
X = \left( \frac{x_1}{\sum x_i}, \frac{x_2}{\sum x_i}, \ldots, \frac{x_{n+1}}{\sum x_i} \right).
\]

(b) ∑_{i=1}^{n+1} xi = 0; then we assign to this (n + 1)-tuple the direction of the (nonzero) vector u having (the previous) nonhomogeneous barycentric coordinates u = (x1, . . . , xn+1), i.e. the improper point of Ēn.

It is obvious from this definition that to every nonzero (n + 1)-tuple (x1, . . . , xn+1) a proper or improper point of Ēn is assigned, and to the (n + 1)-tuples (x1, . . . , xn+1) and (ρx1, . . . , ρxn+1), for ρ ≠ 0, the same point is assigned.

Also, conversely, to every point in En and to every direction in En a nonzero (n + 1)-tuple of real numbers is assigned. We thus have an isomorphism between the space Ēn and a real projective n-dimensional space. The improper points in Ēn form a hyperplane with equation ∑_{i=1}^{n+1} xi = 0 in the homogeneous barycentric coordinates. The points A1, . . . , An+1, i.e. the vertices of the basic simplex, have in these coordinates the form A1 = (1, 0, . . . , 0), . . . , An+1 = (0, 0, . . . , 1). As we shall see later, the point (1, 1, . . . , 1) is the centroid (barycentrum in Latin) of the simplex, which explains the name of these coordinates.

Other important objects assigned to a simplex Σ are faces. They are defined as linear spaces spanned by proper subsets of vertices of Σ. The word face, without specifying the dimension, is usually reserved for faces of maximum dimension, i.e. dimension n − 1. Every such face is determined by n of the n + 1 vertices. If Ai is the missing vertex, we denote the face by ωi and call it the face opposite the vertex Ai. The one-dimensional faces are spanned by two vertices and are called edges of Σ. Sometimes, this name is assigned just to the segment between the two vertices.


It is immediately obvious that the equation of the face ωi in barycentric coordinates is xi = 0, and the smaller dimensional faces can be determined either as spans of their vertices, or as intersections of the (n − 1)-dimensional spaces ω.

For completeness, we present a lemma.

Lemma 1.2.2 Suppose Σ is an n-simplex in En with vertices A1, . . . , An+1 and ((n − 1)-dimensional) faces ω1, . . . , ωn+1 (ωi opposite Ai). The set R of those points of En not contained in any face ωi consists of 2^{n+1} − 1 connected open subsets, exactly one of which, called the interior of Σ, is bounded (in the sense that it does not contain any halfline). Each of these subsets is characterized by a set of signs ε1, . . . , εn+1, where each εi² = 1, but not all εi = −1; the proper point y, which has nonhomogeneous barycentric coordinates y1, . . . , yn+1, is in this subset if and only if sgn yi = εi, i = 1, . . . , n + 1. The interior of Σ consists of the points corresponding to εi = 1, i = 1, . . . , n + 1, thus having all barycentric coordinates positive.

We shall not prove the whole lemma. The substantial part of the proof is in the proof of the following:

Lemma 1.2.3 A point y is an interior point of Σ (i.e. belongs to the interior of Σ) if and only if every open halfline originating in y intersects at least one of the (n − 1)-dimensional faces of Σ.

Proof. Suppose first that y = (y1, . . . , yn+1), ∑ yi = 1, is an interior point of Σ, i.e. yi > 0 for i = 1, . . . , n + 1. If u ≠ 0 is an arbitrary vector with nonhomogeneous barycentric coordinates u1, . . . , un+1, ∑ ui = 0, then the halfline y + λu, λ > 0, necessarily intersects that face ωk for which k is the index such that uk/yk = min_j(uj/yj) < 0 (at least one ui is negative), namely in the point with the parameter λ0 = −yk/uk.

Suppose now that y = (y1, . . . , yn+1), ∑ yi = 1, is not an interior point of Σ. Let p be the number of positive barycentric coordinates of y, so that 0 < p < n + 1. Denote by u the vector with nonhomogeneous barycentric coordinates ui, i = 1, . . . , n + 1,

\[
u_i = n + 1 - p \ \text{ for } y_i > 0, \qquad u_i = -p \ \text{ for } y_i \le 0.
\]

We have clearly ∑_{i=1}^{n+1} ui = 0, and no point of the halfline y + λu, λ > 0, is in any face of Σ, since all coordinates of all points of the halfline are different from zero. □

We now formulate the basic theorem (cf. [1]), which describes necessary and sufficient conditions for the existence of an n-simplex if the lengths of all edges are given. It generalizes the triangular inequality for the triangle.


Theorem 1.2.4 Let A1, . . . , An+1 be vertices of an n-simplex Σ. Then the squares mik = |Ai − Ak|² of the lengths of its edges, i, k = 1, . . . , n + 1, satisfy the two conditions:

(i) mii = 0, mik = mki;
(ii) for any nonzero (n + 1)-tuple x1, . . . , xn+1 of real numbers for which ∑_{i=1}^{n+1} xi = 0, the inequality ∑_{i,k=1}^{n+1} mik xi xk < 0 holds.

Conversely, if mik, i, k = 1, . . . , n + 1, form a system of (n + 1)² real numbers satisfying the conditions (i) and (ii), then there exists in any n-dimensional Euclidean space an n-simplex with vertices A1, . . . , An+1 such that mik = |Ai − Ak|².

Proof. Suppose that A1, . . . , An+1 are vertices of an n-simplex in a Euclidean space En. Then clearly (i) holds. To prove (ii), choose some orthonormal coordinate system in the underlying space. Let then ({}^{k}a_1, {}^{k}a_2, . . . , {}^{k}a_n) be the coordinates of Ak, k = 1, . . . , n + 1. Since the points A1, . . . , An+1 are linearly independent,

\[
\det
\begin{bmatrix}
{}^{1}a_1 & \ldots & {}^{1}a_n & 1 \\
{}^{2}a_1 & \ldots & {}^{2}a_n & 1 \\
& \ldots & & \\
{}^{n+1}a_1 & \ldots & {}^{n+1}a_n & 1
\end{bmatrix}
\ne 0 \tag{1.5}
\]

by (1.1). Suppose now that x1, . . . , xn+1 is a nonzero (n + 1)-tuple satisfying ∑_{i=1}^{n+1} xi = 0. Then

\[
\begin{aligned}
\sum_{i,k=1}^{n+1} m_{ik} x_i x_k
&= \sum_{i,k=1}^{n+1} \Bigl( \sum_{\alpha=1}^{n} ({}^{i}a_\alpha - {}^{k}a_\alpha)^2 \Bigr) x_i x_k \\
&= \sum_{i=1}^{n+1} \Bigl( \sum_{\alpha=1}^{n} {}^{i}a_\alpha^2 \Bigr) x_i \sum_{k=1}^{n+1} x_k
 + \sum_{i=1}^{n+1} x_i \sum_{k=1}^{n+1} \Bigl( \sum_{\alpha=1}^{n} {}^{k}a_\alpha^2 \Bigr) x_k
 - 2 \sum_{i,k=1}^{n+1} \sum_{\alpha=1}^{n} {}^{i}a_\alpha\, {}^{k}a_\alpha\, x_i x_k \\
&= -2 \sum_{\alpha=1}^{n} \Bigl( \sum_{k=1}^{n+1} {}^{k}a_\alpha x_k \Bigr)^{2} \\
&\le 0.
\end{aligned}
\]

Let us show that equality cannot be attained. In such a case, a nonzero system x1, . . . , xn+1 would satisfy

\[
\sum_{k=1}^{n+1} {}^{k}a_\alpha x_k = 0 \quad \text{for } \alpha = 1, \ldots, n,
\]

and

\[
\sum_{k=1}^{n+1} x_k = 0.
\]

The rows of the matrix in (1.5) would thus be linearly dependent, a contradiction.

To prove the second part, assume that the numbers mik satisfy (i) and (ii). Let us show first that the numbers

\[
c_{\alpha\beta} = \tfrac12 (m_{\alpha,n+1} + m_{\beta,n+1} - m_{\alpha\beta}), \qquad \alpha, \beta = 1, \ldots, n, \tag{1.6}
\]

have the property that the quadratic form ∑_{α,β=1}^{n} cαβ xα xβ is positive definite (and, of course, cαβ = cβα).

Indeed, suppose that x1, . . . , xn is an arbitrary nonzero system of real numbers. Define x_{n+1} = −∑_{α=1}^{n} xα. By the assumption, ∑_{i,k=1}^{n+1} mik xi xk < 0. Now

\[
\begin{aligned}
\sum_{i,k=1}^{n+1} m_{ik} x_i x_k
&= \sum_{\alpha,\beta=1}^{n} m_{\alpha\beta} x_\alpha x_\beta + 2 x_{n+1} \sum_{\alpha=1}^{n} m_{\alpha,n+1} x_\alpha \\
&= \sum_{\alpha,\beta=1}^{n} m_{\alpha\beta} x_\alpha x_\beta - 2 \sum_{\beta=1}^{n} x_\beta \sum_{\alpha=1}^{n} m_{\alpha,n+1} x_\alpha \\
&= -2 \sum_{\alpha,\beta=1}^{n} c_{\alpha\beta} x_\alpha x_\beta .
\end{aligned}
\]

This implies that ∑_{α,β=1}^{n} cαβ xα xβ > 0 and the assertion about the numbers in (1.6) follows.

By Theorem 1.1.1, in an arbitrary n-dimensional Euclidean space En, there exist n linearly independent vectors c1, . . . , cn such that their inner products satisfy

\[
\langle c_\alpha, c_\beta \rangle = c_{\alpha\beta}, \qquad \alpha, \beta = 1, \ldots, n.
\]

Choose a point An+1 in En and define points A1, . . . , An by

\[
A_\alpha = A_{n+1} + c_\alpha, \qquad \alpha = 1, \ldots, n.
\]

Since the points A1, . . . , An+1 are linearly independent, it suffices to prove that

\[
m_{ik} = |A_i - A_k|^2, \qquad i, k = 1, \ldots, n+1. \tag{1.7}
\]

This holds for k = n + 1: |Aα − An+1|² = 〈cα, cα〉 = cαα = mα,n+1 for α = 1, . . . , n and, of course, also for i = k = n + 1. Suppose now that i ≤ n, k ≤ n, and i ≠ k (for i = k, (1.7) holds). Then


\[
|A_i - A_k|^2 = \langle c_i - c_k, c_i - c_k \rangle
= \langle c_i, c_i \rangle - 2\langle c_i, c_k \rangle + \langle c_k, c_k \rangle = m_{ik},
\]

as we wanted to prove. □

Remark 1.2.5 We shall see later that ∑_{i,k=1}^{n+1} mik xi xk = 0 is the equation of the circumscribed hypersphere of the n-simplex in barycentric coordinates. The condition (ii) thus means that all the improper points are in the outer part of that hypersphere.

Remark 1.2.6 For n = 1, the condition in Theorem 1.2.4 means that m12x1x2 < 0 whenever x1 + x2 = 0, (x1, x2) ≠ (0, 0), which is just m12 > 0. Thus positivity of all the mij's for i ≠ j similarly follows.

Theorem 1.2.7 The condition (ii) in Theorem 1.2.4 is equivalent to the following: the n × n matrix C = [cik], where cik = mi,n+1 + mk,n+1 − mik, is positive definite.

Proof. This follows by elimination of xn+1 from the condition (ii) in Theorem 1.2.4. □

Example 1.2.8 The Sylvester Criterion (Appendix, Theorem A.1.34) thus yields for the triangle the conditions

\[
m_{13} > 0, \qquad 4 m_{13} m_{23} > (m_{13} + m_{23} - m_{12})^2
\]

by positive definiteness of the matrix

\[
\begin{bmatrix}
2 m_{13} & m_{13} + m_{23} - m_{12} \\
m_{13} + m_{23} - m_{12} & 2 m_{23}
\end{bmatrix}.
\]

From the second inequality, the usual triangle inequalities follow.

For the tetrahedron, surprisingly, just three inequalities for positive definiteness of the matrix

\[
\begin{bmatrix}
2 m_{14} & m_{14} + m_{24} - m_{12} & m_{14} + m_{34} - m_{13} \\
m_{14} + m_{24} - m_{12} & 2 m_{24} & m_{24} + m_{34} - m_{23} \\
m_{14} + m_{34} - m_{13} & m_{24} + m_{34} - m_{23} & 2 m_{34}
\end{bmatrix}
\]

suffice.
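A hedged numerical sketch of this criterion (not from the book; the function and example data are illustrative choices): given the squares m_ik of candidate edge lengths, Theorem 1.2.7 reduces the existence of the simplex to positive definiteness of the matrix C.

```python
# Existence test for an n-simplex with prescribed squared edge lengths,
# via positive definiteness of C with c_ik = m_{i,n+1} + m_{k,n+1} - m_ik.
import numpy as np

def simplex_exists(M):
    """M: (n+1)x(n+1) symmetric matrix of squared edge lengths, m_ii = 0."""
    n = M.shape[0] - 1
    C = np.empty((n, n))
    for i in range(n):
        for k in range(n):
            C[i, k] = M[i, n] + M[k, n] - M[i, k]
    return np.all(np.linalg.eigvalsh(C) > 0)   # symmetric C: PD iff all eigenvalues > 0

M_reg = np.ones((4, 4)) - np.eye(4)             # regular tetrahedron, all edges 1
M_bad = M_reg.copy(); M_bad[0, 1] = M_bad[1, 0] = 9.0   # violates a triangle inequality

print(simplex_exists(M_reg))   # True
print(simplex_exists(M_bad))   # False
```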

In the sequel, we shall need some formulae for the distances and angles in barycentric coordinates.

Theorem 1.2.9 Let X = (xi), Y = (yi), Z = (zi) be proper points in En, and xi, yi, zi be their homogeneous barycentric coordinates, respectively, with respect to the simplex Σ. Then the inner product of the vectors Y − X and Z − X is

\[
\langle Y - X, Z - X \rangle = -\frac12 \sum_{i,k=1}^{n+1} m_{ik}
\left( \frac{x_i}{\sum x_j} - \frac{y_i}{\sum y_j} \right)
\left( \frac{x_k}{\sum x_j} - \frac{z_k}{\sum z_j} \right). \tag{1.8}
\]

Proof. We can assume that ∑ xj = ∑ yj = ∑ zj = 1. Then

\[
\langle Y - X, Z - X \rangle
= \Bigl\langle \sum_{i=1}^{n+1} (y_i - x_i) A_i, \ \sum_{i=1}^{n+1} (z_i - x_i) A_i \Bigr\rangle
= \Bigl\langle \sum_{i=1}^{n} (y_i - x_i)(A_i - A_{n+1}), \ \sum_{k=1}^{n} (z_k - x_k)(A_k - A_{n+1}) \Bigr\rangle.
\]

Since

\[
\langle A_i - A_{n+1}, A_k - A_{n+1} \rangle = -\frac12 \bigl( \langle A_i - A_k, A_i - A_k \rangle
- \langle A_i - A_{n+1}, A_i - A_{n+1} \rangle - \langle A_k - A_{n+1}, A_k - A_{n+1} \rangle \bigr),
\]

we obtain

\[
\begin{aligned}
\langle Y - X, Z - X \rangle
&= -\frac12 \Bigl( \sum_{i,k=1}^{n} m_{ik} (y_i - x_i)(z_k - x_k)
 - \sum_{k=1}^{n} (z_k - x_k) \sum_{i=1}^{n} m_{i,n+1} (y_i - x_i)
 - \sum_{i=1}^{n} (y_i - x_i) \sum_{k=1}^{n} m_{k,n+1} (z_k - x_k) \Bigr) \\
&= -\frac12 \sum_{i,k=1}^{n+1} m_{ik} (y_i - x_i)(z_k - x_k).
\end{aligned}
\]

For homogeneous coordinates, this yields (1.8). □

Corollary 1.2.10 The square of the distance between the points X = (xi) and Y = (yi) in barycentric coordinates is

\[
\rho^2(X, Y) = -\frac12 \sum_{i,k=1}^{n+1} m_{ik}
\left( \frac{x_i}{\sum x} - \frac{y_i}{\sum y} \right)
\left( \frac{x_k}{\sum x} - \frac{y_k}{\sum y} \right). \tag{1.9}
\]
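The following sketch (illustrative, NumPy only; the triangle and the two points are arbitrary) checks formula (1.9) against the ordinary Cartesian distance.

```python
# Squared distance of two points given in homogeneous barycentric coordinates:
# rho^2 = -1/2 * d^T M d with d_i = x_i/sum(x) - y_i/sum(y),
# M being the matrix of squared edge lengths of the basic simplex.
import numpy as np

A = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])         # vertices A_1..A_3
M = np.array([[np.sum((A[i] - A[k])**2) for k in range(3)] for i in range(3)])

x = np.array([2.0, 1.0, 1.0])       # homogeneous barycentric coordinates of X
y = np.array([0.0, 3.0, 1.0])       # homogeneous barycentric coordinates of Y

d = x / x.sum() - y / y.sum()
rho2_bary = -0.5 * d @ M @ d

X = (x / x.sum()) @ A               # the same points in Cartesian coordinates
Y = (y / y.sum()) @ A
print(np.isclose(rho2_bary, np.sum((X - Y)**2)))    # True
```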

Theorem 1.2.11 If the points P = (pi) and Q = (qi) are both improper (i.e., ∑ pi = ∑ qi = 0), thus corresponding to directions of lines, then these are orthogonal if and only if

\[
\sum_{i,k=1}^{n+1} m_{ik} p_i q_k = 0. \tag{1.10}
\]

More generally, the cosine of the angle ϕ between the directions p and q satisfies

\[
|\cos\varphi| = \frac{\bigl| \sum_{i,k=1}^{n+1} m_{ik} p_i q_k \bigr|}
{\sqrt{ \sum_{i,k=1}^{n+1} m_{ik} p_i p_k \ \sum_{i,k=1}^{n+1} m_{ik} q_i q_k }}. \tag{1.11}
\]

Proof. Let X be an arbitrary proper point in En with barycentric coordinates xi (so that ∑i xi ≠ 0). The points Y, Z with barycentric coordinates xi + λpi (respectively, xi + μqi) for λ ≠ 0, μ ≠ 0 are again proper points, and the vectors Y − X, Z − X have the directions p and q, respectively.

The angle ϕ between these vectors is defined by

\[
\cos\varphi = \frac{\langle Y - X, Z - X \rangle}{\sqrt{\langle Y - X, Y - X \rangle \langle Z - X, Z - X \rangle}}.
\]

Substituting from (1.8), we obtain

\[
\cos\varphi = \frac{(\lambda\mu) \sum_{i,k=1}^{n+1} m_{ik} p_i q_k}
{\sqrt{ \lambda^2 \mu^2 \sum_{i,k=1}^{n+1} m_{ik} p_i p_k \ \sum_{i,k=1}^{n+1} m_{ik} q_i q_k }},
\]

which is (1.11). □

To unify these notions and use the technique of analytic projective geometry, we redefine En into a projective space.

The linear independence of these "generalized" points P = (p1, . . . , pn+1), Q = (q1, . . . , qn+1), . . . , R = (r1, . . . , rn+1) is reflected by the fact that the matrix

\[
\begin{bmatrix}
p_1 & \ldots & p_{n+1} \\
q_1 & \ldots & q_{n+1} \\
& \ldots & \\
r_1 & \ldots & r_{n+1}
\end{bmatrix}
\]

has full row rank. This enables us to express linear dependence and to define linear subspaces. Every such linear subspace can be described either as a linear hull of points, or as the intersection of (n − 1)-dimensional subspaces, hyperplanes; each hyperplane can be described as the set of all (generalized) points x, the coordinates (x1, . . . , xn+1) of which satisfy a linear equality

\[
\alpha_1 x_1 + \alpha_2 x_2 + \ldots + \alpha_{n+1} x_{n+1} = 0, \tag{1.12}
\]

where not all coefficients α1, . . . , αn+1 are zero. The coefficients α1, . . . , αn+1 are (dual) coordinates of the hyperplane, and the relation (1.12) is the incidence relation for the point (x) and the hyperplane (α).

In accordance with (1.12),

\[
\sum_{i=1}^{n+1} x_i = 0, \tag{1.13}
\]

i.e. the condition that x = (xi) is improper, represents the equation of a hyperplane, a so-called improper hyperplane.

Two (proper) hyperplanes ∑ αi xi = 0, ∑ βi xi = 0 are different if and only if the matrix

\[
\begin{bmatrix}
\alpha_1 & \ldots & \alpha_{n+1} \\
\beta_1 & \ldots & \beta_{n+1}
\end{bmatrix}
\]

has rank 2. They are then parallel if and only if the rank of the matrix

\[
\begin{bmatrix}
\alpha_1 & \ldots & \alpha_{n+1} \\
\beta_1 & \ldots & \beta_{n+1} \\
1 & \ldots & 1
\end{bmatrix}
\]

is again 2.

An important tool in studying the geometric properties of objects is that of using the duality. This can be easily studied in barycentric coordinates according to the usual duality in projective spaces (cf. Appendix, Section 7.5).

1.3 Some properties of the angles in a simplex

There is a relationship between the interior angles and the normals (i.e. lines perpendicular to the (n − 1)-dimensional faces) of an n-simplex. Let Σ be an n-simplex in En with vertices A1, . . . , An+1. Denote by cα the vectors

\[
c_\alpha = A_\alpha - A_{n+1}, \qquad \alpha = 1, \ldots, n. \tag{1.14}
\]

Theorem 1.3.1 Let d1, . . . , dn be an (ordered) system of vectors which forms a biorthogonal pair of bases with the system c1, . . . , cn from (1.14). Let dn+1 be the vector

\[
d_{n+1} = -\sum_{\alpha=1}^{n} d_\alpha .
\]

Then the vectors

\[
v_k = -d_k, \qquad k = 1, \ldots, n+1,
\]

are vectors of outer normals to the (n − 1)-dimensional faces of Σ. The vectors vk satisfy, and are characterized by, the relations

\[
\langle A_i - A_j, v_k \rangle = -\delta_{ik} + \delta_{jk}, \qquad i, j, k = 1, \ldots, n+1. \tag{1.15}
\]


Proof. Let α ∈ {1, . . . , n}. Since di is perpendicular to all vectors cj for j ≠ i, di is orthogonal to ωi. Let us show that 〈dn+1, cα − cβ〉 = 0 for all α, β ∈ {1, . . . , n}. Indeed,

\[
\langle d_{n+1}, c_\alpha - c_\beta \rangle
= \Bigl\langle -\sum_\gamma d_\gamma, \ c_\alpha - c_\beta \Bigr\rangle
= -\sum_\gamma \delta_{\gamma\alpha} + \sum_\gamma \delta_{\gamma\beta} = 0.
\]

Thus dn+1 is orthogonal to ωn+1.

Let us denote by ωk+ (respectively, ωk−), k = 1, . . . , n + 1, that halfspace determined by ωk which contains (respectively, does not contain) the point Ak. To prove that the nonzero vector vk is the outer normal of Σ, i.e. that it is directed into the halfspace ωk−, observe that the intersection point of ωk with the line Ak + λdk corresponds to the parameter λ0 satisfying

\[
A_k + \lambda_0 d_k = \sum_{j=1,\, j \ne k}^{n+1} \gamma_j A_j, \qquad \sum_{j=1,\, j \ne k}^{n+1} \gamma_j = 1,
\]

i.e.

\[
c_k + \lambda_0 d_k = \sum_{j=1,\, j \ne k}^{n+1} \gamma_j c_j .
\]

By inner multiplication by dk, we obtain

\[
1 + \lambda_0 \langle d_k, d_k \rangle = 0.
\]

Hence, λ0 < 0, which means that this intersection point belongs to the ray corresponding to λ < 0, and vk is the vector of the outer normal of Σ. Similarly,

\[
A_{n+1} + \lambda_0 d_{n+1} = \sum_{j=1}^{n+1} \gamma'_j A_j, \qquad \sum_{j=1}^{n+1} \gamma'_j = 1,
\]

determines the intersection point of ωn+1 with the line An+1 + λdn+1. Hence

\[
-\lambda_0 \sum_{\alpha=1}^{n} d_\alpha = \sum_{\alpha=1}^{n} \gamma'_\alpha c_\alpha
\]

and by inner multiplication by ∑α dα, we obtain

\[
-\lambda_0 \Bigl\langle \sum_\alpha d_\alpha, \sum_\alpha d_\alpha \Bigr\rangle
= \sum_{\alpha,\beta=1}^{n} \gamma'_\alpha \langle c_\alpha, d_\beta \rangle
= \sum_\alpha \gamma'_\alpha
= 1.
\]


Thus λ0 < 0 and −dn+1 is also the vector of an outer normal to ωn+1.

The formulae (1.15) follow easily from the biorthogonality of the cα's and dα's and, on the other hand, are equivalent to them. □

Remark 1.3.2 We call the vectors vk normalized outer normals of Σ.

It is evident geometrically, since it is essentially a planar problem, that the angle of the outer normals vi and vk, i ≠ k, complements to π the interior angle between the faces ωi and ωk, i.e. the angle of the set of all half-hyperplanes originating in the intersection ωi ∩ ωk and intersecting the opposite edge AiAk. We denote this angle by ϕik, i, k = 1, . . . , n + 1.

We now use this relationship between the outer normals and interior angles for characterization of the conditions that generalize the condition that the sum of the interior angles in the triangle is π.

Theorem 1.3.3 Let di be the vectors of normalized outer normals of the simplex Σ from Theorem 1.3.1. Then the interior angle ϕik of the faces ωi and ωk (i ≠ k) is determined by

\[
\cos\varphi_{ik} = -\frac{\langle d_i, d_k \rangle}{\sqrt{\langle d_i, d_i \rangle}\,\sqrt{\langle d_k, d_k \rangle}}. \tag{1.16}
\]

The matrix

\[
C =
\begin{bmatrix}
1 & -\cos\varphi_{12} & \ldots & -\cos\varphi_{1,n+1} \\
-\cos\varphi_{12} & 1 & \ldots & -\cos\varphi_{2,n+1} \\
& \cdots & & \\
-\cos\varphi_{1,n+1} & -\cos\varphi_{2,n+1} & \ldots & 1
\end{bmatrix}
\tag{1.17}
\]

then has the following properties:

(i) its diagonal entries are all equal to one;
(ii) it is singular and positive semidefinite of rank n;
(iii) there exists a positive vector p, p = [p1, . . . , pn+1]T, such that Cp = 0.

Conversely, if C = [cik] is a symmetric matrix of order n + 1 with properties (i)–(iii), then there exists an n-simplex with interior angles ϕik (i ≠ k) such that

\[
\cos\varphi_{ik} = -c_{ik} \qquad (i \ne k,\ i, k = 1, \ldots, n+1).
\]

In addition, C is the Gram matrix of the unit vectors of outer normals of this simplex.

Proof. Equation (1.16) follows from the definition of ϕik = π − ψik, where ψik is the angle spanned by the outer normals −di and −dk. To prove the properties (ii) and (iii) of C, denote by D the diagonal matrix whose diagonal entries are the numbers λ1 = √〈d1, d1〉, . . . , λn+1 = √〈dn+1, dn+1〉. The matrix DCD = C1 is clearly the Gram matrix of the system of vectors d1, . . . , dn+1. Thus C1 is positive semidefinite of rank n (since d1, . . . , dn are linearly independent and the row sums are equal to zero). This means that also C is positive semidefinite of rank n and, if we multiply the columns of C by the numbers p1 = λ1, . . . , pn+1 = λn+1 and add them together, we obtain the zero vector.

To prove the converse, suppose that C = [cik] fulfills (i)–(iii). By Theorem 1.1.1, there exists in an arbitrary Euclidean n-dimensional space En a system of n + 1 vectors v1, . . . , vn+1 such that

\[
\langle v_i, v_k \rangle = c_{ik},
\]

and

\[
\sum_{k=1}^{n+1} p_k v_k = 0. \tag{1.18}
\]

Now we shall construct an n-simplex with outer normals vi in En. Choose a point U in En and define points Q1, . . . , Qn+1 by Qi = U + vi, i = 1, . . . , n + 1. For each i, let ωi be the hyperplane orthogonal to vi and containing Qi. Denote by ωi+ that halfspace with boundary ωi which contains the point U. We shall show that the hyperplanes ωi are (n − 1)-dimensional faces of an n-simplex satisfying the conditions (i)–(iii). First, the intersection ⋂i ωi+ does not contain any open halfline starting at U and not intersecting any of the hyperplanes ωi: if the halfline U + λu, λ > 0, did have this property for some nonzero vector u, then 〈u, vi〉 ≤ 0 for i = 1, . . . , n + 1, thus by (1.18)

\[
\sum_i p_i \langle u, v_i \rangle = 0,
\]

implying 〈u, vi〉 = 0 for all i, a contradiction with the rank condition (ii).

It now follows that U is an interior point of the n-simplex and that the vectors vi are outer normals since they satisfy (1.16). □

Remark 1.3.4 As we shall show in Chapter 2, Section 1, the numbers p1, . . . , pn+1 in (iii) are proportional to the (n − 1)-dimensional volumes of the faces (in this case convex hulls of the vertices) ω1, . . . , ωn+1 of the simplex Σ.
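For a triangle, the matrix (1.17) and its properties (i)–(iii) can be checked numerically. The sketch below (not from the book; the two angles are arbitrary choices) verifies that C is singular, positive semidefinite of rank 2, and annihilated by a positive vector proportional to the side lengths.

```python
# Cosine matrix of a triangle with interior angles a + b + g = pi:
# phi_12 = g (the angle at the vertex common to the sides opposite A_1, A_2), etc.
import numpy as np

a, b = 0.7, 1.1                      # interior angles at A_1 and A_2
g = np.pi - a - b                    # interior angle at A_3
C = np.array([[1.0, -np.cos(g), -np.cos(b)],
              [-np.cos(g), 1.0, -np.cos(a)],
              [-np.cos(b), -np.cos(a), 1.0]])

eig = np.linalg.eigvalsh(C)
p = np.array([np.sin(a), np.sin(b), np.sin(g)])   # ~ side lengths, by the law of sines
print(eig)                            # one eigenvalue ~ 0, the other two positive
print(np.allclose(C @ p, 0.0))        # Cp = 0
```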

1.4 Matrices assigned to a simplex

In Theorem 1.2.4, we assigned to any given n-simplex Σ an (n + 1) × (n + 1) matrix M, the entries of which are the squares of the (Euclidean) distances among the points A1, . . . , An+1:

\[
M = [m_{ij}], \qquad m_{ij} = \rho^2(A_i, A_j), \qquad i, j = 1, \ldots, n+1. \tag{1.19}
\]


We call this matrix the Menger matrix of Σ (in the literature, this matrix is usually called the Euclidean distance matrix). On the other hand, denote by Q the Gram matrix of the normalized outer normals v1, . . . , vn+1 from (1.15):

\[
Q = [\langle v_i, v_j \rangle], \qquad i, j = 1, \ldots, n+1. \tag{1.20}
\]

We call this matrix simply the Gramian of Σ.

In the following theorem, we shall formulate and prove the basic relation between the matrices M and Q.

Theorem 1.4.1 Let e be the column vector of n + 1 ones. Then there exist a column (n + 1)-vector q0 = [q01, . . . , q0,n+1]T and a number q00 such that

\[
\begin{bmatrix}
0 & e^T \\
e & M
\end{bmatrix}
\begin{bmatrix}
q_{00} & q_0^T \\
q_0 & Q
\end{bmatrix}
= -2 I_{n+2}, \tag{1.21}
\]

where In+2 is the identity matrix of order n + 2.

In other words, if we denote Q = [qik], i, k = 1, 2, . . . , n + 1, and, in addition, m00 = 0, m0i = mi0 = 1, i = 1, . . . , n + 1, then for indices r, t = 0, 1, . . . , n + 1 we have

\[
\sum_{s=0}^{n+1} m_{rs} q_{st} = -2\delta_{rt}. \tag{1.22}
\]

Proof. Partition the matrices M and Q as

\[
M = \begin{bmatrix} \hat M & m \\ m^T & 0 \end{bmatrix}, \qquad
Q = \begin{bmatrix} \hat Q & q \\ q^T & \gamma \end{bmatrix},
\]

where M̂, Q̂ are n × n. Observe that by Theorem 1.3.1,

\[
\hat Q = [\langle v_i, v_j \rangle], \qquad i, j = 1, \ldots, n,
\]

and

\[
\hat M = [\langle c_i - c_j, c_i - c_j \rangle], \qquad i, j = 1, \ldots, n.
\]

Since

\[
\langle c_i - c_j, c_i - c_j \rangle = \langle c_i, c_i \rangle + \langle c_j, c_j \rangle - 2\langle c_i, c_j \rangle,
\]

we obtain

\[
[\langle c_i, c_j \rangle] = \tfrac12 \bigl( m e^T + e m^T - \hat M \bigr), \tag{1.23}
\]

where e = [1, . . . , 1]T with n ones.



By Theorem 1.1.2, the matrices Q̂ and [〈ci, cj〉] are inverse to each other, so that (1.23) implies

\[
\hat M \hat Q = -2 I_n + m e^T \hat Q + e m^T \hat Q. \tag{1.24}
\]

Set

\[
q_0 = \begin{bmatrix} -\hat Q m \\ -2 + e^T \hat Q m \end{bmatrix}
\qquad \text{and} \qquad q_{00} = m^T \hat Q m .
\]

The left-hand side of (1.21) is then (the row sums of Q are zero)

\[
\begin{bmatrix}
0 & e^T & 1 \\
e & \hat M & m \\
1 & m^T & 0
\end{bmatrix}
\begin{bmatrix}
m^T \hat Q m & -m^T \hat Q & -2 + e^T \hat Q m \\
-\hat Q m & \hat Q & -\hat Q e \\
-2 + e^T \hat Q m & -e^T \hat Q & e^T \hat Q e
\end{bmatrix},
\]

which by (1.24) is easily seen to be −2In+2. □

Remark 1.4.2 The relations can also be written in the following form, which will sometimes be more convenient. Denoting summation from zero to n + 1 by indices r, s, t, summation from 1 to n + 1 by indices i, j, k, and setting further m0i = mi0 = 1, i = 1, . . . , n + 1, m00 = 0, we have (δ denoting the Kronecker delta)

\[
\sum_{s} q_{rs} m_{st} = -2\delta_{rt};
\]

thus also, e.g.,

\[
\sum_{j} q_{ij} m_{jk} = -q_{0i} - 2\delta_{ik}. \tag{1.25}
\]

Corollary 1.4.3 The matrix

\[
M_0 = \begin{bmatrix} 0 & e^T \\ e & M \end{bmatrix} \tag{1.26}
\]

is nonsingular.

The same holds for the second matrix Q0 from (1.21), defined by

\[
Q_0 = \begin{bmatrix} q_{00} & q_0^T \\ q_0 & Q \end{bmatrix}. \tag{1.27}
\]


Remark 1.4.4 We call the matrix M0 from (1.26) the extended Menger matrix and the matrix Q0 from (1.27) the extended Gramian of Σ. It is well known (compare Appendix, (A.14)) that the determinant of M0, which is usually called the Cayley–Menger determinant, is proportional to the square of the n-dimensional volume VΣ of the simplex Σ. More precisely,

\[
V_\Sigma^2 = \frac{(-1)^{n+1}}{2^n (n!)^2} \det M_0. \tag{1.28}
\]

It follows that

\[
\operatorname{sign} \det M_0 = (-1)^{n+1},
\]

and by the formula obtained from (1.21),

\[
Q_0 = (-2)\, M_0^{-1}, \tag{1.29}
\]

along with

\[
\det Q_0 < 0.
\]
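A small numerical sketch of formula (1.28) (illustrative only; the helper function and the example vertices are my own choices): the n-volume of a simplex computed from the Cayley–Menger determinant.

```python
# Volume of an n-simplex from the Cayley-Menger determinant det M_0, per (1.28).
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """vertices: (n+1) rows of Cartesian coordinates of the simplex vertices."""
    n = len(vertices) - 1                       # dimension of the simplex
    M = np.array([[np.sum((vertices[i] - vertices[k])**2) for k in range(n + 1)]
                  for i in range(n + 1)])
    M0 = np.zeros((n + 2, n + 2))
    M0[0, 1:] = M0[1:, 0] = 1.0
    M0[1:, 1:] = M
    V2 = (-1)**(n + 1) / (2**n * factorial(n)**2) * np.linalg.det(M0)
    return np.sqrt(V2)

# unit right tetrahedron: volume 1/6
T = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(simplex_volume(T))                        # approximately 0.1667
```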

Remark 1.4.5 It was shown in [24] that in the formulation of Theorem 1.2.4, the part (ii) can be reformulated in terms of the extended matrix M0 as:

(ii′) the matrix M0 is elliptic, i.e. it has one positive eigenvalue and the remaining eigenvalues negative.

From this, it follows that in M0 we can divide the (i + 1)th row and column by mi,n+1 for i = 1, . . . , n and the resulting matrix will have in its first n + 1 rows and columns again a Menger matrix of some n-simplex.

Corollary 1.4.6 For I ⊂ N = {1, . . . , n + 1}, denote by M0[I] the matrix M0 with all rows and columns corresponding to indices in N \ I deleted. Let s = |I|. Then the square of the (s − 1)-dimensional volume VΣ(I) of the face Σ(I) of Σ spanned by the vertices Ak, k ∈ I, is

\[
V_{\Sigma(I)}^2 = \frac{(-1)^{s}}{2^{s-1}((s-1)!)^2} \det M_0[I]. \tag{1.30}
\]

Using the extended Gramian Q0,

\[
V_{\Sigma(I)}^2 = -\frac{4}{((s-1)!)^2}\, \frac{\det Q[N \setminus I]}{\det Q_0}.
\]

Here, Q[N \ I] means the principal submatrix of Q from (1.20) with row and column indices from N \ I.

Proof. The first part is immediately obvious from (1.28). The second follows from (1.30) by Sylvester's identity (cf. Appendix, Theorem A.1.16) and (1.29). □


Observe that we started with an n-simplex and assigned to it the Menger matrix M and the Gram matrix Q of the normalized outer normals. In the following theorem, we shall show that we can completely reconstruct the previous situation given the matrix Q.

Theorem 1.4.7 The following are necessary and sufficient conditions for a real (n + 1) × (n + 1) matrix Q to be the Gramian of an n-simplex:

\[
Q \ \text{is positive semidefinite of rank } n \tag{1.31}
\]

and

\[
Q e = 0. \tag{1.32}
\]

Proof. By (1.21), eTQ = 0, so that both conditions are clearly necessary. To show that they are sufficient, observe that the given matrix Q has positive diagonal entries, say qii. If we denote by D the diagonal matrix diag{√q11, √q22, . . . , √qn+1,n+1}, then the matrix D−1QD−1 satisfies the conditions (i)–(iii) in Theorem 1.3.3, with p = De. By this theorem, there exists an n-simplex, the Gram matrix of the unit outer normals of which is the matrix D−1QD−1. However, this simplex has indeed the Gramian Q. □

Remark 1.4.8 Let us add a consequence of the above formulae; with q00 defined as above,

\[
\det\bigl( M - \tfrac12 q_{00} J \bigr) = 0,
\]

where J is the matrix of all ones.

Theorem 1.4.9 Let a proper hyperplane H have the equation ∑i αixi = 0 in barycentric coordinates. Then the orthogonal improper point (direction) U to H has barycentric coordinates

\[
u_i = \sum_{k} q_{ik} \alpha_k. \tag{1.33}
\]

If, in addition, P = (pi) is a proper point, then the perpendicular line from P to H intersects H at the point R = (ri), where

\[
r_i = \Bigl( \sum \alpha_j p_j \Bigr) \sum_k q_{ik} \alpha_k - \Bigl( \sum_{j,k} q_{jk} \alpha_j \alpha_k \Bigr) p_i;
\]

the point symmetric to P with respect to H is S = (si), where

\[
s_i = 2 \Bigl( \sum \alpha_j p_j \Bigr) \sum_k q_{ik} \alpha_k - \Bigl( \sum_{j,k} q_{jk} \alpha_j \alpha_k \Bigr) p_i. \tag{1.34}
\]


Proof. Observe first that since H is proper, the numbers ui are not all equal to zero. Now, for any Z = (zi),

\[
\sum_{i,k} m_{ik} u_i z_k = \sum_{i,k,l} m_{ik} q_{il} \alpha_l z_k
= -2 \sum_{k} \alpha_k z_k - \sum_{l} q_{0l} \alpha_l \sum_{k} z_k
\]

by (1.25). By (1.10), it follows that whenever Z is an improper point of H, Z is orthogonal to U. Conversely, whenever Z is an improper point orthogonal to U, then Z belongs to H.

It is then obvious that the point R is on the line joining P with the improper point orthogonal to H, as well as on H itself. Since for nonhomogeneous barycentric coordinates ri/∑rk = ½ pi/∑pk + ½ si/∑sk, R is the midpoint of PS, which completes the proof. □

Theorem 1.4.10 The equation

\[
\alpha_0 \sum_{i,k} m_{ik} x_i x_k - 2 \sum_{i} \alpha_i x_i \sum_{i} x_i = 0 \tag{1.35}
\]

is an equation of a real hypersphere in En in barycentric coordinates if the conditions

\[
\alpha_0 \ne 0 \tag{1.36}
\]

and

\[
\sum_{r,s} q_{rs} \alpha_r \alpha_s > 0 \tag{1.37}
\]

are fulfilled.

The center of the hypersphere has coordinates

\[
c_i = \sum_{r} q_{ir} \alpha_r, \tag{1.38}
\]

and its radius r satisfies

\[
4 r^2 = \frac{1}{\alpha_0^2} \sum_{r,s} q_{rs} \alpha_r \alpha_s. \tag{1.39}
\]

The dual equation of the hypersphere (1.35) is

\[
\sum_{i,k} q_{ik} \xi_i \xi_k - \frac{1}{r^2 \bigl(\sum_i c_i\bigr)^2} \Bigl( \sum_{i} c_i \xi_i \Bigr)^2 = 0. \tag{1.40}
\]

Every real hypersphere in En has equation (1.35) with α0, α1, . . . , αn+1 satisfying (1.36) and (1.37).


Proof. By Corollary 1.2.10, it suffices to show that for the point C = (ci) from (1.38) and the radius r from (1.39), (1.35) characterizes the condition that for the point X = (xi), ρ(X, C) = r. This is equivalent to the fact that for all xi's,

\[
\alpha_0 \sum_{i,k} m_{ik} x_i x_k - 2 \sum_{i} \alpha_i x_i \sum_{i} x_i
= -2\alpha_0 \left( -\frac12 \sum_{i,k} m_{ik}
\Bigl( \frac{x_i}{\sum_j x_j} - \frac{c_i}{\sum_j c_j} \Bigr)
\Bigl( \frac{x_k}{\sum_j x_j} - \frac{c_k}{\sum_j c_j} \Bigr) - r^2 \right) \Bigl( \sum x_i \Bigr)^2. \tag{1.41}
\]

Indeed, the right-hand side of (1.41) is

\[
\alpha_0 \sum_{i,k} m_{ik} x_i x_k
- \frac{2\alpha_0}{\sum c_j} \sum_{j} x_j \sum_{i,k} m_{ik} c_i x_k
+ \alpha_0 \Bigl( \sum x_j \Bigr)^2 \left( \frac{\sum_{i,k} m_{ik} c_i c_k}{\bigl(\sum c_j\bigr)^2} - 2 r^2 \right).
\]

Since by (1.21)

\[
\sum_{j} c_j = \alpha_0 \sum_{j} q_{0j} = -2\alpha_0
\]

and

\[
\sum_{k} m_{ik} c_k = \sum_{k,r} m_{ik} q_{kr} \alpha_r
= -2 \sum_{r} \delta_{ir} \alpha_r - m_{i0} \sum_{r} q_{0r} \alpha_r
= -2\alpha_i - \sum_{r} q_{0r} \alpha_r,
\]

we have

\[
\sum_{i,k} m_{ik} c_i c_k = -2 \sum_{i,r} q_{ir} \alpha_i \alpha_r - 2\alpha_0 \sum_{r} q_{0r} \alpha_r
= -2 \sum_{r,s} q_{rs} \alpha_r \alpha_s,
\]

as well as, similarly,

\[
\sum_{i,k} m_{ik} c_i x_k = -2 \sum_{i} \alpha_i x_i - \sum_{i} x_i \sum_{r} q_{0r} \alpha_r .
\]

This, together with (1.39), yields the first assertion. To find the dual equation, it suffices to invert the matrix

\[
\alpha_0 M - e\alpha^T - \alpha e^T
\]

of the corresponding quadratic form, α denoting the vector [α1, . . . , αn+1]T. It is, however, easily checked by multiplication that its inverse is the matrix

\[
-\frac{1}{2\alpha_0} \left( Q - \frac{1}{r^2 \bigl(\sum_i c_i\bigr)^2}\, c c^T \right),
\]

where c = [c1, . . . , cn+1]T.

This implies (1.40). The rest is obvious. □

Remark 1.4.11 In some cases, it is useful to generalize hyperspheres to the case that the condition (1.37) need not be satisfied. These hyperspheres, usually called formally real, can be considered to have purely imaginary (and nonzero) radii and play a role in generalizations of the geometry of circles. We can also define the potency of a proper point, say P (with barycentric coordinates pi), with respect to the hypersphere (1.35). In elementary geometry, it is defined as PS² − r², where S is the center and r the radius of the circle, in our case of the hypersphere. Using the formula (1.35), this yields the number

\[
-\frac12\, \frac{\sum m_{ik} p_i p_k}{\bigl(\sum p_i\bigr)^2} + \frac{\sum_i \alpha_i p_i}{\alpha_0 \sum p_i}.
\]

This number is defined also in the more general case; for a usual (not formally real) hypersphere, the potency is negative for points in the interior of the hypersphere and positive for points outside the hypersphere. For the formally real hypersphere, the potency is positive for every proper point.

Also, we can define the angle of two hyperspheres, orthogonality, etc. Two usual (not formally real) hyperspheres with the property that the distance d between their centers and their radii r1 and r2 satisfy the condition d² = r1² + r2² are called orthogonal; this means that they intersect and their tangents at every common point are orthogonal. Such a property can also be defined for the formally real hyperspheres. In fact, we shall use this more general approach later when considering polarity.

Remark 1.4.12 The equation (1.35) can be obtained by elimination of a formal new indeterminate x0 from the two equations

\[
\sum_{r,s=0}^{n+1} m_{rs} x_r x_s = 0
\]

and

\[
\sum_{r=0}^{n+1} \alpha_r x_r = 0.
\]


Corollary 1.4.13 The equation of the circumscribed hypersphere of the simplex Σ is

\[
\sum_{i,k=1}^{n+1} m_{ik} x_i x_k = 0. \tag{1.42}
\]

Its center, the circumcenter, is the point (q0i), and the square of its radius is ¼ q00, where the q0j's satisfy (1.21).

Proof. By Theorem 1.4.10 applied to α0 = 1, α1 = . . . = αn+1 = 0, (1.42) is the equation of a real hypersphere with center (q0i) and where the square of the radius is ¼ q00. (The conditions (1.36) and (1.37) are satisfied.) Since mii = 0 for i = 1, . . . , n + 1, the hypersphere contains all vertices of the simplex Σ. This proves the assertion. In addition, (1.42) is – up to a nonzero factor – the equation of the only hypersphere containing all the vertices of Σ. □
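The circumcenter and circumradius can thus be read off the extended Gramian Q0 = −2 M0⁻¹. A short sketch (illustrative, NumPy; the right triangle is an arbitrary test case, not from the book):

```python
# Circumradius and circumcenter from Q_0 = -2 * M_0^{-1}:
# 4 r^2 = q_00 and the entries q_0i are homogeneous barycentric
# coordinates of the circumcenter (Corollary 1.4.13).
import numpy as np

A = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])     # a right triangle
m = len(A)
M = np.array([[np.sum((A[i] - A[k])**2) for k in range(m)] for i in range(m)])
M0 = np.zeros((m + 1, m + 1))
M0[0, 1:] = M0[1:, 0] = 1.0
M0[1:, 1:] = M
Q0 = -2.0 * np.linalg.inv(M0)

r = 0.5 * np.sqrt(Q0[0, 0])
c_bary = Q0[0, 1:] / Q0[0, 1:].sum()        # normalized barycentric coordinates
center = c_bary @ A
print(r, center)        # 2.5 and [2.0, 1.5]: the hypotenuse midpoint, as expected
```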

Corollary 1.4.14 Let A = (ai) be a proper point in En, and let α ≡ ∑_{i=1}^{n+1} αixi = 0 be the equation of a hyperplane. Then the distance of A from α is given by

\[
\rho(A, \alpha) = \frac{\sum_{i=1}^{n+1} a_i \alpha_i}
{\sqrt{\sum_{i,k=1}^{n+1} q_{ik} \alpha_i \alpha_k}\ \sum_{i=1}^{n+1} a_i}. \tag{1.43}
\]

In particular, 1/√qii is the height (i.e. the length of the altitude of Σ) corresponding to Ai.

Proof. By (1.40) in Theorem 1.4.10, the dual equation of the hypersphere with center A and radius r is

\[
\sum_{i,k} q_{ik} \xi_i \xi_k - \frac{1}{r^2 \bigl(\sum_i a_i\bigr)^2} \Bigl( \sum_{i} a_i \xi_i \Bigr)^2 = 0.
\]

If we substitute αi for ξi, this yields the condition that r is the distance considered. Since then r = ρ(A, α) from (1.43), the proof is complete. □


2

Simplex geometry

2.1 Geometric interpretations

We start by recalling the basic philosophy of constructions in simplex geometry. In triangle geometry, we can perform constructions from the lengths of segments and magnitudes of plane angles. In solid geometry, we can proceed similarly, adding angles between planes. However, one of the simplest tasks, constructing the tetrahedron when the lengths of all its edges are given, requires using circles in space. Among the given quantities, we usually do not have areas of faces, and never the sine of a space angle. We shall be using lengths only and the usual angles. All existence questions will be transferred to the basic theorem (Theorem 1.2.4) on the existence of a simplex with given lengths of edges.

Corollary 1.4.13 allows us to completely characterize the entries of the Gramian and the extended Gramian of the n-simplex Σ.

Theorem 2.1.1 Let Q = [qij], i, j = 1, . . . , n + 1, be the Gramian of the n-simplex Σ, i.e. the matrix from (1.20), and Q0 = [qrs], r, s = 0, 1, . . . , n + 1, the extended Gramian of Σ. The entries qrs then have the following geometrical meaning:

(i) The number qii, i = 1, . . . , n + 1, is the reciprocal of the square of the length li of the altitude from Ai:

\[
q_{ii} = \frac{1}{l_i^2}.
\]

(ii) For i ≠ j, i, j = 1, . . . , n + 1,

\[
q_{ij} = -\frac{\cos\varphi_{ij}}{l_i l_j},
\]

where ϕij denotes the interior angle between the (n − 1)-dimensional faces ωi and ωj.


(iii) The number q00 is equal to 4r², r being the radius of the circumscribed hypersphere.
(iv) The number q0i is the (−2)-multiple of the nonhomogeneous ith barycentric coordinate of the circumcenter.

Proof. (i) was already discussed in Corollary 1.4.14; (ii) is a consequence of (1.16); (iii) and (iv) follow from Corollary 1.4.13 and the fact that ∑ q0i = −2 by (1.21). □

Let us illustrate these facts with the example of the triangle.

Example 2.1.2 Let ABC be a triangle with the usual parameters: lengths of sides a, b, and c; angles α, β, and γ. The extended Menger matrix is then

\[
M_0 =
\begin{bmatrix}
0 & 1 & 1 & 1 \\
1 & 0 & c^2 & b^2 \\
1 & c^2 & 0 & a^2 \\
1 & b^2 & a^2 & 0
\end{bmatrix},
\]

and the extended Gramian Q0 satisfying M0Q0 = −2I is

\[
Q_0 = \frac{1}{4S^2}
\begin{bmatrix}
a^2b^2c^2 & -a^2bc\cos\alpha & -ab^2c\cos\beta & -abc^2\cos\gamma \\
-a^2bc\cos\alpha & a^2 & -ab\cos\gamma & -ac\cos\beta \\
-ab^2c\cos\beta & -ab\cos\gamma & b^2 & -bc\cos\alpha \\
-abc^2\cos\gamma & -ac\cos\beta & -bc\cos\alpha & c^2
\end{bmatrix},
\]

where S is the area of the triangle. We can use this fact for checking the classical theorems about the geometry of the triangle. In particular, Heron's formula for S follows from (1.28) and the fact that

\[
\det M_0 = a^4 + b^4 + c^4 - 2a^2b^2 - 2a^2c^2 - 2b^2c^2,
\]

which can be decomposed as

\[
-(-a+b+c)(a-b+c)(a+b-c)(a+b+c).
\]

Of course, if the points A, B, and C are collinear, then

\[
a^4 + b^4 + c^4 - 2a^2b^2 - 2a^2c^2 - 2b^2c^2 = 0. \tag{2.1}
\]
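A quick numerical check of this example (illustrative only; the side lengths are arbitrary): by (1.28) with n = 2, det M0 = −16S², so the area obtained from the extended Menger matrix agrees with Heron's formula.

```python
# Heron's formula recovered from the Cayley-Menger determinant of a triangle.
import numpy as np

a, b, c = 5.0, 6.0, 7.0
M0 = np.array([[0, 1, 1, 1],
               [1, 0, c**2, b**2],
               [1, c**2, 0, a**2],
               [1, b**2, a**2, 0]], dtype=float)

s = (a + b + c) / 2
S_heron = np.sqrt(s * (s - a) * (s - b) * (s - c))
S_menger = np.sqrt(-np.linalg.det(M0) / 16.0)
print(np.isclose(S_heron, S_menger))        # True
```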

Now, let us use Sylvester’s identity (Appendix, Theorem A.1.16), the rela-tion (1.29), and the formulae (1.28) and (1.30) to obtain further metricproperties in an n-simplex.

Item (iii) in Theorem 2.1.1 allows us to express the radius r of thecircumscribed hypersphere in terms of the Menger matrix as follows:

Theorem 2.1.3 We have

2r2 = − detMdetM0

. (2.2)


Proof. Indeed, the matrix −½Q0 is the inverse matrix of M0. By Sylvester's identity (cf. Appendix, Theorem A.1.16), −½q00 = det M / det M0, which implies (2.2). □

Thus the formula (2.2) allows us to express this radius as a function of the lengths of the edges of the simplex. The same reasoning yields that

\[
-\frac12 q_{ii} = \frac{\det (M_0)_i}{\det M_0},
\]

where det(M0)i is the determinant of the submatrix of M0 obtained by deleting the row and column with index i. Using the symbol Vn(Σ) for the n-dimensional volume of the n-simplex Σ, or, simply, V(A1, . . . , An+1) if the n-simplex is determined by the vertices A1, . . . , An+1, we can see by (1.30) that

\[
\frac12 q_{ii} = \frac{2^{n-1}((n-1)!)^2\, V^2_{n-1}(A_1, \ldots, A_{i-1}, A_{i+1}, \ldots, A_{n+1})}
{2^{n}(n!)^2\, V^2_{n}(A_1, \ldots, A_{n+1})},
\]

or

\[
q_{ii} = \frac{V^2_{n-1}(A_1, \ldots, A_{i-1}, A_{i+1}, \ldots, A_{n+1})}
{n^2\, V^2_{n}(A_1, \ldots, A_{n+1})}.
\]

Comparing this formula with (i) of Theorem 2.1.1, we obtain the expected formula

\[
V_n(A_1, \ldots, A_{n+1}) = \frac1n\, l_i\, V_{n-1}(A_1, \ldots, A_{i-1}, A_{i+1}, \ldots, A_{n+1}). \tag{2.3}
\]

This also implies that the vector p in (iii) of Theorem 1.3.3 has the property claimed in Remark 1.3.4: the coordinate pi is proportional to the volume of the (n − 1)-dimensional face opposite Ai. Indeed, the matrix C in (1.17) has, by (i) and (ii) of Theorem 2.1.1, the form

\[
C = D Q D,
\]

where D is the diagonal matrix

\[
D = \operatorname{diag}(l_i).
\]

Thus Cp = 0 implies QDp = 0, and since the rank of C is n, Dp is a multiple of the vector e with all ones. By (2.3), pi is a multiple of Vn−1(A1, . . . , Ai−1, Ai+1, . . . , An+1), as we wanted to show; denote this volume by Si. As before, ϕik denotes the interior angle between the (n − 1)-dimensional faces ωi and ωk.
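Formula (2.3) can be checked numerically for a tetrahedron by reading the altitudes off the Gramian. The sketch below (illustrative; the vertices and helper functions are my own choices, not the book's) verifies n·Vn = li·Vn−1 for every vertex.

```python
# Check of (2.3) for a tetrahedron: n * V_n = l_i * V_{n-1}(face opposite A_i),
# with the altitude read off the extended Gramian as l_i = 1/sqrt(q_ii).
import numpy as np
from math import factorial

def extended_menger(P):
    m = len(P)
    M0 = np.zeros((m + 1, m + 1))
    M0[0, 1:] = M0[1:, 0] = 1.0
    M0[1:, 1:] = [[np.sum((P[i] - P[k])**2) for k in range(m)] for i in range(m)]
    return M0

def volume(P):
    n = len(P) - 1                       # dimension of the simplex
    M0 = extended_menger(P)
    return np.sqrt((-1)**(n + 1) / (2**n * factorial(n)**2) * np.linalg.det(M0))

T = np.array([[0.0, 0, 0], [2, 0, 0], [0, 3, 0], [1, 1, 4]])   # a tetrahedron
Q0 = -2.0 * np.linalg.inv(extended_menger(T))
V3 = volume(T)
for i in range(4):
    l_i = 1.0 / np.sqrt(Q0[i + 1, i + 1])          # altitude from vertex i
    S_i = volume(np.delete(T, i, axis=0))          # area of the opposite face
    print(np.isclose(3 * V3, l_i * S_i))           # True for every i
```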

Theorem 2.1.4 The volumes Si of the (n − 1)-dimensional faces of an n-simplex satisfy:

(i) Si = ∑_{j ≠ i} Sj cos ϕij for all i = 1, . . . , n + 1;
(ii) S²_{n+1} = ∑_{j=1}^{n} S²_j − 2 ∑_{1 ≤ j < k ≤ n} Sj Sk cos ϕjk;
(iii) 2 max_i Si < ∑_i Si.


Proof. Since qij = −√qii √qjj cos ϕij whenever i ≠ j, the ith equation in Qe = 0 from (1.32) implies (i) after dividing by √qii. A simple calculation shows that

\[
q_{n+1,n+1} = -\sum_{j=1}^{n} q_{n+1,j} = \sum_{j,k=1}^{n} q_{jk},
\]

which implies (ii).

The inequality (iii) follows from Theorem A.1.46 in the Appendix. □

Remark 2.1.5 Analogous results can be obtained for the reciprocals of the lengths of the altitudes, using the proportionality of the li's with the Si's.

Let us use now Sylvester’s identity for the principal submatrix in the first nrows and columns of the matrix M0. We obtain (in the same notation as inCorollary 1.4.3)

det M0

detM0=

sin2 ϕn,n+1

l2nl2n+1

.

Therefore, by (1.30) and (1.28), if we denote the volumes by the correspondingvertices as above

nVn(A1, . . . , An+1)Vn−2(A1, . . . , An−1) (2.4)

= (n− 1)Vn−1(A1, . . . , An)Vn−1(A1, . . . , An−1An+1) sinϕn,n+1.

Of course, the same can be done for any two distinct indices. The formula clearly generalizes the formula for the area of the triangle; we have then to set V0 = 1. In addition, the following holds:

Theorem 2.1.6 An n-simplex is uniquely determined by the lengths of all edges but one and by the interior angle opposite the missing edge. If the missing edge is A1A2, then this simplex exists if there exist both the simplexes with the sets of vertices A1, A3, . . . , An+1 and A2, . . . , An+1.

Proof. Suppose that the n-simplex Σ with the extended Menger matrix M0 and extended Gramian Q0, as well as the n-simplex Σ′ with matrices M′0 and Q′0, have both the properties, so that mik = m′ik for all i, k, i < k, (i, k) ≠ (1, 2), and ϕ12 = ϕ′12. Since by (A.3) in the Appendix for the adjoints

\[
-\frac12 Q_0 = \frac{\operatorname{adj} M_0}{\det M_0}, \qquad
-\frac12 Q'_0 = \frac{\operatorname{adj} M'_0}{\det M'_0}, \tag{2.5}
\]

it follows that

\[
q_{11} \det M_0 = q'_{11} \det M'_0, \qquad q_{22} \det M_0 = q'_{22} \det M'_0.
\]

Since by (i) and (ii) of Theorem 2.1.1

\[
\frac{q_{12}}{\sqrt{q_{11} q_{22}}} = \frac{q'_{12}}{\sqrt{q'_{11} q'_{22}}},
\]

it follows that also

\[
q_{12} \det M_0 = q'_{12} \det M'_0.
\]

Consequently, similarly as in (2.5),

\[
\det
\begin{bmatrix}
0 & 1 & 1 & \cdots & 1 \\
1 & m_{12} & m_{23} & \cdots & m_{2,n+1} \\
1 & m_{13} & 0 & \cdots & m_{3,n+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & m_{1,n+1} & m_{3,n+1} & \cdots & 0
\end{bmatrix}
= \det
\begin{bmatrix}
0 & 1 & 1 & \cdots & 1 \\
1 & m'_{12} & m_{23} & \cdots & m_{2,n+1} \\
1 & m_{13} & 0 & \cdots & m_{3,n+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & m_{1,n+1} & m_{3,n+1} & \cdots & 0
\end{bmatrix},
\]

which implies m12 = m′12.

The second part is geometrically evident. □

Theorem 2.1.7 Let ϕik, i, k = 1, . . . , n+1, denote the interior angles of then-simplex. Then

det

⎡⎢⎢⎣1 − cosϕ12 . . . − cosϕ1,n+1

− cosϕ12 1 . . . − cosϕ2,n+1

. . .

− cosϕ1,n+1 − cosϕ2,n+1 . . . 1

⎤⎥⎥⎦ = 0. (2.6)

If ν =(n+1

2

)denotes the number of interior angles of the n-simplex,

then any ν − 1 of the interior angles determine the remaining interior angleuniquely.

Proof. The first part was proved in Theorem 1.3.3. To prove the second part,suppose that two n-simplexes Σ and Σ′ have all interior angles the same,ϕik = ϕ′

ik, with the exception of ϕ12 and ϕ′12, for which ϕ12 < ϕ′

12. By (iii)of Theorem 1.3.3 ∑

i

ξ2i −∑

i,k,i�=k

ξiξk cosϕik ≥ 0,

Page 39: Graphs and Matrices in Geometry Fiedler.pdf

2.1 Geometric interpretations 29

with equality only for ξi = ρpi; similarly∑i

ξ′2i −∑

i,k,i�=k

ξ′iξ′k cosϕ′

ik ≥ 0,

with equality only for ξ′i = ρ′p′i.Since cosϕik = cosϕ′

ik for i+ k > 3 and

cosϕ12 > cosϕ′12,

we have

0 ≤∑

i

p′2i −∑

i,k,i �=k

p′ip′k cosϕik

=∑

i

p′2i −∑

i,k,i �=k

p′ip′k cosϕ′

ik − 2p′1p′2(cosϕ12 − cosϕ′

12)

= −2p′1p′2(cosϕ12 − cosϕ′

12) < 0,

a contradiction.Thus ϕ12 ≥ ϕ′

12; analogously, ϕ′12 ≥ ϕ12 , i.e. ϕ12 = ϕ′

12. �Another geometrically evident theorem is the following:

Theorem 2.1.8 An n-simplex is uniquely determined (up to congruence) byall edges of one of its (n−1)-dimensional faces and all n interior angles whichthis face spans with the other faces.

We now make a conjecture:

Conjecture 2.1.9 An n-simplex is uniquely determined by the lengths ofsome (at least one) edges and all the interior angles opposite the not-determined edges.

Remark 2.1.10 It was shown in [8] that the conjecture holds in the casethat all the given angles are right angles.

A corollary to (2.4) will be presented in Chapter 6.As an important example, we shall consider the case of the tetrahedron in

the three-dimensional Euclidean space.

Example 2.1.11 Let T be a tetrahedron with vertices A1, A2, A3, and A4.Denote for simplicity by a, b, and c the lengths of edges A2A3, A1A3, andA1A2, respectively, and by a′, b′, and c′ the remaining lengths, a′ opposite a,i.e. of the edge A1A4, etc. The extended Menger matrix M0 of T is then⎡⎢⎢⎢⎢⎣

0 1 1 1 11 0 c2 b2 a′2

1 c2 0 a2 b′2

1 b2 a2 0 c′2

1 a′2 b′2 c′2 0

⎤⎥⎥⎥⎥⎦ .

Page 40: Graphs and Matrices in Geometry Fiedler.pdf

30 Simplex geometry

Denote by V the volume of T , by Si, i = 1, 2, 3, 4, the area of the face oppo-site Ai, and by α, β, γ, α′, β′, γ′ the dihedral angles opposite a, b, c, a′, b′, c′,respectively. It is convenient to denote the reciprocal of the altitude li fromAi by Pi, so that by (2.3), Pi = V/(3

√2Si).

In this notation, the extended Gramian is⎡⎢⎢⎢⎢⎣4r2 s1 s2 s3 s4s1 p2

1 −p1p2 cos γ −p1p3 cosβ −p1p4 cosα′

s2 −p1p2 cos γ p22 −p2p3 cosα −p2p4 cosβ′

s3 −p1p3 cosβ −p2p3 cosα p23 −p3p4 cos γ′

s4 −p1p4 cosα′ −p2p4 cosβ′ −p3p4 cos γ′ p24

⎤⎥⎥⎥⎥⎦ ,

where r is the radius of the circumscribed hypersphere and the sis are thebarycentric coordinates of the circumcenter. As in (2.2)

2r2 = − detMdetM0

.

Since detM0 = 288V 2 by (1.28), we obtain that

576r2V 2 = −det

⎡⎢⎢⎣0 c2 b2 a′2

c2 0 a2 b′2

b2 a2 0 c′2

a′2 b′2 c′2 0

⎤⎥⎥⎦ .The right-hand side can be written (by appropriate multiplications of rowsand columns) as

−det

⎡⎢⎢⎣0 cc′ bb′ aa′

cc′ 0 aa′ bb′

bb′ aa′ 0 cc′

aa′ bb′ cc′ 0

⎤⎥⎥⎦ ,which can be expanded as (aa′+bb′+cc′)(−aa′+bb′+cc′)(aa′−bb′+cc′)(aa′+bb′ − cc′). We obtain the formula

576r2V 2 = (aa′ + bb′ + cc′)(−aa′ + bb′ + cc′)(aa′ − bb′ + cc′)(aa′ + bb′ − cc′).

The formula (2.4) applied to T yields e.g.

3V a = 2S1S4 sinα.

Thusaa′

sinα sinα′ =bb′

sinβ sinβ′ =cc′

sin γ sin γ′=

4S1S2S3S4

9V 2,

which, in a sense, corresponds to the sine theorem in the triangle.

Page 41: Graphs and Matrices in Geometry Fiedler.pdf

2.2 Distinguished objects of a simplex 31

Let us return for a summary to Theorem 1.4.9, in particular (1.33). Thisshows that orthogonality in barycentric coordinates is the same as polaritywith respect to the degenerate dual quadric with dual equation∑

ik

qikξiξk = 0.

This quadric is a cone in the dual space with the “vertex” (1, 1, . . . , 1)d. Itis called the isotropic cone and, as usual, the pole of every proper hyperplane∑

k αkxk = 0 is the improper point (∑

k qikαk). We use here, as well asin the sequel, the notation (γi)d for the dual point, namely the hyperplane∑

k γkxk = 0.This also corresponds to the fact that the angle ϕ between two proper

hyperplanes (αi)d and (βi)d can be measured by

cosϕ =|∑

i,k qikαiβk|√∑i,k qikαiαk

∑i,k qikβiβk

.

In addition, Theorem 1.4.10 can be interpreted as follows. Every hyper-sphere intersects the improper hyperplane in the isotropic point quadric.This is then the (nondegenerate) quadric in the (n−1)-dimensional improperhyperplane. We could summarize:

Theorem 2.1.12 The metric geometry of an n-simplex is equivalent to theprojective geometry of n + 1 linearly independent points in a real projectiven-dimensional space and a dual, only formally real, quadratic cone (isotropiccone) whose single real dual point is the vertex; this hyperplane (the improperhyperplane) does not contain any of the given points.

Remark 2.1.13 In the case n = 2, the above-mentioned quadratic cone con-sists of two complex conjugate points (so-called isotropic points) on the lineat infinity.

2.2 Distinguished objects of a simplex

In triangle geometry, there are very many geometric objects that can be con-sidered as closely related to the triangle. Among them are distinguished points,such as the centroid, the circumcenter (the center of the circumscribed circle),the orthocenter (the intersection of all three altitudes), the incenter (the cen-ter of the inscribed circle), etc. Further important notions are distinguishedlines, circles, and correspondences between points, lines, etc. In this section, weintend to generalize these objects and find analogous theorems for n-simplexes.

We already know that an n-simplex has a centroid (barycenter). It hasbarycentric coordinates ( 1

n+1 ,1

n+1 , . . . ,1

n+1). In other words, its homogeneousbarycentric coordinates are (1, 1, . . . , 1).

Page 42: Graphs and Matrices in Geometry Fiedler.pdf

32 Simplex geometry

Another notion we already know is the circumcenter. We saw that itshomogeneous barycentric coordinates are (q01, . . . , q0,n+1) by Theorem 2.1.1.

Let us already state here that an analogous notion to the orthocenter existsonly in special kinds of simplexes, so-called orthocentric simplexes. We shallinvestigate them in a special section in Chapter 4.

We shall turn now to the incenter. By the formula (1.43), the dis-tance of the point P with barycentric coordinates (p1, . . . , pn+1) from the(n− 1)-dimensional face ωk is

ρ(P, ωk) =∣∣∣∣ pk√qkk

∑i pi

∣∣∣∣ . (2.7)

It follows immediately that if pk =√qkk, for k = 1, . . . , n+1, the distances

(of course, the qkks are all positive) from this point to all (n− 1)-dimensionalfaces will be the same, and equal to 1/

∑i

√qii. This last number is thus the

radius of the inscribed hypersphere. We summarize:

Theorem 2.2.1 The point (√q11,

√q22, . . . ,

√qn+1,n+1) is the center of the

inscribed hypersphere of Σ. The radius of the hypersphere is1∑

k

√qkk

.

Remark 2.2.2 We can check that also all the points P (ε1, . . . , εn+1) withbarycentric coordinates (ε1

√q11, ε2

√q22, . . . , εn+1

√qn+1,n+1) have the prop-

erty that their distances from all the (n− 1)-dimensional faces are the same.Here, the word “points” should be emphasized since it can happen that thesum of their coordinates is zero (an example is the regular tetrahedron inwhich of the seven possibilities of choices only four lead to points).

Theorem 2.2.3 Suppose Σ is an n-simplex in En. Then there exists a uniquepoint L in En with the property that the sum of squares of the distances fromthe (n− 1)-dimensional faces of Σ is minimal. The homogeneous barycentriccoordinates of L are (q11, q22, . . . , qn+1,n+1). Thus it is always an interior pointof the simplex.

Proof. Let P = (p1, . . . , pn+1) be an arbitrary proper point in the correspond-ing En. The sum of squares of the distances of P to the (n − 1)-dimensionalfaces of Σ satisfies by (2.7) that

n+1∑i=1

ρ2(P, ωi) =1(∑pi

)2 n+1∑i=1

p2i

qii

=1∑n+1

i=1 qii+

1∑i qii(

∑pi)2

[∑i

p2i

qii

∑i

qii −(∑

i

pi

)2]

≥ 1∑n+1i=1 qii

Page 43: Graphs and Matrices in Geometry Fiedler.pdf

2.2 Distinguished objects of a simplex 33

by the formula∑a2

i

∑b2i −

(∑aibi)2 ≥ 0 for ai = pi/

√qii, bi =

√qii. Here

equality is attained if and only if ai = ρbi, i.e. if and only if pi = ρqii. It followsthat the minimum is attained if and only if P is the point (q11, . . . , qn+1,n+1).

Remark 2.2.4 The point L is called the Lemoine point of the simplex Σ. Inthe triangle, the Lemoine point is the intersection point of so-called symme-dians. A symmedian is the line containing a vertex and is symmetric to thecorresponding median with respect to the bisectrix. In the sequel, we shallgeneralize this property and define a so-called isogonal correspondence in ann-simplex. First, we call a point P in the Euclidean space of the n-simplex Σ anonboundary point, nb-point for short, of Σ if P is not contained in any (n−1)-dimensional face of Σ. This, of course, happens if and only if all barycentriccoordinates of P are different from zero.

Theorem 2.2.5 Let Σ be an n-simplex in En. Suppose that a point P withbarycentric coordinates (pi) is an nb-point of Σ. Choose any two distinct(n − 1)-dimensional faces ωi, ωj (i �= j) of Σ, denote by S

(1)ij , S

(2)ij the two

hyperplanes of symmetry (bisectrices) of the faces ωi and ωj , and form ahyperplane νij which is in the pencil generated by ωi and ωj symmetric to thehyperplane πij of the pencil containing the point P with respect to the bisectri-ces S(1)

ij and S(2)ij . Then all such hyperplanes νij (i �= j) intersect at a point Q,

which is again an nb-point of Σ. The (homogeneous) barycentric coordinatesqi of Q are related to the coordinates pi of the point P by

piqi = ρqii, i = 1, . . . , n+ 1.

Proof. The equations of the hyperplanes ωi, ωj are xi = 0, xj = 0, respectively;the equations of the hyperplanes S(1)

ij , S(2)ij can be obtained as those of the loci

of points which have the same distance from ωi and ωj . By (2.7), we obtain

xi√qjj − xj

√qii = 0,

xi√qjj + xj

√qii = 0.

Finally, the hyperplane πij has equation

xipj − xjpi = 0.

To determine the hyperplane νij , observe that it is the fourth har-monic hyperplane to πij with respect to the two hyperplanes S(1)

ij and S(2)ij

(cf. Appendix, Theorem A.5.9). Thus, if

xipj − xjpi = α(xi√qjj − xj

√qii) + β(xi

√qjj + xj

√qii),

then νij has the form

α(xi√qjj − xj

√qii) − β(xi

√qjj + xj

√qii) = 0. (2.8)

Page 44: Graphs and Matrices in Geometry Fiedler.pdf

34 Simplex geometry

We obtainpj√qjj

= α+ β,pi√qii

= α− β,

which implies that (2.8) has the form

xi√qjj

pi√qii

− xj√qii

pj√qjj

= 0,

orxipi

qii− xjpj

qjj= 0.

Therefore, every hyperplane νij contains the point Q = (qi) for qi = qii/pi,as asserted. The rest is obvious. �

This means that if we start in the previous construction with the pointQ, we obtain the point P . The correspondence is thus an involution and thecorresponding points can be called isogonally conjugate.

We have thus immediately:

Corollary 2.2.6 The Lemoine point is isogonally conjugate to the centroid.

Theorem 2.2.7 Each of the centers of the hyperspheres in Theorem 2.2.1and Remark 2.2.2 is isogonally conjugate to itself.

Remark 2.2.8 In fact, we need not assume that both the isogonally conju-gate points are proper. It is only necessary that they be nb-points.

Let us present another interpretation of the isogonal correspondence. Weshall use the following well-known theorem about the foci of conics.

Theorem 2.2.9 Let P and Q be distinct points in the plane. Then the locusof points X in the plane for which the sum of the distances, PX + QX, isconstant is an ellipse; the locus of points X for which the modulus of thedifference of the distances, |PX − QX|, is constant is a hyperbola. In bothcases, P and Q are the foci of the corresponding conic.

We shall use this theorem in the n-dimensional space, which means that(again for two points P and Q) we obtain a rotational quadric instead ofa conic. In fact, we want to find the dual equation, i.e. an equation thatcharacterizes the tangent hyperplanes of the quadric.

Theorem 2.2.10 Let P = (pi), Q = (qi) be distinct points at least one ofwhich is proper. Then every nonsingular rotational quadric with axis PQ andfoci P and Q (in the sense that every intersection of this quadric with a planecontaining PQ is a conic with foci P and Q) has the dual equation∑

qikξiξk − ρ∑

piξi∑

qiξi = 0 (2.9)

with some ρ �= 0.

Page 45: Graphs and Matrices in Geometry Fiedler.pdf

2.2 Distinguished objects of a simplex 35

If both points P , Q are proper, the resulting quadric Q is an ellipsoidor hyperboloid according to whether the number ρ

∑pk

∑qk is positive or

negative. If one of the points is improper, the quadric is a paraboloid.Conversely, every quadric with dual equation (2.9) is rotational with foci P

and Q.

Proof. First, let both foci P and Q be proper. Then the quadric Q is the locusof points X for which either the sum of the distances PX +QX (in the caseof an ellipsoid), or the modulus of the difference of the distances |PX −QX|(in the case of a hyperboloid) is constant, say c. In the first case, in the planePQX, the tangent t1(X) in X to the intersection conic bisects the exteriorangle of the vectors XP and XQ, whereas, in the second case, the tangentt2(X) bisects the angle PXQ itself. This implies easily that the product of dis-tances ρ(P, t1(X))ρ(Q, t1(X)), (respectively, ρ(P, t2(X))ρ(Q, t2(X))) in bothcases is independent of X, and is equal to 1

4(c2 + PQ2) in either case. The

same is, however, true also for the distances from the tangent hyperplanes ofQ in X.

Thus, if∑ξkxk = 0 is the equation of a tangent hyperplane to Q, then

by (2.9)

|∑

k pkξk|√∑qikξiξk|

∑pk|

|∑

k qkξk|√∑qikξiξk|

∑qk|

is constant.For the appropriate ρ, (2.9) follows. Since in the case of an ellipsoid (respec-

tively, hyperboloid) the points P , Q are in the same (respectively, opposite)halfspace determined by the tangent hyperplane, the product

∑ξkpk

∑ξkqk

has the same (respectively, opposite) sign as∑pk

∑qk. Thus for the ellipsoid

the number ρ∑pk

∑qk is positive, and for the hyperboloid it is negative.

The converse in this case follows from the fact that the mentioned propertyof a tangent of a conic with foci P and Q is characteristic.

Suppose now that one of the points P,Q, say Q, is improper. To showthat the quadric Q, this time a rotational paraboloid with the focus P andthe direction of its axis Q, has also its dual equation of the form (2.9), let∑

i βixi = 0 be the equation of the directrix D of Q, the hyperplane for whichQ is the locus of points having the same distance from D and P . A propertangent hyperplane H of Q is then characterized by the fact that the point Ssymmetric to P with respect to H is contained in D. Thus let

∑i ξixi = 0 be

the equation of H. By (1.36)

2(∑

i,j

pjξi

)∑i,k

qikξiβk −(∑

i,k

qikξiξk

)∑i

piβi = 0 (2.10)

Page 46: Graphs and Matrices in Geometry Fiedler.pdf

36 Simplex geometry

expresses the above characterization. Since P is not incident with D,∑

i piβi

�= 0. Also, Q is orthogonal to D so that qi =∑

k qikβk. This implies that(2.10) has indeed the form (up to a nonzero multiple) (2.9).

The converse in this case again follows similarly to the above case. �We can give another characterization of isogonally conjugate points.

Theorem 2.2.11 Let P be a proper nb-point of the simplex. Denote byi

R,i = 1, . . . , n+1, the point symmetric to the point P with respect to the face ωi.

If the pointsi

R are in a hyperplane, then they are in no other hyperplane andthe direction of the vector perpendicular to this hyperplane is the (improper)

isogonally conjugate point to P . If the pointsi

R are not in a hyperplane, thenthey form vertices of a simplex and the circumcenter of this simplex is theisogonally conjugate point to P .

Proof. As in (1.34), the pointsi

R have in barycentric coordinates the form

i

R = (qii p1 − 2qi1 pi, qii p2 − 2qi2 pi, . . . , qii pn+1 − 2qi,n+1 pi),

where

P = (p1, p2, . . . , pn+1), pi �= 0 (i = 1, . . . , n+ 1),∑

pi �= 0.

Suppose first that the pointsi

R are in some hyperplane γ ≡∑γixi = 0. Then,

for k = 1, 2, . . . , n+ 1

qkk

n+1∑i=1

γi pi − 2pk

n+1∑i=1

qikγi = 0,

orn+1∑i=1

qikγi =qkk

2pk

n+1∑i=1

γi pi. (2.11)

The hyperplane γ is proper, i.e. the left-hand sides of the equations (2.11) arenot all equal to zero. Thus

∑γipi �= 0, and by Theorem 2.2.10 the isogonally

conjugate point Q to P is the improper point of the orthogonal direction to thehyperplane γ. This also implies the uniqueness of the hyperplane γ containing

all the pointsi

R.

Suppose now that the pointsi

R are contained in no hyperplane. Then thepoint Q isogonally conjugate to P is proper. Indeed, otherwise the rows of the

matrix [qii pj − 2qij pi] of the coordinates of the pointsi

R would be linearly

dependent (after multiplication of the ith row by1pi

and summation), so that

the columns would be linearly dependent, too. This is a contradiction to the

Page 47: Graphs and Matrices in Geometry Fiedler.pdf

2.2 Distinguished objects of a simplex 37

fact that the pointsi

R are not contained in a hyperplane. A direct computation

using (2.7) and (2.10) shows that the distances of Q to all the pointsi

R areequal, which completes the proof. �

We shall generalize now the isogonal correspondence.

Theorem 2.2.12 Suppose that A = (ak) and B = (bk) are (not necessarilydistinct) nb-points of Σ. For every pair of (n − 1)-dimensional faces ωi, ωj

(i �= j), construct a pencil of (decomposable) quadrics

λxixj + μ(aixj − ajxi)(bixj − bjxi) = 0.

This means that one quadric of the pencil is the pair of the faces ωi, ωj,the other the pair of hyperplanes (containing the intersection ωi ∩ ωj) – onecontaining the point A, the other the point B. If P = (pi) is again an nb-pointof Σ, then the following holds.

The quadric of the mentioned pencil, which contains the point P , decom-poses into the product of the hyperplane containing the point P (and theintersection ωi ∩ωj) and another hyperplane Hij (again containing the inter-section ωi ∩ωj). All these hyperplanes Hij have a point Q in common (for alli, j, i �= j). The point Q is again an nb-point and its barycentric coordinates(qi) have the property that

pi qiai bi

is constant.We can thus write (using analogously to the Hadamard product of matrices

or vectors the elementwise multiplication ◦)

P ◦Q = A ◦B.

Proof. The quadric of the mentioned pencil which contains the point P hasthe equation

det[

(ajxi − aixj)(bjxi − bixj), xixj

(ajpi − aipj)(bjpi − bipj), pipj

]= 0.

From this

det[ajbjx

2i + aibix

2j , xixj

ajbjp2i + aibip

2j , pipj

]= 0,

or

(ajbjxipi − aibixjpj)(pjxi − pixj) = 0.

Thus Hij has the equation

aj bjpj

xi −ai bipi

xj = 0

Page 48: Graphs and Matrices in Geometry Fiedler.pdf

38 Simplex geometry

and all these hyperplanes have the point Q = (qi) in common, where qk =ak bk/pk, k = 1, . . . , n+ 1. �

Corollary 2.2.13 If the points A and B are isogonally conjugate, then soare the points P and Q.

We can say that four nb-points C = (ci), D = (di), E = (ei), F = (fi) of asimplex Σ form a quasiparallelogram with respect to Σ, if for some ρ

ck ek

dk fk= ρ, k = 1, . . . , n+ 1.

The points C,E (respectively, D,F ) are opposite vertices; the points C, D,etc. neighboring vertices of the quasiparallelogram.

To explain the notion, let us define a mapping of the interior of the n-simplexΣ into the n-dimensional Euclidean space En, which is the intersection of En+1

(with points with orthonormal coordinatesX1, . . . , Xn+1) with the hyperplanen+1∑i=1

Xi = 0 as follows: we normalize the barycentric coordinates of an arbitrary

point U = (u1, . . . , un+1) of the interior of Σ (i.e., ui > 0 for i = 1, . . . , n+1) in

such a way thatn+1∏i=1

ui = 1. Then we assign to the point U the point U ∈ En

with coordinates Ui = log ui. (Clearly,∑Ui =

∑log ui = log

∏ui = 0.)

In particular, to the centroid of Σ, the origin in En will be assigned. It isimmediate that the images of the vertices of a quasiparallelogram in En willform a parallelogram (possibly degenerate) in En, and vice-versa.

Theorem 2.2.14 Suppose that the nb-points A, B, C, and D form a quasi-parallelogram, all with respect to an n-simplex Σ. If we project these pointsfrom a vertex of Σ on the opposite face, then the resulting projections willagain form a quasiparallelogram with respect to the (n − 1)-simplex formingthat face.

Proof. This follows immediately from the fact that the projection of the point(u1, u2, . . . , un+1) from, say, the vertex An+1 has (homogeneous) barycen-tric coordinates (u1, u2, . . . , un, 0) in the original coordinate system, and thus(u1, u2, . . . , un, ) in the coordinate system of the face. �

Remark 2.2.15 This projection can be repeated so that, in fact, the previoustheorem is valid even for more general projections from one face onto theopposite face.

An important case of a quasiparallelogram occurs when the second andfourth of the vertices of the quasiparallelogram coincide with the centroid ofthe simplex. The remaining two vertices are then related by a correspondence

Page 49: Graphs and Matrices in Geometry Fiedler.pdf

2.2 Distinguished objects of a simplex 39

(it is an involution again) called isotomy. The barycentric coordinates of pointsconjugate in isotomy are reciprocal.

The following theorem becomes clear if we observe that the projection ofthe centroid is again the centroid of the opposite face.

Theorem 2.2.16 Let P = (pi) be an nb-point of an n-simplex Σ. Denote byPij , i �= j, i, j = 1, . . . , n + 1, the projection of the point P from the (n− 2)-dimensional space ωi ∩ ωj on the line AiAj, then by Qij the point symmetricto Pij with respect to the midpoint of the edge AiAj. Then, for all i, j, i �= j,the hyperplanes joining ωi ∩ ωj with the points Qij have a (unique) commonpoint Q. This point is conjugate to P in the isotomy.

We can also formulate dual notions and dual theorems. Instead of points,we study the position of hyperplanes with respect to the simplex. Recall thatthe dual barycentric coordinates of the hyperplane are formed by the (n+1)-tuple of coefficients, say (α1, . . . , αn+1)d, of the hyperplane with the equation∑

k αkxk = 0. Thus the improper hyperplane has dual coordinates (1, . . . , 1)d,the (n− 1)-dimensional face ω1 dual coordinates (1, 0, . . . , 0)d, etc.

We can again define an nb-hyperplane with respect to Σ as a hyperplanenot containing any vertex of Σ (thus with all dual coordinates different fromzero). Four nb-hyperplanes (αk)d, (βk)d, (γk)d, and (δk)d can be consideredas forming a dual quasiparallelogram if for some ρ �= 0

αk γk

βk δk= ρ, k = 1, . . . , n+ 1.

Theorem 2.2.17 The four nb-hyperplanes (αk)d, (βk)d, (γk)d, and(δk)d form a quasiparallelogram with respect to Σ if and only if there exists inthe pencil of quadrics λ

∑k αkxk

∑k γkxk +μ

∑k βkxk

∑k δkxk = 0 a quadric

which contains all vertices of Σ.

Proof. This follows immediately from the fact that the quadric∑

ik gikxixk =0 contains the vertex Aj if and only if gjj = 0. �

We call two nb-hyperplanes α and γ isotomically conjugate if the four hyper-planes α, ν, γ, and ν, where ν is the improper hyperplane

∑k xk = 0, form a

quasiparallelogram.

Theorem 2.2.18 Two nb-hyperplanes are isotomically conjugate if and onlyif the pairs of their intersection points with any edge are formed by isotomicallyconjugate points on that edge, i.e. their midpoint coincides with the midpointof the edge.

The proof is left to the reader.We should also mention that the nb-point (ai) and the nb-hyperplane∑k akxk = 0 are sometimes called harmonically conjugate. It is clear that

we can formulate theorems such as the following: four nb-points form a

Page 50: Graphs and Matrices in Geometry Fiedler.pdf

40 Simplex geometry

quasiparallelogram if and only if the corresponding harmonically conjugatehyperplanes form a quasiparallelogram. It is also immediate that the centroidof Σ and the improper hyperplane are harmonically conjugate.

Before stating the next theorem, let us define as complementary faces of Σthe faces F1 and F2 of which F1 is determined by some of the vertices of Σand F2 by all the remaining vertices. Observe that the distance between thetwo faces is equal to the distance between the mutually parallel hyperplanesH1 and H2, H1 containing F1 and H2 containing F2. In this notation, we shallprove:

Theorem 2.2.19 Let the face F1 be determined by the vertices Aj for j ∈ J ,J ⊂ N = {1, . . . , n + 1} so that the complementary face is determined by thevertices Al for l ∈ J := N \ J. Then the equation of H1 is∑

j∈J

xj = 0,

and the equation of H2 is ∑l∈J

xl = 0.

The distance ρ(F1, F2) of both faces F1 and F2 is

ρ(F1, F2) =1√∑

i,k∈J qik(2.12)

or, if ρ2(F1, F2) = z, the number z is the only root of the equation

det(M0 − zCJ ) = 0, (2.13)

where M0 is the matrix in (1.26) and CJ = [crs], r, s = 0, 1, . . . , n+1, cik = 1if and only if i ∈ J and k ∈ J, or i ∈ J and k ∈ J , whereas crs = 0 in allother cases.

The common normal to both faces F1 and F2 joins the points P1 =(δiJ∑

k∈J qik) and P2 = (δiJ∑

k∈J qik), the intersection points of this normalwith F1 and F2; here, δiJ etc. is equal to one if i ∈ J , otherwise it is zero.

Proof. First of all, it is obvious that H1 and H2 have the properties thateach contains the corresponding face and they are parallel to each other. Theformula (2.12) then follows from the formula (1.43) applied to the distanceof any vertex from the not-incident hyperplane. To prove the formula (2.13),observe that the determinant on the left-hand side can be written as

det(M0 + 2zP0),

where P0 is the matrix [prs] with prs = δrJδsJ in the notation above. SinceP0 has rank one and M0 is nonsingular with the inverse −1

2Q0 by (1.29), theleft-hand side of (2.13) is equal to

Page 51: Graphs and Matrices in Geometry Fiedler.pdf

2.2 Distinguished objects of a simplex 41

detM0det(I − zQ0P0).

Since the second determinant is 1−z∑

i∈J,k∈J qik, the formula follows fromthe preceding.

The last assertion follows from (1.33) in Theorem 1.4.9. �

Remark 2.2.20 We shall call the minimum of the numbers ρ(F1, F2) overall proper faces F1 (the complementary face F2 is then determined) the thick-ness τ(Σ) of the simplex Σ. It thus corresponds to the minimal completeoff-diagonal block of the Gramian, so that

τ(Σ) =1√

−minJ

∑i∈J,j /∈J qij

(2.14)

in the notation above. Observe that if the set of indices N can be decomposedin such a way that N = N1 ∪N2, N1 ∩N2 = ∅, where the indices i, j of allacute angles ϕij belong to distinct Nis and the indices of all obtuse anglesbelong to the same Ni, then the thickness in (2.14) is realized for J = N1. Weshall call simplexes with this property flat simplexes. Observe also that thesum

∑i∈J,k∈J qik is equal to

∑i∈J,k∈J qik since the row sums of the matrix

Q are equal to zero.

For the second part of this chapter, we shall investigate metric propertiesof quadrics (cf. Appendix, Section 5) in En, in particular of the importantcircumscribed Steiner ellipsoid S of the n-simplex Σ. This ellipsoid S is thequadric whose equation in barycentric coordinates is∑

i<k

xixk = 0. (2.15)

Let us start with a few general facts about quadrics in barycentriccoordinates.

Let B be a nonsingular quadric with the equationn+1∑

i,k=1

bikxixk = 0, B = [bik] = BT . (2.16)

Such a quadric is called central if there is a proper point (the center), thepolar of which is the improper hyperplane (1.13).

Lemma 2.2.21 A nonsingular quadric (2.16) is central if and only if for thevector e of all ones

eTB−1e �= 0. (2.17)

Proof. If the quadric in (2.16) is central and C = (ci) is the center, then∑i

ci �= 0 (2.18)

Page 52: Graphs and Matrices in Geometry Fiedler.pdf

42 Simplex geometry

and ∑i,k

bikcixk = 0

is the equation of the improper hyperplane, i.e.∑i

bikci = K, a nonzero constant, for all k. (2.19)

Thus in matrix form for c = [c1, . . . , cn+1]T

Bc = Ke,

which implies ∑i

ci = eT c = KeTB−1e.

By (2.18), we obtain (2.17).Conversely, if (2.17) holds, then the unique solution ci to∑

i

bikci = 1,

which is c = B−1e, satisfies (2.18) and defines a center of (2.16). �

To find the axes of a nonsingular central quadric, let us use the followingcharacteristic property of their improper points.

An improper point y is the improper point of an axis of a nonsingular centralquadric if and only if the polar π of y is orthogonal to y, i.e. if and only if theorthogonal point to π coincides with y.

Theorem 2.2.22 An improper point y = (y1, . . . , yn+1) is the improper pointof an axis for a nonsingular central quadric (2.16) if and only if the columnvector y = [y1, . . . , yn+1]T is an eigenvector of the matrix QB correspondingto a nonzero eigenvalue λ of QB.

The square l2 of the length of the corresponding halfaxis is then

l2 = − 1λeTB−1e

.

Remark 2.2.23 The square l2 can even be negative (if (2.16) is not anellipsoid).

Proof. By the mentioned characterization, y satisfying∑

i yi = 0 is theimproper point of an axis of (2.16) if and only if the orthogonal point (by(1.33)) z = (zi), where

zi =∑j,k

qijbjkyk

Page 53: Graphs and Matrices in Geometry Fiedler.pdf

2.2 Distinguished objects of a simplex 43

to the polar hyperplane of y ∑j,k

bjkykxj = 0

coincides with y, i.e. if and only if

zi = λyi, i = 1, . . . , n+ 1,

for some

λ �= 0.

This yields for the column vector y �= 0 that

QBy = λy, λ �= 0. (2.20)

The converse is also true.Let us show that yTBy �= 0. Indeed, assume that yTBy = 0. Denote by w

the (nonzero) vector w = By, thus satisfying yTw = 0. By (2.20), Qw = λy

so that wTQw = 0. By Theorem A.1.40, w is a nonzero multiple of e; hence,y = ρB−1e and yw = 0 implies eTB−1e = 0, a contradiction.

The length l of the corresponding halfaxis is the distance from the centerc to the intersection point, say u, of the line cy with the quadric (strictlyspeaking, if it is real).

Thus, in a clear notation, let

u = c+ ξy for some ξ, (2.21)

where ∑i,k

bikuiuk = 0.

This yields ∑i,k

bikcick + 2ξ∑i,k

bikciyk + ξ2∑

bikyiyk = 0.

Since the middle term is zero by (2.19), we obtain

ξ2 = − cTBc

yTBy.

We can assume that in the relation

Bc = Ke, K constant,

the number K and the vector c are such that eT c = 1, i.e. also

eTB−1e =1K. (2.22)

Page 54: Graphs and Matrices in Geometry Fiedler.pdf

44 Simplex geometry

Then

eT u = 1

in (2.21) and the distance l between u and c satisfies by (1.9) the relation

l2 = −12

∑i,k

mik(ui − ci)(uk − ck)

= −12ξ2∑i,k

mikyiyk

= −12ξ2yTMy

= − ξ2

2λyTMQBy

by (2.20).Since

MQ = −2In+1 − eqT0

by (1.21) and yT e = 0, we obtain

yTMQ = −2yT

and

l2 =ξ2

λyTBy

= − 1λcTBc

= −KλcT e

= − 1λeTB−1e

by (2.20) and (2.22). �

Theorem 2.2.24 The center of the Steiner circumscribed ellipsoid S from(2.15) is the centroid (1, . . . , 1) (or ( 1

n+1 , . . . ,1

n+1 ) in nonhomogeneousbarycentric coordinates); S contains all the vertices Ai of Σ, and the tangentplane to S at Ai has equation ∑

j �=i

xj = 0

and is parallel to ωi. The squares l2i of its halfaxes are given by

l2i =n

n+ 11λi

;

Page 55: Graphs and Matrices in Geometry Fiedler.pdf

2.2 Distinguished objects of a simplex 45

here the λis are the nonzero eigenvalues of the matrix Q. The directions ofthe corresponding axes coincide with the eigenvectors y = (yi) of Q. Also, thecorresponding equations

n+1∑i=1

yixi = 0

are the equations of the hyperplanes through the center of S orthogonal to thecorresponding axes (hyperplanes of symmetry).

Proof. This follows immediately from Theorem 2.2.22 applied to B = J − I,J = eeT since

B−1 =1nJ − I,

QB = −Q,

and

eTB−1e =n+ 1n

.

It is easily seen that its center is the centroid (1, . . . , 1) (or ( 1n+1 , . . . ,

1n+1 ) in

nonhomogeneous barycentric coordinates); S contains all the vertices Ai of Σ,the tangent plane to S at Ai has equation∑

j �=i

xj = 0,

and is parallel to ωi. �

Remark 2.2.25 Since the λis are positive, S is indeed an ellipsoid.

Page 56: Graphs and Matrices in Geometry Fiedler.pdf

3

Qualitative properties of the angles in a simplex

3.1 Signed graph of a simplex

In this section, we intend to study interior (dihedral) angles in a simplex.In terms of the quality of an angle of a simplex, we shall understand that itbelongs to just one of the following three classes: acute, obtuse, or right.

The relation (2.6) – together with the conditions of positive semidefinite-ness – is a generalization of the theorem about the sum of angles in a triangle.For a triangle, it follows that at least two of the angles are acute. We intendto generalize this property for the n-simplex.

Theorem 3.1.1 Suppose that ϕik, i, k = 1, . . . , n+ 1, are the interior anglesof an n-simplex Σ. Then there exists no nontrivial decomposition of the indexset N = {1, . . . , n + 1} into two nonvoid subsets N1, N2 (i.e. N1 ∪N2 = N ,N1 ∩N2 = ∅, N1 �= ∅, N2 �= ∅) such that ϕik ≥ π

2 for i ∈ N1, k ∈ N2.

Proof. Suppose there exists such a decomposition. By (iii) of Theorem 1.3.3,the quadratic form ∑

i∈N

x2i −

∑i,k∈N,i�=k

xixk cosϕik ≥ 0 (3.1)

(it is positive semidefinite) and equality is attained if and only if xi = ρpi,pi > 0. By Theorem A.1.40, then also

pi −∑

k∈N,k �=i

pk cosϕik = 0, i ∈ N. (3.2)

Multiply the ith relation (3.2) for i ∈ N1 by pi and add for i ∈ N1. We obtain∑i∈N1

p2i −

∑i,k∈N1,i �=k

pipk cosϕik −∑

i∈N1,k∈N2

pipk cosϕik = 0. (3.3)

Since cosϕik ≤ 0 for i ∈ N1, k ∈ N2, the last summand in (3.3) isnonnegative. The sum of the remaining two summands is also nonnegative

Page 57: Graphs and Matrices in Geometry Fiedler.pdf

3.1 Signed graph of a simplex 47∑i∈N1

p2i −

∑i,k∈N1,i �=k

pipk cosϕik ≥ 0; (3.4)

this follows by substitution of

xi = pi for i ∈ N1, xj = 0 for j ∈ N2 (3.5)

into (3.1).Because of (3.3), there must be equality in (3.4). However, that contradicts

the fact that equality in (3.1) is attained only for xi = ρpi and not for thevector in (3.5) (observe that N2 �= ∅). �

To better visualize this result, we introduce the notion of the signed graphGΣ of the n-simplex Σ (cf. Appendix).

Definition 3.1.2 Let Σ be an n-simplex with (n − 1)-dimensional facesω1, . . . , ωn+1. Denote by G+ the undirected graph with n+ 1 nodes 1, 2, . . . ,n+ 1, and those edges (i, k), i �= k, i, k = 1, . . . , n+ 1, for which the interiorangle ϕik of the faces ωi, ωk is acute

ϕik <π

2.

Analogously, denote by G− the graph with the same set of nodes but withthose edges (i, k), i �= k, i, k = 1, . . . , n+ 1, for which the angle ϕik is obtuse

ϕik >π

2.

We can then consider G+ and G− as the positive and negative parts of thesigned graph GΣ of the simplex Σ. Its nodes are the numbers 1, 2, . . . , n+1, itspositive edges are those from G+, and its negative edges are those from G−.

Now we are able to formulate and prove the following theorem ([4], [7]).

Theorem 3.1.3 If GΣ is the signed graph of an n-simplex Σ, then its positivepart is a connected graph.

Conversely, if G is an undirected signed graph (i.e. each of its edges isassigned a sign + or −) with n+1 nodes 1, 2, . . . , n+1 such that its positive partis a connected graph, then there exists an n-simplex Σ, whose graph GΣ is G.

Proof. Suppose first that the positive part G+ of the graph GΣ of some n-simplex Σ is not connected. Denote by N1 the set consisting of the index 1 andof all such indices k that there exists a sequence of indices j0 = 1, j1, . . . , jt = k

such that all edges (js−1, js) for s = 1, . . . , t belong to G+. Further, let N2

be the set of all the remaining indices. Since G+ is not connected, N2 �= ∅,and the following holds: whenever i ∈ N1, k ∈ N2, then ϕik ≥ 1

2π (otherwisek ∈ N1). That contradicts Theorem 3.1.1.

Page 58: Graphs and Matrices in Geometry Fiedler.pdf

48 Qualitative properties of the angles in a simplex

To prove the converse part, observe that the quadratic form

f =∑

(i,j)∈G+

(ξi − ξj)2

is positive semidefinite and attains the value zero only for ξ1 = ξ2 = · · · =ξn+1; this follows easily from the connectedness of the graph G+. Therefore,the principal minors of order 1, . . . , n of the corresponding (n+ 1) × (n+ 1)matrix are strictly positive (and the determinant is zero). Thus there exists asufficiently small positive number ε so that also the form

f1 ≡ f − ε∑

(i,j)∈G−(ξi − ξj)2 ≡

∑i,j=1

vijξiξj

is positive semidefinite and equal to zero only for ξ1 = ξ2 = · · · = ξn+1. ByTheorem 1.3.3, there exists an n-simplex Σ, the interior angles ϕij of whichsatisfy the conditions

cosϕij = − vij√vii

√vjj

.

Since

ϕij <π2

for (i, j) ∈ G+,

ϕij >π2 for (i, j) ∈ G−,

and ϕij = π2 for the remaining (i, j), i �= j,

Σ fulfills the conditions of the assertion. �It is well known that a connected graph with n + 1 nodes has at least n

edges (cf. Appendix, Theorem A.2.4), and there exist connected graphs withn+ 1 nodes and n edges (so-called trees). Thus we obtain:

Theorem 3.1.4 Every n-simplex has at least n acute interior angles. Thereexist n-simplexes which have exactly n acute interior angles.

Remark 3.1.5 The remaining(n2

)angles can be either obtuse or right.

This leads us to the following definition.

Definition 3.1.6 An n-simplex which has n acute interior angles and all theremaining

(n2

)interior angles right will be called a right simplex.

Corollary 3.1.7 The graph of a right simplex is a tree.

We shall return to this topic later in Chapter 4, Section 1.There is another, perhaps more convenient, approach to visualizing the

angle properties of an n-simplex. It is based on the fact that the interior angleϕij has the opposite edge AiAj in the usual notation.

Definition 3.1.8 Color the edge AiAj red if the opposite interior angle ϕij

is acute, and color it blue if ϕij is obtuse.

Page 59: Graphs and Matrices in Geometry Fiedler.pdf

3.1 Signed graph of a simplex 49

The result of Theorem 3.1.3 can thus be formulated as follows:

Theorem 3.1.9 The coloring of every simplex has the property that the rededges connect all vertices of the simplex. If we color some edges of a simplexin red, some in blue, and leave some uncolored, but in such a way that thered edges connect the set of all vertices, then there exists a “deformation” ofthe simplex for which the opposite interior angles to red edges are acute, theopposite interior angles to blue edges are obtuse, and the opposite interiorangles to uncolored edges are right.

Example 3.1.10 Let the points A1, . . . , An+1 in a Euclidean n-dimensionalspace En be given using the usual Cartesian coordinates

A1 = (0, 0, . . . , 0),

A2 = (a1, 0, . . . , 0),

A3 = (a1, a2, 0, . . . , 0),

A4 = (a1, a2, a3, 0, . . . , 0),

. . .

An+1 = (a1, a2, a3, . . . , an),

Fig. 3.1

Page 60: Graphs and Matrices in Geometry Fiedler.pdf

50 Qualitative properties of the angles in a simplex

where a1, a2, . . . , an are some positive numbers. These points are linearlyindependent, thus forming vertices of an n-simplex Σ. The (n−1)-dimensionalfaces ω1, ω2, . . . , ωn+1 are easily seen to have equations ω1 ≡ x1 − a1 = 0,ω2 ≡ a2x1 − a1x2 = 0, ω3 ≡ a3x2 − a2x3 = 0, . . . , ωn+1 ≡ xn = 0. Usingthe formula (1.2), we see easily that all pairs ωi, ωj , i �= j, are perpendic-ular except the pairs (ω1, ω2), (ω2, ω3), . . . , (ωn, ωn+1). Therefore, only theedges A1A2, A2A3, . . . , AnAn+1 are colored (in red), and the graph of Σ is apath. Observe that these edges are mutually perpendicular. In addition, alltwo-dimensional faces AiAjAk are right triangles with the right angle at Aj ifi < j < k. Thus also the midpoint of A1An+1 is the center of the circumscribedhypersphere of Σ because of the Thalet theorem.

Example 3.1.11 In Fig. 3.1, all possible colored graphs of a tetrahedronare depicted. The red edges are drawn unbroken, the blue edges dashed. Themissing edges correspond to right angles.

3.2 Signed graphs of the faces of a simplex

In this section, we shall investigate some further properties of the interiorangles in an n-simplex and its smaller-dimensional faces. In particular, weshall, as in Chapter 1, be interested in whether these angles are acute, right,or obtuse.

For this purpose, we shall use Theorem 1.3.3 and the Gramian Q of thesimplex Σ. By Theorem 2.1.1, the following is immediate.

Corollary 3.2.1 The signed graph of the simplex Σ is (if the vertices and thenodes are numbered in the same way) the negative of the signed graph of theGramian Q of Σ.

Remark 3.2.2 The negative of a signed graph is the graph with the sameedges in which the signs are changed to the opposite.

Our task will now be to study the faces of the n simplex Σ, using the resultsof Chapter 1, in particular the formula (1.21). First of all, we would like tofind the Menger matrix and the Gramian of such a face. Thus let Σ be ann-simplex, and Σ′ be its face determined by the first m+1 vertices. Partitionthe matrices in (1.21). We then have that⎡⎣ 0 eT

1 eT2

e1 M11 M12

e2 M21 M22

⎤⎦⎡⎣ q00 qT01 qT

02

q01 Q11 Q12

q02 Q21 Q22

⎤⎦ = −2In+2, (3.6)

where M11, Q11 are (m+ 1)× (m+ 1) matrices corresponding to the verticesin Σ′, etc. It is clear that M11 is the Menger matrix of Σ′.

Page 61: Graphs and Matrices in Geometry Fiedler.pdf

3.2 Signed graphs of the faces of a simplex 51

To obtain the Gramian of Σ′, we have, by the formula (1.21), to express interms of the matrices assigned to Σ the extended Gramian

Q0 =[q00 qT

01

q01 Q11

].

By the formula (1.29), this matrix is the (−12)-multiple of the inverse of the

extended Menger matrix [0 eT

1

e1 M11

].

Using the formula (A.5), we obtain[0 eT

1

e1 M11

]−1

=(−1

2

)[ q00 qT01

q01 Q11

]−(−1

2

)[ qT02

Q12

]Q−1

22 [q02 Q21],

i.e.[0 eT

1

e1 M11

]−1

=(−1

2

)[ q00 − qT02Q

−122 q02 qT

01 − qT02Q

−122 Q21

q01 −Q12Q−122 q02 Q11 −Q12Q

−122 Q21

]. (3.7)

This means that the Gramian corresponding to the m-simplex Σ′ isthe Schur complement (Appendix (A.4)) Q11 − Q12Q

−122 Q21 of Q22 in the

Gramian of Σ.Let us summarize, having in mind that the choice of the numbering of the

vertices is irrelevant:

Theorem 3.2.3 Let Σ be an n-simplex with vertices Ai, i ∈ N = {1, . . . ,n+ 1}. Denote by Σ′ the face of Σ determined by the vertices Aj for j ∈M = {1, . . . ,m+ 1} for some m < n. If the extended Menger matrix of Σ ispartitioned as in (3.6), then the extended Gramian of Σ′

Q0 =[q00 qT

01

q01 Q11

]is equal to [

q00 − qT02Q

−122 q02 qT

01 − qT02Q

−122 Q21

q01 −Q12Q−122 q02 Q11 −Q12Q

−122 Q21

].

In particular, the Gramian of Σ′ is the Schur complement of the Gramianof Σ with respect to the indices in N\M .

In Chapter 1, Section 3, we were interested in qualitative properties of theinterior angles in a simplex; under quality of the angle, we understood theproperty of being acute, obtuse, or right. Using the result of Theorem 3.2.3,we can ask if something analogous can be proved for the faces.

In view of Corollary 3.2.1, it depends on the signed graph of the Schurcomplement of the Gramian of the simplex.

Page 62: Graphs and Matrices in Geometry Fiedler.pdf

52 Qualitative properties of the angles in a simplex

In the graph-theoretical approach to the problem of how the zero–nonzerostructure of the matrix of a system of linear equations is changed by elimina-tion of one (or, more generally, several) unknown, the notion of the eliminationgraph was discovered. Also, elimination of a group of unknowns can be per-formed (under mild existence conditions) by performing a sequence of simpleeliminations where just one unknown is eliminated at a time. The theory isbased on the fact (Appendix (A.7)) that the Schur complement of the Schurcomplement is the Schur complement again. In our case, the situation is evenmore complicated by the fact that we have to consider the signs of the entries.This means that only rarely can we expect definite results in this respect.

However, there is one class of simplexes for which this can be donecompletely. This class will be studied in the next section.

3.3 Hyperacute simplexes

In this section, we investigate n-simplexes, no interior angle of which (between(n− 1)-dimensional faces) is obtuse. We call these simplexes hyperacute. Forcompleteness, we consider all 1-simplexes as hyperacute as well. In addition,we say that a simplex is strictly hyperacute if all its interior angles are acute.

As we have seen in Section 3.1, the signed graph of such a simplex haspositive edges only and its colored graph does not contain a blue edge. ItsGramian Q from (1.21) is thus a singular M -matrix (see Appendix (A.3.9)).

The first, and most important result, is the following:

Theorem 3.3.1 Every face of a hyperacute simplex is also a hyperacute sim-plex. In addition, the graph of the face is uniquely determined by the graph ofthe original simplex and is obtained as the elimination graph after removingthe graph nodes corresponding to simplex vertices not contained in the face.

More explicitly, using the coloring of the simplex (this time, of course,without the blue color), the following holds:

Suppose that the edges of the simplex are colored as in Theorem 3.1.9. Thenan edge (i, k), i �= k, in the face obtained by removing the vertex set S will becolored in red if and only if there exists a path from i to k which uses red edgesonly, all of which vertices (different from i and k) belong to S. Otherwise, theedge (i, k) will remain uncolored.

Proof. It is advantageous to use matrices. Let Σ be an n-simplex, and Σ′ beits face determined by the first m + 1 vertices. Partition the matrices as in(3.6). We saw in (3.7) that the Gramian corresponding to the m-simplex Σ′

is the Schur complement Q11 −Q12Q−122 Q21 of Q11 in the Gramian of Σ. This

Schur complement is by Theorem A.3.8 again a (singular) M -matrix with theannihilating vector of all ones so that Σ′ is a hyperacute simplex.

Page 63: Graphs and Matrices in Geometry Fiedler.pdf

3.4 Position of the circumcenter of a simplex 53

As shown in Theorem A.3.5, the graph of the Schur complement is theelimination graph obtained by elimination of the rows and columns asdescribed, so that the last part follows. �

Since every elimination graph of a complete graph is also complete, weobtain the following result.

Theorem 3.3.2 Every face of a strictly hyperacute simplex is again a strictlyhyperacute simplex.

3.4 Position of the circumcenter of a simplex

In this section, we investigate how the quality of interior angles of the simplexrelates to the position of the circumcenter. The case n = 2 shows such arelationship exists: in a strictly acute triangle, the circumcenter is an interiorpoint of the triangle; the circumcenter of a right triangle is on the boundary;and in the obtuse triangle, it is an exterior point (in the obtuse angle). Wegeneralize these properties only in a qualitative way, i.e. with respect to thehalfspaces determined by the (n − 1)-dimensional faces. It turns out thatthis relationship is well characterized by the use of the extended graph of thesimplex (cf. [10]). Here is the definition:

Definition 3.4.1 Denote by A1, . . . , An+1 the vertices, and by ω1, . . . , ωn+1

the (n − 1)-dimensional faces (ωi opposite Ai), of an n-simplex Σ. Let C bethe circumcenter. The extended graph G∗

Σ is obtained by extending the usualgraph GΣ by one more node 0 (zero) corresponding to the circumcenter Cas follows: the node 0 is connected with the node k, 1 ≤ k ≤ n + 1 (corre-sponding to ωk) by an edge if and only if ωk does not contain C; this edge ispositive (respectively, negative) if C is in the same (respectively, the opposite)halfspace determined by ωk as the vertex Ak.

In Fig. 3.2 a, b, c, the extended graphs of an acute, right, and obtuse triangleare depicted. The positive edges are drawn unbroken, the obtuse dashed. Theright or obtuse angle is always at vertex A2.

0

1

2

3

0

1

2

3

0

1

2

3

Fig. 3.2

Page 64: Graphs and Matrices in Geometry Fiedler.pdf

54 Qualitative properties of the angles in a simplex

By the definition of the extended graph, edges ending in the additional node0 correspond to the signs of the (inhomogeneous) barycentric coordinates ofthe circumcenter C. If the kth coordinate is zero, there is no edge (0, k) in G∗

Σ;if it is positive (respectively, negative), the edge (0, k) is positive (respectively,negative). By Theorem 2.1.1, this means:

Theorem 3.4.2 The extended graph G∗Σ of the n-simplex Σ is the negative of

the signed graph of the (n+2)× (n+2) matrix [qrs], the extended Gramian ofthis simplex, in which the distinguished vertex 0 corresponds to the first row(and column).

Remark 3.4.3 As in Remark 3.2.2, the negative of a signed graph is thegraph with the same edges in which the signs are changed to the opposite.

The proof of Theorem 3.4.2 follows from the formulae in Theorem 2.1.1 andthe definitions of the usual and extended graphs.

In Theorem 3.1.3 we characterized the (usual) graphs of n-simplexes, i.e.we found necessary and sufficient conditions for a signed graph to be a graphof some n-simplex. Although we shall not succeed in characterizing extendedgraphs of simplexes in a similar manner, we find some interesting properties ofthese. First of all, we show that the exclusiveness of the node 0 is superfluous.

Theorem 3.4.4 Suppose a signed graph Γ on n + 2 nodes is an extendedgraph of an n-simplex Σ1, the node u1 of Γ being the distinguished node cor-responding to the circumcenter C1 of Σ1. Let u2 be another node of Γ. Thenthere exists an n-simplex Σ2, the extended graph of which is also Γ, and suchthat u2 is the distinguished node corresponding to the circumcenter C2 of Σ2.

Proof. Let [qrs] be the Gramian of the simplex Σ1. This means that, for anappropriate numbering of the vertices of the graph Γ, in which u1 correspondsto index 0, we have for r �= s, r, s = 0, . . . , n+ 1,

(r, s) is a{

positivenegative

}edge of Γ if and only if

{qrs < 0qrs > 0

}.

We can assume that the vertex u2 corresponds to the index n + 1. Thematrix −2[qrs]−1 equals the matrix [mrs], which satisfies the conditions ofTheorem 1.2.4. We show that the matrix [m′

rs] with entries

m′rr = 0 (r = 0, . . . , n+ 1),

m′i0 = m′

0i = 1 (i = 1, . . . , n+ 1),

m′α,n+1 = m′

n+1,α =1

mα,n+1(α = 1, . . . , n),

m′αβ =

mαβ

mα,n+1mβ,n+1(α, β = 1, . . . , n)

(3.8)

Page 65: Graphs and Matrices in Geometry Fiedler.pdf

3.4 Position of the circumcenter of a simplex 55

also fulfills the condition of Theorem 1.2.4n+1∑i,k=1

m′ikx

′ix

′k < 0, when

n+1∑1

x′i = 0, (x′i) �= 0.

Thus suppose (x′i) �= 0,n+1∑

1x′i = 0. Define the numbers xα =

x′αmα,n+1

,

α = 1, . . . , n, xn+1 = −n∑

α=1xα. We have then

∑m′

ikx′ix

′k =

n∑α,β=1

mαβx′α

mα,n+1

x′βmβ,n+1

+ 2x′n+1

n∑α=1

x′αmα,n+1

=n∑

α,β=1

mαβxαxβ − 2n∑

α=1

x′αn∑

α=1

=n∑

α,β=1

mαβxαxβ + 2xn+1

n∑α=1

mα,n+1xα

=n+1∑

i,k=1

mikxixk < 0

by Theorem 1.2.4. This means that there exists an n-simplex Σ2, the matrix ofwhich is [m′

rs]. However, the matrix [m′rs] arises from [mrs] by multiplication

from the right and from the left by the diagonal matrix D = diag (dr),where

dα =1

mα,n+1, α = 1, . . . , n,

d0 = dn+1 = 1,

and by exchanging the first and the last row and column. It follows that theinverse matrix −1

2[q′rs] to the matrix [m′

rs] arises from the matrix −12[qrs]

by multiplication by the matrix D−1 from both sides and by exchanging thefirst and last row and column. Since the matrix D−1 has positive diagonalentries, the signs of the entries do not change so that Γ will again be theextended graph of the n-simplex Σ2. The exchange, however, will lead to thefact that the node u2 will be distinguished, corresponding to the circumcenterof Σ2. �

Remark 3.4.5 The transformation (3.8) corresponds to the spherical inver-sion, which transforms the vertices of the simplex Σ1 into vertices of thesimplex Σ2. The center of the inversion is the vertex An+1. This alsoexplains the geometric meaning of the transformation already mentioned inRemark 1.4.5.

Page 66: Graphs and Matrices in Geometry Fiedler.pdf

56 Qualitative properties of the angles in a simplex

Theorem 3.4.6 If we remove from the extended graph of an n-simplex anarbitrary node, then the positive part of the resulting graph is connected.

Proof. This follows from Theorems 3.1.3 and 3.4.4. �

Theorem 3.4.7 The node connectivity number of the positive part of theextended graph with at least four nodes is always at least two.1

Proof. This is an immediate consequence of the previous theorem. �Let us return to Theorem 3.4.2. We can show2 that the following theorem

holds.

Theorem 3.4.8 The set of extended graphs of n-simplexes coincides with theset of all negatively taken signed graphs of real nonsingular symmetric matricesof degree n+2, all principal minors of order n+1 which are equal to zero, havesignature n, and the annihilating vector of one arbitrary principal submatrixof degree n + 1 is positive. Exactly these matrices are the Gramians [qrs] ofn-simplexes.

The following theorem, the proof of which we omit (see [10], Theorem 3,12),expresses the nonhomogeneous barycentric coordinates of the circumcenter bymeans of the numbers qij , i.e. essentially by means of the interior angles ofthe n-simplex.

Theorem 3.4.9 Suppose [qij ], i, j = 1, . . . , n+1, is the matrix Q correspond-ing to the n-simplex Σ. Then the nonhomogeneous barycentric coordinates ciof the circumcenter of Σ can be expressed by the formulae

ci = ρ∑S

(2 − σi(S))π(S), (3.9)

where the summation is extended over all spanning trees S of the graph GΣ

π(S) =∏

(p,q)∈E

(−qpq),

σi(S) is the degree of the node i in the spanning tree S = (N,E) (i.e. thenumber of edges from S incident with i), and

ρ =1

2∑

S π(S).

This implies the following corollary.

1 The node connectivity of a connected graph G is k, if the graph obtained by deletingany k − 1 nodes is still connected, but after deleting some k nodes becomes disconnected(or void).

2 See [10], where, in fact, all the results of this chapter are contained.

Page 67: Graphs and Matrices in Geometry Fiedler.pdf

3.4 Position of the circumcenter of a simplex 57

Theorem 3.4.10 If we remove in the extended graph G∗ of any n-simplexone node k, then the resulting graph Gk has the following properties:

(i) If j is a node with degree 2 in Gk, which is also a cut-node in Gk, then(j, k) is not an edge in G∗.

(ii) Every node with degree 1 in Gk is joined by a positive edge with k in G∗.If, in addition, Gk is a positive graph, then:

(iii) every node with degree 2 in Gk, which is not a cut-node in Gk, is joinedwith k by a positive edge in G∗.

(iv) Every node in Gk with degree at least 3, which is a cut-node in Gk, isjoined with k by a negative edge in G∗.

Proof. By Theorem 3.4.6, there exists a simplex whose extended graph is G∗

with node k corresponding to the circumcenter. Thus Gk is its usual graph.Let us use the formula (3.9) from Theorem 3.4.9 and observe that ρ > 0. Theassertion (i) follows from this formula since every cut-node j with degree 2 inGk has degree 2 in every spanning tree of Gk, so that σj(K) = 2 and cj = 0.The assertion 2 follows from Theorem 3.4.7 since a node in G∗ cannot havedegree 1.

Suppose now that Gk is a positive graph. If a node j has degree 2 and isnot a cut-node in Gk, then every summand in (3.9) is nonnegative; however,there exists at least one positive summand since for some spanning tree inGk, j has degree 1.

The assertion (iv) follows also from (3.9) since σj(S) ≥ 2, whereas for somespanning tree S, σj(S) > 2. �

Let us consider now so-called totally hyperacute simplexes.

Definition 3.4.11 An n-simplex (n ≥ 2) is called totally hyperacute if it ishyperacute and if its circumcenter is either an interior point of the simplex,or an interior point of one of its faces.

Remark 3.4.12 This means that the extended Gramian [qrs] from(1.27) has all off-diagonal entries nonpositive. A simplex whose circumcenteris an interior point is sometimes called well-centered.

Theorem 3.4.13 Every m-dimensional face (2 ≤ m < n) of a totally hyper-acute n-simplex is again a totally hyperacute simplex. The extended graph ofthis face Σ1 is by the extended graph of the given simplex uniquely determinedand is obtained as its elimination graph by eliminating the vertices of the graphwhich correspond to all the (n− 1)-dimensional faces containing Σ1.

Proof. Suppose Σ is a totally hyperacute n-simplex with vertices Ai and (n−1)-dimensional faces ωi, i = 1, . . . , n+1. Since the Gauss elimination operationis transitive, it suffices to prove the theorem for the case m = n − 1. Thuslet Σ1 be the (n − 1)-dimensional face in ωn+1. If M = [mrs] and Q = [qrs]

Page 68: Graphs and Matrices in Geometry Fiedler.pdf

58 Qualitative properties of the angles in a simplex

are the matrices of Σ from Corollary 1.4.3, so that M Q = −2I, I being theidentity matrix, r, s = 0, 1, . . . , n+ 1, then the analogous matrices for Σ1 areM = [mr′s′ ], Q = [qr′s′ ], r′, s′ = 0, 1, . . . , n, where

qr′s′ = qr′s′ − qr′,n+1qs′,n+1

qn+1,n+1. (3.10)

Indeed, these numbers fulfill the relation M Q = −2I, where I is the identitymatrix of order n + 1. Since qn+1,n+1 > 0 and qr′s′ ≤ 0 for r′ �= s′, r′, s′ =0, 1, . . . , n, we obtain by (3.10) that qr′s′ ≤ 0.

Therefore, Σ1 is also a totally hyperacute simplex. The extended graph ofΣ1 is a graph with nodes 0, 1, . . . , n. Its nodes r′ and s′ are joined by a positiveedge if and only if qr′s′ < 0, i.e. by (3.10) if and only if at least one of thefollowing cases occurs:

(i) qr′s′ < 0,(ii) both inequalities qr′,n+1 < 0 and qs′,n+1 < 0 hold.

Analogous to the proof of Theorem 3.3.1, this means that Q∗Σ1

is the elim-ination graph of Q∗

Σ obtained by elimination of the node {n+ 1}. �

Theorem 3.4.14 A positive polygon (circuit) is the extended signed graph ofa simplex, namely of the simplex in Example 3.1.10.

Proof. Evident from the results in the example. �Combining the two last theorems, we obtain:

Theorem 3.4.15 The extended graph of a totally hyperacute simplex is eithera polygon (circuit), or a graph with node connectivity at least 3.

Proof. By Theorem 3.4.7, the node connectivity of such a graph is at least 2.Suppose it is equal to 2 and that the number of its nodes is at least 4. Supposethat two of its nodes i and j form a cut. We show that the degree of the nodei is 2. This will easily imply that every neighboring node to i also has degree 2and forms a cut with j, or is a neighbor with j. By connectivity, the graph isthen a polygon (circuit).

To prove the assertion, we use (iv) of Theorem 3.4.10 for the node j. Sincethe node i is then a cut-node in Gj , it cannot have degree ≥ 3 (since thereis no negative edge in the graph), and thus also not degree 1. It has thusdegree 2 and the proof is complete. �Let us mention some consequences of this theorem.

Theorem 3.4.16 A positive graph with a node of degree 2 is an extendedgraph of a simplex if and only if it is a polygon (circuit).

Proof. Evident. �

Page 69: Graphs and Matrices in Geometry Fiedler.pdf

3.4 Position of the circumcenter of a simplex 59

Theorem 3.4.17 Two-dimensional faces of a totally hyperacute simplex areeither all right, or all acute triangles.

Proof. If the node connectivity of the extended graph of a totally hyperacutesimplex is at least 3, then every elimination graph with four vertices has nodeconnectivity 3. By Theorem 3.4.13, every two-dimensional face is an acutetriangle.

The case that the vertex connectivity is 2 follows from Theorem 4.1.8, whichwill be (independently) proved in Chapter 4, section 1. �

The next theorem presents a very strong property of extended graphs oftotally hyperacute simplexes. We might conjecture that it even characterizesthese graphs, i.e. that this condition is also sufficient.

Theorem 3.4.18 Suppose that G0 is the extended graph of a totally hyper-acute n-simplex. If we remove from G0 a set of k nodes and all incidentedges, then the resulting graph G1 has at most k components. If it has exactlyk components, then G0 is either a polygon (circuit), or n + 2 = 2k andthere is no edge in G0 joining any two removed nodes from G0 or any twonodes in G1.

Proof. Denote by N = {0, 1, . . . , n + 1} the set of nodes of the graph G0.Let us remove from N a subset N1 containing k ≥ 1 nodes. We shall useinduction. The theorem is correct for k = 1. Suppose k ≥ 2 and that the node0 belongs to N1, which is possible by Theorem 3.4.4. Denote N2 = N1 \ {0}.By Theorem 3.4.9, there exist numbers qij , i, j = 1, . . . , n + 1, and numbersci, i = 1, . . . , n+ 1, such that

ci = ρ∑S

(2 − σi(S))π(S),

where we sum over all spanning trees S of the graph G obtained by removingfrom G0 the node 0 and edges incident with 0, where σi(S) is the degree ofthe node i in S, and

π(S) =∏

(pq)∈S

(−qpq).

All the numbers π(S) are positive since qpq < 0 for p �= q, so the number ρ isalso positive. Denote by l the number of components of G1 and by S an arbi-trary spanning tree of G. Let e(S) be the number of edges of S betweennodes in N2, and l(S) the number of components obtained after remov-ing from S the nodes in N2 (and the incident edges). A simple calculationyields ∑

i∈N2

σi(S) = n2l(S) + e(S) − 1,

Page 70: Graphs and Matrices in Geometry Fiedler.pdf

60 Qualitative properties of the angles in a simplex

where n2 = k − 1 is the number of nodes in N2. Since ci ≥ 0 and l(S) ≥ l

0 ≤∑i∈N2

si =∑S

[∑i∈N2

(2 − σi(S))]π(S)

=∑S

[2(k − 1) − (k − 1) − l(S) − e(S) + 1]π(S)

≤∑S

(k − l(S) − e(S))π(S)

≤∑S

(k − l)π(S).

Thus k ≥ l, which means that the number of components of the graph G1 isat most k. In the case that k = l, ci = 0 for i ∈ N2, and further l(S) = l,and e(S) = 0 for all spanning trees S, so that indeed there is no edge in G0

between two nodes from N1. Suppose that G0 is not a polygon (circuit). Thenthe node connectivity of G0 is, by Theorem 3.4.15, at least 3 and the graphG has no cut-node.

Suppose that some component of G1 has at least two nodes u1, u2. Sincethere is no cut-node in G, we can find for every node v ∈ N2 a pathu1 . . . v . . . u2 in G. This path can be completed into a spanning tree S0 ofG. However, u1 and u2 are in different components of the graph obtainedfrom S0 by removing the nodes from N2, i.e. l(S0) ≥ l+ 1. This contradictionwith l(S) = l for all spanning trees S proves that all components of the graphG1 are isolated nodes. Thus n+2 = 2k and there is not an edge in G0 betweenany two nodes in G1. �

We prove now some theorems about extended graphs of a general simplex.

Theorem 3.4.19 Every signed graph, the node connectivity of whose positivepart is at least 2 and which is transitive (i.e., for every two of its nodes u, vthere exists an automorphism of the graph which transforms u into v), is anextended graph of some simplex.

Proof. Denote by 0, 1, . . . , n + 1 the nodes of the given graph G0 and definetwo matrices A = [ars] and B = [brs], r, s = 0, 1, . . . , n+ 1, of order n+ 2 asfollows:

ars = 1, if r �= s and if r, s is a positive edge in G0, otherwise, ars = 0;brs = 1, if r �= s and if r, s is a negative edge in G0, otherwise, brs = 0.

Denote by A0 and B0 the principal submatrices of A and B obtained byremoving the row and column with index 0. The matrix A0 is nonnegative andirreducible since its graph is connected. By the Perron–Frobenius theorem (cf.Theorem A.3.1), there exists a positive simple eigenvalue α0 of A0, which has

Page 71: Graphs and Matrices in Geometry Fiedler.pdf

3.4 Position of the circumcenter of a simplex 61

of all eigenvalues the maximum modulus, and the corresponding eigenvectorz0 can be chosen positive

A0z0 = α0z0.

It follows that the matrix α0I0−A0 (I0 identity matrix) is positive semidefiniteof rank n (and order n + 1), and all its principal minors of orders ≤ n arepositive. This implies that there exists a number ε > 0 such that also thematrix

C0 = A0 − εB0

has a positive simple eigenvalue γ0, for which there exists a positiveeigenvector z

C0z = γ0z,

whereas the matrix

P0 = γ0I0 − C0

is positive semidefinite of rank n. Observe that P0z = 0.Form now the matrix

P = γ0I −A+ εB,

where I is the identity matrix of order n+ 2.Using the transitivity of the graph G0, we obtain that all principal minors

of order n+ 1 of the matrix P are equal to zero, whereas all principal minorsof orders ≤ n are positive: indeed, if we remove from P the row and columnwith index 0, we obtain P0 which has this property. If we remove from P a rowand column with index k > 0, we obtain some matrix Pk; however, since thereexists an automorphism of G0 transforming the vertex 0 into the vertex k,there exists a permutation of rows and (simultaneously) columns of the matrixPk, transforming it into P0. Thus also detPk = 0 and all principal minors ofPk of degree ≤ n are positive as well. We can show that for sufficiently smallε > 0, the matrix P is nonsingular: P cannot be positive definite (its principalminors of order n + 1 are equal to zero) so that detP < 0 and the signatureof P is n. By Theorem 3.4.8, the negative of the signed graph of the matrix Pis an extended graph of some n-simplex. According to the definitions of thematrices A and B, this is the graph G0. �

Other important examples of extended graphs of n-simplexes are those ofright simplexes. These simplexes will be studied independently in the nextchapter.

Theorem 3.4.20 Suppose a signed graph G0 on n+1 nodes has the followingproperties:

Page 72: Graphs and Matrices in Geometry Fiedler.pdf

62 Qualitative properties of the angles in a simplex

(i) if we remove from G0 one of its nodes u and the incident edges, theresulting graph is a tree T ;

(ii) the node u is joined with each node v in T by a positive, or negative edge,according to whether v has in T degree 1, or ≥ 3 (and thus u is not joinedto v if v has degree 2).

Then G0 is the extended graph of some n-simplex.

Proof. This follows immediately from Theorem 4.1.2 in the next chapter sinceG0 is the extended graph of the right n-simplex having the usual graph T . �

Theorem 3.4.21 Suppose G0 is an extended graph of some n-simplex suchthat at least one of its nodes is saturated, i.e. it is joined to every other nodeby an edge (positive or negative). Then every signed supergraph of the graphG0 (with the same set of nodes) is an extended graph of some n-simplex.

Proof. Suppose Σ is an n-simplex, the extended graph of which is G0; let thesaturated node of G0 correspond to the circumcenter C of Σ. Thus C is notcontained in any face of Σ. If G1 is any supergraph of G0, G1 has the sameedges at the saturated node as G0.

Let [qrs], r, s = 0, . . . , n + 1, be the matrix corresponding to Σ. Thesubmatrix Q = [qij ], i, j = 1, . . . , n+ 1, satisfies:

(i) Q is positive semidefinite of rank n;(ii) Qe = 0, where e = (1, . . . , 1)T ;(iii) if 0, 1, . . . , n+1 are the nodes of the graph G0 and 0 the saturated node,

then for i �= j, i, j = 1, . . . , n+ 1,

qij < 0, if and only if (i, j) is positive in G0,qij > 0, if and only if (i, j) is negative in G0.

Construct a new matrix Q = [qij ], i, j = 1, . . . , n+ 1, as follows:qij = qij , if i �= j and qij �= 0;qij = −ε, if i �= j, qij = 0 and (i, j) is a positive edge of G1;qij = ε, if i �= j, qij = 0 and (i, j) is a negative edge in G1;qii = −

∑j �=i qij .

We now choose the number ε positive and so small that Q remains positivesemidefinite and, in addition, such that the signs of the new numbers ci from(3.4.9) for the numbers qij coincide with the signs of the numbers ci from(3.4.9) for the numbers qij . Such a number ε > 0 clearly exists since allnumbers ci as barycentric coordinates of the circumcenter of Σ are differentfrom zero.

It now follows easily that Q is the Gramian of some n-simplex Σ, which hasG0 as its extended graph. �

Page 73: Graphs and Matrices in Geometry Fiedler.pdf

3.4 Position of the circumcenter of a simplex 63

Remark 3.4.22 This theorem can also be formulated as follows: Suppose G0

is a signed graph on n + 1 nodes, which is not an extended graph of any n-simplex. Then no subgraph of G0 with the same set of nodes and a saturatednode can be the extended graph of an n-simplex.

We shall return to the topic of extended graphs in Chapter 4 in variousclasses of special simplexes and in Chapter 6, Section 4, where all possibleextended graphs of tetrahedrons will be found.

Page 74: Graphs and Matrices in Geometry Fiedler.pdf

4

Special simplexes

In this chapter, we shall study classes of simplexes with special properties inmore detail.

4.1 Right simplexes

We start with the following lemma.

Lemma 4.1.1 Let the graph G = (V,E) be a tree with the node set V ={1, 2, . . . , n + 1} and edge set E. Assign to every edge (i, k) in E a nonzeronumber cik. Denote by Γ the matrix [γrs], r, s = 0, 1, . . . , n + 1, with entriesγ00 =

∑ik∈G

1/cik, γ0i = γi0 = si − 2, where si is the degree of the node i in G

(i.e. the number of edges incident with i)

γik = γki = −cik, if i �= k, (i, k) ∈ E,

γik = 0 for i �= k, (i, k) �∈ E,

γii =∑

k,(i,k)∈E

cik.

Further, denote by M the matrix [mrs], r, s = 0, . . . , n + 1, with entriesm00 = 0, m0i = mi0 = 1, mii = 0

mik = mki =1cij1

+1

cj1j2

+ · · · + 1cjsk

,

if i �= k and (i, j1, . . . , js, k) is the (unique) path in G from the node i to k.Then

MΓ = −2I,

where I is the identity matrix of order n+2. (In the formulae above, we writeagain i, j, k for indices 1, 2, . . . , n+ 1.)

Page 75: Graphs and Matrices in Geometry Fiedler.pdf

4.1 Right simplexes 65

Proof. We have to show thatn+1∑s=0

mrsγst = −2δrt.

We begin with the case r = t = 0. Thenn+1∑s=0

m0sγs0 =∑n+1

i=1 m0iγ0i =n+1∑i=1

(si − 2) = −2, sincen+1∑i=1

si = 2n (∑si is

twice the number of edges of G and the number of edges of a tree is by oneless than the number of nodes).

For r = 0, t = i = 1, . . . , n we haven+1∑s=0

m0sγsi

(=

n+1∑k=1

γik

)= 0,

whereas for r = i, t = 0n+1∑s=0

misγ0s =∑jl∈E

1cjl

+∑k �=i

(sk − 2)(

1cij1

+1

cj1j2

+ · · · + 1cjsk

).

To show that the sum on the right-hand side is zero, let us prove that inthe sum ∑

k �=i

(sk − 2)(

1cij1

+1

cj1j2

+ · · · + 1cjsk

)the term 1/cjl appears for every edge (j, l) with the coefficient −1. Thus let(j, l) ∈ E; the node i is in one of the parts Vj , Vl obtained by deleting theedge (j, l) from E. Suppose that i is in Vj , say. Then 1/cjl appears in thosesummands k, which belong to the branch Vl containing l, namely with thetotal coefficient ∑

k∈Vl

(sk − 2).

Let p be the number of nodes in Vl. Then∑

k∈Vlsk = 1+2(p−1) (the node

l has degree sl by one greater than that in Vl, and the sum of the degreesof all the nodes is twice the number of edges). Consequently,

∑k∈Vl

(sk − 2) =

2(p− 1) − 2p+ 1 = −1.For r = i, t = i, we obtain

n+1∑s=0

misγsi = γ0i +∑j,j �=i

mijγij

= si − 2 −∑

(i,j)∈E

1cijcij

= si − 2 − si

= −2.

Page 76: Graphs and Matrices in Geometry Fiedler.pdf

66 Special simplexes

Finally, if r = i, t = j �= i, we have

n+1∑s=0

misγsj = m0iγ0j +mijγjj +∑

k,i �=k �=j

mikγkj

= sj − 2 +mij

∑l,(j,l)∈E

cjl −mij

∑k,(j,k)∈E

ckj +mjtctj

−∑

l �=t,(j,l)∈E

mjlcjl,

where t is the node neighboring j in the path from i to j, sincemit = mij−mjt,whereas for the remaining neighbors l of j, mil = mij +mjl.

This confirms that the last expression is indeed equal to zero. �In Definition 3.1.6, we introduced the notion of a right n-simplex as such an

n-simplex, which has exactly n acute interior angles and all of the remaininginterior angles (

(n2

)in number) right.

The signed graph GΣ (cf. Corollary 3.1.7) of such a right n-simplex is thusa tree with all edges positive. In the colored form (cf. Definition 3.1.8), theseedges are colored red. In agreement with usual notions for the right triangle,we call cathetes, or legs, the edges opposite to acute interior angles, whereasthe hypotenuse of the simplex will be the face containing exactly those verticeswhich are incident with one leg only.

We are now able to prove the main theorem on right simplexes ([6]). It alsogives a hint for an easy construction of such a simplex.

Theorem 4.1.2 (Basic theorem on right simplexes)

(i) Any two legs of a right n-simplex are perpendicular to each other; theyform thus a cathetic tree.

(ii) The set of n+1 vertices of a right n-simplex can be completed to the set of2n vertices of a rectangular parallelepiped (we call it simply a right n-box)in En, namely in such a way that the legs are (mutually perpendicular)edges of the box.

(iii) Conversely, if we choose among the n2n−1 edges of a right n-box a con-nected system of n mutually perpendicular edges, then the vertices of theedges (there are n+1 of them) form a right n-simplex whose legs coincidewith the chosen edges.

(iv) The barycentric coordinates of the center of the circumscribed hypersphereof a right n-simplex are 1 − 1

2si, where si is the degree of the vertex Ai

in the cathetic tree.

Proof. Let a right n-simplex Σ be given. The numbers qik, i.e. the entries ofthe Gramian of this simplex, satisfy all assumptions for the numbers γik in

Page 77: Graphs and Matrices in Geometry Fiedler.pdf

4.1 Right simplexes 67

Theorem 4.1.1; the graph G = (V,E) from Theorem 4.1.1 is the graph of thesimplex.

The angles ϕik defined by

cosϕik = − qik√qii

√qkk

,

are then the interior angles of the simplex Σ.Let mrs be the entries of the matrix M from Theorem 4.1.1. We intend to

show that for i, k = 1, . . . , n+ 1, the numbers mik are squares of the lengthsof edges of the simplex Σ.

To this end, we construct in some Euclidean n-space En with the usualorthonormal basis e1, . . . , en an n-simplex Σ.

Observe first that the numbers qik, for (i, k) ∈ E, are always negative sinceqik = −pipk cosϕik, where pi > 0 and cosϕik > 0.

Choose now a point An+1 arbitrarily in En and define the points Ai, i =1, . . . , n in En as follows:

Number the n edges of the graph G in some way by numbers 1, 2, . . . , n. If(n+1, j1, . . . , js, i) is the path from the node n+1 to the node i in the graphG, set

Ai = An+1 +1√−qn+1,j1

ec1 +1√−qj1,j2

ec2 + · · · + 1√−qjk,ieck

,

where c1 is the number assigned to the edge (n+1, j1), c2 the number assignedto the edge (j1, j2), etc.

The squares mik of the distances of the points Ai and Ak then satisfy

mik = − 1qik

if (i, k) ∈ G,

and, since there is a unique path between nodes of G, we have in general

mik = −( 1qij1

+1

qj1j2

+ · · · + 1qjsk

),

if (i, j1, . . . , js, k) is the path from i to k in G.The points A1, . . . , An+1 are linearly independent in En, thus forming

vertices of an n-simplex Σ.The corresponding numbers qik and q0i of the Gramian of this n-simplex

are clearly equal to the numbers γik and γ0i. By Theorem 2.1.1, the interiorangles of Σ and Σ are the same and these simplexes are similar.

The vertices A1, . . . , An+1 of the n-simplex Σ are indeed contained in theset of 2n vertices of the right n-box

P (ε1, . . . , εn) = An+1 +n∑

i=1

εiei√−qpq

,

Page 78: Graphs and Matrices in Geometry Fiedler.pdf

68 Special simplexes

where εi = 0 or 1 and −qpq is the weight of that edge of the graph G, whichwas numbered by i.

It follows that the same holds for the given simplex Σ. It also follows thatthe legs of Σ are perpendicular.

To prove the last assertion (iv), observe that by Theorem 2.1.1, the numbersq0i are homogeneous barycentric coordinates of the circumcenter of Σ, andthese are proportional to the numbers 1− 1

2si, the sum of which is already 1.

�Observe that a right n-simplex is (uniquely up to congruence) determined

by its cathetic tree, i.e. the structure and the lengths of the legs. There is alsoan intimate relationship between right simplexes and weighted trees, as thefollowing theorem shows.

Theorem 4.1.3 Let G be a tree with n+1 nodes U1, . . . , Un+1. Let each edge(Up, Uq) ∈ G be assigned a positive number μ(Up, Uq); we call it the lengthof the edge. Define the distance μ(Ui, Uk) between an arbitrary pair of nodesUi, Uk as the sum of the lengths of the edges in the path between Ui and Uk.

Then there exists an n-simplex with the property that the squares mik of thelengths of edges satisfy

mik = μ(Ui, Uk). (4.1)

This n-simplex is a right n-simplex and its cathetic tree is isomorphic to G.Conversely, for every right n-simplex there exists a graph G isomorphic to

the cathetic tree of the simplex and a metric μ on G such that (4.1) holds.

Proof. First let a tree G with the prescribed properties be given. Then thereexists in some Euclidean n-space En a right box and a tree T isomorphic to Gconsisting of edges, no two of them being parallel and such that the lengthsof the edges of the box are equal to the square roots of the correspondingnumbers μ(Ui, Uk). The nodes A1, . . . , An+1 of T then satisfy

ρ2(Ai, Ak) = μ(Ui, Uk)

for all i, k = 1, . . . , n+1; they are not in a hyperplane, and thus form verticesof a right n-simplex with the required properties.

Conversely, let Σ in En be a right n-simplex. Then there exists in En a rightbox, n edges of which coincide with the cathetes of Σ. By the Pythagorean the-orem, the tree isomorphic to the cathetic tree, each edge of which is assignedthe square of the length of the corresponding cathete, has the property (4.1).

�An immediate corollary of Theorem 4.1.2 is the following:

Theorem 4.1.4 The volume of a right n-simplex is the (1/n!)-multiple of theproduct of the lengths of the legs.

Page 79: Graphs and Matrices in Geometry Fiedler.pdf

4.1 Right simplexes 69

Theorem 4.1.5 The hypotenuse of a right simplex is a strictly acute face,which among strictly acute faces has the maximum dimension.

Proof. Suppose Σ is a right n-simplex so that the graph GΣ is a tree. Let Ube the set of nodes of GΣ, and U1 be the subset of U consisting of all nodesof degree one of GΣ. Let m = |U1|. It follows immediately from the propertyof the elimination graph obtained by eliminating the nodes from U \ U1 thatthe graph of the hypotenuse is complete.

To prove the maximality, suppose there is in Σ a strictly acute face F withmore than m vertices. Denote by H the hypotenuse.

Case 1. F contains H. Let u be a vertex in F which is not in H. Sinceremoving u from GΣ leads to at least two components, there exist two verticesp, q in H, which are in such different components. The triangle with verticesp, q, and u has a right angle at u, a contradiction.

Case 2. There is a vertex v in H which is not in F . The face F0 of Σopposite to v is a right (n − 1)-simplex Σ′. Its hypotenuse has at most mvertices and is a strictly acute face of Σ′ having the maximal dimension. Thisis a contradiction to the fact that F is contained in Σ′. �

In the last theorem of this section, we need the following lemma whosegeometric interpretation is left to the reader.

Lemma 4.1.6 Suppose that in a hyperacute n-simplex with vertices Ai and(n− 1)-dimensional faces ωi, the angle AiAjAk is right. Then the node uj ofthe graph G of the simplex (corresponding to the face ωj) is a cut-node in G,which separates the nodes ui and uk (corresponding to ωi and ωk).

Proof. The fact that AiAjAk is a right triangle with the right angle at Aj

means that there is no path in G from ui to uk not containing uj . Therefore,ui and uk are in different components of the graph G′ obtained from G bydeleting the node uj and the outgoing edges. Thus G′ is not connected. �

We now summarize properties of one special type of right simplex.

Theorem 4.1.7 Let Σ be an n-simplex. Then the following are equivalent:

(i) The signed graph of Σ is a positive path.(ii) There exist positive numbers a1, . . . , an and a Cartesian coordinate sys-

tem such that the vertices of Σ can be permuted into the position as inExample 3.1.10.

(iii) There exist distinct real numbers c1, . . . , cn+1 such that the squares of thelengths of edges mik satisfy

mik = |ci − ck|. (4.2)

(iv) All two-dimensional faces of Σ are right triangles.

Page 80: Graphs and Matrices in Geometry Fiedler.pdf

70 Special simplexes

(v) Σ is a hyperacute n-simplex with the property that its circumcenter is apoint of its edge.

(vi) The Gramian of Σ is a permutation of a tridiagonal matrix.

Proof. (i) ↔ (ii). Suppose (i). Then Σ is a right simplex and the con-struction from Theorem 4.1.2 yields immediately (ii). Conversely, we saw inExample 3.1.10 that (ii) implies (i).

(ii) ↔ (iii). Given (ii), define c1 = 0, ci =∑i−1

k=1 a2k. Then (iii) will be

satisfied. Conversely, if the vertices of Σ in (iii) are renumbered so thatc1 < c2 < · · · < cn+1, then ai =

√ci+1 − ci is realized as the simplex in

Example 3.1.10.(iii) ↔ (iv). Suppose (iii). If Ai, Aj , Ak are distinct vertices, the distances

satisfy the Pythagorean equality. Let us prove that (iv) implies (iii). Supposethat all two-dimensional faces of an n-simplex Σ with vertices A1, . . . , An+1

are right triangles. Observe first that if AiAj is one from the set of edges of Σwith maximum length, then it is the hypotenuse in all triangles AiAjAk fori �= k �= j. This means that the hypersphere having AiAj as the diameter isthe circumscribed sphere of the simplex. Therefore, just one such maximumedge exists.

Let now Ai, Aj , Ak, Al be four distinct vertices and let AiAj have themaximum length of the six edges connecting them. We show that the angle∠AkAiAl cannot be right. Suppose to the contrary that ∠AkAiAl = 1

2π.Distinguish two cases:Case A. ∠AkAjAl = 1

2π; this implies that the midpoint of the edge AkAl

has the same distance from all four vertices considered, a contradiction to theabove since the midpoint of AiAj has this property.

Case B. One of the angles ∠AkAlAj , ∠AlAkAj is right. Since both thesecases differ by changing the indices k and l only, we can suppose that∠AkAlAj = 1

2π. We have then by the Pythagorean theorem that

mik +mjk = mij , mil +mjl = mij ,

mik +mil = mkl, mkl +mjl = mjk.

These imply that

mik +mil +mjl = mkl +mjl = mjk,

as well as

mik +mil +mjl = mik +mij = 2mik +mjk, i.e. mik = 0.

This contradiction shows that ∠AkAiAl �= 12π.

Suppose that AiAj is the edge of the simplex of maximum length. Choosean arbitrary real number ci and set for each k = 1, 2, . . . , n+ 1

ck = ci +mik. (4.3)

Page 81: Graphs and Matrices in Geometry Fiedler.pdf

4.1 Right simplexes 71

Let us prove that (4.2) holds and that all the numbers ck are distinct. Indeed,if ck = cl for k �= l, then necessarily j �= k, j �= l (AiAj is the only longestedge) and, as was shown above, ∠AkAiAl �= 1

2π, i.e. exactly one of the edges

AiAk, AiAl is the hypotenuse in the right triangle AiAkAl, i.e. mil �= mjl,contradicting ck = cl. By (4.3)

mik = ck − ci = |ck − ci|; (4.4)

since mij = mik +mjk for all k, it follows that also

mjk = cj − ck = |cj − ck|. (4.5)

Suppose now that k, l, i, j are distinct indices. Then ∠AkAiAl �= 12π, so that

either mik +mkl = mil or mil +mkl = mik. In the first case, mkl = cl − ci −ck + ci = cl − ck; in the second, mkl = ck − cl. Thus in both cases

mkl = |ck − cl|. (4.6)

The equations (4.4), (4.5), and (4.6) imply (4.2).(ii)→ (v). This was shown in Example 3.1.10.(v)→ (i). Suppose now that A1, . . . , An+1 are the vertices of a hyperacute

simplex Σ and ω1, . . . , ωn+1 are its (n − 1)-dimensional faces, ωi oppositeto Ai. Let the circumcenter be a point of the edge A1An+1, say. By theThalet theorem, all the angles ∠A1AjAn+1 are right for all j, 1 < j < n+ 1.Lemma 4.1.6 implies that every vertex uj corresponding to ωj , 1 < j < n+1,is a cut-vertex of the graph GΣ separating the vertices u1 and un+1.

We intend to show that GΣ is a path between u1 and un+1. Since GΣ isconnected, there exists a path P in GΣ from u1 to un+1, with the minimumnumber of vertices. Suppose that a vertex uk (corresponding to ωk) is not inP . Since uk separates u1 and un+1 in GΣ, it is a contradiction. Minimality ofP then implies that GΣ is P and Σ satisfies (i).

(i) ↔ (vi). This follows from Corollary 3.2.1 and the notion of the tridiagonalmatrix (cf. Appendix). �

We call a simplex that satisfies any one of the conditions (i)–(vi) a Schlaeflisimplex since Schlaefli used this simplex (he called it Orthoscheme) in hisstudy of volumes in noneuclidean spaces.

We now have the last result of this section.

Theorem 4.1.8 Every face of dimension at least two of a Schlaefli simplexis again a Schlaefli simplex.

Proof. This follows immediately from (ii) of Theorem 4.1.7. �

Page 82: Graphs and Matrices in Geometry Fiedler.pdf

72 Special simplexes

4.2 Orthocentric simplexes

Whereas altitudes of a triangle meet in one point, this is no longer true ingeneral for the tetrahedron. In this section, we characterize those simplexesfor which the altitudes meet in one point; they are called orthocentric andthey were studied before (cf. [2]). We suppose that the dimension is at least 2.

Theorem 4.2.1 Let Σ be an orthocentric n-simplex. Then the squares mik ofthe lengths of edges have the property that there exist real numbers π1, . . . , πn+1

such that for all i, k, i �= k

mik = πi + πk. (4.7)

In addition, the numbers πi satisfy the following:

(i) either all of them are positive (and the simplex is acute orthocentric), or(ii) one of the numbers πi is zero and the remaining positive (the simplex is

right orthocentric), or finally(iii) one of the numbers πi is negative, and the remaining positive (the simplex

is obtuse orthocentric), andn+1∑k=1

1πk

< 0. (4.8)

Conversely, if π1, . . . , πn+1 are real numbers for which one of the conditions(i), (ii), (iii) holds, then there exists an n-simplex, whose squares of lengths ofedges satisfy (4.7), and this simplex is orthocentric. The intersection point Vof the altitudes, i.e. the orthocenter, has homogeneous barycentric coordinates

V ≡ (vi), vi =n+1∏

k=1,k �=i

πk; i.e., in cases (i) and (iii), V ≡ (1/πi).

Proof. Let Σ with vertices A1, . . . , An+1 have orthocenter V . Denote by ci thevectors

ci = Ai − An+1, i = 1, . . . , n, (4.9)

by di the vectors

di = Ai − V, i = 1, . . . , n+ 1. (4.10)

The vector di, i ∈ {1, . . . , n+1}, is either the zero vector, or it is perpendicularto the face ωi, and therefore to all vectors in this face.

We have thus

〈di, ck〉 = 0 for k �= i, i, k = 1, . . . , n. (4.11)

The vector dn+1 is also either the zero vector, or it is perpendicular to all

vectors−→AiAk = ck − ci for i �= k, i, k = 1, . . . , n. Thus, in all cases

〈dn+1, ck − ci〉 = 0, i �= k, i, k = 1, . . . , n.

Page 83: Graphs and Matrices in Geometry Fiedler.pdf

4.2 Orthocentric simplexes 73

Denote now 〈dn+1, c1〉 = −πn+1; we obtain

〈dn+1, ck〉 = −πn+1, k = 1, . . . , n.

Denote also

〈dk, ck〉 = πk, k = 1, . . . , n.

We intend to show that (4.7) holds.First, observe that by (4.9) and (4.10)

ci − cj = di − dj , i, j = 1, . . . , n,

ci = di − dn+1, i = 1, . . . , n.

Therefore, we have for i = 1, . . . , n, that

mi,n+1 = 〈ci, ci〉= 〈ci, di − dn+1〉= 〈ci, di〉 − 〈ci, dn+1〉= πi + πn+1.

If i �= k, i, k = 1, . . . , n, then (4.11) implies that

mik = 〈ci − ck, ci − ck〉= 〈ci − ck, di − dk〉= 〈ci, di〉 + 〈ck, dk〉= πi + πk.

The relations (4.7) are thus established. Since πi + πk > 0 for all i, k =1, . . . , n+ 1, i �= k, at most one of the numbers πi is not positive.

Before proving the condition (4.8) in case (iii), express the quadratic form∑mik xi xk as

n+1∑i,k=1

mik xi xk =∑

i,k=1,i �=k

(πi + πk)xi xk

=n+1∑

i,k=1

(πi + πk)xi xk − 2n+1∑i=1

πi x2i ,

orn+1∑i,k=1

mik xi xk = −2n+1∑i=1

πi x2i + 2

n+1∑i=1

πi xi

n+1∑i=1

xi. (4.12)

Suppose now that one of the numbers πi, say πn+1, is negative. Then πi > 0for i = 1, . . . , n. By Theorem 1.2.4,

∑mik xi xk < 0 for xi = 1/πi, i =

1, . . . , n, xn+1 = −n∑

i=1

(1/πi). By (4.12), we obtain

Page 84: Graphs and Matrices in Geometry Fiedler.pdf

74 Special simplexes

n+1∑i,k=1

mik xi xk = −2( n∑

i=1

πi1π2

i

+ πn+1

( n∑i=1

1πi

)2)

= −2n∑

i=1

1πi

(1 + πn+1

n∑i=1

1πi

)

= 2|πn+1|n∑

i=1

1πi

(n+1∑k=1

1πi

).

This proves (4.8).Suppose now that π1, . . . , πn+1 are real numbers which fulfill one of the

conditions (i), (ii), or (iii). If (iii) or (ii) holds, then, by (4.12), whenevern+1∑i=1

xi = 0 and mik = πi + πk, i �= k, i, k = 1, . . . , n+ 1

n+1∑i,k=1

mik xi xk = −2n+1∑i=1

πi x2i ≤ 0.

Equality is attained only if x1 = · · · = xn+1 = 0. By Theorem 1.2.4, then-simplex really exists.

Suppose now that condition (iii) holds, i.e. xn+1 < 0, together with theinequality (4.8). Suppose that xi are real numbers not all equal to zero and

such thatn+1∑i=1

xi = 0. The numbers mik = πi + πk, i �= k, i, k = 1, . . . , n + 1

satisfy

n+1∑i,k=1

mik xi xk = −2n+1∑i=1

πi x2i = −2

[ n∑i=1

πix2i + πn+1

( n∑i=1

xi

)2].

By Schwarz inequalityn∑

i=1

1πi

n∑i=1

πi x2i ≥( n∑

i=1

1√πi

√πixi

)2

=( n∑

i=1

xi

)2

.

Therefore, since πn+1 < 0

n+1∑i,k=1

mik xi xk ≤ −2[ n∑

i=1

πix2i + πn+1

n∑i=1

1πi

n∑i=1

πi x2i

]

= 2|πn+1|n∑

i=1

πi x2i

(1

πn+1+

n∑i=1

1πi

)< 0.

By Theorem 1.2.4 again, there exists an n-simplex satisfying mik = πi +πk for i �= k. It remains to show that in all cases (i), (ii), and (iii), thepoint V ≡ (vi), vi =

∏k �=i

πk, is the orthocenter. If πn+1 = 0, V ≡ An+1 and

Page 85: Graphs and Matrices in Geometry Fiedler.pdf

4.2 Orthocentric simplexes 75

mik = mi,n+1 + mk,n+1 for i �= k, i, k = 1, . . . , n. The vectors−→AiAn+1 are

thus mutually perpendicular and coincide with the altitudes orthogonal tothe faces ωi, i = 1, . . . , n. Since also the last altitude contains the point An+1,An+1 ≡ V is indeed the orthocenter.

Suppose now that all πis are different from zero. Let us show that the vector−→V Ai, where V ≡ (1/πi), is perpendicular to all vectors

−→AjAk for i �= j �= k �= i,

i, j, k = 1, . . . , n+1. As we know from (1.10), the vectors p, q are perpendicularif and only if

∑mikpiqk = 0. By (4.12), since for such vectors

∑pi =

∑qi = 0∑

mikpiqk = −2n+1∑i=1

πi pi qi. (4.13)

Further, if we simply denoten+1∑k=1

(1/πi) by τ

−→V Ai = (pk), pk = τδik − 1

πk,

where δik is the Kronecker delta.Now

−→AjAk = (ql), ql = δjl − δkl,

so that, by (4.13)

−12

∑mikpiqk =

n+1∑s=1

πs

(τδis −

1πs

)(δjs − δks)

= τn+1∑s=1

πs δis(δjs − δks) −n+1∑s=1

(δjs − δks).

Since both summands are zero,−→V Ai ⊥ ωi, and V is the orthocenter. �

The second part could have been proved using the numbers qik in (1.21),which are determined by the numbers mik in formula (4.7). Let us show thatif all the numbers πi are different from zero, the numbers qrs satisfying (1.21)are given by

ρq00 =n+1∑k=1

πk

n+1∑k=1

1πk

− (n− 1)2,

ρq0i =n− 1πi

−n+1∑k=1

1πk, (4.14)

ρqii =1πi

(n+1∑k=1

1πk

− 1πi

),

ρqij = − 1πi πj

for i �= j,

Page 86: Graphs and Matrices in Geometry Fiedler.pdf

76 Special simplexes

where ρ =n+1∑k=1

(1/πk); observe that ρ is positive in case (i) and negative in

case (iii) of Theorem 4.2.1.Indeed

ρ

n+1∑i=0

m0rqi0 = (n− 1)n+1∑k=1

1πk

− (n+ 1)n+1∑k=1

1πk

= −2n+1∑k=1

1πk

= −2ρ,

so thatn+1∑r=0

m0r qk0 = −2.

Further, for i, j, k, l = 1, . . . , n+ 1

ρn+1∑r=0

m0rqri = ρ

(qii +

∑k �=i

qki

)

=1πi

(∑ 1πk

− 1πi

)− 1πi

∑k �=i

1πk

= 0,

as well as

ρn+1∑r=0

mirqr0 = ρ

(q00 +

∑k �=i

mikqk0

)=∑

πk

∑ 1πk

− (n− 1)2

+∑k �=i

(πi + πk)(n− 1πk

−∑ 1

πk

)= 0,

since the last summand is equal to

(n− 1)πi

∑k �=i

1πk

+ n(n− 1) − nπi

∑ 1πl

−∑k �=i

πk

∑ 1πl

= (n− 1)2 −∑

πk

∑ 1πk.

Finally, for i �= k

ρn+1∑r=0

mirqrk = ρ

(q0k +mikqkk +

∑j �=i,j �=k

mijqjk

)=

n− 1πk

−∑ 1

πl+ (πi + πk)

1πk

∑l�=k

1πl

−∑

j �=i,j �=k

(πi + πj)1

πjπk

Page 87: Graphs and Matrices in Geometry Fiedler.pdf

4.2 Orthocentric simplexes 77

=n− 1πk

+πi

πk

∑l �=k

1πl

− 1πk

− πi

πk

∑j �=i,j �=k

1πj

− (n− 1)1πk

=πi

πk

1πi

− 1πk

= 0,

as well as

ρn+1∑r=0

mirqri = ρ

(q0i +

∑k �=i

mikqki

)

=n− 1πi

−∑ 1

πl−∑k �=i

(πi + πk)1

πiπk

=n− 1πi

−∑ 1

πl−∑k �=i

1πk

− n

πi

= −2∑ 1

πl= −2ρ,

as we wanted to prove.The formulae (4.14) imply that in the case that all the πis are nonzero,

the point V ≡ (1/πi) belongs to the line joining the point Ak with the point(qk1, . . . , qk,n+1); however, this is the improper point, namely the directionperpendicular to the face ωk. It follows that V is the orthocenter.

In the case that πn+1 = 0, we obtain, instead of formulae (4.14)

q00 =n∑

k=1

πk,

q0i = −1, i = 1, . . . , n,

q0,n+1 = n− 2,

qii =1πi, i = 1, . . . , n, (4.15)

qn+1,n+1 =n∑

k=1

1πk,

qij = 0, i �= j, i, j = 1, . . . , n,

qi,n+1 = − 1πi, i = 1, . . . , n.

These formulae can be obtained either directly, or by using the limit pro-cedure πn+1 → 0 in (4.14). Also, in this second case, we can show that V isthe orthocenter of the simplex.

For the purpose of this section, we shall call an orthocentric simplex differentfrom the right, i.e. of type (i) or (iii) in Theorem 4.2.1 proper.

Page 88: Graphs and Matrices in Geometry Fiedler.pdf

78 Special simplexes

Using the formulae (4.14), we can show easily:

Theorem 4.2.2 A simplex Σ is proper orthocentric if and only if there existnonzero numbers c1, . . . , cn+1 such that:

(i) either all of them have the same sign, or,(ii) one of them has a sign different from the remaining, and the interior

angles ϕij of Σ satisfy

cosϕij = εcicj , i �= j, i, j = 1, . . . , n+ 1; (4.16)

here, ε = 1 in case (i) and −1 in case (ii).

Proof. Let Σ be proper orthocentric. If all the numbers πi in Theorem 4.2.1are positive, then, for i �= j, by (4.14)

cosϕij = − qij√qii

√qjj

=1

πi√qii

· 1πj√qjj

,

so that (4.16) is fulfilled for ci = 1/πi√qii �= 0.

If one of the numbers πi is negative, then ρ in (4.14) is negative and (4.16)is also fulfilled.

Suppose now that (4.16) holds. Then, for i �= j

qij = −√qii

√qjj cosϕij = −ε√qii

√qjjcicj = −ελiλj

for λi =√qiici �= 0, i = 1, . . . , n + 1. Let us show that the point V =

(λ1, . . . , λn+1) is the orthocenter of the simplex.However, this follows immediately from the fact that there is a linear

dependence relation ελkV + Sk = (ελ2k + qkk)Ak between each point Ak,

V = (λ1, . . . , λn+1), and the direction Sk = (q1k, . . . , qkk, . . . , qk,n+1), perpen-dicular to ωk. �Corollary 4.2.3 The extended signed graph of an orthocentric n-simplexbelongs to one of the following three types:

(i) A complete positive graph with n+ 2 nodes.(ii) There is a subset S of n nodes among which all edges are negative; the

remaining two nodes are connected to all nodes in S by positive edges andthe edge between them is negative.

(iii) There is a subset S of n nodes among which all edges are missing; theremaining two nodes are connected to all nodes in S by positive edges andthe edge between them is negative.

Before proceeding further, we recall the notion of the d-rank of a squarematrix A. It was defined in [29] as the number

d(A) = minD

{rank(A+D) : D is a diagonal matrix}.

In [23], conditions were found for which d(A−1) = d(A) if A is nonsingular.

Page 89: Graphs and Matrices in Geometry Fiedler.pdf

4.2 Orthocentric simplexes 79

Theorem 4.2.4 Let A be an n× n matrix, n ≥ 3. If neither of the matricesA and A−1 has a zero entry, then A has d-rank one if and only if A−1 hasd-rank one.

Proof. Under our assumptions, let d(A) = 1. Then A = D0 + XY T , whereboth X and Y are n×1 matrices with no zero entry and D0 is diagonal. If D0

were singular, then just one diagonal entry, say the last one, would be zero,because of the rank condition. But then the (1,2)-entry of A−1 would be zero(the adjoint of A has two columns proportional), a contradiction. Therefore,D0 is nonsingular. Observe that the number 1 + Y TD−1

0 X is different fromzero since detA = detD0(1 + Y TD−1

0 X). Thus

A−1 = D−10 −D−1

0 X(1 + Y TD−10 X)−1Y TD−1

0

so that d(A−1) = 1. Because of the symmetry of the formulation with respectto inversion, the converse is also true. The rest is obvious. �

Using the notion of the d-rank, we can prove:

Theorem 4.2.5 A simplex is proper orthocentric if and only if the Gramianhas d-rank one.

Proof. This follows immediately from Theorem 4.2.2 and (ii) of Theorem 2.1.1.�

Before proceeding further, we shall find other characterizations of the acuteorthocentric n-simplex.

Theorem 4.2.6 Let Σ be an n-simplex in En with vertices A1, . . . , An+1.Then the following are equivalent:

(i) Σ is an acute orthocentric simplex.(ii) In an (n + 1)-dimensional Euclidean space En+1 containing En, there

exists a point P such that the n + 1 halflines PAi are mutuallyperpendicular in En+1.

(iii) There exist n+ 1 hyperspheres in En, each with center in Ai, i = 1, . . . ,n+ 1 such that any two of them are orthogonal.

In case (ii), the orthogonal projection of P onto En is the orthocenter of Σ.In case (iii), the point having the same potence with respect to all n + 1hyperspheres is the orthocenter of Σ.

Proof. (i) ↔ (ii). Let Σ be an acute orthocentric simplex. Then there existpositive numbers π1, . . . , πn+1 such that for the squares of the edges, mij =πi + πj , i �= j, i, j = 1, . . . , n+ 1.

In the space En+1, choose a perpendicular line L through the orthocenterV of Σ and choose P as such a point of L which has the distance |πn+2| fromEn. If i, j are distinct indices from {1, . . . , n + 1}, the triangle AiPAj is by

Page 90: Graphs and Matrices in Geometry Fiedler.pdf

80 Special simplexes

the Pythagorean theorem right with the right angle at P since the square ofthe distance of P to Ai is πi.

Conversely, if P is such a point that all the PAis are perpendicular, chooseπi as the square of the distance of P to Ai. These πis are positive and by thePythagorean theorem, the square of the length of the edge AiAj is πi + πj

whenever i �= j.(i) ↔ (iii). Given Σ, choose the hypersphere Hi with center Ai and radius√πi, i = 1, . . . , n+1. If i �= j, thenKi andKj intersect; ifX is any point of the

intersection, then AiXAj is a right triangle with the right angle at X by thePythagorean theorem, which implies that Ki and Kj intersect orthogonally.

The converse is also evident by choosing the numbers πi as squares of theradii of the hyperspheres.

The fact that the orthocenter V of Σ has the same potence with respect tothe hyperspheres, namely πn+2, is easily established. �

Remark 4.2.7 Because of the fact that the orthocenter has the same potencewith respect to all hyperspheres, Ki can also be formulated as follows. Thereexists a (formally real, cf. Remark 1.4.12) hypersphere with center in V anda purely imaginary radius which is (in a generalized manner) orthogonal toall hyperspheres Ki. This formulation applies also to the case of an obtuseorthocentric n-simplex for which a characterization analogous to (iii) can beproved.

Let us find now the squares of the distances of the orthocenter from the ver-tices in the case that the simplex is proper. The (nonhomogeneous) barycentric

coordinates qi of the vector−→A1V are qi = (1/ρπi) − δ1i, where ρ =

n+1∑i=1

(1/πi)

and δ. is the Kronecker delta, so that

ρ2(A1, V ) = −12

∑i,k

mikqiqk

=n+1∑i=1

πiq2i

=∑

i

πi

( 1ρπi

− δ1i

)2

=∑

i

(1

ρ2πi− 2

δ1i

ρ+ πiδ1i

)= π1 −

1ρ.

If we denote

−1ρ

= πn+2, (4.17)

Page 91: Graphs and Matrices in Geometry Fiedler.pdf

4.2 Orthocentric simplexes 81

we have

ρ2(A1, V ) = π1 + πn+2,

and for all k = 1, . . . , n+ 1

ρ2(Ak, V ) = πk + πn+2. (4.18)

This means that if we denote the point V as An+2, the relations (4.7) hold,by (4.18), for all i, k = 1, . . . , n+ 2, and, in addition, by (4.17)

n+2∑k=1

1πk

= 0. (4.19)

The relations (4.7) and the generalized relations (4.18) are symmetric withrespect to the indices 1, 2, . . . , n+ 2. Also, the inequalities∑

k=1k �=i

1πk

�= 0

are fulfilled for all i = 1, . . . , n+2 due to (4.19). Thus the points A1, . . . , An+2

form a system of n+ 2 points in En with the property that each of the pointsof the system is the orthocenter of the simplex generated by the remainingpoints. Such a system is called an orthocentric system in En.

As the following theorem shows, the orthocentric system of points in En canbe characterized as the maximum system of distinct points in En, the mutualdistances of which satisfy (4.18).

Theorem 4.2.8 Let a system of m distinct points A1, . . . , Am in En havethe property that there exist numbers λ1, . . . , λm such that the squares of thedistances of the points Ai satisfy

ρ2(Ai, Ak) = λi + λk, i �= k, i, k = 1, . . . ,m. (4.20)

Then m ≤ n + 2. If m = n + 2, then the system is orthocentric. If m <

n + 2, the system can be completed to an orthocentric system in En, if and

only ifm∑

k=1

(1/λk) �= 0. Ifm∑

k=1

(1/λk) = 0, then there exists in En an (m −

2)-dimensional subspace in which the system is orthocentric.

Proof. By (4.20), at most one of the numbers λk is not positive. Let, say,λ1, . . . , λm−1 be positive. By Theorem 4.2.1, the points A1, . . . , Am−1 arelinearly independent and, as points in En, m−1 ≤ n+1, or m ≤ n+2. Supposenow that m = n + 2. The points A1, . . . , An+2 are thus linearly dependentand there exist numbers α1, . . . , αn−2, not all equal to zero, and such that

bothn+2∑i=1

αiAi = 0 andn+2∑i=1

αi = 0 are satisfied. We have necessarily λn+2 < 0,

since otherwise, by Theorem 4.2.1, the points A1, . . . , An+2 would be linearly

Page 92: Graphs and Matrices in Geometry Fiedler.pdf

82 Special simplexes

independent, and αn+2 �= 0, since otherwise the points A1, . . . , An+1 wouldbe linearly dependent.

Now we apply a generalization of Theorem 1.2.4, which will independently

be proved as Theorem 5.5.2. Whenevern+2∑i=1

xi = 0, then

n+2∑i,k=1

mik xi xk = −2n+2∑i=1

λi x2i ≤ 0

i.e.n+2∑i=1

λi x2i ≥ 0

with equality, if and only if xi = ραi.

In particular, for xi = 1/λi, i = 1, . . . , n+ 1, xn+2 = −n+1∑i=1

(1/λi)

n+1∑i=1

1λi

− |λn+2|(n+1∑

i=1

1λi

)2

≥ 0,

i.e.

|λn+2| ≤1∑n+1

i=1 (1/λi).

By Schwarz inequality, we also have for xi = αi

|λn+2|α2n+2 =

n+1∑k=1

λkα2k

≥ 1∑n+1k=1(1/λk)

(n+1∑k=1

αk

)2

=1∑n+1

k=1(1/λk)α2

n+2,

so that

|λn+2| ≥1∑n+1

i=1 (1/λi).

This shows that

−λn+2 =1∑n+1

k=1(1/λk),

orn+2∑i=1

1λi

= 0.

Page 93: Graphs and Matrices in Geometry Fiedler.pdf

4.2 Orthocentric simplexes 83

By (4.19), the system of points A1, . . . , An+2 is orthocentric. The conversealso holds.

Now, letm < n+2, and supposem∑

k=1

(1/λk) �= 0. Then either all the numbers

λ1, . . . , λm are positive, or just one is negative andm∑

k=1

(1/λk) < 0 (otherwise,

such a system of points in En would not exist). In both cases, the pointsA1, . . . , Am are linearly independent and form vertices of a proper orthocen-tric (m−1)-simplex, the vertices of which can be completed by the orthocenterto an orthocentric system in some (m− 1)-dimensional space, or also by com-

pleting by further positive numbers λm−1, . . . , λn+1 (such thatn+1∑k=1

(1/λk) < 0,

if one of the numbers λ1, . . . , λm is negative) to an orthocentric n-simplex con-

tained in an orthocentric system in En. Ifm∑

i=1

(1/λi) = 0, the points A1, . . . , Am

are linearly dependent (for xi = 1/λi, we obtain∑mikxixk = 0) and by the

first part they form an orthocentric system in the corresponding (m − 2)-dimensional space. �

An immediate consequence of Theorems 4.2.1 and 4.2.8 is the following.

Theorem 4.2.9 Every (at least two-dimensional) face of a proper orthocen-tric n-simplex is again a proper orthocentric simplex. Every such face of anacute orthocentric simplex is again an acute orthocentric simplex.

It is well known that a tetrahedron is (proper) orthocentric if and only ifone of the following conditions is fulfilled.

(i) Any two of the opposite edges are perpendicular.(ii) The sums of squares of the lengths of opposite edges are mutually equal.

Let us show that a certain converse of Theorem 4.2.9 holds.

Theorem 4.2.10 If all three-dimensional faces Ai, Ai+1, Aj , Aj+1, 1 ≤ i <

j−1 ≤ n−1 of an n-simplex with vertices A1, . . . , An+1 are proper orthocentrictetrahedrons, the simplex itself is proper orthocentric.

Proof. We use induction with respect to the dimension n to prove the assertion.Since for n = 3 the assertion is trivial, let n > 3 and suppose that the assertionis true for (n− 1)-simplexes.

Suppose that A1, . . . , An+1 are vertices of Σ. By the induction hypothesis,the (n−1)-simplex with vertices A1, . . . , An is orthocentric so that |AiAj |2 =πi + πj for some real π1, . . . , πn, whenever 1 ≤ i < j ≤ n holds. By theassumption, the tetrahedron A1, A2, An, An+1 is orthocentric so that

|A1An+1|2 + |A2An|2 = |A1An|2 + |A2An+1|2 (4.21)

Page 94: Graphs and Matrices in Geometry Fiedler.pdf

84 Special simplexes

by (ii) of Theorem 4.2.9. Choose πn+1 = |A1An+1|2 − π1. We shall show that|AkAn+1|2 = πk+πn+1 for all k = 1, . . . , n. This is true for k = 1, as well as fork = 2 by (4.21). Since the tetrahedron A2, A3, An, An+1 is also orthocentric,we obtain the result for k = 3, etc., up to k = n. �

In the following corollary, we call two edges of a simplex nonneighboring, ifthey do not have a vertex in common. Also, two faces are called complemen-tary, if the sets of vertices which generate them are complementary, i.e. theyare disjoint and their union is the set of all vertices.

Corollary 4.2.11 Let Σ be an n-simplex, n ≥ 3. Then the following areequivalent:

(i) Σ is proper orthocentric.(ii) Any two nonneighboring edges of Σ are perpendicular.(iii) Every edge of Σ is orthogonal to its complementary face.

In the next theorem, a well-known property of the triangle is generalized.

Theorem 4.2.12 The centroid T , the circumcenter S, and the orthocenter Vof an orthocentric n-simplex are collinear. In fact, the point T is an interiorpoint of the segment SV and

ST

TV=

12(n− 1).

Proof. By Theorem 2.1.1, the numbers q0i are homogeneous barycentriccoordinates of the circumcenter S. We distinguish two cases.

Suppose first that the simplex is not right. Then all the numbers πi fromTheorem 4.2.1 are different from zero and the (not homogeneous) barycentriccoordinates si of the point S are by (4.14) obtained from

−2τsi =n− 1πi

−n+1∑k=1

1πk,

where

τ =n+1∑k=1

1πk.

The (inhomogeneous) barycentric coordinates vi of the circumcenter V areby Theorem 4.2.1

τvi =1πi,

and the coordinates ti of the centroid T satisfy

ti =1

n+ 1.

Page 95: Graphs and Matrices in Geometry Fiedler.pdf

4.2 Orthocentric simplexes 85

We have thus

−2τS = (n− 1)τV − τ(n+ 1)T,

or

T =2

n+ 1S +

n− 1n+ 1

V.

The theorem holds also for the right n-simplex; the proof uses therelations (4.15). �

Remark 4.2.13 The line ST (if S �≡ T ) is called the Euler line, analogouslyto the case of the triangle.

As is well known, the midpoints of the edges and the heels of the altitudesin the triangle are points of a circle, the so-called Feuerbach circle. In thesimplex, we even have a richer relationship.1

Theorem 4.2.14 Let m ∈ {1, . . . , n−1}. Then the centroids and the circum-centers of all m-dimensional faces of an orthocentric n-simplex belong to thehypersphere Km

Km ≡ (m+ 1)n+1∑i=1

πi x2i −

n+1∑i=1

πi xi

n+1∑i=1

xi = 0.

The centers of the hyperspheres Km are points of the Euler line.

Remark 4.2.15 For m = 1, the orthocenter is to be taken as the orthogonalprojection of the orthocenter on the corresponding edge.

Proof. Suppose first that the given n-simplex with vertices Ai is not right. Itis immediate that whenever M is a subset of N = {1, . . . , n + 1} with m+ 1elements, then the centroid of the m-simplex with vertices Ai having indicesin M has homogeneous coordinates ti = 1 for i ∈M and ti = 0 for i �∈M . Itsorthocenter has coordinates vi = 1/πi for i ∈ M and vi = 0 for i �∈ M . Theverification that all these points satisfy the equation of Km is then easy. Thecase of the right simplex is proved analogously. �

Another notion we shall need is the generalization of a conic in P2. A rationalnormal curve of degree n in the projective space Pn is the set of points withprojective coordinates (x1, x2, . . . , xn+1) which satisfy the system of equations

xi =ia0t

n1 +

ia1t

n−11 t2 +

ia2t

n−21 t22 + · · · + i

antn2 ,

where (t1, t2) is a homogeneous pair of parameters and [iak] a nonsingular fixed

matrix (cf. Appendix, Section 5).

1 See [2].

Page 96: Graphs and Matrices in Geometry Fiedler.pdf

86 Special simplexes

By a suitable choice of the coordinate system, every real rational normalcurve (of degree n) in Pn passing through the basic coordinate pointsO1, O2, . . . , On+1 and another point Y = (y1, y2, . . . , yn+1) can be writtenin the form2

x1 =y1

t− t1, x2 =

y2t− t2

, . . . , xn+1 =yn+1

t− tn+1. (4.22)

We can now generalize another well-known property of the triangle. Let uscall an equilateral n-hyperbola in En such a rational normal curve in En, then asymptotic directions of which are perpendicular.

In addition, we call two such equilateral n-hyperbolas independent if bothn-tuples of their asymptotic directions are independent in the sense ofTheorem A.5.13.

Theorem 4.2.16 Suppose that a rational normal curve in En contains n+ 2points of an orthocentric system. Then this curve is an equilateral n-hyperbola.

Theorem 4.2.17 Suppose that two independent equilateral n-hyperbolas inEn have n + 2 distinct points in common. Then these n + 2 points form anorthocentric system in En.

Proof. We shall prove both theorems together. We start with the proof of thesecond theorem. Suppose there are two independent equiangular n-hyperbolascontaining n + 2 distinct points in En. Let O1, O2, . . . , On+1 be some n + 1of them; they are necessarily linearly independent since otherwise the firstn-hyperbola would have with (any) hyperplane H containing these pointsat least n + 1 points in common and would completely belong to H. Letthe remaining point have barycentric coordinates Y = (y1, y2, . . . , yn+1) withrespect to the simplex with basic vertices in O1, . . . , On+1. By the same rea-soning as above, yi �= 0, i = 1, . . . , n+ 1. Denote again by mij the squares ofthe distances between the points Oi and Oj .

Both real rational normal curves containing the points O1, . . . , On+1,

Y , have equations of the form (4.22); the second, say, with numberst′1, t

′2, . . . , t

′n+1.

By assumption they are independent, thus having both n-tuples of asymp-totic directions independent and, by Theorem A.5.13 in the Appendix, thereis a unique nonsingular quadric (in the improper hyperplane) with respect towhich both n-tuples of directions are autopolar.

One such quadric is the imaginary intersection of the circumscribedhypersphere

∑mikxixk = 0 with the improper hyperplane

∑xi = 0.

2 Its usual parametric form is obtained by multiplication of the right-hand sides by theproduct (t − t1) . . . (t − tn+1).

Page 97: Graphs and Matrices in Geometry Fiedler.pdf

4.2 Orthocentric simplexes 87

We now show that the intersection of the improper hyperplane with the

quadric a ≡n+1∑i,j=1

aij xi xj = 0, aij = 0 for i = j, aij =1yi

+1yj

for i �= j,

has this property. The first n-hyperbola has the form (4.22); denote by Zr,r = 1, . . . , n, its improper points, i.e. the points Zr = (

rz1,

rz2, . . . ,

rzn+1), where

rzi =

yi

τr − ti, τ1, . . . , τn being the (necessarily distinct) roots of the equation

n+1∑i=1

yi

τ − ti= 0. (4.23)

However, for r �= s

n+1∑i,j=1

aijrzi

szj =

n+1∑i�=j,i,j=1

(1yi

+1yj

)yi yj

(τr − ti)(τs − tj)

=n+1∑i,j=1

yi + yj

(τr − ti)(τs − tj)− 2

n+1∑i=1

yi

(τr − ti)(τs − ti)

=n+1∑i=1

yi

τr − ti

n+1∑j=1

1τs − tj

+n+1∑j=1

yj

τs − tj

n+1∑i=1

1τr − ti

− 2τs − τr

[n+1∑i=1

yi

τr − ti−

n+1∑i=1

yi

τs − ti

]= 0

in view of (4.23); the asymptotic directions Zr and Zs are thus conjugatepoints with respect to the quadric a.

The same is also true for the second hyperbola. This implies that theimproper part of the quadric a coincides with the previous. Thus a is a hyper-sphere, and, since it contains the points O1, . . . , On+1, it is the circumscribedhypersphere. Consequently, for some ρ �= 0 and i �= j

mij = ρ

(1yi

+1yj

), (4.24)

i.e.

mij = πi + πj , i �= j.

The simplex O1, . . . , On+1 is thus orthocentric and the point Y (1/πi) is itsorthocenter. Since this orthocentric simplex is not right, the given system ofn+ 2 points is indeed orthocentric.

To prove Theorem 4.2.16, suppose that O1, . . . , On+1, Y is an orthocentricsystem in En. Choose the first n+1 of them as basic coordinate points of a sim-plex and let Y = (yi) be the last point, so that (4.24) holds. We already sawthat every real rational normal curve of degree n, which contains the points

Page 98: Graphs and Matrices in Geometry Fiedler.pdf

88 Special simplexes

O1, O2, . . . , On+1, Y , has by (4.22) the property that its improper points are

conjugate with respect to the quadricn+1∑i,j=1

mij xi xj = 0, i.e. with respect to

the circumscribed hypersphere. This means that every such curve is an equi-lateral n-hyperbola. �

We introduce now the notion of an equilateral quadric and find itsrelationship to orthocentric simplexes and systems. We start with a definition.

Definition 4.2.18 We call a point quadric α with equation

n+1∑i,k=1

αikxixk = 0

in projective coordinates and a dual quadric b with equation

n+1∑i,k=1

bik ξi ξk = 0

in dual coordinates of the same space apolar, ifn+1∑i,k=1

αik bik = 0.

Remark 4.2.19 It is well known that apolarity is a geometric notion that does not depend on the coordinate system. One such geometric characterization of apolarity, in terms of the quadrics $\alpha$ and $b$, is:

There exists a simplex which is autopolar with respect to $b$ and all of whose vertices are points of $\alpha$.

One direction is easy: in such a case all the $b_{ik}$ with $i \neq k$ are equal to zero, whereas all the $\alpha_{ii}$ are equal to zero. The converse is more complicated and we shall not prove it.
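The apolarity condition is just the vanishing of the entrywise product of the two symmetric coefficient matrices, i.e. of trace$(\,[\alpha_{ik}][b_{ik}]\,)$. A minimal numerical sketch of the easy direction of Remark 4.2.19 (assuming numpy; the random matrices are purely illustrative):

import numpy as np

# If the coordinate simplex is autopolar w.r.t. b (B diagonal) and its vertices
# lie on alpha (Alpha has zero diagonal), then alpha and b are apolar.
n = 3
rng = np.random.default_rng(0)
B = np.diag(rng.uniform(1, 2, n + 1))            # dual quadric, diagonal in these coordinates
Alpha = rng.uniform(-1, 1, (n + 1, n + 1))
Alpha = Alpha + Alpha.T                          # symmetric point quadric
np.fill_diagonal(Alpha, 0.0)                     # vertices of the simplex lie on alpha

print(np.trace(Alpha @ B))                       # 0: the two quadrics are apolar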

Definition 4.2.20 We call a quadric in a Euclidean space equilateral if it is apolar to the isotropic improper dual quadric.

Remark 4.2.21 Observe that if the dimension of the space is two, a nonsingular quadric (in this case a conic) is equilateral if and only if it is an equilateral hyperbola.

Remark 4.2.22 In the barycentric coordinates with respect to a usual $n$-simplex $\Sigma$, the condition that the quadric $\sum_{i,k=1}^{n+1}\alpha_{ik}x_ix_k = 0$ is equilateral is given by

$$\sum_{i,k=1}^{n+1} q_{ik}\,\alpha_{ik} = 0.$$


Theorem 4.2.23 Suppose that an $n$-simplex $\Sigma$ is orthocentric but not a right one. Then every equilateral quadric containing all vertices of $\Sigma$ contains the orthocenter as well.

Conversely, every quadric containing all vertices, as well as the orthocenter of $\Sigma$, is equilateral.

Proof. Let $\alpha \equiv \sum_{i,k=1}^{n+1}\alpha_{ik}x_ix_k = 0$ be equilateral, containing all the vertices of $\Sigma$. Then

$$\alpha_{ii} = 0, \quad i = 1,\dots,n+1, \qquad (4.25)$$

$$\sum_{i,k=1}^{n+1}\alpha_{ik}q_{ik} = 0. \qquad (4.26)$$

By the formulae (4.14), (4.26) can be written as

$$\sum_{i,k=1}^{n+1}\alpha_{ik}\,\frac{1}{\pi_i}\,\frac{1}{\pi_k} = 0, \qquad (4.27)$$

since by (4.25) the numbers $q_{ii}$ are irrelevant. This means, however, that $\alpha$ contains the orthocenter.

If, conversely, the quadric $\alpha$ contains all vertices as well as the orthocenter, then both (4.25) and (4.27) hold, thus by (4.14) also (4.26). Consequently, $\alpha$ is equilateral. □

In the sequel, we shall use the following theorem:

Theorem 4.2.24 A real quadric in a Euclidean $n$-dimensional space is equilateral if and only if it contains $n$ mutually orthogonal asymptotic directions.

Proof. Suppose first that a real point quadric $\alpha$ contains $n$ orthogonal asymptotic directions. Choose these directions as directions of the axes of some cartesian coordinate system. Then the equation of $\alpha$ in homogeneous cartesian coordinates (with the improper hyperplane $x_{n+1} = 0$) is

$$\sum_{i,k=1}^{n+1}\alpha_{ik}x_ix_k = 0, \quad \text{where } \alpha_{ii} = 0 \text{ for } i = 1,\dots,n.$$

The isotropic quadric has then the (dual) equation $\xi_1^2 + \cdots + \xi_n^2 = 0 \equiv \sum_{i,k=1}^{n+1} b_{ik}\xi_i\xi_k = 0$. Since $\sum_{i,k=1}^{n+1}\alpha_{ik}b_{ik}\ (= \sum_{i=1}^{n}\alpha_{ii}) = 0$, $\alpha$ is equilateral.

We shall prove the second part by induction with respect to the dimension $n$ of the space. If $n = 2$, the assertion is correct since the quadric is then an equilateral hyperbola. Thus, let $n > 2$ and suppose the assertion is true for equilateral quadrics of dimension $n-1$.


We first show that the given quadric $\alpha$ contains at least one asymptotic direction.

In a cartesian system of coordinates in $E_n$, the equation of the dual isotropic quadric is $\sum_{i=1}^{n}\xi_i^2 = 0$, so that for the equilateral quadric $\alpha \equiv \sum_{i,k=1}^{n+1}\alpha_{ik}x_ix_k = 0$ we have $\sum_{i=1}^{n}\alpha_{ii} = 0$. If all the numbers $\alpha_{ii}$ are equal to zero, our claim is true (e.g. for the direction $(1,0,\dots,0)$). Otherwise, there exist two numbers, say $\alpha_{jj}$ and $\alpha_{kk}$, with different signs. The direction $(c_1,\dots,c_{n+1})$, for which $c_j$, $c_k$ are the (necessarily real) roots of the equation $\alpha_{jj}c_j^2 + 2\alpha_{jk}c_jc_k + \alpha_{kk}c_k^2 = 0$, whereas the remaining $c_i$ are equal to zero, is then a real asymptotic direction of the quadric $\alpha$.

Thus let $s$ be some real asymptotic direction of $\alpha$. Choose a cartesian system of coordinates in $E_n$ such that the coordinates of the direction $s$ are $(1,0,\dots,0)$. If the equation of $\alpha$ in the new system is $\sum_{i,k=1}^{n+1}\alpha_{ik}x_ix_k = 0$, then $\alpha_{11} = 0$. The dual equation of the isotropic quadric is $\xi_1^2 + \cdots + \xi_n^2 = 0$, so that $\sum_{i=1}^{n}\alpha_{ii} = 0$, and thus also $\sum_{i=2}^{n}\alpha_{ii} = 0$. This implies that in the hyperplane $E_{n-1}$ with equation $x_1 = 0$, which is perpendicular to the direction $s$, the intersection quadric $\bar\alpha$ of the quadric $\alpha$ with $E_{n-1}$ has the equation $\sum_{i,k=2}^{n+1}\alpha_{ik}x_ix_k = 0$.

Since the dual equation of the isotropic quadric in $E_{n-1}$ is $\xi_2^2 + \cdots + \xi_n^2 = 0$, the quadric $\bar\alpha$ is again equilateral since $\sum_{i=2}^{n}\alpha_{ii} = 0$. By the induction hypothesis, there exist in $\bar\alpha$ $n-1$ asymptotic directions which are mutually orthogonal. These form, together with $s$, $n$ mutually orthogonal asymptotic directions of the quadric $\alpha$. □

We now present a general definition.

Definition 4.2.25 A point algebraic manifold $\nu$ is called 2-apolar to a dual algebraic manifold $V$ if the following holds: whenever $\alpha$ is a point quadric containing $\nu$ and $b$ is a dual quadric containing $V$, then $\alpha$ is apolar to $b$.

In this sense, the following theorem was presented in [13].

Theorem 4.2.26 The rational normal curve $S_n$ of degree $n$ with parametric equations $x_i = t_1^i t_2^{n-i}$, $i = 0,\dots,n$, in the projective $n$-dimensional space is 2-apolar to the dual quadric $b \equiv \sum b_{ik}\xi_i\xi_k = 0$ if and only if the matrix $[b_{ik}]$ is a nonzero Hankel matrix, i.e. if $b_{ik} = c_{i+k}$, $i, k = 0,1,\dots,n$, for some numbers $c_0, c_1,\dots,c_{2n}$, not all equal to zero.

Proof. Suppose first that $S_n$ is 2-apolar to $b$. Let $i_1, k_1, i_2, k_2$ be some indices in $\{0,1,\dots,n\}$ such that $i_1 + k_1 = i_2 + k_2$, $i_1 \neq i_2$, $i_1 \neq k_2$. Since $S_n$ is contained in the quadric $x_{i_1}x_{k_1} - x_{i_2}x_{k_2} = 0$, this quadric is apolar to $b$. Therefore, $b_{i_1k_1} = b_{i_2k_2}$, so that $[b_{ik}]$ is Hankel.

Conversely, let $[b_{ik}]$ be a Hankel matrix, i.e. $b_{ik} = c_{i+k}$, $i, k = 0,1,\dots,n$, and let a point quadric $\alpha \equiv \sum_{i,k=0}^{n}\alpha_{ik}x_ix_k = 0$ contain $S_n$. This means that

$$\sum_{i,k=0}^{n}\alpha_{ik}\,t_1^{i+k}t_2^{2n-i-k} \equiv 0.$$

Consequently

$$\sum_{\substack{i,k=0\\ i+k=r}}^{n}\alpha_{ik} = 0, \qquad r = 0,\dots,2n,$$

so that $\sum_{i,k=0}^{n}\alpha_{ik}b_{ik} = \sum_{r=0}^{2n}\sum_{i+k=r}\alpha_{ik}b_{ik} = \sum_{r=0}^{2n}c_r\sum_{i+k=r}\alpha_{ik} = 0$. It follows that $\alpha$ is apolar to $b$, and thus $S_n$ is 2-apolar to $b$, as asserted. □

We shall use the following classical theorem.
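A hedged numerical illustration of Theorem 4.2.26 (assuming numpy; the particular indices and random coefficients below are only an example): a quadric $x_{i_1}x_{k_1} - x_{i_2}x_{k_2} = 0$ with $i_1 + k_1 = i_2 + k_2$ contains $S_n$ and is apolar to every dual quadric whose coefficient matrix is Hankel.

import numpy as np

n = 4
rng = np.random.default_rng(1)
c = rng.uniform(-1, 1, 2 * n + 1)                 # arbitrary numbers c_0, ..., c_2n
B = np.array([[c[i + k] for k in range(n + 1)] for i in range(n + 1)])   # Hankel matrix

Alpha = np.zeros((n + 1, n + 1))                  # quadric x_0 x_4 - x_1 x_3 = 0, contains S_n
i1, k1, i2, k2 = 0, 4, 1, 3                       # 0 + 4 = 1 + 3
Alpha[i1, k1] = Alpha[k1, i1] = 0.5
Alpha[i2, k2] = Alpha[k2, i2] = -0.5

t = rng.uniform(-2, 2, 7)                         # sample parameter values on S_n (chart t_2 = 1)
print(np.allclose(t**i1 * t**k1 - t**i2 * t**k2, 0))   # True: the curve lies on the quadric
print(np.isclose(np.sum(Alpha * B), 0))                 # True: apolarity holds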

Theorem 4.2.27 A real positive semidefinite Hankel matrix of rank $r$ can be expressed as a sum of $r$ positive semidefinite Hankel matrices of rank one.

We are now able to prove:

Theorem 4.2.28 A rational normal curve of degree $n$ in a Euclidean $n$-space $E_n$ is 2-apolar to the dual isotropic quadric of $E_n$ if and only if it is an equilateral $n$-hyperbola.

Proof. An equilateral $n$-hyperbola $H$ has $n$ mutually perpendicular asymptotic directions. Thus every quadric that contains $H$ contains this $n$-tuple of directions as well. By Theorem 4.2.24, such a quadric is an equilateral quadric, thus, by Definition 4.2.20, apolar to the isotropic quadric. Therefore, every equilateral $n$-hyperbola is 2-apolar to the isotropic quadric.

Conversely, suppose that $S_n$ is a rational normal curve in a Euclidean $n$-space $E_n$ which is 2-apolar to the isotropic quadric of $E_n$. There exists a coordinate system in $E_n$ in which $S_n$ has parametric equations $x_i = t_1^it_2^{n-i}$, $i = 0,\dots,n$. The isotropic quadric $b$ has then an equation whose coefficients form, by Theorem 4.2.26, an $(n+1)\times(n+1)$ Hankel matrix $B$. This matrix is positive semidefinite of rank $n$. By Theorem 4.2.27, $B$ is a sum of $n$ positive semidefinite Hankel matrices of rank one, $B = \sum_{j=1}^{n}B_j$. Every positive semidefinite Hankel matrix of rank one has the form $[p_{ik}]$, $p_{ik} = y^{i+k}z^{2n-i-k}$, where $(y, z)$ is a real nonzero pair. Hence $b$ has equation

$$\sum_{j=1}^{n}\bigl(z_j^n\xi_0 + y_jz_j^{n-1}\xi_1 + y_j^2z_j^{n-2}\xi_2 + \cdots + y_j^n\xi_n\bigr)^2 = 0,$$

which implies that the $n$ points $(z_j^n, y_jz_j^{n-1},\dots,y_j^n)$ of $S_n$ form an autopolar $(n-1)$-simplex of the quadric $b$. These $n$ points are improper points of $E_n$ (since $b$ is isotropic), and, being asymptotic directions of $S_n$, are thus mutually perpendicular. □

In the conclusion, we investigate finite sets of points which are 2-apolar to the isotropic quadric.

Definition 4.2.29 A generalized orthocentric system in a Euclidean $n$-space $E_n$ is a system of any $m \le 2n$ mutually distinct points in $E_n$ which (as a point manifold) is 2-apolar to the dual isotropic quadric of $E_n$.

Theorem 4.2.30 A system of $m$ points ${}^r a \equiv ({}^r a_1,\dots,{}^r a_{n+1})$, $r = 1,\dots,m$, is 2-apolar to the dual quadric $b$ (in a projective $n$-space) if and only if $b$ has the form

$$b \equiv \sum_{r=1}^{m}\lambda_r\Bigl(\sum_{i=1}^{n+1}{}^r a_i\,\xi_i\Bigr)^2 = 0 \qquad (4.28)$$

for some $\lambda_1,\dots,\lambda_m$.

Proof. Suppose $b$ has the form (4.28), so that $b_{ik} = \sum_{r=1}^{m}\lambda_r\,{}^r a_i\,{}^r a_k$, $i, k = 1,\dots,n+1$. If $\alpha \equiv \sum_{i,k=1}^{n+1}\alpha_{ik}x_ix_k = 0$ is a quadric containing all the points ${}^r a$, then $\sum_{i,k}\alpha_{ik}\,{}^r a_i\,{}^r a_k = 0$ for $r = 1,\dots,m$. We have thus also $\sum_{i,k=1}^{n+1}\alpha_{ik}b_{ik} = 0$, i.e. $\alpha$ is apolar to $b$.

Conversely, suppose that whenever $\alpha$ is a quadric containing all the points ${}^r a$, then $\alpha$ is apolar to $b \equiv \sum b_{ik}\xi_i\xi_k$. This means that whenever

$$\sum_{i,k=1}^{n+1}\alpha_{ik}\,{}^r a_i\,{}^r a_k = 0, \quad \alpha_{ik} = \alpha_{ki}, \quad r = 1,\dots,m,$$

holds, then also

$$\sum_{i,k=1}^{n+1}\alpha_{ik}b_{ik} = 0$$

holds. Since the conditions are linear, we obtain that

$$b_{ik} = \sum_{r=1}^{m}\lambda_r\,{}^r a_i\,{}^r a_k, \qquad i, k = 1,\dots,n+1.$$

This implies (4.28). □

Theorem 4.2.31 An orthocentric system with $n+2$ points in a Euclidean space $E_n$ is at the same time a generalized orthocentric system in $E_n$.


Proof. Choose $n+1$ of these points as vertices of an $n$-simplex $\Sigma$, so that the remaining point is the orthocenter of this simplex. In our usual notation, the orthocenter has barycentric coordinates $(1/\pi_i)$, and by equations (4.15) we have identically

$$\rho\sum_{i,k=1}^{n+1}q_{ik}\,\xi_i\xi_k \equiv \sum_{r=1}^{n+1}\Bigl(\sum_{k=1}^{n+1}\frac{1}{\pi_k}\Bigr)\frac{\xi_r^2}{\pi_r} - \Bigl(\sum_{i=1}^{n+1}\frac{\xi_i}{\pi_i}\Bigr)^2.$$

By Theorem 4.2.30, the given points form a system of $n+2\ (\le 2n)$ points 2-apolar to the isotropic quadric. □

In particular, generalized orthocentric systems in $E_n$ which consist of $2n$ points are interesting. They can be obtained (however, not all of them) in the way presented in the following theorem:

Theorem 4.2.32 Let $S$ be an equilateral $n$-hyperbola and let $Q$ be any equilateral quadric which does not contain $S$, but intersects $S$ in $2n$ distinct real points. In such a case, these $2n$ points form a generalized orthocentric system.

Proof. Suppose that $R$ is an arbitrary quadric in $E_n$ which contains those $2n$ intersection points. If $R \equiv Q$, then $R$ is equilateral. If $R \not\equiv Q$, choose on $S$ an arbitrary point $p$ different from all the intersection points. Since $p \notin Q$, there exists in the pencil of quadrics $\alpha Q + \beta R$ a quadric $P$ containing the point $p$. The quadric $P$ has with the curve $S$ at least $2n+1$ points in common. By a well-known result from basic algebraic geometry (since $S$ is irreducible of degree $n$ and $P$ is of degree $2$), $P$ necessarily contains the whole curve $S$ and is thus equilateral.

The number $\beta \neq 0$ since $p \notin Q$. Consequently, since apolarity to the isotropic quadric is a linear condition on the coefficients and both $P$ and $Q$ are equilateral, the quadric $R$ is also equilateral. Thus the given system is generalized orthocentric as asserted. □

Remark 4.2.33 The quadric consisting of two mutually distinct hyperplanes is clearly equilateral if and only if these hyperplanes are orthogonal. We can thus choose as $Q$ in Theorem 4.2.32 a pair of orthogonal hyperplanes.

Theorem 4.2.34 If $2n$ points form a generalized orthocentric system in $E_n$, then whenever we split these points into two subsystems of $n$ points each, the two hyperplanes containing the subsystems are perpendicular.

Example 4.2.35 Probably the simplest example of $2n$ points forming a generalized orthocentric system is the following: let $a_1,\dots,a_n, b_1,\dots,b_n$ be real numbers, all different from zero, $a_i \neq b_i$, $i = 1,\dots,n$, and such that

$$\sum_{i=1}^{n}\frac{1}{a_ib_i} = 0.$$

Then the $2n$ points in an $n$-dimensional Euclidean space $E_n$ with a cartesian coordinate system, $A_1 = (a_1,0,\dots,0)$, $A_2 = (0,a_2,0,\dots,0)$, $\dots$, $A_n = (0,\dots,0,a_n)$, $B_1 = (b_1,0,\dots,0)$, $B_2 = (0,b_2,0,\dots,0)$, $\dots$, $B_n = (0,\dots,0,b_n)$, form a generalized orthocentric system.

Indeed, choosing for $i = 1,\dots,n$, $\alpha_i = \dfrac{1}{a_i(a_i-b_i)}$, $\beta_i = \dfrac{1}{b_i(b_i-a_i)}$, then for the points ${}^i a = (0,\dots,0,a_i,0,\dots,0,1)$ and ${}^i b = (0,\dots,0,b_i,0,\dots,0,1)$ in the projective completion of $E_n$, (4.28) reads

$$\sum_{i=1}^{n}\alpha_i(a_i\xi_i + \xi_{n+1})^2 + \sum_{i=1}^{n}\beta_i(b_i\xi_i + \xi_{n+1})^2 = \sum_{i=1}^{n}\xi_i^2,$$

which is the equation of the isotropic quadric.
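The identity just used can be checked numerically. A small sketch (assuming numpy; the numbers $a_i$, $b_i$ are illustrative, with $b_3$ chosen so that $\sum 1/(a_ib_i) = 0$):

import numpy as np

n = 3
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 1.5, 0.0])
b[2] = -1.0 / (a[2] * (1.0 / (a[0] * b[0]) + 1.0 / (a[1] * b[1])))   # forces sum 1/(a_i b_i) = 0
print(np.isclose(np.sum(1.0 / (a * b)), 0))                           # True

alpha = 1.0 / (a * (a - b))
beta = 1.0 / (b * (b - a))

B = np.zeros((n + 1, n + 1))                      # coefficient matrix of the dual quadric (4.28)
for i in range(n):
    for coef, c in ((alpha[i], a[i]), (beta[i], b[i])):
        v = np.zeros(n + 1)
        v[i], v[n] = c, 1.0
        B += coef * np.outer(v, v)

print(np.allclose(B, np.diag([1.0] * n + [0.0])))  # True: exactly the isotropic quadric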

4.3 Cyclic simplexes

In this section, the so-called normal polygons in Euclidean space play the crucial role.

Definition 4.3.1 Let $\{A_1, A_2,\dots,A_{n+1}\}$ be a cyclically ordered set of $n+1$ linearly independent points in a Euclidean $n$-space $E_n$. We call the set of these points, together with the set of $n+1$ segments $A_1A_2, A_2A_3, A_3A_4,\dots,A_{n+1}A_1$, a normal polygon in $E_n$; we denote it as $V = [A_1, A_2,\dots,A_{n+1}]$.

The points $A_i$ are the vertices, and the segments $A_iA_{i+1}$ are the edges of the polygon $V$.

It is clear that we can assign to every normal $(n+1)$-gon $[A_1, A_2,\dots,A_{n+1}]$ a cyclically ordered set of $n+1$ vectors $\{v_1, v_2,\dots,v_{n+1}\}$ such that $v_i = \overrightarrow{A_iA_{i+1}}$, $i = 1,2,\dots,n+1$ (and $A_{n+2} = A_1$). Then $\sum_{i=1}^{n+1}v_i = 0$, and any $m$ ($m < n+1$) of the vectors $v_1, v_2,\dots,v_{n+1}$ are linearly independent. If conversely $\{v_1, v_2,\dots,v_{n+1}\}$ form a cyclically ordered set of $n+1$ vectors in $E_n$ for which $\sum_{i=1}^{n+1}v_i = 0$ holds, and if any $m < n+1$ of these vectors are linearly independent, then there exists in $E_n$ a normal polygon $[A_1, A_2,\dots,A_{n+1}]$ such that $v_i = \overrightarrow{A_iA_{i+1}}$. It is also evident that, choosing one of the vertices of a normal polygon as the first (say, $A_1$), the Gram matrix $M = [\langle v_i, v_j\rangle]$ of these vectors has the characteristic property that it is positive semidefinite of order $n+1$ and rank $n$, satisfying $Me = 0$, where $e$ is the vector of all ones.

Observe that, conversely, such a matrix determines a normal polygon in $E_n$, even uniquely up to its position in the space and the choice of the first vertex.
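A short numerical sketch of the Gram-matrix characterization just described (assuming numpy; the random vertices are illustrative):

import numpy as np

n = 4
rng = np.random.default_rng(2)
A = rng.normal(size=(n + 1, n))          # n+1 random vertices in E_n (a.s. independent)
V = np.roll(A, -1, axis=0) - A           # edge vectors v_i = A_{i+1} - A_i (cyclically)
M = V @ V.T                              # Gram matrix of the edge vectors

e = np.ones(n + 1)
print(np.allclose(M @ e, 0))                      # True: the edges sum to zero
print(np.linalg.matrix_rank(M))                   # n
print(np.all(np.linalg.eigvalsh(M) > -1e-10))     # True: positive semidefinite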

To simplify formulations, we call the following vertex the second vertex, etc., and use again the symbol $V = [A_1, A_2,\dots,A_{n+1}]$. All the definitions and theorems can, of course, be formulated independently of this choice.


Definition 4.3.2 Suppose that $V_1 = [A_1, A_2,\dots,A_{n+1}]$ and $V_2 = [B_1, B_2,\dots,B_{n+1}]$ are two normal polygons in $E_n$. We call the polygon $V_2$ left (respectively, right) conjugate to the polygon $V_1$ if, for $k = 1,2,\dots,n+1$ (and $A_{n+2} = A_1$), the line $A_kA_{k+1}$ is perpendicular to the hyperplane $\beta_k$ (respectively, $\beta_{k+1}$), where $\beta_i$ is the hyperplane determined by the points $B_1,\dots,B_{i-1}, B_{i+1},\dots,B_{n+1}$.

Theorem 4.3.3 Let $V_1$ and $V_2$ be normal polygons in $E_n$. If $V_2$ is left (respectively, right) conjugate to $V_1$, then $V_1$ is right (respectively, left) conjugate to $V_2$.

Proof. Suppose that $V_2 = [B_1, B_2,\dots,B_{n+1}]$ is left conjugate to $V_1 = [A_1, A_2,\dots,A_{n+1}]$, so that the line $A_kA_{k+1}$ is perpendicular to the lines $B_{k+1}B_{k+2}, B_{k+2}B_{k+3},\dots,B_{n+1}B_1, B_1B_2,\dots,B_{k-2}B_{k-1}$. It follows that for $k = 1,2,\dots,n+1$ (with $B_{n+2} = B_1$) the line $B_kB_{k+1}$ is perpendicular to the lines $A_kA_{k-1}, A_{k-1}A_{k-2},\dots,A_1A_{n+1}, A_{n+1}A_n,\dots,A_{k+3}A_{k+2}$, thus also to the hyperplane $\alpha_{k+1}$ determined by the points $A_1,\dots,A_k, A_{k+2},\dots,A_{n+1}$. Therefore, $V_1$ is right conjugate to $V_2$. The second case can be proved analogously. □

Theorem 4.3.4 To every normal polygon $V_1$ in $E_n$ there exists in $E_n$ a normal polygon $V_2$ (respectively, $V_3$) which is left (respectively, right) conjugate to it. If $V$ and $V'$ are two normal polygons which are both (left or right) conjugate to $V_1$, then $V$ and $V'$ are homothetic, i.e. the corresponding edges are parallel and their lengths proportional. The vectors of the edges are either all oriented the same way, or all the opposite way.

Proof. Suppose $V_1 = [A_1, A_2,\dots,A_{n+1}]$. Denote by $\omega_i$ ($i = 1,2,\dots,n+1$) the hyperplane in $E_n$ which contains the point $A_i$ and is perpendicular to the line $A_iA_{i+1}$ (again $A_{n+2} = A_1$). Observe that these hyperplanes $\omega_1, \omega_2,\dots,\omega_{n+1}$ do not have a point in common: if $P$ were such a point, then we would have $PA_i < PA_{i+1}$ for $i = 1,2,\dots,n+1$, since $P \in \omega_i$ and $A_i$ is the foot of the perpendicular from $A_{i+1}$ on $\omega_i$; taken cyclically this yields the contradiction $PA_1 < PA_1$. (Also, the hyperplanes $\omega_1, \omega_2,\dots,\omega_{n+1}$ do not have a common direction since otherwise the points $A_1, A_2,\dots,A_{n+1}$ would lie in a hyperplane.) It follows that the points

$$B_i = \bigcap_{\substack{k=1\\ k\neq i}}^{n+1}\omega_k, \qquad i = 1,2,\dots,n+1,$$

are linearly independent. It is clear that the normal polygon $V_2 = [B_1, B_2,\dots,B_{n+1}]$ (respectively, the normal polygon $V_3 = [B_{n+1}, B_1,\dots,B_n]$) is left (respectively, right) conjugate to $V_1$.

If now $V = [B_1, B_2,\dots,B_{n+1}]$ and $V' = [B'_1, B'_2,\dots,B'_{n+1}]$ are two normal polygons, both left conjugate to $V_1$, then we have, by Definition 4.3.2, that the vectors $v_i = \overrightarrow{B_iB_{i+1}}$ and $v'_i = \overrightarrow{B'_iB'_{i+1}}$ of corresponding edges are perpendicular to the same hyperplane and thus parallel, and, in addition, they sum to the zero vector. This implies that for some nonzero constants $c_1, c_2,\dots,c_{n+1}$, $v'_i = c_iv_i$. Thus $\sum_{i=1}^{n+1}c_iv_i = 0$; since $\sum_{i=1}^{n+1}v_i = 0$ is the only linear relation among $v_1,\dots,v_{n+1}$, we finally obtain $c_i = C$ for $i = 1,2,\dots,n+1$. □

Theorem 4.3.5 Suppose $\Sigma$ is an $n$-simplex and $v_1, v_2,\dots,v_{n+1}$ is a system of $n+1$ nonzero vectors, each perpendicular to one $(n-1)$-dimensional face of $\Sigma$. Then there exist positive numbers $\alpha_1,\dots,\alpha_{n+1}$ such that

$$\sum_{i=1}^{n+1}\alpha_iv_i = 0$$

if and only if either all the vectors $v_i$ are vectors of outer normals of $\Sigma$, or all the vectors $v_i$ are vectors of interior normals of $\Sigma$.

Proof. This follows from Theorem 1.3.1 and the fact that the system of vectors $v_1, v_2,\dots,v_{n+1}$ contains $n$ linearly independent vectors. □

Theorem 4.3.6 Suppose $V_1 = [A_1, A_2,\dots,A_{n+1}]$ is a normal polygon in $E_n$ and $V_2 = [B_1, B_2,\dots,B_{n+1}]$ a normal polygon left (respectively, right) conjugate to $V_1$. Then the vectors $v_i = \overrightarrow{B_iB_{i+1}}$ are all vectors of either outer or inner normals to the $(n-1)$-dimensional faces of the simplex determined by the vertices of the polygon $V_1$.

Proof. This follows from Theorem 4.3.5 since $\sum_{i=1}^{n+1}v_i = 0$. □

Theorem 4.3.7 Suppose $V = [A_1, A_2,\dots,A_{n+1}]$ is a normal polygon in $E_n$. Assign to every point $X$ in $E_n$ the sum of squares $\sum_{i=1}^{n+1}\overline{X_iA_i}^{\,2}$, where $X_i$ is the foot of the perpendicular from the point $X$ on the line $A_iA_{i+1}$ ($\overline{X_iA_i}$ meaning the distance between $X_i$ and $A_i$). Then this sum is minimal if $X$ is the center of the hypersphere containing all the points $A_1, A_2,\dots,A_{n+1}$.

Proof. Consider the triangle $A_iA_{i+1}X$. Observe that $\overline{X_iA_i}^{\,2} - \overline{X_iA_{i+1}}^{\,2} = \overline{XA_i}^{\,2} - \overline{XA_{i+1}}^{\,2}$. On the other hand, we can see easily from (2.1) that

$$4\,\overline{A_iA_{i+1}}^{\,2}\,\overline{X_iA_i}^{\,2} = \bigl(\overline{A_iX_i}^{\,2} - \overline{A_{i+1}X_i}^{\,2}\bigr)^2 + 2\bigl(\overline{A_iX_i}^{\,2} - \overline{A_{i+1}X_i}^{\,2}\bigr)\overline{A_iA_{i+1}}^{\,2} + \overline{A_iA_{i+1}}^{\,4},$$

which implies

$$\overline{X_iA_i}^{\,2} = \tfrac{1}{4}\overline{A_iA_{i+1}}^{\,2} + \tfrac{1}{2}\bigl(\overline{A_iX_i}^{\,2} - \overline{A_{i+1}X_i}^{\,2}\bigr) + \frac{\bigl(\overline{A_iX_i}^{\,2} - \overline{A_{i+1}X_i}^{\,2}\bigr)^2}{4\,\overline{A_iA_{i+1}}^{\,2}}.$$

Thus also

$$\overline{X_iA_i}^{\,2} = \tfrac{1}{4}\overline{A_iA_{i+1}}^{\,2} + \tfrac{1}{2}\bigl(\overline{A_iX}^{\,2} - \overline{A_{i+1}X}^{\,2}\bigr) + \frac{\bigl(\overline{A_iX}^{\,2} - \overline{A_{i+1}X}^{\,2}\bigr)^2}{4\,\overline{A_iA_{i+1}}^{\,2}}.$$

Summing these relations for $i = 1,2,\dots,n+1$ (the middle terms cancel in the cyclic sum), we arrive at

$$\sum_{i=1}^{n+1}\overline{X_iA_i}^{\,2} = \frac{1}{4}\sum_{i=1}^{n+1}\overline{A_iA_{i+1}}^{\,2} + \sum_{i=1}^{n+1}\frac{\bigl(\overline{A_iX}^{\,2} - \overline{A_{i+1}X}^{\,2}\bigr)^2}{4\,\overline{A_iA_{i+1}}^{\,2}}.$$

The right-hand side is minimal exactly when $\overline{A_1X} = \overline{A_2X} = \cdots = \overline{A_{n+1}X}$, i.e. when $X$ is the circumcenter, which proves the theorem. □

Theorem 4.3.8 Suppose $\Sigma$ is an $n$-simplex. Let a cyclic (oriented) ordering of all its vertices $P_i$ (and thus also of the opposite $(n-1)$-dimensional faces $\omega_i$) be given, say $P_1, P_2,\dots,P_{n+1}, P_1$.

Then there exists a unique normal polygon $V = [A_1, A_2,\dots,A_{n+1}]$ such that for $i = 1,2,\dots,n+1$ the point $A_i$ belongs to the hyperplane $\omega_i$ and the line $A_iA_{i+1}$ is perpendicular to $\omega_i$. In addition, the circumcenter (in a clear sense) of the polygon $V$ coincides with the Lemoine point of $\Sigma$.

If we form in the same way a polygon $V'$ choosing another cyclic ordering of the vertices of $\Sigma$, then $V'$ is formed by the same vectors as $V$, which, however, can have the opposite orientation.

Before we prove this theorem, we present a definition and a remark.

Definition 4.3.9 In the situation described in the theorem, we say that the polygon $V$ is perpendicularly inscribed into the simplex $\Sigma$.

Remark 4.3.10 Theorem 4.3.8 can also be formulated in such a way that the edges of every such inscribed polygon are segments which perpendicularly join the two corresponding $(n-1)$-dimensional faces of $\Sigma$ and of the simplex $\Sigma'$ obtained from $\Sigma$ by symmetry with respect to the Lemoine point.

Proof. (Theorem 4.3.8) The first part follows immediately from Theorem 4.3.4. The second part is a consequence of the well-known property of the Lemoine point (see Theorem 2.2.3) and of the fact that in Theorem 4.3.7, $\overline{X_iA_i}^{\,2}$ is at the same time the square of the distance of $X$ from the hyperplane $\omega_i$.

The third part follows from the second, since the length of each vector of the polygon $V$ perpendicular to $\omega_i$ is twice the distance of the Lemoine point from $\omega_i$. Of course, there are two possible orientations of vectors in $V$ by Theorem 4.3.5. □

Theorem 4.3.11 Suppose $V_1 = [A_1, A_2,\dots,A_{n+1}]$ is a normal polygon, and $V_2 = [B_1, B_2,\dots,B_{n+1}]$ a polygon left (respectively, right) conjugate to $V_1$. If $\omega_i$ are the $(n-1)$-dimensional faces of the $n$-simplex $\Sigma$ determined by the vertices $A_i$ ($\omega_i$ opposite to $A_i$), then the interior angles $\varphi_{ij}$ of the faces $\omega_i$ and $\omega_j$ ($i \neq j$) satisfy

$$\cos\varphi_{ij} = \frac{\langle v_i, v_j\rangle}{\sqrt{\langle v_i, v_i\rangle}\sqrt{\langle v_j, v_j\rangle}},$$

where $v_i = \overrightarrow{B_{i-1}B_i}$ (respectively, $v_i = \overrightarrow{B_iB_{i+1}}$).

Proof. By Theorem 4.3.3 and Definition 4.3.2, the vectors $v_i$ are perpendicular to $\omega_i$. Theorem 4.3.6 then implies that the angle between the vectors $v_i$ and $v_j$ ($i \neq j$) is equal to $\pi - \varphi_{ij}$. □

Theorem 4.3.12 Suppose that $V_1 = [A_1, A_2,\dots,A_{n+1}]$ and $V_2 = [B_1, B_2,\dots,B_{n+1}]$ are normal polygons in $E_n$ such that $V_2$ is right conjugate to $V_1$. Denote by $a_i = \overrightarrow{A_iA_{i+1}}$, $b_i = \overrightarrow{B_iB_{i+1}}$, $i = 1,2,\dots,n+1$ ($A_{n+2} = A_1$, $B_{n+2} = B_1$), the vectors of the edges and by $A = [\langle a_i, a_j\rangle]$, $B = [\langle b_i, b_j\rangle]$ the corresponding Gram matrices. Let $Z$ be the $(n+1)\times(n+1)$ matrix

$$Z = \begin{bmatrix} 1 & -1 & 0 & \dots & 0\\ 0 & 1 & -1 & \dots & 0\\ 0 & 0 & 1 & \dots & 0\\ \dots & \dots & \dots & \dots & \dots\\ -1 & 0 & 0 & \dots & 1 \end{bmatrix}. \qquad (4.29)$$

Then there exists a nonzero number $c$ such that the matrix

$$\begin{bmatrix} A & cZ\\ cZ^T & B \end{bmatrix} \qquad (4.30)$$

is symmetric positive semidefinite of rank $n$.

Conversely, let (4.30) be a symmetric positive semidefinite matrix of rank $n$ for some number $c \neq 0$. Then $A$ is the Gram matrix of the edge vectors of some normal polygon $V_1 = [A_1, A_2,\dots,A_{n+1}]$ in some $E_n$, $B$ is the Gram matrix of the edge vectors of some normal polygon $V_2 = [B_1, B_2,\dots,B_{n+1}]$ in $E_n$, and, in addition, $V_2$ is the right conjugate of $V_1$.

Proof. Suppose that $V_2$ is the right conjugate of $V_1$. Then the vectors $a_i$, $b_i$ satisfy

$$\langle a_i, b_j\rangle = 0 \qquad (4.31)$$

for $i \neq j \neq i+1$. Denote $\langle a_i, b_i\rangle = c_i$, $\langle a_i, b_{i+1}\rangle = d_i$ ($i = 1,\dots,n+1$, $b_{n+2} = b_1$).

By (4.31) and $\sum a_i = \sum b_i = 0$, we have $c_i = c$, $d_i = -c$, $c \neq 0$, for $i = 1,2,\dots,n+1$. It follows that (4.30) is the Gram matrix of the system $a_1,\dots,a_{n+1}, b_1,\dots,b_{n+1}$, thus symmetric positive semidefinite of rank $n$.

Conversely, let (4.30) be a positive semidefinite matrix of rank $n$ for some $c$ different from zero. Let $a_1, a_2,\dots,a_{n+1}, b_1, b_2,\dots,b_{n+1}$ be a system of vectors in some $n$-dimensional Euclidean space $E_n$ whose Gram matrix is the matrix (4.30). Since $Z$ has rank $n$ and $Ze = 0$ for $e$ with all coordinates one, $A$ also has rank $n$ and $Ae = 0$. Similarly, $B$ has rank $n$ and $Be = 0$. Therefore, $\sum a_i = \sum b_i = 0$, and the vectors $a_1, a_2,\dots,a_{n+1}$ can be considered as edge vectors, $a_i = \overrightarrow{A_iA_{i+1}}$, of some normal polygon $V_1 = [A_1, A_2,\dots,A_{n+1}]$, and $b_1, b_2,\dots,b_{n+1}$ as edge vectors, $b_i = \overrightarrow{B_iB_{i+1}}$, of some normal polygon $V_2 = [B_1, B_2,\dots,B_{n+1}]$. Since $\langle a_i, b_j\rangle = 0$ for $i \neq j \neq i+1$, $V_2$ is the right conjugate of $V_1$. □

Theorem 4.3.13 To every symmetric positive semidefinite matrix $A$ of order $n+1$ which has rank $n$ and for which $Ae = 0$, there exists a unique matrix $B$ such that the matrix (4.30) is positive semidefinite and has, for a fixed $c \neq 0$, rank $n$. When $c \neq 0$ is not prescribed, the matrix $B$ is determined uniquely up to a positive multiplicative factor.

Proof. This follows from Theorems 4.3.12 and 4.3.4. □

Definition 4.3.14 A normal polygon in $E_n$ is called orthocentric if the simplex with the same vertices is orthocentric.

Theorem 4.3.15 A normal polygon $V_1 = [A_1, A_2,\dots,A_{n+1}]$ is orthocentric if and only if the vectors $a_i = \overrightarrow{A_iA_{i+1}}$ satisfy

$$\langle a_i, a_j\rangle = 0$$

for $j \not\equiv i-1$, $j \not\equiv i$, $j \not\equiv i+1 \pmod{n+1}$, $i, j = 1,\dots,n+1$ ($a_{n+2} = a_1$). The corresponding $n$-simplex is acute if all the numbers

$$d_i = -\langle a_{i-1}, a_i\rangle, \qquad i = 1,\dots,n+1,$$

are positive; it is right (respectively, obtuse) if $d_k = 0$ (respectively, $d_k < 0$) for some index $k$. In fact, $d_k \le 0$ cannot occur for more than one $k$.

Proof. Suppose $V_1$ is an orthocentric normal polygon, and denote by $O$ its orthocenter. Then the vectors $v_i = \overrightarrow{OA_i}$ satisfy

$$\langle v_i, v_j - v_k\rangle = 0$$

for $j \neq i \neq k$. Thus, for $j \not\equiv i-1$, $j \not\equiv i$, $j \not\equiv i+1 \pmod{n+1}$,

$$\langle a_i, a_j\rangle = \langle v_{i+1} - v_i, v_{j+1} - v_j\rangle = 0.$$

Conversely, let

$$\langle a_i, a_j\rangle = 0 \quad \text{for } j \not\equiv i-1,\ j \not\equiv i,\ j \not\equiv i+1 \pmod{n+1}.$$

If we denote, in addition, by $c_i$ the inner product $\langle a_i, a_i\rangle$, we obtain

$$0 = \Bigl\langle a_i, \sum_{j=1}^{n+1}a_j\Bigr\rangle = -d_i + c_i - d_{i+1},$$

thus

$$c_i = d_i + d_{i+1}.$$

A simple computation shows, if $i < k$, that

$$0 < \langle a_i + a_{i+1} + \cdots + a_{k-1},\ a_i + a_{i+1} + \cdots + a_{k-1}\rangle = -\langle a_i + a_{i+1} + \cdots + a_{k-1},\ a_k + a_{k+1} + \cdots + a_{n+1} + a_1 + \cdots + a_{i-1}\rangle = d_i + d_k.$$

It follows that $d_k \le 0$ can occur for at most one $k$. If $d_k = 0$, the vertex $A_k$ is the orthocenter and the line $A_kA_i$ is perpendicular to $A_kA_j$ for all $i, j$ with $i \neq k \neq j \neq i$ (the simplex is thus right orthocentric): any two of the vectors

$$a_{k-1},\quad a_k,\quad a_k + a_{k+1}\ \bigl(= -(a_{k+2} + \cdots + a_{n+1} + a_1 + \cdots + a_{k-1})\bigr),$$
$$a_k + a_{k+1} + a_{k+2}\ \bigl(= -(a_{k+3} + \cdots + a_{n+1} + a_1 + \cdots + a_{k-1})\bigr),\quad \dots,$$
$$a_k + a_{k+1} + \cdots + a_{n+1} + a_1 + \cdots + a_{k-3}\ \bigl(= -(a_{k-2} + a_{k-1})\bigr)$$

are perpendicular.

Suppose now that all the numbers $d_k$ are different from zero. Let us show that the number

$$\gamma = \sum_{k=1}^{n+1}\frac{1}{d_k}$$

is also different from zero, and that the point

$$O = \sum_{k=1}^{n+1}\frac{1}{\gamma d_k}A_k$$

is the orthocenter.

Let $i, j = 1,2,\dots,n$. Then

$$0 < \det[\langle a_i, a_j\rangle] = \det\begin{bmatrix} d_1+d_2 & -d_2 & 0 & \dots & 0 & 0\\ -d_2 & d_2+d_3 & -d_3 & \dots & 0 & 0\\ 0 & -d_3 & d_3+d_4 & \dots & 0 & 0\\ \dots & \dots & \dots & \dots & \dots & \dots\\ 0 & 0 & 0 & \dots & -d_n & d_n+d_{n+1} \end{bmatrix} = \sum_{i=1}^{n+1}\prod_{j\neq i}d_j = \gamma\prod_{j=1}^{n+1}d_j. \qquad (4.32)$$

Since $\sum a_k = 0$, the vectors

$$v_i = \overrightarrow{OA_i} = -\sum_{j=1}^{n+1}\frac{1}{\gamma d_j}\,\overrightarrow{A_iA_j} = -\frac{1}{\gamma}\Bigl[\frac{1}{d_{i+1}}a_i + \frac{1}{d_{i+2}}(a_i + a_{i+1}) + \cdots + \frac{1}{d_{n+1}}(a_i + \cdots + a_n) + \frac{1}{d_1}(a_i + \cdots + a_{n+1}) + \cdots + \frac{1}{d_{i-1}}(a_i + \cdots + a_{n+1} + a_1 + \cdots + a_{i-2})\Bigr],$$

$i = 1,\dots,n+1$, satisfy

$$\langle v_i, a_j\rangle = 0$$

for $i-1 \neq j \neq i$.

If $d_k < 0$ for some $k$, then, by (4.32), $\gamma < 0$, so that the point $O$ is an exterior point of the corresponding simplex, and by Section 2 of this chapter, the simplex is obtuse orthocentric. If all the $d_k$ are positive, $\gamma > 0$ and $O$ is an interior point of the (necessarily acute) simplex. □
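The construction in this proof can be exercised numerically. A hedged sketch (assuming numpy; the positive values $d_i$ are illustrative): build edge vectors with the Gram matrix $(I-C)D(I-C)^T$ used below in Theorem 4.3.19, recover the polygon, and check that the stated point $O$ is indeed the orthocenter.

import numpy as np

n = 4
d = np.array([1.0, 2.0, 0.5, 3.0, 1.5])                # n+1 positive numbers d_1..d_{n+1}
D = np.diag(d)
C = np.roll(np.eye(n + 1), 1, axis=1)                  # cyclic shift matrix
G = (np.eye(n + 1) - C) @ D @ (np.eye(n + 1) - C).T    # Gram matrix of the edge vectors a_i

w, U = np.linalg.eigh(G)                               # factor G = E E^T, E of rank n
E = U[:, 1:] * np.sqrt(w[1:])                          # rows of E are the edge vectors
A = np.vstack([np.zeros(n), np.cumsum(E, axis=0)[:-1]])   # vertices A_1..A_{n+1}

gamma = np.sum(1.0 / d)
O = A.T @ (1.0 / (gamma * d))                          # candidate orthocenter

# orthocentric property: O - A_i is perpendicular to every edge A_j A_k with j, k != i
ok = all(abs((O - A[i]) @ (A[j] - A[k])) < 1e-9
         for i in range(n + 1) for j in range(n + 1) for k in range(n + 1)
         if len({i, j, k}) == 3)
print(ok)   # True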

Definition 4.3.16 We call an $n$-simplex $\Sigma$ ($n \ge 2$) cyclic if there exists a cyclic ordering of its $(n-1)$-dimensional faces in which any two non-neighboring faces are perpendicular. If all the interior angles between (in this ordering) neighboring faces are acute, we call the simplex acutely cyclic; if one of them is obtuse, we call $\Sigma$ obtusely cyclic. In the remaining case, that one of the angles is right, $\Sigma$ is called right cyclic. Analogously, we also call cyclic the normal polygon formed by the vertices and the edges opposite to the neighboring faces of a cyclic simplex (again acutely, obtusely, or right cyclic).

Remark 4.3.17 The signed graph of a cyclic $n$-simplex is thus either a positive circuit in the case of the acutely cyclic simplex, or a circuit with one edge negative and the remaining positive in the case of the obtusely cyclic simplex, or finally a positive path in the case of the right cyclic simplex (it is thus a Schlaefli simplex, cf. Theorem 4.1.7).

Theorem 4.3.18 A normal polygon is acutely (respectively, obtusely, or right) cyclic if and only if the left or right conjugate normal polygon is acute (respectively, obtuse, or right) orthocentric.

Proof. Follows immediately from Theorem 4.3.15. □


Theorem 4.3.19 Suppose that the numbers $d_1,\dots,d_{n+1}$ are all different from zero, namely either all positive, or exactly one negative and in this case $1/\sigma = \sum_{i=1}^{n+1}1/d_i < 0$. The $(2n+2)\times(2n+2)$ matrix

$$M = \begin{bmatrix} P & Z\\ Z^T & Q \end{bmatrix}, \qquad (4.33)$$

where

$$P = \begin{bmatrix} d_1+d_2 & -d_2 & 0 & \dots & -d_1\\ -d_2 & d_2+d_3 & -d_3 & \dots & 0\\ \dots & \dots & \dots & \dots & \dots\\ -d_1 & 0 & \dots & -d_{n+1} & d_1+d_{n+1} \end{bmatrix},$$

$$Q = \begin{bmatrix} \dfrac{1}{d_1}-\dfrac{\sigma}{d_1^2} & -\dfrac{\sigma}{d_1d_2} & -\dfrac{\sigma}{d_1d_3} & \dots & -\dfrac{\sigma}{d_1d_{n+1}}\\[1ex] -\dfrac{\sigma}{d_1d_2} & \dfrac{1}{d_2}-\dfrac{\sigma}{d_2^2} & -\dfrac{\sigma}{d_2d_3} & \dots & -\dfrac{\sigma}{d_2d_{n+1}}\\ \dots & \dots & \dots & \dots & \dots\\ -\dfrac{\sigma}{d_1d_{n+1}} & -\dfrac{\sigma}{d_2d_{n+1}} & \dots & \dots & \dfrac{1}{d_{n+1}}-\dfrac{\sigma}{d_{n+1}^2} \end{bmatrix},$$

and $Z$ is defined in (4.29), is then positive semidefinite of rank $n$.

Proof. Denote by $I$ the identity matrix, by $D$ the diagonal matrix with diagonal entries $d_1, d_2,\dots,d_{n+1}$, by $C$ the matrix of order $n+1$

$$C = \begin{bmatrix} 0 & 1 & 0 & \dots & 0\\ 0 & 0 & 1 & \dots & 0\\ \dots & \dots & \dots & \dots & \dots\\ 0 & 0 & 0 & \dots & 1\\ 1 & 0 & 0 & \dots & 0 \end{bmatrix}, \quad \text{and } e = [1,1,\dots,1]^T.$$

Then ($C^T$ is the transpose of $C$)

$$M = \begin{bmatrix} (I-C)D(I-C^T) & I-C\\ I-C^T & D^{-1}-\sigma D^{-1}ee^TD^{-1} \end{bmatrix}.$$

However,

$$(I-C)D(D^{-1}-\sigma D^{-1}ee^TD^{-1}) = I-C,$$

so that

$$\begin{bmatrix} I & -(I-C)D\\ 0 & I \end{bmatrix} M \begin{bmatrix} I & 0\\ -D(I-C^T) & I \end{bmatrix} = \begin{bmatrix} 0 & 0\\ 0 & D^{-1}-\sigma D^{-1}ee^TD^{-1} \end{bmatrix}.$$

Since $(D^{-1}-\sigma D^{-1}ee^TD^{-1})e = D^{-1}e - \sigma D^{-1}e\,(e^TD^{-1}e) = D^{-1}e - \sigma D^{-1}e\,\sigma^{-1} = 0$, the rank of the matrix $M$ is at most $n$. The formula (4.32) shows that all the leading principal minors of the matrix $(I-C)D(I-C^T)$ of orders $1,2,\dots,n$ are positive. □


Theorem 4.3.20 A normal polygon $V = [A_1, A_2,\dots,A_{n+1}]$ is acutely cyclic if and only if there exist positive numbers $p_1, p_2,\dots,p_{n+1}$ such that the vectors $a_i = \overrightarrow{A_iA_{i+1}}$ ($i = 1,\dots,n+1$; $A_{n+2} = A_1$) and the number $p = \sum_{k=1}^{n+1}p_k$ satisfy

$$\langle a_i, a_j\rangle = -\frac{1}{p}\,p_ip_j, \quad i \neq j, \qquad \langle a_i, a_i\rangle = \frac{1}{p}\,p_i(p - p_i). \qquad (4.34)$$

This polygon is obtusely cyclic if and only if there exist numbers $p_1, p_2,\dots,p_{n+1}$, one of which is negative, the remaining positive, and their sum $p = \sum_{i=1}^{n+1}p_i$ negative, again fulfilling (4.34).

Proof. This follows immediately from Theorems 4.3.8, 4.3.15, and 4.3.19, where $d_i$ is set equal to $1/p_i$; the matrices $P$ and $Q$ in (4.33) are then the Gram matrices of the orthocentric normal polygon and of the polygon from (4.34). □

Another metric characterization of cyclic normal polygons is the following:

Theorem 4.3.21 A normal polygon $V = [A_1, A_2,\dots,A_{n+1}]$ is acutely cyclic if and only if there exist positive numbers $p_1, p_2,\dots,p_{n+1}$ such that the distances $\rho(A_i, A_j) = \sqrt{m_{ij}}$ ($i < j$) satisfy

$$m_{ij} = \frac{1}{p}\,(p_i + p_{i+1} + \cdots + p_{j-1})(p_j + p_{j+1} + \cdots + p_{n+1} + p_1 + \cdots + p_{i-1}), \qquad (4.35)$$

where $p = \sum_{j=1}^{n+1}p_j$.

This polygon is obtusely cyclic if and only if there exist numbers $p_1, p_2,\dots,p_{n+1}$ such that one of them is negative, the remaining positive, and their sum negative, for which again (4.35) holds.

Proof. Since, in the notation of Theorem 4.3.20,

$$m_{ij} = \langle a_i + a_{i+1} + \cdots + a_{j-1},\ a_i + a_{i+1} + \cdots + a_{j-1}\rangle,$$

it suffices to show that the relations (4.34) and (4.35) are equivalent. This is done easily by induction with respect to $j - i$. □
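A hedged numerical check (assuming numpy; the numbers $p_i$ are illustrative) that the Gram-matrix relations (4.34) and the distance relations (4.35) agree:

import numpy as np

p = np.array([1.0, 2.0, 0.7, 1.3, 0.9])     # p_1..p_{n+1}, here n = 4
m = len(p)
P = p.sum()

G = -np.outer(p, p) / P                      # <a_i, a_j> = -p_i p_j / p  for i != j
np.fill_diagonal(G, p * (P - p) / P)         # <a_i, a_i> =  p_i (p - p_i) / p

def m_from_gram(i, j):                       # |A_i A_j|^2 = |a_i + ... + a_{j-1}|^2
    idx = list(range(i, j))
    return G[np.ix_(idx, idx)].sum()

def m_from_p(i, j):                          # formula (4.35)
    s = p[i:j].sum()
    return s * (P - s) / P

print(all(np.isclose(m_from_gram(i, j), m_from_p(i, j))
          for i in range(m) for j in range(i + 1, m)))   # True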

Theorem 4.3.22 Every $m$-dimensional ($m \ge 2$) face $\Sigma'$ of a cyclic $n$-simplex $\Sigma$ is also cyclic, namely of the same kind (acutely, obtusely, or right) as $\Sigma$. In addition, the cyclic ordering of the vertices in $\Sigma'$ is induced by that of $\Sigma$.

Proof. Suppose $V = [A_1, A_2,\dots,A_{n+1}]$ is a cyclic normal polygon corresponding to the simplex $\Sigma$. Let $A_{k_1}, A_{k_2},\dots,A_{k_{m+1}}$ ($k_1 < k_2 < \cdots < k_{m+1}$) be the vertices of the face $\Sigma'$. It suffices to show that $V' = [A_{k_1}, A_{k_2},\dots,A_{k_{m+1}}]$ is a cyclic polygon in the corresponding $m$-dimensional space $E_m$. If $V$ is a right cyclic polygon, the assertion is correct by Theorem 4.1.7. If $V$ is an acutely or obtusely cyclic polygon, there exist by Theorem 4.3.21 numbers $p_1, p_2,\dots,p_{n+1}$ (in the first case all positive; in the second case one negative, the remaining positive, with negative sum) for which (4.35) holds. Denote

$$\sum_{k=k_1}^{k_2-1}p_k = q_1, \quad \sum_{k=k_2}^{k_3-1}p_k = q_2, \quad \dots, \quad \sum_{k=k_m}^{k_{m+1}-1}p_k = q_m,$$
$$p_{k_{m+1}} + \cdots + p_{n+1} + p_1 + \cdots + p_{k_1-1} = q_{m+1}.$$

Since $p = \sum_{i=1}^{n+1}p_i = \sum_{i=1}^{m+1}q_i = q$, then either all the $q_j$ are positive (in the first case), or exactly one of the numbers $q_k$ is negative (namely the one in whose sum the negative number $p_l$ enters); we then also have $q < 0$.

By the formulae (4.35), it now follows that for $i < j$

$$m_{k_ik_j} = \frac{1}{q}\,(q_i + q_{i+1} + \cdots + q_{j-1})(q_j + \cdots + q_{m+1} + q_1 + \cdots + q_{i-1}).$$

Indeed, $V'$ and $\Sigma'$ are cyclic of the same kind as $\Sigma$. □

Before formulating the main theorem about cyclic normal polygons, we recall the following lemma (the proof is in [11]). There we call a solution $x_0, x_1,\dots,x_m$ of the system

$$x_1(x_2 + x_3 + \cdots + x_m) = x_0c_1,$$
$$x_2(x_1 + x_3 + \cdots + x_m) = x_0c_2,$$
$$\dots \qquad (4.36)$$
$$x_m(x_1 + x_2 + \cdots + x_{m-1}) = x_0c_m,$$
$$x_0^2 = 1,$$

feasible if either all the numbers $x_1, x_2,\dots,x_m$ are positive (and then $x_0 = 1$), or exactly one of the numbers $x_1, x_2,\dots,x_m$ is negative, the remaining positive, with $\sum_{i=1}^{m}x_i$ negative (and then $x_0 = -1$).

Theorem 4.3.23 Suppose $c_1, c_2,\dots,c_m$ ($m \ge 3$) are positive numbers, and consider feasible solutions $x_0, x_1,\dots,x_m$ of the system (4.36). Then the following holds:

(i) If for some index $k$, $1 \le k \le m$,

$$\sqrt{c_k} \ge \sum_{\substack{j=1\\ j\neq k}}^{m}\sqrt{c_j} \qquad \text{or} \qquad c_k = \sum_{\substack{j=1\\ j\neq k}}^{m}c_j,$$

then no feasible solution of the system (4.36) exists.

(ii) If for every index $k = 1,2,\dots,m$ the inequality

$$\sqrt{c_k} < \sum_{\substack{j=1\\ j\neq k}}^{m}\sqrt{c_j}$$

is fulfilled, together with

$$c_k \neq \sum_{\substack{j=1\\ j\neq k}}^{m}c_j,$$

then there exists exactly one feasible solution of the system (4.36); this solution is positive if for every index $k$, $1 \le k \le m$,

$$c_k < \sum_{\substack{j=1\\ j\neq k}}^{m}c_j,$$

and nonpositive (with $x_0 = -1$) if for some $k$

$$c_k > \sum_{\substack{j=1\\ j\neq k}}^{m}c_j.$$

In this last case $x_k < 0$.

We are now able to prove the main theorem.

Theorem 4.3.24 A cyclic normal polygon in $E_n$ is uniquely determined by the lengths of its $n+1$ edges and their ordering, apart from its position in the space. If $l_1, l_2,\dots,l_{n+1}$ are these lengths and, say, $l_{n+1} = \max(l_1,\dots,l_{n+1})$, then such a cyclic normal polygon exists if and only if

$$l_{n+1} < \sum_{i=1}^{n}l_i. \qquad (4.37)$$

This cyclic polygon is then acutely, right, or obtusely cyclic according to whether

$$l_{n+1}^2 < \sum_{i=1}^{n}l_i^2, \qquad l_{n+1}^2 = \sum_{i=1}^{n}l_i^2, \qquad \text{or} \qquad l_{n+1}^2 > \sum_{i=1}^{n}l_i^2. \qquad (4.38)$$

Proof. The necessity of condition (4.37) is geometrically clear. The right cyclic polygons fulfill the second relation in (4.38) by Theorem 4.32, and such a polygon is uniquely determined by the lengths $l_i$.

The second relation in (4.34) implies that for every acutely (respectively, obtusely) cyclic polygon there exists a positive (respectively, nonpositive) feasible solution of the system (4.36) for $m = n+1$, $c_i = l_i^2$, and

$$x_i = \frac{p_i}{\sqrt{|\sum p_k|}}, \qquad x_0 = \operatorname{sgn}\sum p_k, \qquad i = 1,\dots,n+1.$$

The same relation shows that for every positive (respectively, nonpositive) feasible solution of the system (4.36) with $c_i = l_i^2$ there exists an acutely (respectively, obtusely) cyclic polygon with the given edge lengths (by setting $p_i = x_i\bigl|\sum_{k=1}^{n+1}x_k\bigr|$ in (4.34)). Thus the existence and uniqueness follow from Theorem 4.3.23 and (4.37). □

Remark 4.3.25 The relations (4.37) and (4.38) generalize the triangle inequality and the Pythagorean conditions of the case $n = 2$.

Definition 4.3.26 For a moment, we call two oriented normal polygons in $E_n$ directly (respectively, indirectly) equivalent if there exists a bijection between the vectors of their edges under which the corresponding vectors are equal (respectively, opposite).

Remark 4.3.27 It is clear that every normal polygon $V_2$ directly equivalent to a normal polygon $V_1$ is obtained by permuting the vectors of the oriented edges and putting them one after another, possibly changing the position by a translation.

It can also be shown that all polygons perpendicularly inscribed into a simplex (cf. Definition 4.3.9) are mutually equivalent in the sense of Definition 4.3.26.

Theorem 4.3.28 All polygons directly or indirectly equivalent to a cyclic polygon are again cyclic, even of the same kind, and their circumscribed hyperspheres have the same radii. All (cyclic) polygons perpendicularly inscribed into an orthocentric simplex have the circumscribed hypersphere in common. The center of this hypersphere is the Lemoine point of the simplex.

Proof. The first part follows from Theorems 4.3.8 and 4.3.19. The second part is, in the case of a right simplex, a consequence of the fact that the altitude from the vertex (which is at the same time the orthocenter) is the longest edge of all right cyclic polygons perpendicularly inscribed into the simplex. To complete the proof of the second part for the acutely or obtusely cyclic polygons, we use the relations (4.35) for the squares $m_{ik}$ of the lengths of the edges. A simple check shows that the numbers $q_{rs}$ (cf. Chapter 1) are given by ($i, j = 1,\dots,n+1$)

$$q_{00} = \frac{1}{p^2}\Bigl[\sum_{\substack{i,j=1\\ i\neq j}}^{n+1}p_i^2p_j + 2\sum_{1\le i<j<k\le n+1}p_ip_jp_k\Bigr], \qquad q_{0i} = -\frac{1}{p}(p_{i-1} + p_i),$$

$$q_{ii} = \frac{1}{p_{i-1}} + \frac{1}{p_i}, \qquad q_{i-1,i} = q_{i,i-1} = -\frac{1}{p_{i-1}}, \qquad p = \sum_{k=1}^{n+1}p_k,$$

$$q_{ij} = 0 \ \text{ for } j \not\equiv i-1,\ j \not\equiv i,\ j \not\equiv i+1 \pmod{n+1}, \qquad p_0 = p_{n+1}.$$

By Theorem 2.1.1, the radius of the circumscribed hypersphere of the simplex is $\tfrac{1}{2}\sqrt{q_{00}}$. This number is, however, a symmetric expression in the values $p_1, p_2,\dots,p_{n+1}$. Consequently, the radii of the circumscribed hyperspheres of all polygons equivalent to the given one are equal.

The last assertion follows from the second and from Theorem 4.3.8. □

The relations in the previous proof for the entries of the Gramian of a cyclic $n$-simplex yield, together with the three possibilities for the numbers $p_i$ (all positive for the acute simplex; one negative, the remaining positive and the sum negative for the obtuse simplex; and the Schlaefli simplex), a complete characterization of the possible extended signed graphs of cyclic simplexes.

Theorem 4.3.29 The signed extended graph of an acute cyclic $n$-simplex is a wheel (cf. Appendix, Section 6.2) with $n+2$ nodes. The signed extended graph of an obtuse cyclic $n$-simplex is a positive circuit with $n+2$ nodes in which one node $u$ is connected with all the non-neighboring nodes by negative edges (diagonals) and the two nodes neighboring $u$ are joined by a negative edge. The extended signed graph of the right cyclic simplex is, of course, a positive circuit.

In the conclusion, we mention nets on a simplex.

Definition 4.3.30 Let an $n$-simplex $\Sigma$ be given. By a net on $\Sigma$ we understand a set $N(\Sigma)$ consisting of all the vertices of $\Sigma$ and a subset of the edges of $\Sigma$. If, in addition, a length is assigned to every edge in $N(\Sigma)$, we speak about a metric net on $\Sigma$.

Remark 4.3.31 A net on an $n$-simplex can be considered as a mapping of an undirected graph with $n+1$ nodes onto the zero- and one-dimensional structure of $\Sigma$. We can thus speak about connectivity of a net, about nets which are trees, etc.

Suppose now that $\Sigma$ is some $n$-simplex and $N$ is a net on $\Sigma$. We say that an $n$-simplex $\Sigma'$ is $N$-equivalent to $\Sigma$ if there is a bijection between the sets of vertices of $\Sigma$ and $\Sigma'$ such that the lengths of the edges in $N$ and of the corresponding edges in $\Sigma'$ are equal.

We can now formulate and prove a result which was established in [16].

Theorem 4.3.32 Let $N$ be a net on the $n$-simplex $\Sigma$. Then there exists an $n$-simplex $\Sigma'$ which is $N$-equivalent to $\Sigma$ and has maximal $n$-dimensional volume if and only if the net $N$ is connected. The simplex $\Sigma'$ is then characterized by the fact that each interior angle opposite an edge not contained in $N$ is right.

Proof. It is geometrically evident that if $N$ is not connected, then some distances in $\Sigma'$ can be arbitrarily large and the volume is not bounded from above. However, if $N$ is connected and one vertex of $\Sigma'$ is fixed, then all possible simplexes lie within some hypersphere and the volume is bounded from above. Consider the volumes of all such simplexes, including the "degenerate" ones having $n$-dimensional volume zero.

By (1.28), the formula for the volume $V$ in terms of the lengths of the edges is

$$V^2 = \frac{(-1)^{n+1}}{2^n(n!)^2}\det[m_{rs}].$$

Since this is a differentiable function of the variables corresponding to the edges of $\Sigma$ not contained in $N$, it attains its maximum on a compact set, and at the maximizing solution $\partial\det[m_{rs}]/\partial m_{uv} = 0$ whenever $u \neq v$ and $A_uA_v \notin N$. This partial derivative is proportional to the $(u, v)$ entry of the inverse matrix of $[m_{rs}]$, which is (cf. formula (1.21)) $-2q_{uv}$, so that $q_{uv} = 0$. This means that the interior angles corresponding to all edges not contained in $N$ are right, as asserted. □

Here are two important corollaries.

Theorem 4.3.33 If $N$ consists of the edges $A_1A_2, A_2A_3,\dots,A_nA_{n+1}, A_{n+1}A_1$, the maximal $n$-simplex is cyclic. The simplex is determined by the metric net uniquely up to its position in the space.

Proof. By Theorem 4.3.32, the maximal simplex is cyclic. The rest follows from Theorem 4.3.24. □

Theorem 4.3.34 If the metric net is a tree, the corresponding maximal simplex is right and the given edges are its legs.

Proof. This follows again from Theorem 4.3.32 and the theory of right simplexes. □

Remark 4.3.35 Observe that Theorem 4.3.32 confirms Conjecture 2.1.9 from Chapter 2 in the case that all the given angles are right.

4.4 Simplexes with a principal point

In this short section, we present a class of special simplexes which contains generalizations of some properties of the triangle. It is defined by the corresponding Menger matrix as follows:


We say that the set of points $A_1,\dots,A_{n+1}$ in $E_n$ forms the vertices of an $n$-simplex with a principal point if the points are linearly independent and the squares of the distances between $A_i$ and $A_j$, $i \neq j$, satisfy

$$|A_iA_j|^2 = \alpha(t_i^2 + t_j^2) + 2\beta t_it_j \qquad (4.39)$$

for some real $t_1,\dots,t_{n+1}$, $\alpha$ and $\beta$, whenever $i \neq j$, $i, j = 1,\dots,n+1$.

Remark 4.4.1 We proved in [5] that necessary and sufficient conditions for the existence of an $n$-simplex satisfying (4.39) are that

$$\alpha + \beta > 0$$

and one of the following holds:

(i) all the $t_i$ are different from zero and

$$\beta\Bigl(\sum_{i=1}^{n+1}\frac{1}{t_i}\Bigr)^2 + (\alpha - n\beta)\sum_{i=1}^{n+1}\frac{1}{t_i^2} > 0; \qquad (4.40)$$

(ii) one of the $t_i$ is zero and $\alpha > (n-1)\beta$.

As is well known, if a triangle has all angles less than $\tfrac{2}{3}\pi$, there exists in the triangle a point from which all the sides are seen at the angle $\tfrac{2}{3}\pi$. This point is called the Torricelli point. The following theorem is a generalization.

Theorem 4.4.2 A necessary and sufficient condition that an $n$-simplex with vertices $A_1,\dots,A_{n+1}$ possesses a point $P \neq A_i$ (for all $i$) such that the angles $\angle A_iPA_j$ are all equal is that (4.39) holds for $\alpha = n\beta$ and all the $t_i$ have the same sign.

Proof. Suppose that such a point exists. Then the points on the halflines $PA_i$ having the same positive distance from $P$ form the vertices of a regular $n$-simplex. As we shall see in Theorem 4.5.1, the common angle $\psi = \angle A_iPA_j$ satisfies $\cos\psi = -1/n$. If we denote the distance $PA_i$ by $t_i$, we obtain by the cosine theorem

$$|A_iA_j|^2 = t_i^2 + t_j^2 + \frac{2}{n}\,t_it_j,$$

i.e. (4.39) for $\alpha = n\beta$ (up to a positive multiple). The converse is also easily established. □

Another property of the triangle is the following. If $P_1$, $P_2$, and $P_3$ are the points in which the incircle touches the sides of the triangle $A_1A_2A_3$, then the segments $A_iP_i$ meet at one point, called the Gergonne point of the triangle. For a simplex, we have two generalizations.

Theorem 4.4.3 A necessary and sufficient condition that in an $n$-simplex there exists a hypersphere which touches all edges (as segments) is that for the squares of the lengths of the edges (4.39) holds with $\alpha = \beta$ positive and all the $t_i$ of the same sign. In addition, all the hyperplanes determined by any $n-1$ vertices and the point on the opposite edge in which the hypersphere touches that edge meet at one point.

Proof. Suppose that such a hypersphere $H$ exists. If $A_i$ is a vertex, then all the points $P_{ij}$ in which $H$ touches the edges $A_iA_j$ have the same distance $t_i$ from $A_i$. Since $|A_iA_j|^2 = t_i^2 + t_j^2 + 2t_it_j$, (4.39) holds for $\alpha = \beta$ and $t_i > 0$. The converse also holds. The point with barycentric coordinates $(1/t_i)$ then has the mentioned property. □

The second possibility is the following:

Theorem 4.4.4 A necessary and sufficient condition that for the inscribed hypersphere the points of contact $P_i$ in the $(n-1)$-dimensional faces have the property that the lines $A_iP_i$ meet at one point is that for the squares of the lengths of the edges (4.39) holds with $\alpha = (n-1)\beta$ positive and the $t_i$ of the same sign.

Proof. First, we shall show that for an nb-point $Q = (q_1,\dots,q_{n+1})$ there is just one quadric which touches all the $(n-1)$-dimensional faces in the projection points of $Q$ on these faces, namely the quadric with equation

$$n\sum_i\Bigl(\frac{x_i}{q_i}\Bigr)^2 - \Bigl(\sum_i\frac{x_i}{q_i}\Bigr)^2 = 0. \qquad (4.41)$$

Indeed, let

$$\sum_{i,k}c_{ik}\,\xi_i\xi_k = 0$$

be the dual equation of such a quadric. Then $c_{ii} = 0$ for all $i$, since the face $\omega_i$ is a tangent dual hyperplane, and thus

$$\sum_k c_{ik}\,\xi_k = 0$$

is the equation of the tangent point. Therefore, $c_{ik} = \lambda_iq_k$ for some nonzero constant $\lambda_i$. Since $c_{ik} = c_{ki}$ for all $i, k$, we obtain altogether $c_{ik} = \lambda q_iq_k$ for all $i, k$, $i \neq k$. The matrix of the dual quadric is thus a multiple of

$$Z = Q_0(J - I)Q_0,$$

where $Q_0$ is the diagonal matrix $\operatorname{diag}(q_1,\dots,q_{n+1})$, $J$ is the matrix of all ones, and $I$ the identity matrix. The inverse of $Z$ is thus a multiple of $Q_0^{-1}(nI - J)Q_0^{-1}$, which exactly corresponds to (4.41).

Now, the equation (4.41) has to be the equation of a hypersphere, thus of the form

$$\alpha_0\sum_{i,k}m_{ik}x_ix_k - 2\sum_k\alpha_kx_k\sum_jx_j = 0, \qquad \alpha_0 \neq 0.$$

Comparing both equations, we obtain

$$\alpha_i = -\frac{n-1}{2q_i^2},$$

and for $i \neq j$

$$-2\alpha_0m_{ij} = (n-1)\Bigl(\frac{1}{q_i^2} + \frac{1}{q_j^2}\Bigr) + \frac{2}{q_iq_j},$$

so that (4.39) holds for $\alpha = (n-1)\beta$ and $t_i = 1/q_i$. □

A generalization of the so-called isodynamical centers of the triangle is the following theorem. In the formulation, if $A$, $B$, and $C$ are distinct points, $H(A, B; C)$ will denote the Apollonius hypersphere, the locus of the points $X$ for which the ratio of the distances $|XA|$ and $|XB|$ is the same as that of $|CA|$ and $|CB|$.

Theorem 4.4.5 Let $A_1,\dots,A_{n+1}$ be the vertices of a simplex in $E_n$. Then a necessary and sufficient condition that all the hyperspheres $H(A_i, A_j; A_k)$, for $i, j, k$ distinct, have a proper point in common is that (4.39) holds for $\alpha = 0$ and $t_i \neq 0$ for all $i$.

Proof. Suppose that all the hyperspheres $H(A_i, A_j; A_k)$ have a point $D = (d_1,\dots,d_{n+1})$ in common. The equation of $H(A_i, A_j; A_k)$ is (using (1.9))

$$m_{ik}\Bigl[\sum_{p,q}m_{pq}x_px_q - 2\sum_pm_{jp}x_p\sum_qx_q\Bigr] - m_{jk}\Bigl[\sum_{p,q}m_{pq}x_px_q - 2\sum_pm_{ip}x_p\sum_qx_q\Bigr] = 0. \qquad (4.42)$$

Denote by $t_i$ the numbers $\sum_{p,q}m_{pq}d_pd_q - 2\sum_pm_{ip}d_p\sum_qd_q$; since they are proportional to the squares of the distances between $D$ and the $A_i$, at least one $t_i$, say $t_l$, is different from zero. By $m_{ik}t_l = m_{il}t_k$, it follows that all the $t_k$ are different from zero. By (4.42), all the numbers $m_{ik}/(t_it_k)$ are equal, so that indeed $m_{ik} = \rho t_it_k$ as asserted.

The converse is also easily established. □

Another type of simplex with a principal point will be obtained from the so-called $(n+1)$-star in $E_n$. It is the set of $n+1$ halflines, any two of which span the same angle. This $(n+1)$-tuple is congruent to the $(n+1)$-tuple of halflines $CA_i$ in a regular $n$-simplex with vertices $A_i$ and centroid $C$. Therefore, the angle $\omega$ between any two of these halflines satisfies, as we shall see in Theorem 4.5.1, $\cos\omega = -1/n$.


Theorem 4.4.6 A necessary and sufficient condition that an $n$-simplex $\Sigma$ with vertices $A_1,\dots,A_{n+1}$ in $E_n$ has the property that in a Euclidean $(n+1)$-dimensional space containing $E_n$ there exists a point $A$ such that the halflines $AA_1,\dots,AA_{n+1}$ can be completed by a halfline $AA_0$ into an $(n+2)$-star is that the squares of the lengths of the edges of $\Sigma$ fulfill (4.39) with $\alpha = -(n+1)\beta$ positive and the $t_i$ of the same sign.

Proof. Suppose such a point $A$ exists. Since the cosine of the angle $A_iAA_j$ is $-\tfrac{1}{n+1}$, the cosine theorem implies that, denoting $|AA_i|$ by $t_i$,

$$|A_iA_j|^2 = t_i^2 + t_j^2 + \frac{2}{n+1}\,t_it_j, \qquad (4.43)$$

whenever $i \neq j$. The converse is also easily established. □

Recalling the property (4.7) from Theorem 4.2.1, we also have

Theorem 4.4.7 A necessary and sufficient condition that an $n$-simplex is acute orthocentric is that the squares of the lengths of its edges fulfill (4.39) with $\alpha$ positive and $\beta = 0$ (and all the $t_i$ nonzero).

Remark 4.4.8 We did not complicate the theorems by allowing some of the $t_i$ to be negative. This case is, however, interesting, similarly as in Theorem 4.2.8 for the orthocentric $(n+2)$-tuple, in one case: if $\alpha = -(n+1)\beta$ is positive and $t_0$ satisfies

$$\sum_{i=0}^{n+1}\frac{1}{t_i} = 0.$$

In this case, the whole $(n+2)$-tuple behaves symmetrically and (4.43) holds for all $n+2$ points.

4.5 The regular n-simplex

For completeness, we list some well-known properties of the regular n-simplex, which has all edges of the same length.

Theorem 4.5.1 If all edges of the regular $n$-simplex $\Sigma$ have length one, then the radius of the circumscribed hypersphere is $\sqrt{\dfrac{n}{2(n+1)}}$, the radius of the inscribed hypersphere is $\dfrac{1}{\sqrt{2n(n+1)}}$, all the interior dihedral angles are equal to $\varphi$ with $\cos\varphi = 1/n$, and all edges are seen from the centroid at the angle $\psi$ satisfying $\cos\psi = -1/n$. All distinguished points, such as the centroid, the center of the circumscribed hypersphere, the center of the inscribed hypersphere, the Lemoine point, etc., coincide. The Steiner circumscribed ellipsoid coincides with the circumscribed hypersphere.


Proof. If $e$ is the vector with $n+1$ coordinates, all equal to one, then the Menger matrix of $\Sigma$ is

$$\begin{bmatrix} 0 & e^T\\ e & ee^T - I \end{bmatrix}.$$

Since

$$\begin{bmatrix} 0 & e^T\\ e & ee^T - I \end{bmatrix}\begin{bmatrix} \dfrac{2n}{n+1} & -\dfrac{2}{n+1}e^T\\[1ex] -\dfrac{2}{n+1}e & -\dfrac{2}{n+1}ee^T + 2I \end{bmatrix} = -2I_{n+2},$$

the values of the entries $q_{rs}$ of the extended Gramian of $\Sigma$ result. The radii then follow from the formulae in Corollary 1.4.13 and in Theorem 2.2.1, the angles from $\cos\varphi_{ik} = -q_{ik}/\sqrt{q_{ii}q_{kk}}$. The rest is obvious. □

Remark 4.5.2 It is clear that the regular $n$-simplex is hyperacute, even totally hyperacute, and orthocentric.


5 Further geometric objects

We begin with an involutory relationship within the class of all n-simplexes.

5.1 Inverse simplex

It is well known (e.g. [28]) that for every complex (or real) matrix $A$ (even not necessarily square), there exists a unique complex matrix $A^+$ with the properties

$$AA^+A = A, \qquad A^+AA^+ = A^+, \qquad (AA^+)^* = AA^+, \qquad (A^+A)^* = A^+A,$$

where $*$ means the conjugate transpose.

The matrix $A^+$ is called the Moore–Penrose inverse of $A$ and, together with $A$, clearly constitutes an involution in the class of all complex matrices:

$$(A^+)^+ = A.$$

In addition, the following holds:

Theorem 5.1.1 If $A$ is a (real) symmetric positive semidefinite matrix, then $A^+$ is also a symmetric positive semidefinite matrix, and the sets of vectors $x$ for which $Ax = 0$ and $A^+x = 0$ coincide. More explicitly, if

$$A = U\begin{bmatrix} D & 0\\ 0 & 0 \end{bmatrix}U^T,$$

where $U$ is orthogonal and $D$ is a nonsingular diagonal matrix, then

$$A^+ = U\begin{bmatrix} D^{-1} & 0\\ 0 & 0 \end{bmatrix}U^T.$$


Proof. This is a consequence of the well-known theorem ([28], p. 64) on the singular value decomposition. □
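A minimal numerical sketch of Theorem 5.1.1 (assuming numpy; the random matrix is illustrative): inverting only the nonzero eigenvalues of a symmetric positive semidefinite matrix reproduces its Moore–Penrose inverse.

import numpy as np

rng = np.random.default_rng(5)
B = rng.normal(size=(6, 4))
A = B @ B.T                                   # symmetric PSD of rank 4

w, U = np.linalg.eigh(A)
w_plus = np.zeros_like(w)
nz = w > 1e-10
w_plus[nz] = 1.0 / w[nz]
A_plus = (U * w_plus) @ U.T                   # invert the nonzero eigenvalues only

print(np.allclose(A_plus, np.linalg.pinv(A)))      # True
print(np.allclose(A @ A_plus @ A, A))              # defining property
print(np.allclose(A_plus @ A @ A_plus, A_plus))    # defining property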

We shall also use a result from [27]. Recall that in Chapter 1, Theorem 1.1.2, we saw that for $n$ ordered linearly independent vectors $u_1,\dots,u_n$ in a Euclidean space $E_n$ there exists another linearly independent ordered system of $n$ vectors $v_1,\dots,v_n$ such that the Gram matrix of the $2n$ vectors $u_i$, $v_i$ has the form

$$\begin{bmatrix} G(u) & I\\ I & G(v) \end{bmatrix},$$

and this matrix has rank $n$. These two ordered $n$-tuples of vectors were called biorthogonal bases in $E_n$.

We use the generalization of the biorthogonal system defined in [27]:

Theorem 5.1.2 Let $u_1,\dots,u_m$ be a system of $m$ vectors in $E_n$ with rank (maximum number of linearly independent vectors of the system) $n$. Then there exists a unique system of $m$ vectors $v_1,\dots,v_m$ in $E_n$ such that the Gram matrix of the $2m$ vectors $u_i$, $v_i$ has the form

$$G_1 = \begin{bmatrix} G(u) & P\\ P^T & G(v) \end{bmatrix},$$

where the matrix $P$ satisfies $P^2 = P$, $P = P^T$, and this matrix $G_1$ has rank $n$. In fact, we can find the matrix $P$ as follows: $P = I - R(R^TR)^{-1}R^T$, where $R$ is an arbitrary matrix whose columns are formed by a maximal number of linearly independent vectors $x$ satisfying $G(u)x = 0$; this means that these $x$'s are the vectors of the linear dependence relations among the vectors $u_i$.

Remark 5.1.3 The matrix $G(v)$ is then the Moore–Penrose inverse of $G(u)$. Moreover, the vectors $v_i$ fulfill the same linear dependence relations as the vectors $u_i$. We shall call the system of vectors $v_i$ the generalized biorthogonal system to the system $u_i$.

All these properties will be very useful for the Gramian $Q$ of an $n$-simplex $\Sigma$. In this case, we have a system of $n+1$ privileged vectors in $E_n$, namely the system of (normalized in a sense) outer normals whose Gram matrix is the Gramian $Q$.

Theorem 5.1.4 For an $n$-simplex $\Sigma$ with vertices $A_1,\dots,A_{n+1}$ and centroid $C$, there exists a unique $n$-simplex $\Sigma^{-1}$ such that for its vertices $B_1,\dots,B_{n+1}$ the following holds: the vectors $\overrightarrow{CB_i}$ form the generalized biorthogonal system to the vectors $\overrightarrow{CA_i}$.


We shall call this simplex $\Sigma^{-1}$ the inverse simplex of the simplex $\Sigma$.

Remark 5.1.5 The inverse simplex of the inverse simplex is clearly the original simplex.

Let us describe now the metric relations between the two simplexes using their characteristics, the Menger matrices and the Gramians.

The vectors $u_i = \overrightarrow{CA_i}$, $i = 1,\dots,n+1$, are the vectors of the medians, and they satisfy a single (linearly independent) relation $\sum_iu_i = 0$. Using the result in Theorem 5.1.2, we obtain

$$P = I - \frac{1}{n+1}J, \qquad (5.1)$$

where $J = ee^T$, $e = [1,\dots,1]^T$, so that $\langle u_i, v_j\rangle = \dfrac{n}{n+1}$ for $i = j$ and $\langle u_i, v_j\rangle = -\dfrac{1}{n+1}$ for $i \neq j$. Thus if $i, j, k$ are distinct indices, then $\langle u_i, v_j - v_k\rangle = 0$, which means that the vector $u_i$ is orthogonal to the $(n-1)$-dimensional face $\omega_i$ of the simplex $\Sigma^{-1}$ (opposite to the vertex $B_i$). It is thus the vector of the (as can be shown, outer) normal of $\Sigma^{-1}$.

Let us summarize that in a theorem:

Theorem 5.1.6 The vectors of the medians of a simplex are the outer normals of the inverse simplex. Also, the medians of the inverse simplex are the outer normals of the original simplex.

Remark 5.1.7 This statement does not specify the magnitudes of the simplexes. In fact, the unit of the space plays a role.

Let us return to the metric facts. Here, $P$ will again be the matrix from (5.1). We start with a lemma.

Lemma 5.1.8 Let $\mathcal{X}$ be the set of all $m\times m$ real symmetric matrices $X = [x_{ij}]$ satisfying $x_{ii} = 0$ for all $i$, and let $\mathcal{Y}$ be the set of all $m\times m$ real symmetric matrices $Y = [y_{ij}]$ satisfying $Ye = 0$. If $P$ is the $m\times m$ matrix $P = I - \tfrac{1}{m}ee^T$ as in (5.1), then the following are equivalent for two matrices $X$ and $Y$:

(i) $X \in \mathcal{X}$ and $Y = -\tfrac{1}{2}PXP \in \mathcal{Y}$;
(ii) $Y \in \mathcal{Y}$ and $x_{ik} = y_{ii} + y_{kk} - 2y_{ik}$ for all $i, k$;
(iii) $Y \in \mathcal{Y}$ and $X = ye^T + ey^T - 2Y$, where $y = [y_{11},\dots,y_{mm}]^T$.

In addition, if these conditions are fulfilled, then $X$ is a Menger matrix (for the corresponding $m$) if and only if the matrix $Y$ is positive semidefinite of rank $m-1$.

Proof. The conditions (ii) and (iii) are clearly equivalent. Now suppose that (i) holds. Then $Y \in \mathcal{Y}$. Define the vector $y = \tfrac{1}{m}\bigl(Xe - (\operatorname{tr}Y)e\bigr)$, where $\operatorname{tr}Y = \sum_iy_{ii}$ is the trace of the matrix $Y$. Then

$$ye^T + ey^T - 2Y = X;$$

since $x_{ii} = 0$, we have $y = [y_{11},\dots,y_{mm}]^T$, i.e. (iii). Conversely, assume that (iii) is true. Then $x_{ii} = 0$ for all $i$, so that $X \in \mathcal{X}$, and also $-\tfrac{1}{2}PXP = Y$, i.e. (i) holds.

To complete the proof, suppose that (i), (ii), and (iii) are fulfilled and that $X$ is a Menger matrix, so that $X \in \mathcal{X}$. If $z$ is an arbitrary vector, then $z^TYz = -\tfrac{1}{2}z^TPXPz$, which is nonnegative by Theorem 1.2.4 since $u = Pz$ fulfills $e^Tu = 0$.

Suppose conversely that $Y \in \mathcal{Y}$ is positive semidefinite. To show that the corresponding matrix $X$ is a Menger matrix, let $u$ satisfy $e^Tu = 0$. Then $u^TXu = u^T(ye^T + ey^T - 2Y)u$, which is $-2u^TYu$ and thus nonpositive. By Theorem 1.2.4, $X$ is a Menger matrix. □

For simplexes, we have the following result:

Theorem 5.1.9 Suppose that $A$, $B$ are the (ordered) sets of vertices of two mutually inverse $n$-simplexes $\Sigma$ and $\Sigma^{-1}$. Let $P = I - \tfrac{1}{n+1}J$. Then the Menger matrices $M_\Sigma$ and $M_{\Sigma^{-1}}$ satisfy the condition that the matrices $\tfrac{1}{2}PM_\Sigma P$ and $\tfrac{1}{2}PM_{\Sigma^{-1}}P$ are mutual Moore–Penrose inverse matrices.

Also, the Gramians $Q(\Sigma)$ and $Q(\Sigma^{-1})$ of the two $n$-simplexes are mutual Moore–Penrose inverse matrices, and

$$-\tfrac{1}{2}PM_{\Sigma^{-1}}P = Q(\Sigma), \qquad -\tfrac{1}{2}PM_\Sigma P = Q(\Sigma^{-1}).$$

The following relations hold between the entries $m_{ik}$ of the Menger matrix of the $n$-simplex $\Sigma$ and the entries $q_{ik}$ of the Gramian of the inverse simplex $\Sigma^{-1}$:

$$m_{ik} = q_{ii} + q_{kk} - 2q_{ik}, \qquad (5.2)$$

$$q_{ik} = -\frac{1}{2}\Bigl(m_{ik} - \frac{1}{n+1}\sum_jm_{ij} - \frac{1}{n+1}\sum_jm_{kj} + \frac{1}{(n+1)^2}\sum_{j,l}m_{jl}\Bigr). \qquad (5.3)$$

Analogous relations hold between the entries of the Menger matrix of the simplex $\Sigma^{-1}$ and the entries of the Gramian of $\Sigma$.

Proof. Denote by $u_i$ the vectors $A_i - C$, where $C$ is the centroid of the simplex $\Sigma$, and analogously let $v_i = B_i - C$. The $(i, k)$ entry $m_{ik}$ of the matrix $M_\Sigma$ is $\langle A_i - A_k, A_i - A_k\rangle$, which is $\langle A_i - C, A_i - C\rangle + \langle A_k - C, A_k - C\rangle - 2\langle A_i - C, A_k - C\rangle$. If now the $c_{ik}$ are the entries of the Gram matrix $G(u)$, then $m_{ik} = c_{ii} + c_{kk} - 2c_{ik}$. By (ii) of Lemma 5.1.8, $G(u) = -\tfrac{1}{2}PM_\Sigma P$. Analogously, $G(v) = -\tfrac{1}{2}PM_{\Sigma^{-1}}P$, which implies the first assertion of the theorem.

The second part is a direct consequence of Theorem 5.5.4, since both $G(v)$ and $Q(\Sigma)$ are the Moore–Penrose inverse of the matrix $-\tfrac{1}{2}PM_\Sigma P$. The remaining formulae follow from Lemma 5.1.8. □

Page 128: Graphs and Matrices in Geometry Fiedler.pdf

118 Further geometric objects

In the conclusion of this section, we shall extend the definition of the inversesimplex from Theorem 5.1.6 to the case that the number of points of thesystem is greater by more than one than the dimension of the system.

Definition 5.1.10 Let A = (A1, . . . , Am) be an ordered m-tuple ofpoints in the Euclidean point space En, and let C be the centroid of the sys-tem. If V = (v1, . . . , vm) is the generalized biorthogonal system of the systemU = (A1−C, . . . , Am−C), then we call the ordered system B = (B1, . . . , Bm),where Bi is defined by vi = Bi −C, the inverse point system of the system A.

The following theorem is immediate.

Theorem 5.1.11 The points of the inverse system fulfill the same lineardependence relations as the points of the original system. Also, the inversesystem of the inverse system is the original system.

Example 5.1.12 Let A1, . . . , Am, m ≥ 3, be points on a (Euclidean) lineE1, with coordinates a1, . . . , am, at least two of which are distinct. If e isthe unit vector of E1, then the centroid of the points is the point C withcoordinate c = 1

m

∑i ai and the vectors of the “medians” are (a1 − c)e, (a2 −

c)e, . . . , (am − c)e. The Gram matrix G is thus the m×m matrix [gij ], wheregij = (ai − c)(aj − c). It has rank one and it is easily checked that the matrixG1 from Theorem 5.1.2 is

G1 =[

G ωG

ωG ω2G

],

where ω is such that the matrix ωG satisfies (ωG)2 = ωG; thus ω is easilyseen to be [

∑i(ai − c)2]−1. This means that the inverse set is obtained from

the original set by extending it (or, diminishing it) proportionally from thecentroid by the factor ω.

Let us perform this procedure in usual cartesian coordinates. Denote by Ythe m × n matrix [aip], i = 1, . . . ,m, p = 1, . . . , n; the ith row is formed bythe cartesian coordinates of the vector ui = Ai − C in En. Since Σui = 0

eTY = 0.

Let K be the matrix K = 1mY TY .1

Then form the system of hyperquadrics

xTK−1x− ρ = 0,

where x is the column vector of cartesian variable coordinates and ρ is apositive parameter. Such a hyperquadric (since K is positive semidefinite,

1 The matrix K occurs in multivariate factor analysis when the points Ai correspond tomeasurements; it is usually called the covariance matrix.

Page 129: Graphs and Matrices in Geometry Fiedler.pdf

5.2 Simplicial cones 119

it is an ellipsoid, maybe degenerate) can be brought to the simplest form,so-called principal axes form, by transforming K to the diagonal form using(vii) of Theorem A.1.37. The eigenvectors of K correspond to the vectorsof the principal axes; the eigenvalues are proportional to the lengths of thehalfaxes of the ellipsoid.

The eigenvalues of the matrix K = 1mY

TY are at the same time also thenonzero eigenvalues of the matrix 1

mY Y T , i.e. of the matrix 1

mG(u), where

G(u) is the Gram matrix of the system ui.If thus B1, . . . , Bm form the generalized point biorthogonal system to the

system A1, . . . , Am, and K is a similarly formed covariance matrix of the sys-tem Bi, then the eigenvalues of the matrix K are the nonzero eigenvalues of1m (G(u))+. For positive definite matrices, the Moore–Penrose inverses havemutually reciprocal nonzero eigenvalues and common eigenvectors. For ourcase, it means that the systems of elliptic quadrics of the systems Ai and Bi

have the same systems of principal axes and the lengths of the halfaxes are (upto constant nonzero multiples) reciprocal. We shall show that if m = n+1, theSteiner ellipsoids belong to the system of the just-mentioned elliptic hyper-quadrics. Indeed, the matrix U = Y K−1Y T has then rank n, and satisfiesU = UT , U2 = U , and also Ue = 0. Therefore, U = I − 1

n+1J , which means

that the diagonal entries of U are mutually equal to nn+1 . It follows that the

hyperquadric of the system corresponding to ρ = nn+1 contains all the points

Ai, and is thus the Steiner circumscribed ellipsoid.

5.2 Simplicial cones

Let us start with a theorem which completes the theory of biorthogonal bases(Appendix, Theorem A.1.47)

Theorem 5.2.1 Let a1, . . . , an, b1, . . . , bn, n ≥ 2 be biorthogonal bases in En,n ≥ 2, and 1 ≤ k ≤ n− 1. Then the biorthogonal completion of a1, . . . , ak inthe corresponding Ek is b1, . . . , bk, where, for each j, 1 ≤ j ≤ k

bj = bj − [〈bj , bk+1〉, 〈bj , bk+2〉, . . . , 〈bj , bn〉]G−1[bk+1, . . . , bn]T , (5.4)

where G is the Gram matrix G(bk+1, . . . , bn); in other words, bj is the orthog-onal projection of bj on Ek along the linear space spanned by bk+1, . . . , bn.

Proof. It is clear that each vector bj is orthogonal to every vector ai

for i �= j, i = 1, . . . , k. Also, 〈aj , bj〉 = 1. It remains to prove thatbj ∈ Ek for all j = 1, . . . , k. It is clear that Ek is the set of all vec-tors orthogonal to all vectors bk+1, . . . , bn. If now i satisfies k + 1 ≤ i ≤n, then denote G−1[〈bk+1, bi〉, . . . , 〈bn, bi〉]T = v, or equivalently Gv =[〈bk+1, bi〉, . . . , 〈bn, bi〉]T . Thus v is the vector with all zero entries except the

Page 130: Graphs and Matrices in Geometry Fiedler.pdf

120 Further geometric objects

one with index i, equal to one. It follows by (5.4) that bj is orthogonal to allthe bis so that bj ∈ Ek and it is the orthogonal projection on Ek. �

Now let a1, . . . , am be linearly independent vectors in En. The cone gen-erated by these vectors is the set of all nonnegative linear combinations∑m

i=1 λiai, λi ≥ 0 for all i. We then say that a set C in En is a simpli-cial cone if there exist n linearly independent vectors which generate C. Toemphasize the dimension, we speak sometimes about a simplicial n-cone.

Theorem 5.2.2 Any simplicial cone C has the following properties:

(i) The elements of C are vectors.(ii) C is a convex set, i.e. if u ∈ C, v ∈ C, then αu + βv ∈ C, whenever α

and β are nonnegative numbers satisfying α+ β = 1.(iii) C is a nonnegatively homogeneous set, i.e. if u ∈ C, then λu ∈ C,

whenever λ is a nonnegative number.

Proof. Follows from the definition. �We can show that the zero vector is distinguished (the vertex of the cone)

and each vertex halfline, i.e. the halfline, or, ray generated by one of the vectorsai, is distinguished. Therefore, the cone C is uniquely determined by unitgenerating vectors ai = ai/

√〈ai, ai〉. If these vectors are ordered, the numbers

γ1, . . . , γn uniquely assigned to any vector v of En in v = γ1a1 + . . . + γnan

will be called spherical coordinates of the vector v. Analogous to the simplexcase, the (n− 1)-dimensional faces ω1, . . . , ωn of the cone can be defined; theface ωi is either the cone generated by all the vectors aj for j �= i, or thecorresponding linear space. Then, these hyperplanes will be called boundaryhyperplanes.

There is another way of determining a simplicial n-cone. We start withn linearly independent vectors, say b1, . . . , bn and define C as the set of allvectors x satisfying 〈bi, x〉 ≥ 0 for all i. We can however show the following:

Theorem 5.2.3 Both definitions of a simplicial n-cone are equivalent.

Proof. In the first definition, let c1, . . . , cn be vectors which complete the aisto biorthogonal bases. If a vector in the cone has the form x =

∑ni=1 λiai,

then for every i, 〈ci, x〉 = λi, and is thus nonnegative. Conversely, if a vectory satisfies 〈ci, y〉 ≥ 0, then y has the form

∑ni=1〈ci, y〉ai.

If, in the second definition, d1, . . . , dn are vectors completing the bis tobiorthogonal bases, then we can similarly show that the vectors of the form∑n

i=1 λidi, λi ≥ 0, completely characterize the vectors satisfying 〈bi, x〉 ≥ 0for all i. �

Remark 5.2.4 In the second definition, the expression 〈bi, x〉 = 0 representsthe equation of a hyperplane ωi in En and 〈bi, x〉 ≥ 0 the halfspace in En

Page 131: Graphs and Matrices in Geometry Fiedler.pdf

5.2 Simplicial cones 121

with boundary in that hyperplane. We can thus say that the simplicial n-coneC is the intersection of n halfspaces, the boundary hyperplanes of which arelinearly independent. In fact, if we consider the vector ai in the first definitionas an analogy of the vertices of a simplex, the boundary hyperplanes justmentioned form an analogy of the (n− 1)-dimensional faces of the n-simplex.

We thus see that to the simplicial n-cone C generated by the vectors ai, wecan find the cone C generated by the normals bi to the (n − 1)-dimensionalfaces. By the properties of biorthogonality, the repeated construction leadsback to the original cone. We call the second cone the polar cone to the first.

Here, an important remark is in order.

Remark 5.2.5 To every simplicial n-cone in En generated by the vectorsa1, . . . , an, there exist further 2n − 1 simplicial n-cones, each of which is gen-erated by the vectors ε1a1, ε2a2, . . . , εnan, where the epsilons form a systemof ones and minus ones. These will be called conjugate n-cones to C.

The following is easy to prove:

Theorem 5.2.6 The polar cones of conjugate cones of C are conjugates ofthe polar cone of C.

We now intend to study the metric properties of simplicial n-cones. First,the following is important:

Theorem 5.2.7 Two simplicial n-cones generated by vectors ai, . . . , an andai, . . . , an are congruent (in the sense of Euclidean geometry, i.e. there existsan orthogonal mapping which maps one into the other) if and only if thematrices (of the cosines of their angles)[

〈ai, aj〉√〈ai, ai〉〈aj , aj〉

](5.5)

and [〈ai, aj〉√

〈ai, ai〉〈aj , aj〉

]differ only by permutation of rows and columns.

Proof. It is obvious that if the second cone is obtained by an orthogo-nal mapping from the first, then the angles between the mapped vertexhalflines coincide so that the matrices are equal, or permutation equivalent, ifrenumbering of the vertex halflines is necessary.

To prove the converse, suppose that the matrices are equal, perhaps afterrenumbering. Then the Gram matrices G = [〈ai, aj〉] and G = [〈ai, aj〉] sat-isfy G = DGD for some diagonal matrix D with positive diagonal entriesd1, . . . , dn. Define a mapping U which assigns to a vector x of the form

Page 132: Graphs and Matrices in Geometry Fiedler.pdf

122 Further geometric objects

x =∑n

i=1 λiai the vector Ux =∑n

i=1 λid−1i ai. Since for any two vectors

x of this form and y =∑n

i=1 μid−1i ai

〈Ux,Uy〉 =⟨ n∑

i=1

λid−1i ai,

n∑j=1

μjd−1j aj

⟩=∑i,j

λiμjd−1i d−1

j 〈ai, aj〉 =∑i,j

λiμj〈ai, aj〉

=⟨ n∑

i=1

λiai,n∑

j=1

μjaj

⟩= 〈x, y〉,

the mapping U is orthogonal and maps the first cone onto the second. �It follows that the matrix (5.5) determines the geometry of the cone. We

shall call it the normalized Gramian of the n-cone.

Theorem 5.2.8 The normalized Gramian of the polar cone is equal to theinverse of the normalized Gramian of the given simplex, multiplied by a diag-onal matrix D with positive diagonal entries from both sides in such a waythat the resulting matrix has ones along the diagonal.

Proof. This follows from the fact that the vectors ai and bi form – up topossible multiplicative factors – a biorthogonal system so that Theorem A.1.45can be applied. �

This can also be more explicitly formulated as follows:

Theorem 5.2.9 If G(C) and G(C) are the normalized Gramians of the coneC and the polar cone C, respectively, then there exists a diagonal matrix Dwith positive diagonal entries such that

G(C) = D[G(C)]−1D.

In other words, the matrix [G(C) D

D G(G)

](5.6)

has rank n. The matrix D has all diagonal entries smaller than or equal toone; all these entries are equal to one if and only if the generators of C aretotally orthogonal, i.e. if any two of them are orthogonal.

Proof. The matrix (5.6) is the Gram matrix of the biorthogonal system nor-malized in such a way that all vectors are unit vectors. Since the ith diagonalentry di of D is the cosine of the angle between the vectors ai and bi, the lastassertion follows. �

Page 133: Graphs and Matrices in Geometry Fiedler.pdf

5.2 Simplicial cones 123

Remark 5.2.10 The matrix D in Theorem 5.2.9 will be called the reductionmatrix of the cone C. In fact, it is at the same time also the reduction matrixof the polar cone C. The diagonal entries di of D will be called reductionparameters. We shall find their geometric properties later (in Corollary 5.2.13).

The simplicial n-cone has, of course, not only (n−1)-dimensional faces, butfaces of all dimensions between 1 and n− 1. Such a face is simply generatedby a subset of the generators and also forms a simplicial cone. Its Gramian isa principal submatrix of the Gramian of the original n-cone. We can thus askabout the geometric meaning of the corresponding polar cone of the face andits relationship to the polar cone of the original simplex. By Theorem 5.2.1we obtain the following.

Theorem 5.2.11 Suppose that a simplicial n-cone C is generated by the vec-tors a1, . . . , an and the polar cone C by the vectors b1, . . . , bn which completethe ais to biorthogonal bases. Let F be the face of C generated by a1, . . . , ak,1 ≤ k ≤ n − 1. Then the polar cone F of F is generated by the orthogonalprojections of the first k vertex halflines of C on the linear space spanned bythe vectors a1, . . . , ak.

There are some distinguished halflines of the n-cone C. In the followingtheorem, the spherical distance of a halfline h from a hyperplane is the smallestangle the halfline h spans with the halflines in the hyperplane.

Theorem 5.2.12 There is exactly one halfline h, which has the same spher-ical distance ϕ to all generating vectors of the cone C. The positivelyhomogeneous coordinates of h are given by G(C)−1e, where e = [1, 1, . . . , 1]T

and G(C) is the matrix (5.5). The angle is

ϕ = arccos1√

eT [G(C)]−1e. (5.7)

The halfline h also has the property that it has the same spherical distance toall (n− 1)-dimensional faces of the polar cone. This distance is

ψ = arcsin1√

eT [G(C)]−1e, 0 < ψ <

π

2. (5.8)

Proof. Suppose that a unit vector c spans with each of the unit vectors ai thesame acute angle ϕ. Then cosϕ = 〈ai, c〉 for all i.

If c =∑

i γiai, and γ = [γ1, . . . , γn]T , we obtain

G(C)γ = e cosϕ, (5.9)

so that indeed γ is a positive multiple of G(C)−1e. The converse also holds.

Page 134: Graphs and Matrices in Geometry Fiedler.pdf

124 Further geometric objects

Since c is a unit vector, we obtain by (5.9) that

1 = 〈c, c〉=∑i,k

γiγk〈ai, ak〉

= γTG(C)γ

= eT [G(C)−1]e cos2 ϕ

so that (5.7) holds. Since c spans with ai the acute angle ϕ, it spans withthe (n− 1)-dimensional face of the polar cone, generated by all the bj , i �= j,the same angle complementing ϕ to π

2 . Thus the spherical distance ψ of hto the faces of the polar cone is the same and satisfies (5.8). �

We could thus speak about the circumscribed circular cone with axis h ofthe cone C and about the inscribed circular cone, in this case of the polar coneC. Of course, we can exchange the roles of C and C. If C is a general circularcone, we shall call the angle between the axis of the cone and any boundaryvector the opening angle of the cone. We summarize in the following result:

Corollary 5.2.13 Let C be a simplicial n-cone with Gramian G(C). Thenthe opening angle of the circumscribed circular cone of C is

ϕ = arccos1√

eT [G(C)]−1e,

and the axis is generated by the vector c =∑

k γkak for γ = [γ1, . . . , γn]T

satisfying

γ = [G(C)]−1e.

The opening angle of the inscribed circular cone of C is

ρ = arcsin1√

eTD−1G(C)D−1e, (5.10)

where D is the reduction matrix of C, and the axis is generated by the vector

v =∑

k

d−1k ak, (5.11)

where the dks are the reduction parameters.

Proof. The first part is in Theorem 5.2.12. Since the normalized Gramian ofthe polar cone C is D[G(C)]−1D and the polar cone of C is C, (5.8) appliedto C yields (5.10) and (5.11). �

Remark 5.2.14 However, one must bear in mind that there exist circum-scribed and inscribed cones of the conjugate cones, too. Thus there existin general 2n circumscribed circular cones in this more general sense (or,

Page 135: Graphs and Matrices in Geometry Fiedler.pdf

5.2 Simplicial cones 125

2n−1 double-cones), and the same number of inscribed circular cones ordouble-cones.

Remark 5.2.15 For the purpose of this section, we introduce two notions.If C is an n-cone and H a hyperplane which has a proper point in commonwith every generating ray of C, distinct from the vertex of C, then theseintersection points and the vertex of C form vertices of an n-simplex. Wethen call it the cut-off n-simplex of C for obvious reasons. Conversely if Σ isan n-simplex and V is its vertex, then we call the vertex-cone of Σ at V then-cone generated by the vectors of the edges of Σ starting at V .

We can now prove a theorem which will enable us to apply the results inthe previous chapters.

Theorem 5.2.16 For every simplicial n-cone C, there exists a cut-offn-simplex of C such that all interior dihedral angles of the simplex at theadded boundary hyperplane are acute.

Proof. Suppose that a1, . . . , an are the unit generators of C, and G(a) is itsGramian. By a well-known theorem (Appendix, Corollary A.3.13), there existsa positive vector c = [ci]T such that G(a)c = u is positive as well. Define now ahalfspace H+ by mini ci −〈

∑i ciai, x〉 ≥ 0. The boundary hyperplane H cuts

each of the halflines λak, λ > 0, in the point determined by λ = mini ci/ui,where ui is the ith coordinate of u. The intersection C ∩ H+ is thus an n-simplex Σ, and the vector bn+1 =

∑i ciai is the (n+ 1)th outer normal, with

the remaining normals being −b1, . . . ,−bn. Since the inner product betweenbn+1 and each outer normal −bk is −ck, i.e. negative, all the interior anglesin Σ at H are acute. �

In the following, assign to the cone C a cut-off n-simplex Σ by choosing thevertices of Σ as points on the generating halflines in a unit distance from thevertex of C denoted as An+1. Now we can define the isogonal correspondenceamong nb-halflines of C, i.e. halflines not contained in any (n−1)-dimensionalface of C, as follows. If h1 is such a halfline, choose on h1 a point P not con-tained in the (n−1)-dimensional face ωn+1 of Σ opposite to An+1. By Theorem2.2.10, there exists a unique point Q which is isogonally conjugate to P withrespect to the simplex Σ. We then call the halfline h2 originating at An+1 andcontaining Q, the isogonally conjugate halfline to h1. It is immediate fromTheorem 2.2.4 that the isogonally conjugate halfline to h2 with respect toΣ is h1. Let us show that h2 is independent of the choice of the simplex Σ.Indeed, as was proved in Theorem 2.2.11, the point Q is the center of a hyper-sphere circumscribed to the n-simplex with vertices in n points symmetric toP with respect to each of the (n − 1)-dimensional faces of C, together withthe point Z symmetric to P with respect to ωn+1. Since An+1 has the samedistance to all the mentioned n points, the line An+1Q containing h2 is the

Page 136: Graphs and Matrices in Geometry Fiedler.pdf

126 Further geometric objects

locus of all points having the same distance to these n points, and this linedoes not depend on the position of Z. We thus proved the following:

Theorem 5.2.17 The isogonally conjugate halfline h2 to the halfline h1 is thelocus of the other foci of rotational hyperquadrics which are inscribed into thecone C and have one focus at h1. In addition, whenever we choose two (n−1)-dimensional faces F1 and Fj of C, then the two hyperplanes in the pencilα1F1 + α2F2 (in the clear sense) passing through h1 and h2 are symmedians,i.e. they are symmetric with respect to the axes of symmetry of F1 and F2.

Remark 5.2.18 The case that the two halflines coincide clearly happens ifand only if this rotational hyperquadric is a hypersphere. The halflines thencoincide with the axis of the inscribed circular cone of C (cf. Theorem 5.2.12).

Observe that the isogonal correspondence between the halflines in C is atthe same time a correspondence between two halflines in the polar cone C,namely such that they are not nb-halflines with respect to C, which meansthat they are not orthogonal to any (n− 1)-dimensional face of C. If we nowexchange the role of C and C, we obtain another correspondence in C betweenhalflines of C not orthogonal to any (n−1)-dimensional face of C. In this case,two such halflines coincide if and only if they coincide with the halfaxis of thecircumscribed circular cone of C.

The notion of hyperacute simplexes plays an analogous role in simplicialcones. A simplicial n-cone C generated by the vectors ai is called hyperacuteif none of the angles between the (n− 1)-dimensional faces of C is obtuse. ByTheorem 5.2.16, we then have:

Theorem 5.2.19 If C is a hyperacute n-cone, then there exists a cut-offn-simplex of C which is also hyperacute.

By Theorem 3.3.1, we immediately obtain:

Corollary 5.2.20 Every face (with dimension at least two) of a hyperacuten-cone is also hyperacute.

The property of an n-cone to be hyperacute is, of course, equivalent tothe condition that for the polar cone 〈bi, bj〉 ≤ 0 for all i, j. To formulateconsequences, it is advantageous to define an n-cone as hypernarrow (respec-tively, hyperwide) if any two angles between the generators are acute or right(respectively, obtuse or right). We then have:

Theorem 5.2.21 An n-cone is hyperacute if and only if its polar cone ishyperwide.

By Theorem 5.2.19, the following holds:

Theorem 5.2.22 A hyperacute n-cone is always hypernarrow.

Page 137: Graphs and Matrices in Geometry Fiedler.pdf

5.2 Simplicial cones 127

Remark 5.2.23 Of course, the converse of Theorem 5.2.22 does not holdfor n ≥ 3.

Analogously, we can say that a simplicial cone generated by the vectors ai ishyperobtuse if none of the angles between ai and aj is acute.

The following are immediate:

Theorem 5.2.24 If a simplicial cone C is hypernarrow (respectively, hyper-wide), then every face of C is hypernarrow (respectively, hyperwide) aswell.

Theorem 5.2.25 If a simplicial cone is hyperwide, then its polar cone ishypernarrow.

Also, the following is easily proved.

Theorem 5.2.26 The angle spanned by any two rays in a hypernarrow coneis always either acute or right.

Analogous to the point case, we can define orthocentric cones. First, we definethe altitude hyperplane as the hyperplane orthogonal to an (n−1)-dimensionalface and passing through the opposite vertex halfline. An n-cone C is thencalled orthocentric if there exist n altitude hyperplanes which meet in a line;this line will be called the orthocentric line.

Remark 5.2.27 The altitude hyperplane need not be uniquely defined if oneof the vertex halflines is orthogonal to all the remaining vertex halflines. Itcan even happen that each of the vertex halflines is orthogonal to all theremaining ones. Such a totally orthogonal cone is, of course, also consideredorthocentric. We shall, however, be interested in cones – we shall call themusual – in which no vertex halfline is orthogonal to any other vertex halfline,thus also not to the opposite face, and for simplicity, that the same holds forthe polar cone. Observe that such a cone has the property that the polar conehas no vertex halfline in common with the original cone.

Theorem 5.2.28 Any usual simplicial 3-cone is orthocentric.

Proof. Suppose that C is a usual 3-cone generated by vectors a1, a2, and a3.Let b1, b2, and b3 complete the generating vectors to biorthogonal bases. Weshall show that the vector

s =1

〈a2, a3〉b1 +

1〈a3, a1〉

b2 +1

〈a1, a2〉b3

generates the orthocentric line. Let us prove that the vector s is a linearcombination of each pair ai, bi, for i = 1, 2, 3. We shall do that for i = 1. If thesymbol [x, y, z], where x, y, and z are vectors in E3, means the 3×3 matrix of

Page 138: Graphs and Matrices in Geometry Fiedler.pdf

128 Further geometric objects

cartesian coordinates of these vectors form the product [a1, a2, a3]T [b1, a1, s].We obtain for the determinants

det[a1, a2, a3]T det[b1, a1, s] = det

⎡⎣ 〈a1, b1〉 〈a1, a1〉 〈a1, s〉〈a2, b1〉 〈a2, a1〉 〈a2, s〉〈a3, b1〉 〈a3, a1〉 〈a3, s〉

⎤⎦ .The determinant on the right-hand side is

det

⎡⎢⎣ 1 〈a1, a1〉 1〈a2,a3〉

0 〈a2, a1〉 1〈a1,a3〉

0 〈a3, a1〉 1〈a2,a1〉

⎤⎥⎦ ,which is zero. The same holds for i = 2 and i = 3. Thus, the plane containinglinearly independent vectors a1 and s contains also the vector b1 so that itis orthogonal to the plane generated by a2 and a3. It is an altitude planecontaining a1. Therefore, s is the orthocentric line. �

Returning to Theorem 4.2.4, we can prove the following:

Theorem 5.2.29 A usual n-cone, n ≥ 3, is orthocentric if and only if itsGramian has d-rank one. The polar cone is then also orthocentric.

Proof. For n = 3, the result is correct by Theorem 5.2.28. Suppose the d-rankof the usual n-cone C is one and n > 3. The Gramian G(C) thus has the formG(C) = D + μuuT , where D is a diagonal matrix, u is a column vector withall entries ui different from zero, and μ is a real number different from zero.We shall show that the line s generated by the vector

v =∑

i

uibi (5.12)

satisfying 〈v, ai〉 = ui is then contained in all the two-dimensional planes Pi,each generated by the vectors ai and bi, i = 1, . . . , n, where the bis completethe ais to biorthogonal bases. Indeed, let k ∈ {1, . . . , n}, and multiply thenonsingular matrix A = [a1, . . . , an]T by the n× 3 matrix Bk = [bk, ak, v]. Weobtain

ABk =

⎡⎢⎢⎢⎣〈a1, bk〉 〈a1, ak〉 〈a1, v〉]〈a2, bk〉 〈a2, ak〉 〈a2, v〉]

......

...〈an, bk〉 〈an, ak〉 〈an, v〉]

⎤⎥⎥⎥⎦ .This matrix has rank two since in the first column there is just one entry,the kth, different from zero, and the second column is – except for the kthentry – a multiple of u, and the same holds for the third column. Thus s iscontained in each altitude hyperplane containing a vertex line and orthogonal

Page 139: Graphs and Matrices in Geometry Fiedler.pdf

5.2 Simplicial cones 129

to the opposite (n − 1)-dimensional face; it is an orthocentric line of C, andthe only one since ak and bk are for each k linearly independent.

Conversely, take in an n-cone C, a line s to be an orthocentric line generatedby a nonzero vector w. Then w is contained in every plane Pk generated by thepair ak and bk as above. Thus the corresponding n×3-matrix Bk = [bk, ak, w]has rank two for each k, which implies, after multiplication by A as above,that each off-diagonal entry 〈ai, ak〉 is proportional to 〈ai, w〉, i �= k. Since weassume that C is usual and n ≥ 3, it cannot happen that the last column hasall entries except the kth equal to zero. In such a case, w would be proportionalto bk, and bk would have to be linearly dependent on aj and bj for some j �= k.The inner product of ai, where i is different from both j and k, with aj wouldthen be equal to zero. Therefore, there exist constants ck different from zerosuch that 〈ai, ak〉 = ck〈ai, w〉 for all i, k, i �= k. Thus the d-rank of G(C) isone.

The fact that the polar cone is also orthocentric follows from the fact thatthe orthocentric line is symmetrically defined with respect to both parts ofthe biorthogonal bases ai and bi. �

Theorem 5.2.30 If a usual cone C is orthocentric, then every face of C isalso orthocentric.

Proof. It follows from the fact that the properties of being usual and beingGramian of d-rank one are hereditary. �

The change of signs of the generators ai does not change the property thatthe d-rank is one, and the usual property of a cone remains. Therefore, wehave also

Theorem 5.2.31 If a usual cone is orthocentric, then all its conjugate conesare orthocentric as well. The orthocentric lines of the conjugate cones arerelated to the orthocentric line of the original n-cone by conjugacy, in thiscase by multiplication of some of the coordinates with respect to the basis biby minus one.

Proof. The first part follows from the fact that the change of signs of the gene-rators ai does not change the property that the d-rank is one, and also theproperty of the cone to be usual remains. The second part is a consequenceof (5.12). �

Remark 5.2.32 The existence of an orthocentric n-cone for any n ≥ 3follows from the fact that for every positive definite n× n matrix there existn linearly independent vectors in En whose Gram matrix is the given matrix.If we choose a diagonal matrix D with positive diagonal entries and a columnvector u with positive entries, the matrix D + uuT will certainly correspondto a usual orthocentric cone.

Page 140: Graphs and Matrices in Geometry Fiedler.pdf

130 Further geometric objects

Analogous to the properties of the orthocentric n-simplex, the followingholds:

Theorem 5.2.33 Suppose that a1, . . . , an generate a usual orthocentric n-cone with orthocentric line generated by the vector a0. Then each n-conegenerated by any n-tuple from the vectors a0, a1, . . . , an is usual and ortho-centric and the remaining vector generates the corresponding orthocentricline.

Proof. Let C be the usual cone generated by a1, . . . , an, and let a0 be a gen-erator of the orthocentric line. We shall show that whenever we choose ar

and as, r �= s, r, s ∈ {0, 1, . . . , n}, then there is a nonzero vector vrs in theplane Prs generated by ar and as, which is orthogonal to the hyperplane Hrs

generated by all ats, r �= t �= s. If one of the indices r, s is zero, the assertionis true: if the other index is, say, one, the vector b1 is linearly dependent ona0 and a1. Thus let both indices r, s be different from zero; let, say, r = 1,s = 2. The nonzero vector v12 = 〈b2, a0〉b1 − 〈b1, a0〉b2 is orthogonal to a0, a3,. . ., an, thus to H12. Let us show that v12 ∈ P12. We know that a0 is a linearcombination both of a1, b1 and of a2, b2: a0 = α1a1 +β1b1, a0 = α2a2 +β2b2.Thus 〈b2, a0〉 = β1〈b1, b2〉, 〈b1, a0〉 = β2〈b1, b2〉, α1a1 − α2a2 = β2b2 − β1b1 sothat

1〈b1, b2〉

v12 = α2a2 − α1a1.

�Observe that the notion of the vertex-cone defined in Remark 5.2.15 enables

us to formulate a connection between orthocentric simplexes and orthocentriccones.

Theorem 5.2.34 If an orthocentric n-simplex is different from theright, then its every vertex-cone is orthocentric and usual. In addition, theline passing through the vertex and the orthocenter is then the orthocentricline of the vertex cone.

We could expect that the converse is also true. In fact, it is true for so-called acute orthocentric cones, i.e. orthocentric cones all of whose interiorangles are acute. In this case, there exists an orthocentric ray (as a part of theorthocentric line) which is in the interior of the cone.

Then, the following holds:

Theorem 5.2.35 If C is a usual acute orthocentric n-cone, then there existsa cut-off n-simplex of C which is acute orthocentric and different from theright.

Proof. Choose a proper point P on the orthocentric ray of C, different fromthe vertex V of C. If ai is a generating vector of C, and bi is the corresponding

Page 141: Graphs and Matrices in Geometry Fiedler.pdf

5.3 Regular simplicial cones 131

vector of the biorthogonal basis, then by the proof of Theorem 5.2.29, P , ai,and bi are in a plane. Since ai and bi are linearly independent, there existson the line V + λai, a point Xi of the form Xi = P + ξbi. Let us show thatλ is positive. We have P − V = λai − ξbi. Let j �= i be another index. Then〈P − V, aj〉 ≥ 0 by Theorem 5.2.26 so that λ〈ai, aj〉 ≥ 0. Since λ cannot bezero, it is positive. Thus Xi is on the ray V +λai for λ positive, and this holdsfor all i. The points Xi, together with the vertex V , form vertices of the acuteorthocentric n-simplex. �

We can ask now what happens if we have more generators of a cone in En

than n. As before, the cone will be defined as the set of all nonnegative linearcombinations of the given vectors. We shall suppose that this set does notcontain a line and has dimension n in the sense that it is not contained in anyEk with k < n. The resulting cone will then be called simple. The followingis almost immediate:

Theorem 5.2.36 Let S = {a1, . . . , am} be a system of vectors in En. ThenS generates a simple cone C if and only if

(i) C is n-dimensional;(ii) the zero vector can be expressed as a nonnegative combination of the vec-

tors ai, thus in the form∑

i αiai, where all the αs are nonnegative, onlyif all the αs are zero.

Another situation can occur if at least one of the vectors ai is itself anonnegative combination of other vectors of the system. We then say thatsuch vector is redundant. A system of vectors in En will be called, for themoment, pure if it generates a simple cone and none of the vectors of thesystem is redundant. Returning now to biorthogonal systems, we have:

Theorem 5.2.37 Let S be a pure system of vectors in En. Then the systemS which is the biorthogonal system to S is also pure.

Proof. This follows from the fact that the vectors of the system S satisfy thesame linear relations as the corresponding vectors in S. �

5.3 Regular simplicial cones

For completeness, we add a short section on simplicial cones which possess theproperty that the angles between any two generating halflines are the same.We shall call such cones regular. The sum of the generating unit vectors willbe called the axis of the cone, the common angle of the generating vectors willbe the basic angle, and the angle between the axis and the generating vectorswill be the central angle.

Page 142: Graphs and Matrices in Geometry Fiedler.pdf

132 Further geometric objects

Theorem 5.3.1 Suppose that Σ is a regular simplicial n-cone. Denote by αits basic angle, and by ω its central angle. Then the polar cone Σ′ of Σ is alsoregular and the following relations hold between its basic angle α′ and centralangle ω′

cosα ≤ 1n,

cos2 ω =1n

(1 − (n− 1) cosα),

cosα′ = − cosα1 − (n− 2) cosα

,

cos2 ω′ =1 − cos2 ω

1 + n(n− 2) cos2 ω.

The common interior angle ϕ between any two faces of Σ satisfies, of course,ϕ = π − α′, and similarly for the polar cone, ϕ′ = π − α.

Proof. The Gram matrices of the cones Σ and Σ′ have the form I − kE andI−k′E, where E is the matrix of all ones and, since these matrices have to bepositive definite and mutual inverses, k < 1

n , k′ = − k1−kn . Thus cosα = k

1−k ;if the ais are the unit generating vectors of Σ, c =

∑i ai is the generating

vector of the axis. Then 〈ai, ai〉 = 1 − k, 〈ai, aj〉 = −k, cos2 ω = (〈c,ai〉)2〈c,c〉〈ai,ai〉 .

Simple manipulations then yield the formulae above. �

Remark 5.3.2 It is easily seen that a regular cone is always orthocentric; itsorthocentric line is, of course, the axis.

Remark 5.3.3 Returning to the notion of simplexes with a principal point,observe that every n-simplex with principal point for which the coefficient α ispositive can be obtained by choosing a regular (n+1)-cone C and a hyperplaneH not containing its vertex. The intersection points of the generating halflinesof C with H will be the vertices of such an n-simplex.

5.4 Spherical simplexes

If we restrict every nonzero vector in En to a unit vector by appropriate multi-plication by a positive number, we obtain a point on the unit sphere Sn in En.In particular, to a simplicial n-cone there corresponds a spherical n-simplexon Sn. For n = 3, we obtain a spherical triangle. In the second definition,we have to use hemispheres instead of halfspaces, so that the given n-simplexcan be defined as the intersection of the n hemispheres, each containing n− 1

Page 143: Graphs and Matrices in Geometry Fiedler.pdf

5.4 Spherical simplexes 133

of the given points on the boundary and the remaining point as an interiorpoint.

Such a general hemisphere corresponds to a unit vector orthogonal to theboundary hyperplane and contained in the hemisphere, so-called polar vec-tor, and conversely, to every unit vector, or, a point on the hypersphere Sn,

corresponds to a unique polar hemisphere described. It is immediate that thepolar hemisphere corresponding to the unit vector u coincides with the set ofall unit vectors x satisfying 〈u, x〉 ≥ 0. We can then define the polar sphericaln-simplex Σ to the given spherical n-simplex Σ generated by the vectors ai asthe intersection of all the polar hemispheres corresponding to the vectors ai.

It is well known that the spherical distance of two points a, b can be definedas arccos |〈a, b〉| and this distance satisfies the triangular inequality amongpoints in a hemisphere. It is also called the spherical length of the sphericalarc ab. By Theorem 5.2.7, the spherical simplex is by the lengths of all thearcs aiaj determined up to the position on the sphere. The lengths of the arcsbetween the vertices of the polar simplex correspond to the interior angles ofthe original simplex in the sense that they complete them to π.

The matricial approach to spherical simplexes allows us – similarly as forsimplicial cones – to consider also qualitative properties of the angles. Weshall say that an arc between two points a and b of Sn is small if 〈a, b〉 > 0,medium if 〈a, b〉 = 0, and large if 〈a, b〉 < 0. We shall say that an n-simplexis small if each of the arcs is small or medium, and large if each of its arcsis large or medium. Finally, we say that an n-simplex is hyperacute if each ofthe interior angles is acute or right.

The following is trivial:

Theorem 5.4.1 If a spherical n-simplex is small (respectively, large), thenall its faces are small (respectively, large) as well.

Theorem 5.4.2 The polar n-simplex of a spherical n-simplex Σ is large ifand only if Σ is hyperacute.

Less immediate is:

Theorem 5.4.3 If a spherical n-simplex is hyperacute, then it is small.

Proof. By Theorems 5.2.7, 5.2.8, and 5.4.2, the Gramian of the polar ofa hyperacute spherical n-simplex is an M -matrix. Since the inverse of anM -matrix is a nonnegative matrix by Theorem A.3.2, the result follows. �

Theorem 5.4.4 If a spherical n-simplex is hyperacute, then all its faces arehyperacute.

Proof. If C is hyperacute, then again the Gramian of the polar cone is anM -matrix. The inverse of the principal submatrix corresponding to the face is

Page 144: Graphs and Matrices in Geometry Fiedler.pdf

134 Further geometric objects

thus a Schur complement of this Gramian. By Theorem A.3.3 in the Appendix,it is again an M -matrix so that the polar cone of the face is large. By Theorem5.4.2, the face is hyperacute. �

We can repeat the results of Section 2 for spherical simplexes. In particular,the results on circumscribed and inscribed circular cones (in the spherical case,hyperspheres with radius smaller than one on the unit hypersphere) will bevalid. Also, conjugacy and the notion that an n-simplex is usual can be defined.There is also an analogy to the isogonal correspondence for the sphericalsimplex as mentioned in Theorem 5.2.17, as well as isogonal correspondencewith respect to the polar simplex.

Also, the whole theory about the orthocentric n-cones can be used forspherical simplexes. Let us return to Theorem 5.2.9, where the Gramian ofnormalized vectors of two biorthogonal bases was described. In [14], it wasproved that a necessary and sufficient condition for the diagonal entries aii ofa positive definite matrix A and the diagonal entries αii of the inverse matrixA−1 is

aiiαii ≥ 1 for all i;

and

2maxi

(√aiiαii − 1) ≤

∑i

(√aiiαii − 1). (5.13)

If we apply this result to the Gramians of C and C (multiplied by D−1 fromboth sides), we obtain that the diagonal entries d−1

i of D−1 satisfy, in additionto 0 < di ≤ 1, the inequality

2maxi

(d−1i − 1) ≤

∑i

(d−1i − 1). (5.14)

Observing that di is by (5.6) the cosine of the angle ϕi between ai and bi,it follows that the modulus of π

2− ϕi is the angle ψi of the altitude between

the vector ai and the opposite face of C. Since (5.14) means

2maxi

(secϕi − 1) ≤∑

i

(secϕi − 1),

we obtain the following.

Theorem 5.4.5 A necessary and sufficient condition that the angles ψi, i =1, . . . , n, be angles of the altitudes in a spherical n-simplex, is

2maxi

(cscψi − 1) ≤∑

i

(cscψi − 1). (5.15)

Remark 5.4.6 In [14], a necessary and sufficient condition for a positivedefinite matrix A was found in order that equality in (5.13) is attained. Itcan be shown that the corresponding geometric interpretation for a spherical

Page 145: Graphs and Matrices in Geometry Fiedler.pdf

5.5 Finite sets of points 135

n-simplex Σ satisfying equality in (5.15) is that the polar n-simplex Σ issymmetric to Σ with respect to a hyperplane. In addition, both simplexes areorthocentric.

We make a final comment on the spherical geometry. As we saw, spheri-cal geometry is in a sense richer than the Euclidean since we can study thepolar objects. On the other hand, we are losing one dimension (in E3, wecan visualize only spherical triangles and not spherical tetrahedrons). In theEuclidean geometry, we have the centroid which we do not have in sphericalgeometry, etc.

5.5 Finite sets of points

As we saw in Chapter 1, we can study problems in En using the barycentriccoordinates with respect to a simplex, i.e. a distinguished set of n+1 linearlyindependent points. We can ask the question of whether we can do somethinganalogous in the case that we have more than n+1 distinguished points in En.

In fact, we can again define barycentric coordinates.Suppose that A1, A2, . . . , Am are points in En, m > n+1. A linear combina-

tion∑m

1 αiAi has again geometric meaning in two cases. If the sum∑αi = 1,

we obtain a point; if∑αi = 0, the result is a vector. Of course, all such com-

binations describe just the smallest linear space of En containing the givenpoints.

Analogously to the construction in Chapter 1, we can define the homoge-neous barycentric coordinates as follows.

A linear combination∑

i βiAi is a vector if∑

i βi = 0; if∑

i βi �= 0, then itis the point

∑i γβiAi, where γ = (

∑i βi)−1. We have thus a correspondence

between the points and vectors in En and the (m− 1)-dimensional projectivespace Pm−1. In this space, we can identify a linear subspace formed by thelinear dependencies among the points Ai. We shall illustrate the situation byan example.

Example 5.5.1 Let A = (0, 0), B = (1, 0), C = (1, 1), and D = (0, 1) befour such points in E2 in the usual coordinates. The point

(12, 1

2

)has the

expression 14A+ 1

4B+ 14C + 1

4D, but also 12A+ 1

2C. This is, of course, causedby the fact that there is a linear dependence relation A−B+C−D = 0 amongthe given points. The projective space mentioned above is three-dimensional,but there is a plane P2 with the equation x1 − x2 + x3 − x4 = 0 having theproperty that there is a one-to-one correspondence between the points of theplane E2, i.e. the Euclidian plane E2 completed by the points at infinity, andthe plane P2. In particular, the improper points of E2 correspond to the pointsin P2 contained in the plane x1 + x2 + x3 + x4 = 0.

Page 146: Graphs and Matrices in Geometry Fiedler.pdf

136 Further geometric objects

The squares of the mutual distances of the given points form the matrix

M =

⎡⎢⎢⎣0 1 2 11 0 1 22 1 0 11 2 1 0

⎤⎥⎥⎦ .Observe that the bordered matrix (as was done in Chapter 1, Corol-

lary 1.4.3)

M0 =

⎡⎢⎢⎢⎢⎣0 1 1 1 11 0 1 2 11 1 0 1 21 2 1 0 11 1 2 1 0

⎤⎥⎥⎥⎥⎦is singular since M0 [0, 1,−1, 1,−1]T = 0.

On the other hand, if a vector x = [x1, x2, x3, x4]T satisfies∑xi = 0, then

xT M x is after some manipulations (subtracting (x1 + x2 + x3 + x4)2, etc.)equal to −(x1 + x3)2 − (x2 + x4)2, thus nonpositive and equal to zero if andonly if the vector is a multiple of [1,−1, 1,−1]T .

Let us return to the general case. The squares of the mutual distances ofthe points Ai form the matrix M = [|Ai − Aj |2]. As in the case of simplexes,we call it the Menger matrix of the (ordered) system of points. The followingtheorem describes its properties.

Theorem 5.5.2 Let S = {A1, A2, . . . , Am} be an ordered system of points inEn, m > n+1. Denote by M = [mij ] the Menger matrix of S, mij = |Ai−Aj |2,and by M0 the bordered Menger matrix

M0 =[

0 eT

e M

]. (5.16)

Then:

(i) mii = 0, mij = mji;(ii) whenever x1, . . . , xm are real numbers satisfying

∑i xi = 0, then

m∑i,j=1

mijxixj ≤ 0;

(iii) the matrix M0 has rank s+1, where s is the maximum number of linearlyindependent points in S.

Conversely, if M = [mij ] is a real m×m matrix satisfying (i), (ii) and therank of the corresponding matrix M0 is s + 1, then there exists a system of

Page 147: Graphs and Matrices in Geometry Fiedler.pdf

5.5 Finite sets of points 137

points in a Euclidean space with rank s such that M is the Menger matrix ofthis ordered system.

Proof. In the first part, (i) is evident. To prove (ii), choose some orthonormal

coordinate system in En. Then let (ka1, . . . ,

kan) represent the coordinates of Ak,

k = 1, . . . ,m. Suppose now that x1, . . . , xm is a nonzero m-tuple satisfyingm∑

i=1

xi = 0. Then

m∑i,k=1

mikxixk =m∑

i,k=1

( n∑α=1

(iaα − k

aα)2)xixk

=m∑

i=1

( n∑α=1

ia2

α

)xi

m∑k=1

xk +m∑

i=1

xi

m∑k=1

( n∑α=1

ka2

α

)xk −

−2m∑

i,k=1

n∑α=1

iaα

kaαxixk

= −2n∑

α=1

( m∑k=1

kaαxk

)2

≤ 0.

Suppose now that the rank of the system is s. Then the matrix⎡⎢⎢⎢⎢⎣1a1 . . .

1an 1

2a1 . . .

2an 1

.... . .

......

ma1 . . .

man 1

⎤⎥⎥⎥⎥⎦has rank s. Without loss of generality, we can assume that the first s rowsare linearly independent so that each of the remaining m− s rows is linearlydependent on the first s rows. The situation is reflected in the extended Mengermatrix M0 from (5.16) as follows: the matrix M0 in the first s + 1 rows andcolumns is nonsingular by Corollary 1.4.3 since the first s points form an(s−1)-simplex and this matrix is the corresponding extended Menger matrix.The rank of the matrix M0 is thus at least s + 1. We shall show that eachof the remaining columns of M0, say, the next of which corresponds to thepoint As+1, is linearly dependent on the first s + 1 columns. Let the lineardependence relation among the first s+ 1 points Ai be

γ1A1 + · · · + γsAs + γs+1As+1 = 0, (5.17)s+1∑i=1

γi = 0. (5.18)

We shall show that there exists a number γ0 such that the linear combinationof the first s+ 2 columns with coefficients γ0, γ1, . . . , γs+1 is zero. Because of

Page 148: Graphs and Matrices in Geometry Fiedler.pdf

138 Further geometric objects

(5.18), it is true for the first entry. Since |Ap−Aq|2 = 〈Ap−Aq, Ap−Aq〉, etc.,we obtain in the (i+ 1)th entry γ0 +

∑s+1j=1 γj〈Aj −Ai, Aj −Ai〉, which, if we

consider the points Ap formally as radius vectors from some fixed origin, canbe written as γ0+

∑s+1j=1 γj〈Aj , Aj〉+

∑s+1j=1 γj〈Ai, Ai〉−2

∑s+1j=1 γj〈Aj , Ai〉. The

last two sums are equal to zero because of (5.18) and (5.17). The remainingsum does not depend on i so that it can be made zero by choosing γ0 =−∑s+1

j=1 γj〈Aj , Aj〉.It remains to prove the last assertion. It is easy to show that similar to

Theorem 1.2.7 we can reformulate the condition (ii) in the following form.The (m − 1) × (m − 1) matrix C = [cij ], i, j = 1, . . . ,m − 1, where cij =

mim +mjm −mij , is positive semidefinite.The condition that the rank of M0 is s+ 1 implies similarly as in Theorem

1.2.4 that the rank of C is s. Thus there exists in a Euclidean space Es

of dimension s a set of vectors c1, . . . , cm−1, such that 〈ci, cj〉 = cij , i, j =1, . . . ,m − 1. Choosing arbitrarily an origin Am and defining points Ai asAm + ci, i = 1, . . . ,m − 1, we obtain a set of points in an s-dimensionalEuclidean point space, the Menger matrix of which is M . �

This theorem has important consequences.

Theorem 5.5.3 The formulae (1.8) and (1.9) for the inner product of twovectors and for the square of the distance between two points in barycentriccoordinates hold in generalized barycentric coordinates as well:

〈Y −X,Z −X〉 = −12

∑i,k=1

mik

(xi

Σxj− yi

Σyj

)(xk

Σxj− zk

Σzj

),

ρ2(X,Y ) = −12

∑i,k=1

mik

(xi

Σxj− yi

Σyj

)(xk

Σxj− yk

Σyj

). (5.19)

The summation is over the whole set of points and does not depend on thechoice of barycentric coordinates if there is such choice.

Theorem 5.5.4 Suppose A1, . . . , Am is a system S of points in En and M =[mik], the corresponding Menger matrix. Then all the points of S are on ahypersphere in En if and only if there exists a positive constant c such that forall x1, . . . , xm

m∑i,k=1

mikxixk ≤ c

( m∑i=1

xi

)2

. (5.20)

If there is such a constant, then there exists the smallest, say c0, of suchconstants, and then c0 = 2r2, where r is the radius of the hypersphere.

Proof. Suppose K is a hypersphere with radius r and center C. Denote bys1, . . . , sm the nonhomogeneous barycentric coordinates of C. Observe that

Page 149: Graphs and Matrices in Geometry Fiedler.pdf

5.5 Finite sets of points 139

K contains all the points of S if and only if |Ai − C|2 = r2, i.e. by (5.19), ifand only if for some r2 > 0∑

k

miksk − 12

m∑k,l=1

mklsksl = r2, i = 1, . . . ,m. (5.21)

Suppose first that (5.21) is satisfied. Then for every m-tuple x1, . . . ,xm, we obtain ∑

i,k

mikxisk =(

12

m∑k,l=1

mklsksl + r2)∑

i

xi. (5.22)

In particular ∑i,k

miksisk = 2r2.

Since a proper point X = (x1, . . . , xm) belongs to K if and only if |X−C|2 =r2, i.e.

−∑

i,k mikxixk

2(∑xi)2

+

∑i,k mikxisk∑

xi−∑

i,k miksisk

2= r2,

we obtain by (5.22) that

−∑

i,k mikxixk

2(∑xi)2

+ r2 ≥ 0.

Thus (even also for∑xk = 0 by (ii) of Theorem 5.5.2)∑

i,k

mikxixk ≤ 2r2(∑

k

xk

)2

.

This means that (5.20) holds, and the constant c0 = 2r2 cannot be improved.

Conversely, let (5.20) hold. Then there exists c0 = maxm∑

i,k=1

mikxixk over

x, for whichm∑

i=1xi = 1. Suppose that this maximum is attained at the m-tuple

s = (s1, . . . , sm),∑k

sk = 1. Since the quadratic form

(∑i,k

miksisk

)(∑xi

)2

−(∑

si

)2(∑i,k

mikxixk

)is positive semidefinite and attains the value zero for x = s, all the partialderivatives with respect to xi at this point are equal to zero(∑

i,k

miksisk

)∑sj −

(∑sj

)2∑mijsj = 0, i = 1, . . . ,m,

Page 150: Graphs and Matrices in Geometry Fiedler.pdf

140 Further geometric objects

or, using∑sj = 1, we obtain the identity∑

i,k

mikxisk =(∑

j

xj

)∑i,k

miksisk.

This means that (5.21) holds for r2 = 12

∑i,k miksisk, which is a positive

number. �

Remark 5.5.5 Observe that in the case of the four points in Example 5.5.1the condition (5.20) is satisfied with the constant c0 = 1. Thus the points areon a circle with radius 1/

√2.

Theorem 5.5.6 Denote by Mm (m ≥ 2) the vector space of the m × m

matrices [aik] such that

aii = 0, aik = aki, i, k = 1, . . . ,m.

Then the set of all Menger matrices [mik], which satisfy the conditions (i)and (ii) from Theorem 5.5.2, forms a convex cone with the zero matrix as thevertex. This cone Sm is the convex hull of the matrices A = [aik] of the form

aik = (ci − ck)2,m∑

i=1

ci = 0, (5.23)

with real parameters ci.

Proof. Let [mik] ∈ Sm. By Theorem 5.5.2, there exists in Em−1 a system ofpoints A1, . . . , Am such that

mik = |Ai −Ak|2, i, k = 1, . . . ,m.

Choose in Em−1 an arbitrary system of cartesian coordinates such that thesums of the first, second, etc., coordinates of the points Ai are equal to zero(we want the centroid of the system A1, . . . , Am to be at the origin).

Then for Ai = (ia1, . . . ,

iam−1), not only

∑i

iaα = 0 for α = 1, . . . ,m− 1, but

also mik =m−1∑α=1

(iaα − k

aα)2. This means that the point [mik] is the arithmetic

mean of the points of the form (5.23) for

αci =

iaα

√m− 1, i = 1, . . . ,m, α = 1, . . . ,m− 1. �

Remark 5.5.7 In this sense, the ordered systems of m points in Em−1 forma convex cone. This can be used for the study of such systems. We should,however, have in mind that the dimension of the sum of two systems whichcorrespond to systems of smaller rank can have greater rank (however, notmore than the sum of the ranks). We can describe geometrically that the sumof two systems (in the above sense) is again a system. In the Euclidean space

Page 151: Graphs and Matrices in Geometry Fiedler.pdf

5.5 Finite sets of points 141

E2m, choose a cartesian system of coordinates. In the m-dimensional subspaceEm1, which has the last m coordinates zero, construct the system with thefirst Menger matrix; in the m-dimensional subspace Em2, which has the firstm coordinates zero, construct the system with the second Menger matrix. Ifnow Ak is the kth point of the first system and Bk is the kth point of thesecond system, let Ck be the point whose first m coordinates are those of thepoint Ak and the last m coordinates those of the point Bk. Then, since

|Ci − Cj |2 = |Ai − Aj |2 + |Bi −Bj |2,

we have found a system corresponding to the sum of the two Menger matrices(in E2m, but the dimension could be reduced).

Theorem 5.5.8 The set Phm of those matrices from Mm which fulfill thecondition (5.20), i.e. which correspond to systems of points on a hypersphere,is also a convex cone in Sm.

In addition, if A1 is a system with radius r1 and A2 a system with radiusr2, then A1 + A2 has radius r fulfilling r2 ≤ r21 + r22.

Proof. Suppose that both conditions∑ 1mikxixk ≤ 2r21 (

∑i xi)2 and∑ 2

mikxixk ≤ 2r21(∑

i xi)2 are satisfied; then for mik =1mik +

2mik,∑

mikxixk ≤ (2r21 + 2r22)(∑

i xi)2. This, together with the obvious multi-plicative property by a positive constant, yields convexity. Also, the formular2 ≤ r21 + r22 follows by Theorem 5.5.4. �

Theorem 5.5.9 The set Pm of those matrices [aik] from Mm, which satisfythe system of inequalities

aik + ail ≥ akl, i, k, l = 1, . . . ,m, (5.24)

is a convex polyhedral cone.

Proof. This follows from the linearity of the conditions (5.24). �

Remark 5.5.10 Interpreting this theorem in terms of systems of points, weobtain:

The system of all ordered m-tuples of points in Em−1 such that any threeof them form a triangle with no obtuse angle, i.e. the set Pm ∩ Sm, forms aconvex cone.

Theorem 5.5.11 Suppose that c1, . . . , cm are real numbers, wherem∑

i=1

ci = 0.The matrix A = [aik] with entries

aik = |ci − ck|, i, k = 1, . . . ,m, (5.25)

is contained in Sm. The set Pm, formed as the convex hull of matricessatisfying (5.25), is a convex cone contained in the intersection Pm ∩ Sm.

Page 152: Graphs and Matrices in Geometry Fiedler.pdf

142 Further geometric objects

Proof. Suppose ci1 ≤ ci2 ≤ · · · ≤ cim. Then the points A1, . . . , Am in Em−1,

whose coordinates in some Cartesian coordinate system are

Ai1 = (0, 0, . . . , 0),Ai2 = (

√ci2 − ci1 , 0, . . . , 0),

Ai3 = (√ci2 − ci1 ,

√ci3 − ci2 , . . . , 0),

. . .

Aim= (

√ci2 − ci1 ,

√ci3 − ci2 , . . . ,

√cim

− cim−1),

clearly have the property that

|ci − ck| = |Ai −Ak|2.

Thus A ∈ Sm. Since the condition (5.24) is satisfied, A ∈ Pm. �

Remark 5.5.12 Compare (5.25) with (4.2) for the Schlaefli simplex.

Theorem 5.5.13 Denote by Pm the convex hull of the matrices A = [aik]such that for some subset N0 ⊂ N = {1, . . . ,m}, and a constant a

aik = 0 for i, k ∈ N0;

aik = 0 for i, k ∈ N −N0;

aik = aki = a ≥ 0 for i ∈ N0, k ∈ N −N0.

Then Pm ⊂ Pm ∩ Phm ∩ Sm.

Proof. This is clear, since these matrices correspond to such systems of mpoints, at most two of which are distinct. �

Remark 5.5.14 The set Pm corresponds to those ordered systems of mpoints in Em−1, which can be completed into 2N vertices of some right boxin EN−1 (cf. Theorem 4.1.2), which may be degenerate when some oppositefaces coincide.

Observe that all acute orthocentric n-simplexes also form a cone. Another,more general, observation is that the Gramians of n-simplexes also form acone. In particular, the Gramians of the hyperacute simplexes form a cone. Itis interesting that due to linearity of the expressions (5.2) and (5.3), it followsthat this new operation of addition of the Gramians corresponds to additionof Menger matrices of the inverse cones.

5.6 Degenerate simplexes

Suppose that we have an n-simplex Σ and a (in a sense privileged) vector (or,direction) d. The orthogonal projection of Σ onto a hyperplane orthogonal tod forms a set of n+ 1 points in an (n− 1)-dimensional Euclidean point space.

Page 153: Graphs and Matrices in Geometry Fiedler.pdf

5.6 Degenerate simplexes 143

We can do this even as a one-parametric problem, starting with the origi-nal simplex and continuously ending with the projected object. We can thenask what happens with some distinguished points, such as the circumcenter,incenter, Lemoine point, etc. It is clear that the projection of the centroidwill always exist. Thus also the vectors from the centroid to the vertices ofthe simplex are projected on such (linearly dependent) vectors. Forming thebiorthogonal set of vectors to these, we can ask if this can be obtained by ananalogous projection of some n-simplex.

We can ask what geometric object can be considered as the closest objectto an n-simplex. It seems that it could be a set of n+2 points in the Euclideanpoint n-space. It is natural to assume that no n+ 1 points of these points arelinearly dependent. We suggest to call such an object an n-bisimplex. Thus a2-bisimplex is a quadrilateral, etc. The points which determine the bisimplexwill again be called vertices of the bisimplex.

Theorem 5.6.1 Let A1, . . . , An+2 be vertices of an n-bisimplex in En. Thenthere exists a decomposition of these points into two nonvoid parts in such away that there is a point in En which is in the convex hull of the points of eachof the parts.

Proof. The points Ai are linearly dependent but any n + 1 of them are lin-early independent. Therefore, there is exactly one (linearly independent) lineardependence relation among the points, say∑

k

αkAk = 0,∑

k

αk = 0;

here, all the αi coefficients are different from zero. Since the sum is zero, thesets N+ and N− of indices corresponding to positive αs and negative αs areboth nonvoid. It is then immediate that the point

1∑i∈S+ αi

∑i∈S+

αiAi

coincides with the point

1∑i∈S− αi

∑i∈S−

αiAi

and has thus the mentioned property. �

Remark 5.6.2 If one of the sets S+, S− consists of just one element, thecorresponding point is in the interior of the n-simplex determined by theremaining vertices. If one of the sets contains two elements, the correspondingpoints are in opposite halfspaces with respect to the hyperplane determinedby the remaining vertices. Strictly speaking, only in this latter case could theresult be called a bisimplex.

Page 154: Graphs and Matrices in Geometry Fiedler.pdf

144 Further geometric objects

Suppose now that in an (n − 1)-bisimplex in En−1 we move one vertex inthe orthogonal direction to En−1 infinitesimally, i.e. the distance of the movedvertex from En−1 will be ε > 0 and tending to zero. The resulting object willbe called a degenerate n-simplex. Every interior angle of this n-simplex willbe either infinitesimally small, or infinitesimally close to π.

Example 5.6.3 Let A, B, C, D be the points from Example 5.5.1. If thepoint D has the third coordinate ε and the remaining three zero, the resultingtetrahedron will have four acute angles opposite the edges AD, DC, AB, andBC, and two obtuse angles opposite the edges AC and BD.

We leave it to the reader to show that a general theorem on the coloredgraph (cf. Chapter 3, Section 1) of the degenerate n-simplex holds.

Theorem 5.6.4 Let A1, . . . , An+1 be vertices of a degenerate n-simplex, andlet N+, N− be the parts of the decomposition of indices as in the proof ofTheorem 5.6.1. Then the edges AiAk will be red if and only if i and k belongto different sets N+, N−, and blue if and only if i and k belong to the sameset N+, or N−.

Remark 5.6.5 Theorem 5.6.4 implies that the degenerate simplex is flat inthe sense of Remark 2.2.20. In fact, this was the reason for that notation.

Another type of an n-simplex close to degenerate is one which we suggestbe called a needle. It is essentially a simplex obtained by perturbations fromthe set of n + 1 points on a line. Such a simplex should have the propertythat every face is again a needle and, if possible, the colored graph of everyface (as well as of the simplex itself) should have a red path containing allthe vertices, whereas all the remaining edges are blue. One possibility is usingthe following result. First, call a tetrahedron A1A2A3A4 with ordered verticesa t-needle if the angles ∠A1A2A3 and ∠A2A3A4 are obtuse and the sum ofsquares |A1A3|2 + |A2A4|2 is smaller than |A1A4|2 + |A2A3|2. Then if eachtetrahedron AiAi+1Ai+2Ai+3 for i = 1, . . . , n − 2 is a t-needle, then all thetetrahedrons Ai1Ai2Ai3Ai4 are t-needles if i1 < i2 < i3 < i4.

Page 155: Graphs and Matrices in Geometry Fiedler.pdf

6

Applications

6.1 An application to graph theory

In this section, we shall investigate undirected graphs with n nodes withoutloops and multiple edges. We assume that the set of nodes is N = {1, 2, . . . , n}and we write G = (N,E) where E denotes the set of edges.

Recall that the Laplacian matrix of G = (N,E), Laplacian for short, is thereal symmetric n× n matrix L(G) whose quadratic form is

〈L(G)x, x〉 =∑

i,k,i<k,(i,k)∈E

(xi − xk)2.

Let us list a few elementary properties of L(G):

(L 1) L(G) = D(G) − A(G),where A(G) is the adjacency matrix of G and D(G) is the diagonalmatrix, the ith diagonal entry of which is di, the degree of the nodei in G;

(L 2) L(G) is positive semidefinite and singular;(L 3) L(G)e = 0, where e is the vector of all ones;(L 4) if G is connected, then L(G) has rank n− 1;(L 5) for the complement G of G, L(G) + L(G) = nI − J,

where J = eeT .

More generally, we can define the Laplacian of a weighted graph GC =(N,E,C) with nonnegative weight cij = cji assigned to each edge (i, j) ∈ E

as follows:L(GC) is the symmetric matrix of the quadratic form∑

(i,j)∈E,i<j

cij(xi − xj)2.

Clearly, L(GC) is again positive semidefinite and singular withL(GC)e = 0. If the graph with the node set N and the set of positively

Page 156: Graphs and Matrices in Geometry Fiedler.pdf

146 Applications

weighted edges in C is connected, the rank of L(GC) is n−1. In this last case,we shall call such a weighted graph a connected weighted graph.

Now let G (or GC) be a (weighted) graph with n nodes. The eigenvalues

λ1 ≤ λ2 ≤ . . . ≤ λn

of L(G) (or L(GC)) will be called Laplacian eigenvalues of G (GC) (λ1 thefirst, λ2 the second etc.).

The first (smallest) Laplacian eigenvalue λ1 is zero, the second λ2 wasdenoted as a(G) and called the algebraic connectivity of the graph G in [4]since it has similar properties as the edge-connectivity e(G), the number ofedges (or, the sum of edge-weights) in the minimum cut.

In addition, the following inequalities were proved in [17]

2(1 − cos

π

n

)e(G) ≤ a(G) ≤ e(G).

An important property of the eigenvector corresponding to λ2 was provedin [19].

Theorem 6.1.1 Let G be a connected graph, and let u be a (real) eigenvectorof L(G) corresponding to the algebraic connectivity a(G). Then the subgraphof G induced by the node set corresponding to the nodes with nonnegativecoordinates of u is connected.

We shall first prove the following lemma.

Lemma 6.1.2 Suppose A is an n × n symmetric nonnegative irreduciblematrix with eigenvalues λ1 ≥ λ2 ≥ · · · ≥ λn. Denote by v = [vi] a (real) eigen-vector of A corresponding to λ2 and N = {1, 2, . . . , n}. If M = {i ∈ N ; vi ≥0}, then M is a proper nonvoid subset of N and the principal submatrix A(M)with rows and columns indices in M is irreducible.

Proof. Without loss of generality, we can choose M = {1, 2, . . . ,m}. Supposethat A(M) is reducible and of the form

A(M) =

⎡⎢⎢⎢⎣A11

A22

. . .Arr

⎤⎥⎥⎥⎦ , r ≥ 2,

where all Aiis are irreducible. Thus A has the form

A =

⎡⎢⎢⎢⎣A11 0 A1,r+1

. . ....

0 Arr Ar,r+1

AT1,r+1 · · · AT

r,r+1 Ar+1,r+1

⎤⎥⎥⎥⎦ .

Page 157: Graphs and Matrices in Geometry Fiedler.pdf

6.2 Simplex of a graph 147

Let

v =

⎡⎢⎢⎢⎣v(1)

...v(r)

v(r+1)

⎤⎥⎥⎥⎦be the corresponding partitioning of v, v(1) ≥ 0, . . . , v(r) ≥ 0, v(r+1) < 0.Then

(Akk − λ2Ik)v(k) = −Ak,r+1v(r+1), k = 1, . . . , r. (6.1)

The matrix λ2I − A has (since λ1 > λ2 by the Perron–Frobenius theorem)exactly one negative eigenvalue; therefore, its principal submatrix λ2I(M) −A(M) has at most one negative eigenvalue. Since r ≥ 2, some diagonal block,say λkIk − Akk, has only nonnegative eigenvalues and is thus a (nonsingularor singular) M -matrix. If it were singular, then by Theorem A.3.9 we wouldhave

(λ2Ik − Akk)z(k) = 0 for z(k) > 0.

Multiplication of (6.1) by (z(k))T from the left implies

(z(k))TAk,r+1v(r+1) = 0,

i.e. Ak,r+1 = 0, a contradiction of irreducibility.Therefore, λkIk − Akk is nonsingular and, by the property of M -matrices,

(λkIk −Akk)−1 > 0. Now, (6.1) implies

v(k) = (λkIk − Akk)−1Ak,r+1v(r+1);

the left-hand side is a nonnegative vector, the right-hand side a nonpositivevector. Consequently, both are equal to zero, and necessarily Ak,r+1 = 0, acontradiction of irreducibility again.

Let us complete the proof of Theorem 6.1.1. Choose a real c so that thematrix A = cI − L(G) is nonnegative. The maximal eigenvalue of A is c, andλ2 is c− a(G), with the corresponding eigenvector v a real multiple of u. Bythe lemma, if this multiple is positive, the subgraph of G induced by the setof vertices with nonnegative coordinates is connected. Since the lemma alsoholds for the vector −v, the result does not depend on multiplication by −1.

6.2 Simplex of a graph

In this section, we assume that G is a connected graph. Observe that theLaplacian L(G) as well as the Laplacian L(GC) of a connected weighted graphGC satisfy the conditions (1.31) and (1.32) (with n instead of n + 1) of thematrix Q.

Page 158: Graphs and Matrices in Geometry Fiedler.pdf

148 Applications

Therefore, we can assign to G [or GC ] in En−1 an – up to congruenceuniquely defined – (n− 1)-simplex Σ(G) [or, Σ(GC)], which we shall call thesimplex of the graph G [GC , respectively]. The corresponding Menger matrixM will be called the Menger matrix of G.

Applying Theorem 1.4.1, we immediately have the following theorem,formulated just for the more general case of a weighted graph:

Theorem 6.2.1 Let e be the column vector of n ones. Then there exists aunique column vector q0 and a unique number q00 such that the symmetricmatrix M = [mik] with mii = 0 satisfies[

0 eT

e M

] [q00 qT

0

q0 L(GC)

]= −2In+1. (6.2)

The (unique) matrix M is the Menger matrix of GC .

Example 6.2.2 Let G be the path P4 with four nodes 1, 2, 3, 4 and edges(1, 2), (2, 3), (3, 4). Then (6.2) reads as follows⎡⎢⎢⎢⎢⎢⎣

0 1 1 1 11 0 1 2 31 1 0 1 21 2 1 0 11 3 2 1 0

⎤⎥⎥⎥⎥⎥⎦

⎡⎢⎢⎢⎢⎢⎣3 − 1 0 0 −1

−1 1 −1 0 00 −1 2 −1 00 0 −1 2 −1

−1 0 0 −1 1

⎤⎥⎥⎥⎥⎥⎦ = −2I5.

M L(P4)

Example 6.2.3 For a star Sn with nodes 1, 2, . . . , n and the set of edges(1, k), k = 2, . . . , n, the equality (6.2) reads⎡⎢⎢⎢⎢⎢⎢⎢⎣

0 1 1 1 . . . 11 0 1 1 . . . 11 1 0 2 . . . 21 1 2 0 . . . 2. . . . . .

1 1 2 2 . . . 0

⎤⎥⎥⎥⎥⎥⎥⎥⎦

⎡⎢⎢⎢⎢⎢⎢⎢⎣

n− 1 n− 3 −1 −1 . . . −1n− 3 n− 1 −1 −1 . . . −1−1 −1 1 0 . . . 0−1 −1 0 1 . . . 0. . . . . .

−1 −1 0 0 . . . 1

⎤⎥⎥⎥⎥⎥⎥⎥⎦= −2In+1.

Remark 6.2.4 Observe that – in agreement with Theorem 4.1.3 – in bothcases, the Menger matrix M is at the same time the distance matrix of G, i.e.the matrix D = [Dik] for which Dik means the distance between the nodes iand k in GC , in general the minimum of the lengths of all the paths betweeni and k, the length of a path being the sum of the lengths of edges containedin the path. We intend to prove that M = D for all weighted trees, of whichthe length of each edge is appropriately chosen.

Page 159: Graphs and Matrices in Geometry Fiedler.pdf

6.2 Simplex of a graph 149

Theorem 6.2.5 Let TC = (N,E,C) be a connected weighted tree, N ={1, . . . , n}, and cik = cki denote the weight of the edge (i, k) ∈ E. LetL(TC) = [qik] be the Laplacian of TC so that for i, k ∈ N

qii =∑

k,(i,k)∈E

cik,

qik = −cik for (i, k) ∈ E,

qik = 0 otherwise.

Further denote for i, j, k ∈ N

Rij =1cij

for (i, j) ∈ E,

Rii = 0,

Rik(= Rki) = Rij1 +Rj1j2 + · · · +Rjs−1,js +Rjs,k

whenever (i, j1, j2, . . . , js, k) is the (unique) path from i to k in TC .If, moreover

R00 = 0,

R0i = 1, i ∈ N,

q00 =∑

(i,k)∈E,i<k

Rik,

q0i = di − 2, di being the degree of i ∈ N in TC ,

then the symmetric matrices

R = [Rrs], Q = [qrs], r, s = 0, 1, . . . , n

satisfy

RQ = −2In+1.

Proof. We apply Lemma 4.1.1 with Rrs instead of mrs and qrs instead of γrs

for r, s = 0, 1, . . . , n+ 1. The result follows. �For cik = 1 for all i, k, we obtain:

Corollary 6.2.6 Let T = (N,E) be a tree with n nodes. Denote by z = [zi]the column n-vector with zi = di − 2, di being the degree of the node i. Then:

(i) the Menger matrix M of T coincides with the (usual) distance matrixD of T ;

(ii) the equality (6.2) reads[0 eT

e D

] [n− 1 zT

z L(T )

]= −2In+1.

Page 160: Graphs and Matrices in Geometry Fiedler.pdf

150 Applications

Let us return now to Theorem 5.1.2 and formulate a result which will relatethe Laplacian eigenvalues to the Menger matrix M .

Theorem 6.2.7 Let M be the Menger matrix of a graph G. Then the n− 1roots of the equation

det[

0 eT

e M − μI

]= 0 (6.3)

are the numbers μi = −2/λi, i = 2, . . . , n where λ2, . . . , λn are the nonzeroLaplacian eigenvalues of G. If y(i) is an eigenvector of L(G) corresponding

to λi �= 0, then[λ−1

0 qT0 y

(i)

y(i)

]is the corresponding annihilating vector of the

matrix in (6.3).

Proof. The first part follows immediately from the identity[0 eT

e M − μI

] [q00 qT

0

q0 L(G)

]=[

−2 0∗ −2I − μL(G)

], (6.4)

where ∗ is some column vector. Postmultiplying (6.4) with μ = −2/λi by[0y(i)

], we obtain the second assertion. �

Corollary 6.2.8 Let G be connected and L(G) = ZZT be any full-rank-factorization, so that Z is an n× (n− 1) matrix. Then

ZTMZ = −2In−1, (6.5)

where M is the Menger matrix of G.

Proof. By (6.2)

ML(G) = −2In − eqT0 .

Since ZT e = 0, by premultiplying by ZT we get

ZTMZZT = −2ZT ,

and hence (6.5) holds. �The interlacing theorem (cf. Appendix, Theorem A.1.33) will now be

applied.

Corollary 6.2.9 The roots of (6.3), i.e. the numbers −2/λi where the λi arethe nonzero Laplacian eigenvalues of G, i = 2, . . . , n, interlace the eigenvaluesof the Menger matrix M of G.

Proof. This follows immediately from the following Lemma which is easilyproved by transforming A to diagonal form by an orthogonal transformation:

Page 161: Graphs and Matrices in Geometry Fiedler.pdf

6.2 Simplex of a graph 151

Lemma 6.2.10 Let A be a real symmetric matrix, u be a nonzero real vector,and t be a real number. Then the zeros of the equation

det[t ut

u A− xI

]= 0

interlace the eigenvalues of A.

Using the previous Lemma, an analogous proof to that of Theorem 6.2.7gives:

Theorem 6.2.11 The eigenvalues mi of the Menger matrix M of G and theroots xi of the equation

det[q00 qT

0

q0 L(G) − xI

]= 0

satisfy

mixi = −2, i = 1, . . . , n.

The numbers xi and the Laplacian eigenvalues λi of G interlace each other.

Let us return now to the geometric considerations from Section 4.

Theorem 6.2.12 Let λi be a nonzero eigenvalue of L(GC), and y be acorresponding eigenvector. Then

n− 1n

· 1λi

is the square of a halfaxis of the Steiner circumscribed ellipsoid of the simplexΣ(GC) of GC and Σyixi = 0 is the equation of the hyperplane orthogonal tothe corresponding axis. Also, y is the direction of the axis.

Corollary 6.2.13 The smallest positive eigenvalue of L(GC) (the alge-braic connectivity of GC) corresponds to the largest halfaxis of the Steinercircumscribed ellipsoid of Σ(GC).

Due to the result presented here as Theorem 6.1.1, the eigenvector y = [yi]corresponding to the second smallest eigenvalue a(G) of G seems to be agood separator in the set of nodes N of G in the sense that the ratio ofthe cardinalities of the two parts is neither very small nor very large. Thegeometric meaning of the subsets N+ and N− of N is as follows.

Theorem 6.2.14 Let y = [yi] be an eigenvector of L(G) corresponding to λ2

(= a(G)). Then

N+ = {i ∈ N | yi > 0},N− = {i ∈ N | yi < 0},Z = {i ∈ N | yi = 0}

Page 162: Graphs and Matrices in Geometry Fiedler.pdf

152 Applications

correspond to the number of vertices of the simplex Σ(G) in the decompo-sition of En with respect to the hyperplane of symmetry H of the Steinercircumscribed ellipsoid orthogonal to the largest halfaxis: |N+| is the numberof vertices of Σ(G) in one halfspace H+, |N−| is the number of vertices inH−, and |Z| is the number of vertices in H.

6.3 Geometric inequalities

We add here a short section on inequalities, which was prompted from theprevious considerations.

Theorem 6.3.1 Let Σ be an n-simplex, with F1 and F2 its (n − 1)-dimensional faces. Then their volumes satisfy

nVn(Σ)Vn−2(F1 ∩ F2) ≤ Vn−1(F1)Vn−1(F2).

Equality is attained if and only if the faces F1 and F2 are orthogonal.

Proof. Follows from the formula (2.4). �Let us show how this formula can be generalized.

Theorem 6.3.2 Let Σ be an n-simplex, with F1 and F2 its faces with nonvoidintersection. Then the volumes of the faces F1 ∩F2 and F1 ∪F2 (the smallestface containing both F1 and F2) satisfy: if f1, f2, f0, and f are the dimensionsof F1, F2, F1 ∩ F2, and F1 ∪ F2, respectively, then

f !f0!Vf (F1 ∪ F2)Vf0(F1 ∩ F2) ≤ f1!f2!Vf1(F1)Vf2(F2). (6.6)

Equality is attained if and only if the faces F1 and F2 are orthogonal in thespace of F1 ∪ F2.

Proof. Suppose that the vertex An+1 is in the intersection. The n× n matrixM with (i, j) entries mi,n+1 +mj,n+1 −mij is then positive definite and eachof its principal minors has determinant corresponding to the volume of a face.Using now the Hadamard–Fischer inequality (Appendix, (A.13)), we obtain(6.6). The proof of equality follows from considering the Schur complementsand the case of equality in (A.12). �

An interesting inequality for the altitudes of a spherical simplex was provedin Theorem 5.4.5. We can use it also for the usual n-simplex for the limitingcase when the radius grows to infinity. We have already proved the strictpolygonal inequality for the reciprocals of the lengths lis in (iii) of Theorem2.1.4 using the volumes of the (n− 1)-dimensional faces

2maxi

1li<∑

i

1li.

Page 163: Graphs and Matrices in Geometry Fiedler.pdf

6.4 Extended graphs of tetrahedrons 153

By (2.3), this generalizes the triangle inequality.In Example 2.1.11, we obtained the formula

576r2V 2 = (aa′ + bb′ + cc′)(−aa′ + bb′ + cc′)(aa′ − bb′ + cc′)(aa′ + bb′ − cc′).

Since the left-hand side is positive, it follows that all expressions in theparentheses on the right-hand side have to be positive. Thus

2max(aa′, bb′, cc′) ≤ aa′ + bb′ + cc′.

This is a generalization of the Ptolemy inequality: The products of the lengthsof two opposite pairs among four points do not exceed the sum of the productsfor the remaining pairs. We proved it for the three-dimensional space, but alimiting procedure leads to the more usual plane case as well.

Many geometric inequalities follow from solutions of optimizationproblems. Usually, the regular simplex is optimal. Let us present an example.

Theorem 6.3.3 If the circumcenter of an n-simplex Σ is an interior pointof Σ or a point of its boundary, then the length of the maximum edge of Σ isat least r

√2(n+ 1)/n, where r is the circumradius. Equality is attained for

the regular n-simplex.

Proof. Let C be the circumcenter in Σ with vertices Ai. Denote ui = Ai −C,i = 1, . . . , n + 1. The condition about the position of C means that all thecoefficients μi in the linear dependence relation

∑i μiui = 0 are nonnegative.

We have then 〈ui, ui〉 = r2: supposing that |AiAj |2 < 2(n + 1)n−1r2 or,〈ui−uj , ui−uj〉 < (n+1)n−1r2, for every pair i, j, i �= j, means that for eachsuch pair 〈ui, uj〉 > −1/n. Thus

⟨∑i μiui, uk

⟩= 0 for each k implies after

dividing by r2 that

0 > μk − 1n

∑i�=k

μi

for each k. But summation over all k yields 0 > 0, a contradiction. �

6.4 Extended graphs of tetrahedrons

Theorems in previous chapters can be used for finding all possible extendedgraphs of tetrahedrons, i.e. extended graphs of simplexes with five nodes. Mostgeneral theorems were proved already in Section 3.4. We start with a lemma.

Lemma 6.4.1 Suppose that G0 is an extended graph of a tetrahedron withthe nodes 0, 1, 2, 3, 4, and G its induced subgraph with nodes 1, 2, 3, 4. If G hasexactly one negative edge (i, j), then (0, i) as well as (0, j) are positive edgesin G0.

Page 164: Graphs and Matrices in Geometry Fiedler.pdf

154 Applications

Proof. Without loss of generality, assume that (1, 2) is that negative edge in G.Let Σ be the corresponding tetrahedron with graph G and extended graphG0. The Gramian Q = [qik], i, k = 1, 2, 3, 4, is then positive semidefinite withrank 3, and satisfies Qe = 0, and q12 > 0, qij ≤ 0 for all the remaining pairsi, j, i �= j.

Since q11 > 0, we have q12 + q13 + q14 < 0. By (3.9)

c1 = −(−q12)(−q13)(−q14) + (−q12)(−q23)(−q24) +

+(−q13)(−q23)(−q34) + (−q14)(−q24)(−q34) +

+(−q12)(−q23)(−q34) + (−q12)(−q24)(−q34) +

+(−q13)(−q34)(−q24) + (−q13)(−q23)(−q24) +

+(−q14)(−q24)(−q23) + (−q14)(−q34)(−q23),

since at the summands corresponding to the remaining spanning trees in G

the coefficient 2 − ki is zero. Thus

c1 = q12q13q14 − (q12 + q13 + q14)(q23q24 + q23q34 + q24q34) > 0,

since both summands are nonnegative and at least one is positive, the positivepart of G being connected by Theorem 3.4.6. Therefore, the edge (0, 1) ispositive in G0. The same holds for the edge (0, 2) by exchanging the indices1 and 2. �

We are able now to prove the main theorem on extended graphs oftetrahedrons.

Theorem 6.4.2 None of the graphs P5, Q5, or R5 in Fig. 6.1 as well as noneof its subgraphs with five nodes, with the exception of the positive circuit, canbe an extended graph of a tetrahedron. All the remaining signed graphs on fivenodes, the positive part of which has edge-connectivity at least two, can serveas extended graphs of tetrahedrons (with an arbitrary choice of the vertex cor-responding to the circumcenter). There are 20 such (mutually non-isomorphic)graphs. They are depicted in Fig. 6.2.

Proof. Theorem 3.4.21 shows that neither P5, Q5, R5, nor any of their sub-graphs on five vertices with at least one negative edge, can be extended graphsof a tetrahedron. Since the positive parts of P5, Q5, R5 have edge-connectivityat least two, the assertion in the first part holds by Theorem 3.4.15.

P5 Q5 R5

Fig. 6.1

Page 165: Graphs and Matrices in Geometry Fiedler.pdf

6.4 Extended graphs of tetrahedrons 155

01 02 03 04

05 06 07 08

09 10 11 12

13 14 15 16

17 18 19 20

Fig. 6.2

To prove the second part, construct in Fig. 6.2 all those signed graphs onfive nodes, the positive part of which has edge-connectivity at least two. Fromthe fact that every positive graph on five nodes with edge-connectivity atleast two must contain the graph 05 or the positive part of 01, it follows thatthe list is complete. Let us show that all these graphs are extended graphsof tetrahedrons. We show this first for graph 09. The graph in Fig. 6.3 is the(usual) graph of some tetrahedron by Theorem 3.1.3. The extended graph ofthis tetrahedron contains by (ii) of Theorem 3.4.10 the positive edge (0, 4),and by Lemma 6.4.1 positive edges (0, 1) and (0, 3). Assume that (0, 2) iseither positive or missing. In the first case the edge (0, 3) would be positiveby (iv) of Theorem 3.4.10 (if we remove node 3); in the second, (0, 3) wouldbe missing by (i) of Theorem 3.4.10. This contradiction shows that (0, 2) isnegative and the graph 09 is an extended graph of a tetrahedron. The graphs01 and 05 are extended graphs of right simplexes by (iv) of Theorem 3.4.10.To prove that the graphs 06 and 15 are such extended graphs, we need theassertion that the graph of Fig. 6.4 is the (usual) graph of the obtuse cyclictetrahedron (a special case of the cyclic simplex from Chapter 4, Section 3).By Lemma 6.4.1, the extended graph has to contain positive edges (0, 1) and

Page 166: Graphs and Matrices in Geometry Fiedler.pdf

156 Applications

1

2 3

4

Fig. 6.3

1

2 3

4

Fig. 6.4

P4

Fig. 6.5

(0, 4). Using the formulae (3.9) for c2 and c3, we obtain that c2 < 0 and c3 < 0.Thus 06 is an extended graph. For the graph 15, the proof is analogous. ByTheorem 3.4.21, the following graphs are such extended graphs: 2, 3, 4, 12, 13,14 (they contain the graph 01), 7, 8 (contain the graph 06), 10, 11 (containthe graph 09), 16, 17, 18, 19, and 20 (contain the graph 15). The proof iscomplete. �

In a similar way as in Theorem 6.4.2, the characterization of extendedgraphs of triangles can be formulated:

Theorem 6.4.3 Neither the graph P4 from Fig. 6.5, nor any of its sub-graphs on four nodes, with the exception of the positive circuit, can be theextended graph of a triangle. All the remaining signed graphs with four nodes,whose positive part has edge-connectivity at least two, are extended graphs ofa triangle.

Proof. Follows immediately by comparing the possibilities with thegraphs in Fig. 3.2 in Section 3.4. �

Remark 6.4.4 As we already noticed, the problem of characterizing allextended graphs of n-simplexes is for n > 3 open.

6.5 Resistive electrical networks

In this section, we intend to show an application of geometric and graph-theoretic notions of the previous chapters to resistive electrical networks.

In the papers [20], [26] the author showed that if n ≥ 2, the following foursets are mathematically equivalent:A. Gn, the set of (in a sense connected) nonnegative valuations of a completegraph with n nodes 1, . . . , n;B. Mn, the set of real symmetric n×n M -matrices of rank n−1 with row-sumszero;C. Nn, the set of all connected electrical networks with n nodes and such thatthe branches contain resistors only;D. Sn, the set of all (in fact, classes of mutually congruent) hyperacute(n− 1)-simplexes with numbered vertices.

Page 167: Graphs and Matrices in Geometry Fiedler.pdf

6.5 Resistive electrical networks 157

The parameters which in separate cases determine the situations are:

(A) nonnegative weights wik(= wki), i, k = 1, . . . , n, i �= k assigned to theedges (i, k);(B) the negatives of the entries aik = aki of the matrix [aik], i �= k;(C) the conductivities Cik (the inverses of resistances) in the branch betweeni and k, if it is contained in the network, or zero, if not;(D) the negatives of the entries qik = qki of the Gramian of the (n−1)-simplex(or, the class).

Observe that the matrix of the graph in (B) is the Laplacian of the graphin (A) and that the (n− 1)-simplex in (D) is the simplex of the graph in thesense of Section 6.1.

The equivalent models have then the determining elements identical. Theadvantage is, of course, that we can use methods of each model in the othermodels, interpret them, etc. In addition, there are some common “invariants.”

Theorem 6.5.1 The following values are in each model the same:(A) edge-connectivity e(G);(B) the so-called measure of irreducibility of the matrix A, i.e.minM,∅�=M �=N

∑i∈M,k/∈M |aik|;

(C) the minimal conductivity between complementary groups of nodes;(D) the reciprocal value of the maximum of the squares of distances betweenpairs of complementary faces of the simplex, in other words the reciprocal ofthe square of the thickness of the simplex.

Probably most interesting is the following equivalence (in particular between(C) and (D)):

Theorem 6.5.2 Let i and j be distinct indices. Then the following quantitiesare equivalent:(A)

∑H∈Kij

π(H)/∑

H∈K π(H), where for the subgraph H of theweighted graph G, π(H) means the sum of all weights on the edges of H,K is the set of spanning trees of G, and Kij is the set of all forests in G withtwo components: one containing the node i, the second the node j.(B) detA(N\{i, j})/ detA(N\{i}), where N = {1, . . . , n} and A(M) for M ⊂N is the principal submatrix of the matrix A with rows in M .(C) Rij, which is the global resistance in the network between the ith and jthnode.(D) mij, i.e. the square of the distance between the ith and jth vertex of thesimplex.

The proof is in [20]. It is interesting that the equivalence between (C) and(D), which the author knew in 1962, was discovered and published in 1968 byD. J. H. Moore [32].

Page 168: Graphs and Matrices in Geometry Fiedler.pdf

158 Applications

We conclude the section by a comment on a closer relationship betweenthe models (C) and (D) (although it could be of interest to consider thecorresponding objects in the models (A) and (B) as well).

Let us observe first that the equivalence of (C) and (D) in Theorem 6.5.2answers the question about all possible structures of mutual resistances, whichcan occur between the n outlets of a black box if we know that the netis connected and contains resistors only. It is identical to the structure ofthe squares of distances in hyperacute (n− 1)-simplexes, i.e. of their Mengermatrices.

Furthermore, the theorem that every at least two-dimensional face of ahyperacute simplex is also hyperacute implies that the smaller black boxobtained from a black box by ignoring some of its outlets is also a realizableblack box itself.

It also follows that every realizable resistive black box can be realized by acomplete network in which every outlet is connected with every other outletby some resistive branch. Conversely we can possibly use this equivalencefor finding networks with the smallest structure (by adding further auxiliarynodes).

We can also investigate what happens if we make shortcuts between two ormore nodes in such a network. This corresponds to an orthogonal projectionof the simplex along the face connecting the shortcut vertices. Geometrically,this means that every such projection is again a hyperacute simplex.

From the network theory it is known that if we put on two disjoint sets ofoutlets a potential (that means that we join the nodes of each of these sets bya shortcut and then put the potential between), then each of the remainingnodes will have some potential. Geometrically this potential can be found asfollows: we find the layer of the maximum thickness which contains on oneof the boundary hyperplanes the vertices of the first group and on the otherthose of the second group. Thanks to the hyperacuteness property, all verticesof the simplex are contained in the layer. The ratio of the distances to the firstand the second hyperplane determines then the potential of each remainingvertex. Of course, the maximum layer is obtained by such position of theboundary hyperplane in which it is orthogonal to the linear space determinedby the union of both groups.

It would be desirable to find the interpretation of the numbers q0i and q00in the electrical model and of q00 in the graph-theoretical model.

Page 169: Graphs and Matrices in Geometry Fiedler.pdf

Appendix

A.1 Matrices

Throughout the book, we use basic facts from matrix theory and the theoryof determinants. The interested reader may find the omitted proofs in generalmatrix theory books, such as [28], [31], and others.

A matrix of type m-by-n or, equivalently, an m × n matrix, is a two-dimensional array of mn numbers (usually real or complex) arranged in m

rows and n columns (m, n positive integers)⎡⎢⎢⎣a11 a12 a13 . . . a1n

a21 a22 a23 . . . a2n

. . . . . . .

am1 am2 am3 . . . amn

⎤⎥⎥⎦ . (A.1)

We call the number aik the entry of the matrix (A.1) in the ith row and thekth column. It is advantageous to denote the matrix (A.1) by a single symbol,say A, C, etc. The set of m×n matrices with real entries is denoted by Rm×n.In some cases, m × n matrices with complex entries will occur and their setis denoted analogously by Cm×n. In some cases, entries can be polynomials,variables, functions, etc.

In this terminology, matrices with only one column (thus, n = 1) are calledcolumn vectors, and matrices with only one row (thus, m = 1) row vectors.In such a case, we write Rm instead of Rm×1 and – unless said otherwise –vectors are always column vectors.

Matrices of the same type can be added entrywise: if A = [aik], B = [bik],then A+B is the matrix [aik+bik]. We also admit multiplication of a matrix bya number (real, complex, a parameter, etc.). If A = [aik] and if α is a number(also called scalar), then αA is the matrix [αaik], of the same type as A.

An m× n matrix A = [aik] can be multiplied by an n× p matrix B = [bkl]as follows: AB is the m× p matrix C = [cil], where

cil = ai1b1l + ai2b2l + · · · + ainbnl.

Page 170: Graphs and Matrices in Geometry Fiedler.pdf

160 Appendix

It is important to notice that the matrices A and B can be multiplied (inthis order) only if the number of columns of A is the same as the number ofrows in B. Also, the entries of A and B should be multiplicable. In general, theproduct AB is not equal to BA, even if the multiplication of both productsis possible. On the other hand, the multiplication fulfills the associative law

(AB)C = A(BC)

as well as (in this case, two) distributive laws:

(A+B)C = AC +BC

and

A(B + C) = AB +AC,

whenever multiplications are possible.Of basic importance are the zero matrices, all entries of which are zeros,

and the identity matrices; the latter are square matrices, i.e. m = n, and haveones in the main diagonal and zeros elsewhere. Thus

[1],[

1 00 1

],

⎡⎣ 1 0 00 1 00 0 1

⎤⎦are identity matrices of order one, two, and three. We denote the zero matri-ces simply by 0, and the identity matrices by I, sometimes with a subscriptdenoting the order.

The identity matrices of appropriate orders have the property that

AI = A and IA = A

hold for any matrix A.Now let A = [aik] be an m × n matrix and let M, N , respectively, denote

the sets {1, . . . ,m}, {1, . . . , n}. If M1 is an ordered subset of M, i.e. M1 ={i1, . . . , ir}, i1 < · · · < ir, and N1 = {k1, . . . , ks} an ordered subset of N ,then A(M1,N1) denotes the r× s submatrix of A obtained from A by leavingthe rows with indices in M1 and removing all the remaining rows and leavingthe columns with indices in N1 and removing the remaining columns.

Particularly important are submatrices corresponding to consecutive rowindices as well as consecutive column indices. Such a submatrix is called ablock of the original matrix. We then obtain a partitioning of the matrix A

into blocks by splitting the set of row indices into subsets of the first, say,p1 indices, then the set of the next p2 indices, etc., up to the last pu indices,and similarly splitting the set of column indices into subsets of consecutive

Page 171: Graphs and Matrices in Geometry Fiedler.pdf

A.1 Matrices 161

q1, . . . , qv indices. If Ars denotes the block describing the pr × qs submatrixof A obtained by this procedure, A can be written as

A =

⎡⎢⎢⎣A11 A12 . . . A1v

A21 A22 . . . A2v

. . . . . .

Au1 Au2 . . . Auv

⎤⎥⎥⎦ .If, for instance, we partition the 3 × 4 matrix [aik] with p1 = 2, p2 = 1,

q1 = 1, q2 = 2, q3 = 1, we obtain the block matrix[A11 A12 A13

A21 A22 A23

],

where, say A12 denotes the block[a12 a13

a22 a23

].

On the other hand, we can form matrices from blocks. We only have tofulfill the condition that all matrices in each block row must have the samenumber of rows and all matrices in each block column must have the samenumber of columns.

The importance of block matrices lies in the fact that we can multiply blockmatrices in the same way as before:

Let A = [Aik] and B = [Bkl] be block matrices, A with m block rows andn block columns, and B with n block rows and p block columns. If (and thisis crucial) the first block column of A has the same number of columns as thefirst block row of B has number of rows, the second block column of A has thesame number of columns as the second block row of B has number of rows,etc., then the product C = AB is the matrix C = [Cil], where

Cil = Ai1B1l + Ai2B2l + · · · +AinBnl.

Now let A = [aik] be an m × n matrix. The n × m matrix C = [cpq] forwhich cpq = aqp, p = 1, . . . , n, q = 1, . . . ,m, is called the transpose matrix ofA. It is denoted by AT . If A and B are matrices that can be multiplied, then

(AB)T = BTAT .

Also

(AT )T = A

for every matrix A.This notation is also advantageous for vectors. We usually denote the

column vector u with entries (coordinates) u1, . . . , un as [u1, . . . , un]T .Of crucial importance are square matrices. If of fixed order, say n, and over

a fixed field, e.g. R or C, they form a set that is closed with respect to addition

Page 172: Graphs and Matrices in Geometry Fiedler.pdf

162 Appendix

and multiplication as well as transposition. Here, closed means that the resultof the operation again belongs to the set.

A square matrix A = [aik] of order n is called diagonal if aik = 0 when-ever i �= k. Such a matrix is usually described by its diagonal entries asdiag{a11, . . . , ann}. The matrix A is called lower triangular if aik = 0, when-ever i < k, and upper triangular if aik = 0, whenever i > k. We have then:

Observation A.1.1 The set of diagonal (respectively, lower triangular,respectively, upper triangular) matrices of fixed order over a fixed field R orC is closed with respect to both addition and multiplication.

A square matrix A = [aik] is called tridiagonal if aik = 0,whenever |i−k| >1; thus only diagonal entries and the entries right above or below the diagonalcan be different from zero.

A matrix A (necessarily square!) is called nonsingular if there exists a matrixC such that AC = CA = I. This matrix C (which can be shown to be unique)is called the inverse matrix of A and is denoted by A−1. Clearly

(A−1)−1 = A.

Observation A.1.2 If A, B are nonsingular matrices of the same order,then their product AB is also nonsingular and

(AB)−1 = B−1A−1.

Observation A.1.3 If A is nonsingular, then AT is nonsingular and

(AT )−1 = (A−1)T .

Let us recall now the notion of the determinant of a square matrix A = [aik]of order n. We denote it as detA

detA =∑

P=(k1,...,kn)

σ(P )a1k1a2k2 · · · ankn,

where the sum is taken over all permutations P = (k1, k2, . . . , kn) of the indices1, 2, . . . , n, and σ(P ), the sign of the permutation P , is 1 or −1, according towhether the number of pairs (i, j) for which i < j but ki > kj is even or odd.

In this connection, let us mention that an n × n matrix which has in thefirst row just one nonzero entry 1 in the position (1, k1), in the second rowone nonzero entry 1 in the position (2, k2) etc., in the last row one nonzeroentry 1 in the position (n, kn) is called a permutation matrix. If it is denotedas P , then

P PT = I. (A.2)

We now list some important properties of the determinants.

Page 173: Graphs and Matrices in Geometry Fiedler.pdf

A.1 Matrices 163

Theorem A.1.4 Let A = [aik] be a lower triangular, upper triangular, ordiagonal matrix of order n. Then

detA = a11a22 . . . ann.

In particular

det I = 1

for every identity matrix.

We denote by |S| the number of elements in a set S. Let A be a squarematrix of order n. Denote, as before, N = {1, . . . , n}. Whenever M1 ⊂ N ,M2 ⊂ N , and |M1| = |M2|, the submatrix A(M1,M2) is square. We thencall detA(M1,M2) a subdeterminant or minor of the matrix A. If M1 = M2,we speak about principal minors of A.

Theorem A.1.5 If P and Q are square matrices of the same order, then

detPQ = detP · detQ.

Theorem A.1.6 A matrix A = [aik] is nonsingular if and only if it is squareand its determinant is different from zero. In addition, the inverse A−1 = [αik]where

αik =Aki

detA, (A.3)

Aki being the algebraic complement of aki.

Remark A.1.7 The transpose matrix of the algebraic complements is calledthe adjoint matrix of the matrix A and denoted as adjA.

Remark A.1.8 Theorem A.1.5 implies that the product of a finite numberof nonsingular matrices of the same order is again nonsingular.

Remark A.1.9 Theorem A.1.6 implies that for checking that the matrix Cis the inverse of A, only one of the conditions AC = I, CA = I suffices.

Let us return, for a moment, to the block lower triangular matrix as inObservation A.1.1.

Theorem A.1.10 A block triangular matrix

A =

⎡⎢⎢⎣A11 0 0 . . . 0A21 A22 0 . . . 0. . . . . . 0

Ar1 Ar2 Ar3 . . . Arr

⎤⎥⎥⎦with square diagonal blocks is nonsingular if and only if all the diagonal blocksare nonsingular. In such a case the inverse A−1 = [Bik] is also lower block

Page 174: Graphs and Matrices in Geometry Fiedler.pdf

164 Appendix

triangular. The diagonal blocks Bii are the inverses of Aii and the subdiagonalblocks Bij, i > j, can be obtained recurrently from

Bij = −A−1ii

i−1∑k=j

AikBkj .

Remark A.1.11 This theorem applies, of course, also to the simplest casewhen the blocks Aik are entries of the lower triangular matrix [aik]. An anal-ogous result on inverting upper triangular matrices, or upper block triangularmatrices, follows by transposing the matrix and using Observation A.1.3.

A square matrix A of order n is called strongly nonsingular if all the principalminors detA(Nk,Nk), k = 1, . . . , n, Nk = {1, . . . , k} are different from zero.

Theorem A.1.12 Let A be a square matrix. Then the following are equiva-lent:

(i) A is strongly nonsingular.(ii) A has an LU-decomposition, i.e. there exist a nonsingular lower triangular

matrix L and a nonsingular upper triangular matrix U such that A = LU .

The condition (ii) can be formulated in a stronger form: A = BDC, whereB is a lower triangular matrix with ones on the diagonal, C is an uppertriangular matrix with ones on the diagonal, and D is a nonsingular diagonalmatrix. This factorization is uniquely determined. The diagonal entries dk ofD are

d1 = A({1}, {1}), dk =detA(Nk,Nk)

detA(Nk−1,Nk−1), k = 2, . . . , n.

Now let

A =[A11 A12

A21 A22

](A.4)

be a block matrix in which A11 is nonsingular. We then call the matrix

A22 − A21A−111 A12

the Schur complement of the submatrix A11 in A and denote it by [A/A11].Here, the matrix A22 does not even need to be square.

Theorem A.1.13 If the matrix

A =[A11 A12

A21 A22

]is square and A11 is nonsingular, then the matrix A is nonsingular if and onlyif the Schur complement [A/A11] is nonsingular. We have then

detA = detA11 det[A/A11],

Page 175: Graphs and Matrices in Geometry Fiedler.pdf

A.1 Matrices 165

and if the inverse

A−1 =[B11 B12

B21 B22

]is written in the same block form, then

[A/A11] = B−122 . (A.5)

If A is not nonsingular, then the Schur complement [A/A11] is also not non-singular; if Az = 0, then [A/A11]z = 0, where z is the column vector obtainedfrom z by omitting coordinates with indices in A11.

Starting with the inverse matrix, we obtain immediately:

Corollary A.1.14 The inverse of a nonsingular principal submatrix of anonsingular matrix is the Schur complement of the inverse with respect tothe submatrix with the complementary set of indices. In other words, if bothA and A11 in

A =[A11 A12

A21 A22

]are nonsingular, then

A−111 = [A−1/(A−1)22]. (A.6)

Remark A.1.15 The principal submatrices of a square matrix enjoy theproperty that if A1 is a principal submatrix of A2 and A2 is a principal sub-matrix of A3, then A1 is a principal submatrix of A3 as well. This property isessentially reflected, in the case of a nonsingular matrix, thanks to CorollaryA.1.14, by the Schur complements: if

A =

⎡⎣ A11 A12 A13

A21 A22 A23

A31 A32 A33

⎤⎦and [

A11 A12

A21 A22

]is denoted as A, then

[A/A] = [[A/A11]/[A/A11]]. (A.7)

Let us also mention the Sylvester identity in the simplest case which showshow the principal minors of two inverse matrices are related (cf. [31], p. 21):

Page 176: Graphs and Matrices in Geometry Fiedler.pdf

166 Appendix

Theorem A.1.16 Let again

A =[A11 A12

A21 A22

]be a nonsingular matrix with the inverse

A−1 =[B11 B12

B21 B22

].

Then

detB11 =detA22

detA. (A.8)

If V1 is a nonempty subset in a vector space V , which is closed with respectto the operations of addition and scalar multiplication in V , then we say thatV1 is a linear subspace of V . It is clear that the intersection of linear subspacesof V is again a linear subspace of V . In this sense, the set (0) is in fact a linearsubspace contained in all linear subspaces of V .

If S is some set of vectors of a finite-dimensional vector space V , then thelinear subspace of V of smallest dimension that contains the set S is calledthe linear hull of S and its dimension (necessarily finite) is called the rankof S.

We are now able to present, without proof, an important statement aboutthe rank of a matrix.

Theorem A.1.17 Let A be an m×n matrix. Then the rank of the system ofthe columns (as vectors) of A is the same as the rank of the system of the rows(as vectors) of A. This common number r(A), called the rank of the matrixA, is equal to the maximum order of all nonsingular submatrices of A. (If Ais the zero matrix, thus containing no nonsingular submatrix, then r(A) = 0.)

Theorem A.1.18 A square matrix A is singular if and only if there exists anonzero vector x for which Ax = 0.

The rank function enjoys important properties. We list some:

Theorem A.1.19 We have:

(i) For any matrix A,

r(AT ) = r(A).

(ii) If the matrices A and B are of the same type, then

r(A+B) ≤ r(A) + r(B).

(iii) If the matrices A and B can be multiplied, then

r(AB) ≤ min(r(A), r(B)).

Page 177: Graphs and Matrices in Geometry Fiedler.pdf

A.1 Matrices 167

(iv) If A (respectively, B) is nonsingular, then r(AB) = r(B) (respectively,r(AB) = r(A)).

(v) If a matrix A has rank one, then there exist nonzero column vectors xand y such that A = xyT .

Theorem A.1.13 can now be completed as follows:

Theorem A.1.20 In the same notation as in (A.4)

r(A) = r(A11) + r([A/A11]).

For square matrices, the following important notions have to be mentioned.Let A be a square matrix of order n. A nonzero column vector x is called

an eigenvector of A if Ax = λx for some number (scalar) λ. This number λ iscalled an eigenvalue of A corresponding to the eigenvector x.

Theorem A.1.21 A necessary and sufficient condition that a number λ

is an eigenvalue of a matrix A is that the matrix A − λI is singular,i.e. that

det(A− λI) = 0.

This formula is equivalent to

(−λ)n + c1(−λ)n−1 + · · · + cn−1(−λ) + cn = 0, (A.9)

where ck is the sum of all principal minors of A of order k

ck =∑

M⊂N , |M|=k

detA(M,M), N = {1, . . . , n}.

The polynomial on the left-hand side of (A.9) is called the characteristicpolynomial of the matrix A. It has degree n.

We have thus:

Theorem A.1.22 A square complex matrix A = [aik] of order n has n

eigenvalues (some may coincide). These are all the roots of the characteristicpolynomial of A. If we denote them as λ1, . . . , λn, then

n∑i=1

λi =n∑

i=1

aii, (A.10)

λ1λ2 · . . . · λn = detA.

The number∑n

i=1 aii is called the trace of the matrix A. We denote it bytrA. By (A.10), trA is the sum of all the eigenvalues of A.

Remark A.1.23 A real square matrix need not have real eigenvalues, butas its characteristic polynomial has real coefficients, the nonreal eigenvaluesoccur in complex conjugate pairs.

Page 178: Graphs and Matrices in Geometry Fiedler.pdf

168 Appendix

Theorem A.1.24 A real or complex square matrix is nonsingular if and onlyif all its eigenvalues are different from zero. In such a case, the inverse haseigenvalues reciprocal to the eigenvalues of the matrix.

We can also use the more general Gaussian block elimination method.

Theorem A.1.25 Let the system Ax = b be in the block form

A11x1 + A12x2 = b1,

A21x1 + A22x2 = b2,

where x1, x2 are vectors.If A11 is nonsingular, then this system is equivalent to the system

A11x1 +A12x2 = b1,

(A22 − A21A−111 A12)x2 = b2 −A21A

−111 b1.

Remark A.1.26 In this theorem, the role of the Schur complement[A/A11] = A22 − A21A

−111 A12 for elimination is recognized.

Now, we pay attention to a specialized real vector space called Euclideanvector space in which magnitude (length) of a vector is defined by the so-calledinner product of two vectors.

For the sake of completeness, recall that a real finite-dimensional vectorspace E is called a Euclidean vector space if a function 〈x, y〉 : E × E → R isgiven that satisfies:

E 1. 〈x, y〉 = 〈y, x〉 for all x ∈ E, y ∈ E;E 2. 〈x1 + x2, y〉 = 〈x1, y〉 + 〈x2, y〉 for all x1 ∈ E, x2 ∈ E, and y ∈ E;E 3. 〈αx, y〉 = α〈x, y〉 for all x ∈ E, y ∈ E, and all real α;E 4. 〈x, x〉 ≥ 0 for all x ∈ E, with equality if and only if x = 0.

The property E 4 enables us to define the length ‖x‖ of the vector x as√〈x, x〉. A vector is called a unit vector if its length is one. Vectors x and

y are orthogonal if 〈x, y〉 = 0. A system u1, . . . , um of vectors in E is calledorthonormal if 〈ui, uj〉 = δij , the Kronecker delta.

It is easily proved that every orthonormal system of vectors is linearly inde-pendent. If the number of vectors in such a system is equal to the dimensionof E, it is called an orthonormal basis of E.

The real vector space Rn of column vectors will become a Euclidean spaceif the inner product of the vectors x = [x1, . . . , xn]T and y = [y1, . . . , yn]T isdefined as

〈x, y〉 = x1y1 + · · · + xnyn;

in other words, in matrix notation

〈x, y〉 = xT y (= yTx). (A.11)

Page 179: Graphs and Matrices in Geometry Fiedler.pdf

A.1 Matrices 169

Notice the following:

Theorem A.1.27 If a vector is orthogonal to all the vectors of a basis, thenit is the zero vector.

We call now a matrix A = [aik] in Rn×n symmetric if aik = aki for all i, k,or equivalently, if A = AT . We call the matrix A orthogonal if AAT = I. Thus:

Theorem A.1.28 The sum of two symmetric matrices in Rn×n is symmet-ric; the product of two orthogonal matrices in Rn×n is orthogonal. The identitymatrix is orthogonal and the transpose (which is equal to the inverse) of anorthogonal matrix is orthogonal as well.

The following theorem on orthogonal matrices holds (see [28]):

Theorem A.1.29 Let Q be an n × n real matrix. Then the following areequivalent:

(i) Q is orthogonal.(ii) For all x ∈ Rn

‖Qx‖ = ‖x‖.(iii) For all x ∈ Rn, y ∈ Rn

〈Qx,Qy〉 = 〈x, y〉.(iv) Whenever u1, . . . , un is an orthonormal basis, then Qu1, . . . , Qun is an

orthonormal basis as well.(v) There exists an orthonormal basis v1, . . . , vn such that Qv1, . . . , Qvn is

again an orthonormal basis.

The basic theorem on symmetric matrices can be formulated as follows.

Theorem A.1.30 Let A be a real symmetric matrix. Then there exist anorthogonal matrix Q and a real diagonal matrix D such that A = QDQT .

The diagonal entries of D are the eigenvalues of A, and the columns of Q areeigenvectors of A; the kth column corresponds to the kth diagonal entry of D.

Theorem A.1.31 All the eigenvalues of a real symmetric matrix are real. Forevery real symmetric matrix, there exists an orthonormal basis of R consistingof its eigenvectors.

If a real symmetric matrix A has p positive and q negative eigenvalues, thenthe difference p− q is called the signature of the matrix A. By the celebratedInertia Theorem the following holds (cf. [28], Theorem 2.6.1):

Theorem A.1.32 The signatures of a real symmetric matrix A and CACT

are equal whenever C is a real nonsingular matrix of the same type as A.

We also should mention the theorem on the interlacing of eigenvalues ofsubmatrices (cf. [31], p.185).

Page 180: Graphs and Matrices in Geometry Fiedler.pdf

170 Appendix

Theorem A.1.33 Let A ∈ Rn×n be symmetric, y ∈ Rn and α a real number.Let A be the matrix

A =[A y

yT α

].

If λ1 ≤ λ2 ≤ · · · ≤ λn (respectively, λ1 ≤ λ2 ≤ · · · ≤ λn+1) are the eigenvaluesof A (respectively, A), then

λ1 ≤ λ1 ≤ λ2 ≤ λ2 ≤ · · · ≤ λn ≤ λn ≤ λn+1.

An important subclass of the class of real symmetric matrices is that ofpositive definite (respectively, positive semidefinite) matrices.

A real symmetric matrix A of order n is called positive definite (respectively,positive semidefinite) if for every nonzero vector x ∈ Rn, the product xTAx

is positive (respectively, nonnegative).In the following theorem we collect the basic characteristic properties of

positive definite matrices. For the proof, see [28].

Theorem A.1.34 Let A = [aik] be a real symmetric matrix of order n. Thenthe following are equivalent:

(i) A is positive definite.(ii) All principal minors of A are positive.(iii) (Sylvester criterion) detA(Nk,Nk) > 0 for k = 1, . . . , n, where Nk =

{1, . . . , k}. In other words

a11 > 0, det[a11 a12

a21 a22

]> 0, . . . , detA > 0.

(iv) There exists a nonsingular lower triangular matrix B such that A =BBT .

(v) There exists a nonsingular matrix C such that A = CCT .(vi) The sum of all the principal minors of order k is positive for k =

1, . . . , n.(vii) All the eigenvalues of A are positive.(viii) There exists an orthogonal matrix Q and a diagonal matrix D with

positive diagonal entries such that A = QDQT .

Corollary A.1.35 If A is positive definite, then A−1 exists and is positivedefinite as well.

Remark A.1.36 Observe also that the identity matrix is positive definite.

For positive semidefinite matrices, we have:

Theorem A.1.37 Let A = [aik] be a real symmetric matrix of order n. Thenthe following are equivalent:

(i) A is positive semidefinite.(ii) The matrix A+ εI is positive definite for all ε > 0.

Page 181: Graphs and Matrices in Geometry Fiedler.pdf

A.1 Matrices 171

(iii) All the principal minors of A are nonnegative.(iv) There exists a square matrix C such that A = CCT .(v) The sum of all principal minors of order k is nonnegative for k =

1, . . . , n.(vi) All the eigenvalues of A are nonnegative.(vii) There exists an orthogonal matrix Q and a diagonal matrix D with

nonnegative diagonal entries such that A = QDQT .

Corollary A.1.38 A positive semidefinite matrix is positive definite if andonly if it is nonsingular.

Corollary A.1.39 If A is positive definite and α a positive number, then αAis positive definite as well. If A and B are positive definite of the same order,then A+ B is positive definite; this is so even if one of the matrices A, B ispositive semidefinite.

The expression xTAx – in the case that A is symmetric – is called thequadratic form corresponding to the matrix A. If A is positive definite (respec-tively, positive semidefinite), this quadratic form is called positive definite(respectively, positive semidefinite). Observe also the following property:

Theorem A.1.40 If A is a positive semidefinite matrix and y a real vectorfor which yTAy = 0, then Ay = 0.

Remark A.1.41 Theorem A.1.37 can be also formulated using the rank r ofA in the separate items. So in (iv), the matrix C can be taken as an n × r

matrix, in (vii) the diagonal matrix D can be specified as having r positiveand n− r zero diagonal entries, etc.

We should not forget to mention important inequalities for positive definitematrices. In the first, we use the symbol N for the index set {1, 2, . . . , n} andA(M) for the principal submatrix of A with index set M . For M void, weshould put detA(M) = 1.

Theorem A.1.42 (Generalized Hadamard inequality (cf. [31]), p. 478) LetA be a positive definite n× n matrix. Then for any M ⊂ N

detA ≤ detA(M) detA(N\M). (A.12)

Remark A.1.43 Equality in (A.12) is attained if and only if all entries ofA with one index in M and the second in N\M are equal to zero. A furthergeneralization of (A.12) is the Hadamard–Fischer inequality (cf. [31], p. 485)

detA(N1 ∪N2) detA(N1 ∩N2) ≤ detA(N1) detA(N2) (A.13)

for any subsets N1 ⊂ N , N2 ⊂ N .

Page 182: Graphs and Matrices in Geometry Fiedler.pdf

172 Appendix

Concluding this section, let us notice a close relationship of the class ofpositive semidefinite matrices with Euclidean geometry. If v1, v2, . . . , vm is asystem of vectors in a Euclidean vector space, then the matrix of the innerproducts

G(v1, v2, . . . , vm) =

⎡⎢⎢⎣〈v1, v1〉 〈v1, v2〉 · · · 〈v1, vm〉〈v2, v1〉 〈v2, v2〉 · · · 〈v2, vm〉

. . . .

〈vm, v1〉 〈vm, v2〉 · · · 〈vm, vm〉

⎤⎥⎥⎦ , (A.14)

the so-called Gram matrix of the system, enjoys the following properties.(Because of its importance for our approach, we supply proofs of the nextfour theorems.)

Theorem A.1.44 Let a1, a2, . . . , as be vectors in En. Then the Gram matrixG(a1, a2, . . . , as) is positive semidefinite. It is nonsingular if and only ifa1, . . . , as are linearly independent.

If G(a1, a2, . . . , as) is singular, then every linear dependence relations∑

i=1

αiai = 0 (A.15)

implies the same relation among the columns of the matrix G(a1, . . . , as), i.e.

G(a1, . . . , as)[α] = 0 for [α] = [α1, . . . , αs]T , (A.16)

and conversely, every linear dependence relation (A.16) among the columns ofG(a1, . . . , as) implies the same relation (A.15) among the vectors a1, . . . , as.

Proof. Positive semidefiniteness of G(a1, . . . , as) follows from the fact thatfor x = [x1, . . . , xs]T , the corresponding quadratic form xTG(a1, . . . , as)x isequal to the inner product

⟨∑si=1 xiai,

∑si=1 xiai

⟩, which is nonnegative. In

addition, if the vectors a1, . . . , as are linearly independent, this inner productis positive unless x is zero. Now let (A.15) be fulfilled. Then

G(a1, . . . , as)[α] =

⎡⎢⎣⟨∑s

i=1 αiai, a1

⟩...⟨∑s

i=1 αiai, as

⟩⎤⎥⎦ ,

which is the zero vector.Conversely, if (A.16) is fulfilled, then

[α]TG(a1, . . . , as)[α] =⟨ s∑

i=1

αiai,s∑

i=1

αiai

⟩is zero. Thus we obtain (A.15). �

Page 183: Graphs and Matrices in Geometry Fiedler.pdf

A.1 Matrices 173

Theorem A.1.45 Every positive semidefinite matrix is the Gram matrix of some system of vectors in any Euclidean space the dimension of which is greater than or equal to the rank of the matrix.

Proof. Let A be such a matrix, and let r be its rank. By Remark A.1.41, in (iv) of Theorem A.1.37, A can be written as CC^T, where C is a (real) n × r matrix. If the rows of C are considered as coordinates of vectors in an r-dimensional (or higher-dimensional) Euclidean space, then A is the Gram matrix of these vectors. □
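The construction in the proof can be imitated numerically: a spectral factorization gives a factor C with A = CC^T, whose rows serve as coordinates of the desired vectors. A sketch assuming NumPy (the matrix below is an arbitrary rank-2 example):

```python
import numpy as np

# A positive semidefinite matrix of rank 2 (constructed for illustration).
B = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, -1.0]])
A = B @ B.T                                   # 3 x 3, PSD, rank 2

# Spectral factorization A = V diag(w) V^T; keep the positive eigenvalues.
w, V = np.linalg.eigh(A)
pos = w > 1e-12
C = V[:, pos] * np.sqrt(w[pos])               # n x r factor with A = C C^T

vectors = C                                   # rows: vectors in E^r whose Gram matrix is A
print(np.allclose(vectors @ vectors.T, A))    # True
```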

This theorem can be used for obtaining matrix inequalities (cf. [9]).

Theorem A.1.46 Let A = [a_{ik}] be a positive semidefinite n × n matrix with row sums zero. Then the square roots of the diagonal entries of A satisfy the polygonal inequality
\[
2 \max_i \sqrt{a_{ii}} \le \sum_i \sqrt{a_{ii}}. \qquad (A.17)
\]

Proof. By Theorem A.1.45, A is the Gram matrix of a system of vectors u1, . . . , un whose sum is the zero vector, thus forming a closed polygon. Therefore, the length |u_i|, which is √(a_{ii}), is less than or equal to the sum of the lengths of the remaining vectors. Consequently, (A.17) follows. □

Theorem A.1.47 Let a1, . . . , an be some basis of an n-dimensional Euclidean space E_n. Then there exists a unique ordered system of vectors b1, . . . , bn in E_n for which
\[
\langle a_i, b_k \rangle = \delta_{ik}, \qquad i, k = 1, \ldots, n. \qquad (A.18)
\]
The Gram matrices G(a1, . . . , an) and G(b1, . . . , bn) are inverse to each other.

Proof. Uniqueness: Suppose that b′1, . . . , b′n also satisfy (A.18). Then for k = 1, . . . , n, the vector bk − b′k is orthogonal to all the vectors ai, hence to all the vectors in E_n. By Theorem A.1.27, b′k = bk, k = 1, . . . , n.

To prove the existence of b1, . . . , bn, denote by G(a) the Gram matrix G(a1, . . . , an). Observe that the fact that a vector b satisfies the linear dependence relation
\[
b = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n \qquad (A.19)
\]
is equivalent to
\[
\begin{bmatrix}
\langle a_1, b \rangle\\
\langle a_2, b \rangle\\
\vdots\\
\langle a_n, b \rangle
\end{bmatrix}
= G(a)
\begin{bmatrix}
x_1\\ x_2\\ \vdots\\ x_n
\end{bmatrix}.
\]


Thus the vectors b_i, i = 1, 2, . . . , n, defined by (A.19) for x = G(a)^{-1} e_i, where e_i = [δ_{i1}, δ_{i2}, . . . , δ_{in}]^T, satisfy (A.18). For fixed i and x for b = b_i, we obtain from (A.19), using inner multiplication by b_j and (A.18), that x_j = ⟨b_i, b_j⟩. Therefore, using again inner multiplication of (A.19) by a_j, we obtain I = G(a) G(b1, b2, . . . , bn). □

The two bases a1, . . . , an and b1, . . . , bn satisfying (A.18) are called biorthogonal bases.
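In an orthonormal coordinate system, the biorthogonal basis is obtained from the inverse of the coordinate matrix. A small numerical illustration, assuming NumPy and an arbitrarily chosen basis, confirms (A.18) and the statement about the Gram matrices:

```python
import numpy as np

# A basis of E^3 written as rows (any nonsingular matrix will do).
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])

B = np.linalg.inv(A).T                            # rows b_k satisfy <a_i, b_k> = delta_ik
print(np.allclose(A @ B.T, np.eye(3)))            # biorthogonality (A.18)

Ga = A @ A.T                                      # G(a_1, ..., a_n)
Gb = B @ B.T                                      # G(b_1, ..., b_n)
print(np.allclose(Ga @ Gb, np.eye(3)))            # the Gram matrices are inverse to each other
```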

As we know, the cosine of the angle ϕ of two nonzero vectors u1, u2 satisfies
\[
\cos\varphi = \frac{\langle u_1, u_2\rangle}{\sqrt{\langle u_1, u_1\rangle}\,\sqrt{\langle u_2, u_2\rangle}}.
\]
Therefore
\[
\sin^2\varphi = \frac{1}{\langle u_1, u_1\rangle \langle u_2, u_2\rangle}\,\det G(u_1, u_2),
\]
where G(u1, u2) is the Gram matrix of u1 and u2.

Formally, this notion can be generalized for the case of more than two nonzero vectors. Thus, the number
\[
\sin(u_1, \ldots, u_n) = \sqrt{\frac{\det G(u_1, \ldots, u_n)}{\langle u_1, u_1\rangle \cdots \langle u_n, u_n\rangle}},
\]
which is always less than or equal to 1, is called the sine, sometimes the spatial sine, of the vectors u1, . . . , un.
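A short computational sketch of the spatial sine (NumPy assumed; the function name spatial_sine is ours, not the book's):

```python
import numpy as np

def spatial_sine(*vectors):
    """sin(u_1, ..., u_n) = sqrt(det G / (<u_1,u_1> ... <u_n,u_n>))."""
    U = np.vstack(vectors)
    G = U @ U.T                        # Gram matrix of the given vectors
    denom = np.prod(np.diag(G))
    return np.sqrt(np.linalg.det(G) / denom)

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([1.0, 1.0, 0.0])
u3 = np.array([0.0, 0.0, 2.0])

# For two vectors the spatial sine is the ordinary |sin| of their angle (45 degrees here).
print(spatial_sine(u1, u2))            # ~ 0.7071
print(spatial_sine(u1, u2, u3))        # <= 1; equal to 1 only for mutually orthogonal vectors
```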

We add a few words on measuring. The determinant of the Gram matrix G(u1, . . . , un) is considered as the square of the n-dimensional volume of the parallelepiped spanned by the vectors u1, . . . , un. On the other hand, the volume of the unit cube is an n!-multiple of the volume of the special Schlaefli right n-simplex with unit legs, since the cube can be put together from that number of such congruent n-simplexes. By affine transformations, it follows that the volume of the mentioned parallelepiped is the n!-multiple of the volume V_Σ of the n-simplex Σ with vertices A1, A2, . . . , A_{n+1}, where A_{n+1} is a chosen point in E_n and the A_i are chosen by A_i = A_{n+1} + u_i, i = 1, . . . , n.

We now intend to relate the determinant of the extended Menger matrix of Σ with the determinant of the Gram matrix G(u1, . . . , un). Thus, as in (1.19), let m_{ik} denote the square of the length of the edge A_iA_k for i ≠ k. We have then ⟨u_i, u_i⟩ = m_{i,n+1}, and, since ⟨u_i − u_k, u_i − u_k⟩ = m_{ik}, we obtain ⟨u_i, u_k⟩ = (1/2)(m_{i,n+1} + m_{k,n+1} − m_{ik}). Multiplying each row of G(u1, . . . , un) by 2, we obtain for the determinant
\[
\det G(u_1, \ldots, u_n) = \frac{1}{2^n}\,\det Z,
\]


where
\[
Z = \begin{bmatrix}
2m_{1,n+1} & m_{1,n+1} + m_{2,n+1} - m_{12} & \cdots & m_{1,n+1} + m_{n,n+1} - m_{1n}\\
m_{2,n+1} + m_{1,n+1} - m_{12} & 2m_{2,n+1} & \cdots & m_{2,n+1} + m_{n,n+1} - m_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
m_{n,n+1} + m_{1,n+1} - m_{1n} & m_{n,n+1} + m_{2,n+1} - m_{2n} & \cdots & 2m_{n,n+1}
\end{bmatrix}.
\]
Bordering now Z by a column with entries −m_{i,n+1} and another column of all ones, and by two rows, the first with n + 1 zeros and a 1, the second with n zeros, then 1 and 0, the determinant of Z just changes the sign. Add now the (n + 1)th column to each of the first n columns and subtract the m_{k,n+1}-multiple of the last column from the kth column for k = 1, . . . , n. We obtain the extended Menger matrix of Σ in which each entry m_{ik} is multiplied by −1. Its determinant is thus (−1)^n det M_0 in the notation of (1.26).

Altogether, we arrive at
\[
V_\Sigma^2 = \Bigl(\frac{1}{n!}\Bigr)^{2} \det G(u_1, \ldots, u_n) = \frac{(-1)^{n+1}}{(n!)^2\, 2^n}\,\det M_0,
\]
as was claimed in (1.28).
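The first equality, V_Σ² = (1/n!)² det G(u1, . . . , un), can be checked on a concrete tetrahedron; the sketch below (NumPy assumed) compares it with the elementary determinant formula for the simplex volume and does not reproduce the extended Menger matrix M_0 of (1.26).

```python
import numpy as np
from math import factorial

# Edge vectors u_i = A_i - A_{n+1} of a tetrahedron in E^3 (chosen arbitrarily).
U = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.5, 1.0, 3.0]])
n = U.shape[0]

G = U @ U.T                                    # Gram matrix G(u_1, ..., u_n)
vol_from_gram = np.sqrt(np.linalg.det(G)) / factorial(n)

# Independent check: the (unsigned) volume of the simplex is |det U| / n!.
vol_direct = abs(np.linalg.det(U)) / factorial(n)

print(vol_from_gram, vol_direct)               # both ~ 1.0
```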

A.2 Graphs and matrices

A (finite) directed graph G = (V, E) consists of the set of nodes V and the set of arcs E, a subset of the cartesian product V × V. This means that every arc is an ordered pair of nodes and can thus be depicted in the plane by an arc with an arrow if the nodes are depicted as points.

For our purpose, it will be convenient to choose V as the set {1, 2, . . . , n} (or, to order the set V). If now E is the set of arcs of G, define an n × n matrix A(G) as follows: if there is an arc "starting" in i and "ending" in k, the entry in the position (i, k) will be one; if there is no arc starting in i and ending in k, the entry in the position (i, k) will be zero.

We have thus assigned to a finite directed graph (usually called a digraph) a (0, 1) matrix A(G). Conversely, let C = [c_{ik}] be an n × n (say, real) matrix. We can assign to C a digraph G(C) = (V, E) as follows: V is the set {1, . . . , n}, and E is the set of all pairs (i, k) for which c_{ik} is different from zero.

The graph theory terminology speaks about a walk in G from the node i to the node k if there are nodes j1, . . . , js such that all the arcs (i, j1), (j1, j2), . . . , (js, k) are in E; s + 1 is then the length of this walk. The nodes in the walk need not be distinct. If they are, the walk is a path. If i coincides with k, we speak about a cycle; its length is then again s + 1. If all the remaining nodes are distinct, the cycle is simple. The arcs (k, k) themselves are called loops. The digraph is strongly connected if there is at least


one path from each node to any other node. There is an equivalent property for matrices.

Let P be a permutation matrix. By (A.2), we have PP^T = I. If C is a square matrix and P a permutation matrix of the same order, then PCP^T is obtained from C by a simultaneous permutation of rows and columns; the diagonal entries remain diagonal. Observe that the digraph G(PCP^T) differs from the digraph G(C) only by a different numbering of the nodes.

We say that a square matrix C is reducible if it has the block form
\[
C = \begin{bmatrix} C_{11} & C_{12}\\ 0 & C_{22} \end{bmatrix},
\]
where both matrices C11, C22 are square of order at least one, or if it can be brought to such form by a simultaneous permutation of rows and columns.

A matrix is called irreducible if it is square and not reducible. (Observe that a 1 × 1 matrix is always irreducible, even if the entry is zero.)

This relatively complicated notion is important for (in particular, nonnegative) matrices and their applications, e.g. in probability theory. However, it has a very simple equivalent in the graph-theoretical setting.

Theorem A.2.1 A matrix C is irreducible if and only if the digraph G(C) is strongly connected.
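Theorem A.2.1 suggests testing irreducibility by testing strong connectivity of G(C). A minimal sketch assuming NumPy and SciPy (the helper name is ours):

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def is_irreducible(C):
    """A square matrix of order > 1 is irreducible iff the digraph G(C) is strongly
    connected (Theorem A.2.1); a 1 x 1 matrix is always irreducible."""
    n = C.shape[0]
    if n == 1:
        return True
    adjacency = (C != 0).astype(int)               # the (0, 1) pattern defines G(C)
    n_comp, _ = connected_components(adjacency, directed=True, connection='strong')
    return n_comp == 1

C1 = np.array([[0, 1], [1, 0]])     # cycle 1 -> 2 -> 1: strongly connected
C2 = np.array([[1, 1], [0, 1]])     # block upper triangular: reducible
print(is_irreducible(C1), is_irreducible(C2))      # True False
```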

A more detailed view is given in the following theorem.

Theorem A.2.2 Every square real matrix can be brought by a simultaneous permutation of rows and columns to the form
\[
\begin{bmatrix}
C_{11} & C_{12} & C_{13} & \cdots & C_{1r}\\
0 & C_{22} & C_{23} & \cdots & C_{2r}\\
0 & 0 & C_{33} & \cdots & C_{3r}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & C_{rr}
\end{bmatrix},
\]
in which the diagonal blocks are irreducible (thus square) matrices.

This theorem has a counterpart in graph theory. Every finite digraph has the following structure. It consists of so-called strong components, which are the maximal strongly connected subdigraphs; these can then be numbered in such a way that there is no arc from a node of a strong component with a larger number into a node belonging to a strong component with a smaller number.

A digraph is symmetric if to every arc (i, j) in E the arc (j, i) is also in E. Such a symmetric digraph can be simply treated as an undirected graph. In graph theory, a finite undirected graph (or briefly graph) G = (V, H) is


introduced as an ordered pair of two finite sets (V, H), where V is the set of nodes and H is the set of some unordered pairs of the elements of V, which will here be called edges. A finite undirected graph can also be represented by means of a plane diagram in such a way that the nodes of the graph are represented by points in the plane and the edges of the graph by segments (or, arcs) joining the corresponding two (possibly also identical) points in the plane. In contrast to the representation of digraphs, the edges are not equipped with arrows.

It is usually required that an undirected graph contains neither loops (i.e., edges (u, u) where u ∈ V), nor more edges joining the same pair of nodes (the so-called multiple edges).

If (u, v) is an edge of a graph, we say that this edge is incident with the nodes u and v, or that the nodes u and v are incident with this edge. In a graph containing no loops, a node is said to have degree k if it is incident with exactly k edges. The nodes of degree 0 are called isolated; the nodes of degree 1 are called end-nodes. An edge incident with an end-node is called a pending edge.

We have introduced the concepts of a (directed) walk and a (directed) path in digraphs. Analogous concepts in undirected graphs are a walk and a path. A walk in a graph G is a sequence of nodes (not necessarily distinct), say (u1, u2, . . . , us), such that every two consecutive nodes uk and uk+1 (k = 1, . . . , s − 1) are joined by an edge in G. A path in a graph G is then such a walk in which all the nodes are distinct. A polygon, or a circuit in G, is a walk whose first and last nodes are identical and, if the last node is removed, all the remaining nodes are distinct. At the same time, this first (and also last) node of the walk representing a circuit is not considered distinguished in the circuit.

We also speak about a subgraph of a given graph and about a union of graphs. A connected graph is defined as a graph in which there exists a path between any two distinct nodes.

If the graph G is not connected, we introduce the notion of a component of G as a subgraph of G which is connected but is not contained in any other connected subgraph of G.

With connected graphs, it is important to study the question of how connectivity changes when some edge is removed (the set of nodes remaining the same), or when some node as well as all the edges incident with it are removed. An edge of a graph is called a bridge if it is not a pending edge and if the graph has more components after removing this edge. A node of a connected graph such that the graph has again more components after removing this node (together with all incident edges) is called a cut-node. More generally, we call a subset of nodes whose removal results in a disconnected graph a cut-set of the graph, for short a cut.


The following theorems are useful for the study of cut-nodes and connectivity in general.

Theorem A.2.3 If a longest path in the graph G joins the nodes u and v, then neither u nor v is a cut-node.

Theorem A.2.4 A connected graph with n nodes, without loops and multiple edges, has at least n − 1 edges. If it has more than n − 1 edges, it contains a circuit as a subgraph.

We now present a theorem on an important type of connected graph.

Theorem A.2.5 Let G be a connected graph, without loops and multiple edges, with n nodes. Then the following conditions are equivalent:

(i) The graph G has exactly n − 1 edges.
(ii) Each edge of G is either a pending edge or a bridge.
(iii) There exists one and only one path between any two distinct nodes of G.
(iv) The graph G contains no circuit as a subgraph, but adding any new edge (and no new node) to G, we always obtain a circuit.
(v) The graph G contains no circuit.

A connected graph satisfying one (and then all) of the conditions (i) to (v) of Theorem A.2.5 is called a tree.

Every path is a tree; another example of a tree is a star, i.e. a graph with n nodes, n − 1 of which are end-nodes, and the last node is joined with all these end-nodes.

A graph, every component of which is a tree, is called a forest.

A subgraph of a connected graph G which has the same vertices as G and which is a tree is called a spanning tree of G.

Theorem A.2.6 There always exists a spanning tree of a connected graph. Moreover, choosing an arbitrary subgraph S of a connected graph G that contains no polygon, we can find a spanning tree of G that contains S as a subgraph.

Some special graphs should be mentioned. In addition to the path and the circuit, a wheel is a graph consisting of a circuit and an additional node which is joined by an edge to each node of the circuit.

An important notion is the edge connectivity of a graph. It is the smallest number of edges whose removal causes the graph to be disconnected, or to have only one node left. Clearly, the edge connectivity of a disconnected graph is zero, the edge connectivity of a tree is one, of a circuit two, and of a wheel (with at least four nodes) three.

Weighted graphs (more precisely, edge-weighted graphs) are graphs in which to every edge (in the case of directed graphs, to every arc) a nonnegative


number, called weight, is assigned. In such a case, the degree of a node in an undirected graph is the sum of the weights of all the edges incident with that node. Usually, edges with zero weight are considered as "not edges"; sometimes missing edges are considered as edges with zero weight. A path is then only such a path in which all the edges have a positive weight. The length of a path is then the sum of the weights on all the edges. The distance between two nodes in an undirected graph is the length of a shortest path between those nodes.

Signed graphs are undirected graphs in which every edge is considered either as positive, or as negative. A signed graph of an n × n real matrix A = [a_{ik}] is the signed graph on n nodes 1, 2, . . . , n which has positive edges (i, k) if a_{ik} > 0 and negative edges (i, k) if a_{ik} < 0. The entries a_{ii} are usually not involved. We can then speak about the positive part of the graph and the negative part of the graph. Both have the same set of nodes, and the first has just the positive and the second just the negative edges.

A.3 Nonnegative matrices, M- and P-matrices

Positivity, or, more generally, nonnegativity, plays an important role in most parts of this book. In the present section, we always assume that the vectors and matrices are real.

We denote by the symbols >, ≥ or <, ≤ componentwise comparison of the vectors or matrices. For instance, for a matrix A, A > 0 means that all the entries of A are positive; the matrix is called positive. A ≥ 0 means nonnegativity of all the entries, and the matrix is called nonnegative.

Evidently, the sum of two or more nonnegative matrices of the same type is again nonnegative, and also the product of nonnegative matrices, if they can be multiplied, is nonnegative. Sometimes it is necessary to know whether the result is already positive. Usually, the combinatorial structure of zero and nonzero entries, and not the values themselves, decides. In such a case, it is useful to apply graph theory terminology. We restrict ourselves to the case of square matrices.

Let us now formulate the Perron–Frobenius theorem [28] on nonnegative matrices.

Theorem A.3.1 Let A be a nonnegative irreducible square matrix of order n, n > 1. Then there exists a positive eigenvalue p of A which is simple and such that no other eigenvalue has a greater modulus. There is a positive eigenvector associated with the eigenvalue p, and no nonnegative eigenvector is associated with any other eigenvalue of A.
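The Perron eigenvalue p and its positive eigenvector can be approximated by power iteration. A sketch under the assumptions of Theorem A.3.1 (NumPy assumed; the function name and the shift by I are our choices):

```python
import numpy as np

def perron(A, iters=1000, tol=1e-12):
    """Power iteration for the Perron eigenvalue and a positive eigenvector of a
    nonnegative irreducible matrix; shifting by I keeps the dominant eigenvalue
    strictly larger in modulus than all others."""
    n = A.shape[0]
    B = A + np.eye(n)                 # same eigenvectors, spectrum shifted by 1
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        y = B @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    p = float(x @ (A @ x))            # Rayleigh quotient; equals p in the limit
    return p, x

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])       # nonnegative and irreducible
p, x = perron(A)
print(p)                              # the dominant positive eigenvalue
print(np.all(x > 0))                  # True: positive eigenvector
```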

There is another important class of matrices that is closely related to the previous class of nonnegative matrices.


A square matrix A is called an M-matrix if it has the form kI − C, where C is a nonnegative matrix and k > ρ(C), the spectral radius of C, i.e. the maximum modulus of all the eigenvalues of C.

Observe that every M-matrix has all the off-diagonal entries non-positive. It is usual to denote the set of such matrices by Z. There are surprisingly many ways to characterize those matrices in Z which are M-matrices. We list some:

Theorem A.3.2 Let A be a matrix in Z of order n. Then the following are equivalent.

(i) A is an M-matrix.
(ii) There exists a vector x ≥ 0 such that Ax > 0.
(iii) All the principal minors of A are positive.
(iv) The sum of all the principal minors of order k is positive for k = 1, . . . , n.
(v) det A(N_k, N_k) > 0 for k = 1, . . . , n, where N_k = {1, . . . , k}.
(vi) Every real eigenvalue of A is positive.
(vii) The real part of every eigenvalue of A is positive.
(viii) A is nonsingular and A^{-1} is nonnegative.

The proof and other characteristic properties can be found in [28].
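Two of the equivalent conditions of Theorem A.3.2 are easy to verify numerically on a matrix that is an M-matrix by construction; a sketch assuming NumPy (the particular matrix C and the value k are arbitrary):

```python
import numpy as np

# A Z-matrix written as kI - C with C >= 0 and k larger than the spectral radius of C,
# hence an M-matrix by definition.
C = np.array([[0.0, 1.0, 0.5],
              [0.2, 0.0, 0.3],
              [0.4, 0.1, 0.0]])
k = 2.0                                        # the spectral radius of C is well below 2
A = k * np.eye(3) - C

rho = max(abs(np.linalg.eigvals(C)))
print(rho < k)                                 # True: A is an M-matrix

# Two of the equivalent conditions of Theorem A.3.2:
print(np.all(np.linalg.inv(A) >= -1e-12))      # (viii) the inverse is nonnegative
print(np.all(np.linalg.eigvals(A).real > 0))   # (vii) eigenvalues have positive real part
```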

Corollary A.3.3 Let A be an M-matrix. Then every principal submatrix of A as well as the Schur complement of every principal submatrix of A is again an M-matrix.

Remark A.3.4 It is clear that the combinatorial structure (described, say, by the graph) of any principal submatrix of such a matrix A is determined by the structure of A. Surprisingly, the same holds also for the combinatorial structure of the Schur complement. Since (compare Remark A.1.26) the Schur complement of the block with indices from the set S is obtained by eliminating the unknowns with indices from S from the equations with indices from S̄, the complement of S in the set of all indices, this means that the resulting so-called elimination graph, i.e. the graph of the Schur complement [A/A(S, S)] on the node set S̄, depends only on G(A) and S, and not on the magnitudes of the entries of A. The description of this graph is in the following theorem:

Theorem A.3.5 In the elimination graph, there is an edge (i, j), i ∈ S̄, j ∈ S̄, i ≠ j, if and only if there is a path in G(A) from i to j all interior nodes (i.e., different from i and j) of which belong to S.
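The fill-in described by Theorem A.3.5 can be observed on a tiny example; a sketch assuming NumPy (indices are 0-based in the code, 1-based node labels in the comments):

```python
import numpy as np

# A 4 x 4 M-matrix; node 1 (index 0) will be eliminated, S = {0}.
A = np.array([[ 4.0, -1.0, -1.0,  0.0],
              [-1.0,  4.0,  0.0,  0.0],
              [-1.0,  0.0,  4.0, -1.0],
              [ 0.0,  0.0, -1.0,  4.0]])
S = [0]
T = [1, 2, 3]                                   # complement of S

# Schur complement [A / A(S, S)] on the index set T.
A_SS = A[np.ix_(S, S)]
schur = A[np.ix_(T, T)] - A[np.ix_(T, S)] @ np.linalg.inv(A_SS) @ A[np.ix_(S, T)]
print(np.round(schur, 3))

# A has a zero in position (2, 3) (nodes 2 and 3), but the Schur complement acquires a
# nonzero there: fill-in along the path 2 - 1 - 3 through the eliminated node 1,
# exactly as Theorem A.3.5 predicts.
```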

Remark A.3.6 Observe the coincidence of several properties with those of positive definite matrices in Theorem A.1.34. In the next theorem, we present an analogue of positive semidefinite matrices.


Theorem A.3.7 Let A be a matrix in Z of order n. Then the following are equivalent:

(i) A + εI is an M-matrix for all ε > 0.
(ii) All principal minors of A are nonnegative.
(iii) The sum of all principal minors of order k is nonnegative for k = 1, . . . , n.
(iv) Every real eigenvalue of A is nonnegative.
(v) The real part of every eigenvalue of A is nonnegative.

We denote matrices satisfying these conditions M0-matrices; they are usually called possibly singular M-matrices. Also in this case, the submatrices are M0-matrices and Schur complements with respect to nonsingular principal submatrices are possibly singular M-matrices. In fact, the following holds:

Theorem A.3.8 If
\[
A = \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix}
\]
is a singular M-matrix and Au = 0, with u partitioned as
\[
u = \begin{bmatrix} u_1\\ u_2 \end{bmatrix},
\]
then the Schur complement [A/A_{11}] is also a singular M-matrix and [A/A_{11}] u_2 = 0.

Theorem A.3.9 Let A be an irreducible singular M-matrix. Then there exists a positive vector u for which Au = 0.

Remark A.3.10 As in the case of positive definite matrices, an M0-matrix is an M-matrix if and only if it is nonsingular.

In the next theorem, we list other characteristic properties of the class of real square matrices having just the property (iii) from Theorem A.3.2 or property (ii) from Theorem A.1.34, namely that all principal minors are positive. These matrices are called P-matrices (cf. [28]).

Theorem A.3.11 Let A be a real square matrix. Then the following are equivalent:

(i) A is a P-matrix, i.e. all principal minors of A are positive.
(ii) Whenever D is a nonnegative diagonal matrix of the same order as A, then all principal minors of A + D are different from zero.
(iii) For every nonzero vector x = [x_i], there exists an index k such that x_k(Ax)_k > 0.
(iv) Every real eigenvalue of any principal submatrix of A is positive.
(v) The implication z ≥ 0, SA^T Sz ≤ 0 implies z = 0 holds for every diagonal matrix S with diagonal entries 1 or −1.
(vi) To every diagonal matrix S with diagonal entries 1 or −1, there exists a vector x ≥ 0 such that SASx > 0.
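Condition (i) is the most direct one to test on small matrices; the brute-force sketch below assumes NumPy (the function name is ours, and the test matrices are arbitrary examples).

```python
import numpy as np
from itertools import combinations

def is_P_matrix(A, tol=1e-12):
    """Check condition (i) of Theorem A.3.11: all principal minors are positive.
    Exponential in the order n, so only suitable for small matrices."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(A[np.ix_(idx, idx)]) <= tol:
                return False
    return True

A = np.array([[2.0, -1.0],
              [3.0,  1.0]])            # not symmetric, not in Z, but a P-matrix
B = np.array([[1.0,  3.0],
              [1.0,  2.0]])            # determinant -1 < 0, hence not a P-matrix
print(is_P_matrix(A), is_P_matrix(B))  # True False
```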


Let us state some corollaries.

Corollary A.3.12 Every symmetric P-matrix is positive definite (and, of course, every positive definite matrix is a P-matrix). Every P-matrix in Z is an M-matrix.

Corollary A.3.13 If A ∈ P, then there exists a vector x > 0 such that Ax > 0.

Corollary A.3.14 If for a real square matrix A its symmetric part (1/2)(A + A^T) is positive definite, then A ∈ P.

An important class closely related to the class of M-matrices is that of inverse M-matrices; it consists of real matrices the inverse of which is an M-matrix. By (viii) of Theorem A.3.2 and Corollary A.3.3, the following holds:

Theorem A.3.15 Let A be an inverse M-matrix. Then A as well as all principal submatrices and all Schur complements of principal submatrices are nonnegative matrices; they are even inverse M-matrices.

A.4 Hankel matrices

A Hankel matrix of order n is a matrix H of the form H = (h_{i+j}), i, j = 0, . . . , n − 1, i.e.
\[
H = \begin{bmatrix}
h_0 & h_1 & h_2 & \cdots & h_{n-1}\\
h_1 & h_2 & h_3 & \cdots & h_n\\
h_2 & h_3 & h_4 & \cdots & h_{n+1}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
h_{n-1} & h_n & h_{n+1} & \cdots & h_{2n-2}
\end{bmatrix}.
\]
Its entries h_k can be real or complex. Let H_n denote the class of all n × n Hankel matrices. Evidently, H_n is a linear vector space (complex or real) of dimension 2n − 1. It is also clear that an n × n Hankel matrix has rank one if and only if it is either of the form γ(t^{i+k}) for γ and t fixed (in general, complex), or if it has a single nonzero entry in the lower-right corner. Hankel matrices play an important role in approximations, investigation of polynomials, etc.
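A quick sketch, assuming NumPy and SciPy, that builds a Hankel matrix from sample data and checks the rank-one case γ(t^{i+k}):

```python
import numpy as np
from scipy.linalg import hankel

# The n x n Hankel matrix of order n = 4 built from h_0, ..., h_{2n-2}.
h = np.arange(7)                       # h_0, ..., h_6 (arbitrary sample data)
H = hankel(h[:4], h[3:])               # first column h_0..h_3, last row h_3..h_6
print(H)

# A rank-one Hankel matrix of the form gamma * t**(i+k).
gamma, t = 2.0, 3.0
i = np.arange(4)
R = gamma * t ** (i[:, None] + i[None, :])
print(np.linalg.matrix_rank(R))        # 1
```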

A.5 Projective geometry

For our purpose, we shall introduce, for an integer n > 1, the notion of the real projective n-space P_n as the set of points defined as real homogeneous (n + 1)-tuples, i.e. equivalence classes of all ordered nonzero real (n + 1)-tuples (y1, . . . , yn+1) under the equivalence


\[
(y_1, \ldots, y_{n+1}) \sim (z_1, \ldots, z_{n+1})
\]
if and only if the rank of the matrix
\[
\begin{bmatrix}
y_1 & \cdots & y_{n+1}\\
z_1 & \cdots & z_{n+1}
\end{bmatrix} \qquad (A.20)
\]
is one. This, of course, happens if and only if z_k = λy_k for k = 1, . . . , n + 1 for some λ different from zero.

The entries of the (n + 1)-tuple will be called (homogeneous) coordinates of the corresponding point.

Remark A.5.1 This definition means that P_n is obtained by starting with an (n + 1)-dimensional real vector space, removing the zero vector and identifying the lines with the points of P_n. Strictly speaking, we should distinguish between the just defined geometric point of P_n and the arithmetic point identified by a chosen (n + 1)-tuple.

Usually, we shall denote the points by upper case letters and write simply Y = (y1, . . . , yn+1) or even Y = (y_i) if (y1, . . . , yn+1) is some (arithmetic) representative of the point Y. If Y = (y_i) and Z = (z_i) are distinct points, i.e. the matrix (A.20) has rank two, we call the line determined by Y and Z, and denote by L(Y, Z), the set of all points some representative of which has the form (αy1 + βz1, . . . , αy_{n+1} + βz_{n+1}) for some real α and β not both equal to zero (observe that then not all entries in this (n + 1)-tuple are equal to zero).

Furthermore, if Y = (y_i), Z = (z_i), . . . , T = (t_i) are m points in P_n, we say that they are linearly independent (respectively, linearly dependent) if the matrix (with m rows)
\[
\begin{bmatrix}
y_1 & \cdots & y_{n+1}\\
z_1 & \cdots & z_{n+1}\\
\cdots & \cdots & \cdots\\
t_1 & \cdots & t_{n+1}
\end{bmatrix}
\]
has rank m (respectively, less than m).

It will be convenient to denote by [y], [z], etc. the column vectors with entries (y_i), (z_i), etc. Thus the above condition can also be formulated as the decision of whether (for the matrix transposed to the preceding) the rank of [[y], [z], . . . , [t]] is equal to m or less than m.

The following corollary is self-evident:

Corollary A.5.2 In P_n, any n + 2 points are linearly dependent, and there exist n + 1 linearly independent points.

Remark A.5.3 Such a set of n + 1 linearly independent points is called the basis of P_n and the number n is the dimension of P_n. Of importance for us will


also be the sets of n + 2 points, any n + 1 of which are linearly independent. We call such a (usually ordered) set a quasibasis of P_n.

Let now Y = (y_i), Z = (z_i), . . . , T = (t_i) be m linearly independent points in P_n. We denote by L[Y, Z, . . . , T] the set of all real linear combinations of the points Y, Z, . . . , T, i.e. the set of all points in P_n with the (n + 1)-tuple of coordinates of the form (αy_i + βz_i + . . . + γt_i), and call it the linear hull of the points Y, Z, . . . , T. Since these points are linearly independent, such a linear combination is nonzero if not all coefficients α, β, . . . , γ are equal to zero.

A projective transformation T in P_n is a one-to-one (bijective) mapping of P_n into itself which assigns to a point X = (x_i) the point Y = (y_i) defined by
\[
[y] = \sigma_X A[x],
\]
where A is a (fixed) nonsingular real matrix and σ_X a real number (depending on X). It is clear that projective transformations in P_n form a group with respect to the operation of composition.

We now say that two geometric objects in P_n are projectively equivalent if one is obtained as the image of the other in a projective transformation.

Theorem A.5.4 Any two (ordered) quasibases in P_n are projectively equivalent. In addition, there is a unique projective transformation which maps the first quasibasis into the second.

We omit the proof but add some comments.

Remark A.5.5 More generally, we can show that the ordered system of m points Y1, . . . , Ym, m ≥ n + 2, is projectively equivalent to the ordered system Z1, . . . , Zm if and only if there exist a nonsingular matrix A of order n + 1 and a nonsingular diagonal matrix D of order m such that for the matrices U, V consisting as above of the arbitrarily chosen column representatives of the points Y_i, Z_i,
\[
V = AUD \qquad (A.21)
\]
holds.

Let m be a positive integer less than n. It is obvious that the set of points in P_n with the property that the last n − m coordinates are equal to zero can be identified with the set of all points of a projective space of dimension m having the same first m + 1 coordinates. This set is the linear hull of the points B1 = (1, 0, . . . , 0), . . . , B_{m+1} = (0, . . . , 0, 1, 0, . . . , 0) (with 1 in the (m + 1)th place). By the result in Remark A.5.5, we obtain:

Corollary A.5.6 If Y = (y_i), Z = (z_i), . . . , T = (t_i) are m + 1 linearly independent points in P_n, then all (n + 1)-tuples of the form (αy_1 + βz_1 + · · · + γt_1, . . . , αy_{n+1} + βz_{n+1} + · · · + γt_{n+1}) with (m + 1)-tuples (α, β, . . . , γ) different from zero form an m-dimensional projective space.

Remark A.5.7 This space will be called the (linear) subspace of P_n determined by the points Y, Z, . . . , T. However, it is important to realize that the same subspace can be determined by any m + 1 of its linearly independent points, i.e. by any of its bases.

If m = n − 1, such a subspace is called a hyperplane in P_n. It is thus a subspace of maximal dimension in P_n which is different from P_n itself. Similarly as in vector spaces, such a hyperplane corresponds to a linear form which, however, has to be nonzero; in addition, this linear form is determined up to a nonzero multiple.

Let, say, ⟨α, x⟩ denote the bilinear form
\[
\langle \alpha, x \rangle = \alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_{n+1} x_{n+1}.
\]
Here, the symbol x stands for the point x = (x_1, . . . , x_{n+1}), and the symbol α for the hyperplane, which we shall also consider as an (n + 1)-tuple (α_1, . . . , α_{n+1})_d.

We say that the point x and the hyperplane α are incident if ⟨α, x⟩ = 0. Clearly, this fact does not depend on the choice of nonzero multiples in either variable α or x. Thus α can be considered also as a point of a projective n-dimensional space, say, P_n^{(d)}, which we call the dual space to P_n. There are two ways to characterize a hyperplane α: either by its equation ⟨α, x⟩ = 0 describing the set of all points x incident with α, or by the (n + 1)-tuple of the dual coordinates. The word dual is justified since there is a bilinear form, namely ⟨·, ·⟩, which satisfies the two properties: to every α ∈ P_n^{(d)} there exists an element x_0 ∈ P_n such that ⟨α, x_0⟩ ≠ 0, and to every x ∈ P_n there exists an element α_0 ∈ P_n^{(d)} such that ⟨α_0, x⟩ ≠ 0. Also, P_n is a dual space to P_n^{(d)} since the bilinear form ⟨x, α⟩^{(d)} defined as ⟨α, x⟩ again satisfies the two mentioned properties. We are thus allowed to speak about the equation of the point, say Z = (z_i), as
\[
\xi_1 z_1 + \cdots + \xi_{n+1} z_{n+1} = 0,
\]
where the ξ's play the role of the coordinates of a variable hyperplane incident with the point Z.

There are simple formulae about how the change of the basis in P_n is reflected by a suitable change of the basis in P_n^{(d)} (cf. [31], p. 30).

Let us only mention that a linear subspace of P_n of dimension m can be determined either by m + 1 of its linearly independent points as their linear hull (i.e. the set of all linear combinations of these points), or as the intersection of n − m linearly independent hyperplanes.

Page 196: Graphs and Matrices in Geometry Fiedler.pdf

186 Appendix

We turn now to quadrics in P_n, the next simplest notion. A quadric in P_n, more explicitly a quadratic hypersurface in P_n, is the set Q_A of all points x in P_n whose coordinates x_i annihilate some quadratic form
\[
\sum_{i,k=1}^{n+1} a_{ik} x_i x_k \qquad (A.22)
\]
not identically equal to zero. As usual, we assume that the coefficients in (A.22) are real and satisfy a_{ik} = a_{ki}. The coefficients can thus be written in the form of a real symmetric matrix A = [a_{ik}] and the left-hand side of (A.22) as [x]^T A[x], T meaning as usual the transposition and [x] the column vector with coordinates x_i. The quadric Q_A is called nonsingular if the matrix A is nonsingular; otherwise it is singular.

Let Z = (z_i) be a point in P_n. Then two cases can occur. Either all the sums ∑_k a_{ik} z_k are equal to zero, or not. In the first case, we say that Z is a singular point of the quadric (which is necessarily singular since A[z] = 0 and [z] ≠ 0). In the second case, the equation
\[
\sum_{i,k=1}^{n+1} a_{ik} x_i z_k = 0 \qquad (A.23)
\]
is an equation of a hyperplane; we call this hyperplane the polar hyperplane, or simply the polar, of the point Z with respect to the quadric Q_A.

It follows easily from (A.22) and (A.23) that a singular point is always a point of the quadric. If, however, Z is a nonsingular point of the quadric, then the corresponding polar is, since it has the properties required, called the tangent hyperplane of Q_A at the point Z. The point Z is then incident with the corresponding tangent hyperplane, and in fact this property is (for a nonsingular point of Q_A) characteristic for the polar to be a tangent hyperplane.

We can now formulate the problem of characterizing the set of all tangent hyperplanes of Q_A as a subset of the dual space P_n^{(d)}.

Theorem A.5.8 Let Q_A be a nonsingular quadric with the corresponding matrix A. Then the set of all its tangent hyperplanes forms in P_n^{(d)} again a nonsingular quadric; this dual quadric corresponds to the matrix A^{-1}.

Proof. We have to characterize the set of all hyperplanes ξ with the property that
\[
[\xi] = A[x]
\]
and
\[
[x]^T A[x] = 0 \qquad (A.24)
\]

Page 197: Graphs and Matrices in Geometry Fiedler.pdf

A.5 Projective geometry 187

for some [x] ≠ 0. Substituting [x] = A^{-1}[ξ] into (A.24) yields
\[
[\xi]^T A^{-1} [\xi] = 0.
\]
Since the converse is also true, the proof is complete. □

A singular quadric Q_A with matrix A of rank r, i.e. r < n + 1, has singular points. The set S_Q of its singular points is formed by the linear space of those points X = (x_i) whose vector [x] satisfies A[x] = 0. Thus the dimension of S_Q is n − r. It is easily seen that if Y ∈ S_Q and Z ∈ Q_A, Z ≠ Y, then the whole line YZ is contained in Q_A. Thus Q_A is then a generalized cone with the vertex set S_Q. Moreover, if r = 2, Q_A is the set of two hyperplanes; its equation is the product of the equations of the hyperplanes and these hyperplanes are distinct. It can, however, happen that the hyperplanes are complex conjugate. If r = 1, Q_A is just one hyperplane; all points of this hyperplane are singular and the equation of Q_A is the square of the equation of the hyperplane. Whereas the set of all polars of all points with respect to a nonsingular quadric is the set of all hyperplanes, in the case of a singular quadric this set is restricted to those hyperplanes which are incident with the set of singular points S_Q.

Two points Y = (y_i) and Z = (z_i) are called polar conjugates with respect to the quadric Q_A if for the corresponding column vectors [y] and [z], [y]^T A[z] = 0. This means that each of the points Y, Z is incident with the polar of the other point, if this polar exists. However, the definition applies also for the case that one or both of the points Y and Z are singular.

Finally, n + 1 linearly independent points, any two different of which are polar conjugates with respect to the quadric Q_A, are said to form an autopolar n-simplex. In the case that these points are the coordinate points O1 = (1, 0, . . . , 0), O2 = (0, 1, . . . , 0), . . . , O_{n+1} = (0, 0, . . . , 1), the matrix A has all off-diagonal entries equal to zero. The converse also holds.

All these definitions and facts can be dually formulated for dual quadrics. We must, however, be aware of the fact that only nonsingular quadrics can be considered both as quadrics and dual quadrics at the same time.

Now let a (point) quadric Q_A with the matrix A = [a_{ik}] and a dual quadric Q_Γ with the matrix Γ = [γ_{ik}] be given. We say that these quadrics are apolar if
\[
\sum_{i,k=1}^{n+1} a_{ik}\gamma_{ik} = 0. \qquad (A.25)
\]
It can be shown that this happens if and only if there exists an autopolar n-simplex of Q_A with the property that all n + 1 of its (n − 1)-dimensional faces are hyperplanes of Q_Γ. (Observe that this is true if the simplex is formed by the coordinate points (1, 0, . . . , 0), etc., since then a_{ik} = 0 for all i, k = 1, . . . , n + 1, i ≠ k, as well as γ_{ii} = 0 for i = 1, . . . , n + 1.)

Page 198: Graphs and Matrices in Geometry Fiedler.pdf

188 Appendix

To better understand basic properties of linear and quadratic objects in projective spaces, we shall investigate more thoroughly the case n = 1, which is more important than it might seem.

The first fact to be observed is that the hyperplanes as sets of points have also dimension 1. The point Y = (y1, y2) is incident only with the dual point Y^{(d)} = (y2, −y1)_d. The quadrics have ranks 1 or 2. In the first case, such a quadric consists of a single point ("counted twice"), in the second of two distinct points which can also be complex conjugate.

A particularly important notion is that of two harmonic pairs of points. The basis is in the following theorem:

Theorem A.5.9 Let A, B, C, D be points in P_1. Then the following are equivalent:

(i) the quadric of the points A and B and the dual quadric of the points C^{(d)} and D^{(d)} are apolar;
(ii) the points C and D are polar conjugates with respect to the quadric formed by the points A and B;
(iii) the quadric of the points C and D and the dual quadric of the points A^{(d)} and B^{(d)} are apolar;
(iv) the points A and B are polar conjugates with respect to the quadric formed by the points C and D.

Proof. Let A = (a1, a2), etc. Since the quadric of the points A and B has the equation
\[
(a_2 x_1 - a_1 x_2)(b_2 x_1 - b_1 x_2) = 0
\]
and the dual quadric of the points C^{(d)} and D^{(d)} the dual equation
\[
(c_1 \xi_1 + c_2 \xi_2)(d_1 \xi_1 + d_2 \xi_2) = 0,
\]
the condition (i) means that
\[
2 a_2 b_2 c_1 d_1 - (a_2 b_1 + a_1 b_2)(c_1 d_2 + c_2 d_1) + 2 a_1 b_1 c_2 d_2 = 0. \qquad (A.26)
\]
The condition (ii) means that
\[
(a_2 c_1 - a_1 c_2)(b_2 d_1 - b_1 d_2) + (a_2 d_1 - a_1 d_2)(b_2 c_1 - b_1 c_2) = 0. \qquad (A.27)
\]
Since this condition coincides with (A.26), (i) and (ii) are equivalent. The condition in (iii) is again (A.26), and similarly for (iv). □

If one – and thus all – of the conditions (i)–(iv) is fulfilled, the pairs A, B and C, D are called harmonic. Let us add a useful criterion of harmonicity in the case that the points A and B are distinct.

Page 199: Graphs and Matrices in Geometry Fiedler.pdf

A.5 Projective geometry 189

Theorem A.5.10 Let A, B, and C be points in P_1, A and B distinct. If C = αA + βB for some α and β, then αA − βB is again a point, and this point completes C to a pair harmonic with the pair A and B.

Proof. Substitute c_i = αa_i + βb_i, i = 1, 2, into (A.27). We obtain
\[
(a_2 b_1 - a_1 b_2)\bigl((\alpha a_2 - \beta b_2) d_1 - (\alpha a_1 - \beta b_1) d_2\bigr) = 0,
\]
which yields the result. □

Remark A.5.11 Some of the points A, B, C, and D may coincide. On the other hand, the pair A, B can be complex conjugate and α and β can also be complex, and we can still get a real or complex conjugate pair (only such pairs lead to real quadrics).

The relationship which assigns to every point C in P_1 the point D harmonic to C with respect to some pair A, B is an involution. Here again, the pair A, B can be complex conjugate.

Theorem A.5.12 Such an involution is determined by two pairs of points related in this involution (the pairs must not be identical). If these pairs are C, D and E, F, then the relationship between X and Y is obtained from the formula
\[
\det
\begin{bmatrix}
x_1 y_1 & x_1 y_2 + x_2 y_1 & x_2 y_2\\
c_1 d_1 & c_1 d_2 + c_2 d_1 & c_2 d_2\\
e_1 f_1 & e_1 f_2 + e_2 f_1 & e_2 f_2
\end{bmatrix} = 0. \qquad (A.28)
\]

Proof. This follows from Theorem A.5.9 since (A.28) describes the situation that there is a dual quadric apolar to all three pairs X, Y; C, D; and E, F. Under the stated condition, the last two rows of the determinant are linearly independent. □

For the sake of completeness, let us add the well-known construction of the fourth harmonic point on a line using the plane. If A, B, and C are the given points, we choose a point P not on the line arbitrarily, then Q on PC, different from both P and C. Then construct the intersection points R of PB and QA, and S of PA and QB. The intersection point of RS with the line AB is the fourth harmonic point D.

In Chapter 4, we use the following notion and result:

We call two systems in a projective m-space, with m + 1 points each, independent if for no k ∈ {0, 1, . . . , m} the following holds: a k-dimensional linear subspace generated by k + 1 points of any one of the systems contains more than k points of the other system.

Theorem A.5.13 Suppose that two independent systems with m + 1 points each in a projective m-space P_m are given. Then there exists at most one nonsingular quadric in P_m for which both systems are autopolar.


Proof. Choose the points of the first system as the vertices O1, O2, . . . , On of projective coordinates in P_m, where we write n = m + 1, and let Y_i = (^i y_1, ^i y_2, . . . , ^i y_n), i = 1, 2, . . . , n, be the points of the second system.

Suppose there are two different nonsingular quadrics having both systems as autopolar. Suppose that
\[
a \equiv \sum_{i=1}^{n} a_i x_i^2 = 0, \qquad b \equiv \sum_{i=1}^{n} b_i x_i^2 = 0
\]
are their equations; clearly, the a_i, b_i are numbers different from zero, and the rank of the matrix
\[
\begin{bmatrix} a_1 & \cdots & a_n\\ b_1 & \cdots & b_n \end{bmatrix}
\]
is 2.

The condition that also the second system is autopolar with respect to both a and b implies the existence of nonzero numbers σ1, σ2, . . . , σn such that for r = 1, 2, . . . , n, we have for all x_i's
\[
\sum_{i=1}^{n} a_i\, {}^r y_i\, x_i \equiv \sigma_r \sum_{i=1}^{n} b_i\, {}^r y_i\, x_i.
\]
Therefore
\[
(a_i - \sigma_r b_i)\, {}^r y_i = 0 \qquad (A.29)
\]
for i, r = 1, . . . , n. Define now the following equivalence relation among the indices 1, . . . , n: i ∼ j if a_i b_j − a_j b_i = 0.

Observe that not all indices 1, . . . , n are in the same class with respect to this equivalence, since then the matrix above would have rank less than 2, a contradiction. Denote thus by M1 the class of all indices equivalent with the index 1, and by M2 the nonvoid set of the remaining indices.

If now Y_r = (^r y_k) is one of these points, then its nonzero coordinates ^r y_k have indices either all from M1, or all from M2: indeed, if for i ∈ M1, j ∈ M2, ^r y_i ≠ 0, ^r y_j ≠ 0, then (A.29) would imply
\[
a_i = \sigma_r b_i, \qquad a_j = \sigma_r b_j,
\]
i.e. i ∼ j, a contradiction.

Denote by p1, p2, respectively, the number of points Y_r having nonzero coordinates in M1, respectively M2. We have p1 + p2 = n; the linear independence of the points Y_r implies that p1 ≤ s, p2 ≤ n − s, where s is the cardinality of M1. This means, however, that p1 = s, p2 = n − s. Thus the linear space of dimension s − 1, generated by the points O_i for i ∈ M1, contains s points Y_r, a contradiction with the independence of both systems. □

To conclude this chapter, we investigate the so-called rational normal curves in P_n. These are geometric objects whose points are in a one-to-one correspondence with points in a projective line. Because of homogeneity, we


shall use forms in two (homogeneous) indeterminates (variables) instead of polynomials in one indeterminate. We can, of course, use similar notions such as factor, divisibility, common divisor, prime forms, etc.

Definition A.5.14 A rational normal curve C_n in P_n is the set of all those points (x1, . . . , xn+1) which are obtained as the image of P_1 in the mapping f : P_1 → P_n given by
\[
x_k = f_k(t_1, t_2), \qquad k = 1, \ldots, n + 1, \qquad (A.30)
\]
where f_1(t_1, t_2), . . . , f_{n+1}(t_1, t_2) are linearly independent forms (i.e. homogeneous polynomials) of degree n.

Remark A.5.15 For n = 1, we obtain the whole line P_1. As we shall see, for n = 2, C_2 is a nonsingular conic. In general, it is a curve of degree n (in the sense that it has n points in common with every hyperplane of P_n if appropriate multiplicities of the common points are defined). Of course, (A.30) are the parametric equations of C_n.

Theorem A.5.16 C_n has the following properties:

(i) it contains n + 1 linearly independent points (which means that it is not contained in any hyperplane);
(ii) in an appropriate basis of P_n, its parametric equations are
\[
x_k = t_1^{\,n+1-k}\, t_2^{\,k-1}, \qquad k = 1, \ldots, n + 1; \qquad (A.31)
\]
(iii) for n ≥ 2, C_n is the intersection of n − 1 linearly independent quadrics.

Proof. The property (i) just rewords the condition that the forms f_k are linearly independent. Also, if we express these forms explicitly,
\[
f_k(t_1, t_2) = f_{k,0}\, t_1^{n} + f_{k,1}\, t_1^{\,n-1} t_2 + \cdots + f_{k,n}\, t_2^{n}, \qquad k = 1, 2, \ldots, n + 1,
\]
then the matrix Φ of the coefficients,
\[
\Phi = (f_{k,l}), \qquad k = 1, \ldots, n + 1,\ l = 0, \ldots, n,
\]
is nonsingular. This implies (ii) since the transformation of the coordinates with the matrix Φ^{-1} will bring the coefficient matrix to the identity matrix as in (A.31).

To prove (iii), it suffices to choose, for C_n in the form (A.31), the quadrics with equations
\[
x_1 x_3 - x_2^2 = 0, \quad x_2 x_4 - x_3^2 = 0, \quad \ldots, \quad x_{n-1} x_{n+1} - x_n^2 = 0. \qquad (A.32)
\]
Clearly every point of C_n is contained in all the quadrics in (A.32). Conversely, let a point Y = (y1, . . . , yn+1) be contained in all these quadrics. If y1 = 0, Y is the point (0, 0, . . . , 0, 1), which belongs to C_n for t1 = 0, t2 = 1. If y1 ≠ 0, set t = y2/y1. Then y2 = t y1, y3 = t² y1, . . . , y_{n+1} = t^n y1, which means that Y corresponds to t1 = 1, t2 = t. □
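The parametric form (A.31) and the quadrics (A.32) can be confirmed numerically in a few lines (NumPy assumed; the values of n, t1, t2 below are arbitrary):

```python
import numpy as np

n = 4
t1, t2 = 2.0, 3.0
x = np.array([t1 ** (n + 1 - k) * t2 ** (k - 1) for k in range(1, n + 2)])  # (A.31)

# Every point of C_n satisfies the quadrics (A.32): x_{k} x_{k+2} - x_{k+1}^2 = 0.
residuals = [x[k - 1] * x[k + 1] - x[k] ** 2 for k in range(1, n)]
print(np.allclose(residuals, 0.0))     # True
```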

Corollary A.5.17 C2 is a nonsingular conic.

Proof. Indeed, it is – in the form (A.32) – the conic with equation x1 x3 − x2² = 0, and this conic is nonsingular. □

Theorem A.5.18 Any n+ 1 distinct points of Cn are linearly independent.

Proof. We can bring C_n to the form (A.31) by choosing an appropriate coordinate system. Since the given points have distinct ratios of parameters, the matrix of coordinates, being essentially a Vandermonde matrix with nonproportional columns, is nonsingular (cf. [28]). □

Theorem A.5.19 Any two rational normal curves, each in a real projective space of dimension n, are projectively equivalent. Any two points of such a curve are projectively equivalent as well.

Proof. The first assertion follows from the fact that every such curve is projectively equivalent to the curve of the form (A.31). The second is obtained by a suitable linear transformation of the homogeneous parameters. □


References

[1] L. M. Blumenthal: Theory and Applications of Distance Geometry. Oxford, Clarendon Press, 1953.
[2] E. Egerváry: On orthocentric simplexes. Acta Math. Szeged IX (1940), 218–226.
[3] M. Fiedler: Geometrie simplexu I. Časopis pěst. mat. 79 (1954), 270–297.
[4] M. Fiedler: Geometrie simplexu II. Časopis pěst. mat. 80 (1955), 462–476.
[5] M. Fiedler: Geometrie simplexu III. Časopis pěst. mat. 81 (1956), 182–223.
[6] M. Fiedler: Über qualitative Winkeleigenschaften der Simplexe. Czechosl. Math. J. 7(82) (1957), 463–478.
[7] M. Fiedler: Einige Sätze aus der metrischen Geometrie der Simplexe in Euklidischen Räumen. In: Schriftenreihe d. Inst. f. Math. DAW, Heft 1, Berlin (1957), 157.
[8] M. Fiedler: A note on positive definite matrices. (Czech, English summary.) Czechosl. Math. J. 10(85) (1960), 75–77.
[9] M. Fiedler: Über eine Ungleichung für positive definite Matrizen. Mathematische Nachrichten 23 (1961), 197–199.
[10] M. Fiedler: Über die qualitative Lage des Mittelpunktes der umgeschriebenen Hyperkugel im n-Simplex. Comm. Math. Univ. Carol. 2(1) (1961), 3–51.
[11] M. Fiedler: Über zyklische n-Simplexe und konjugierte Raumvielecke. Comm. Math. Univ. Carol. 2(2) (1961), 3–26.
[12] M. Fiedler, V. Pták: On matrices with non-positive off-diagonal elements and positive principal minors. Czechosl. Math. J. 12(87) (1962), 382–400.
[13] M. Fiedler: Hankel matrices and 2-apolarity. Notices AMS 11 (1964), 367–368.
[14] M. Fiedler: Relations between the diagonal elements of two mutually inverse positive definite matrices. Czechosl. Math. J. 14(89) (1964), 39–51.
[15] M. Fiedler: Some applications of the theory of graphs in the matrix theory and geometry. In: Theory of Graphs and Its Applications. Proc. Symp. Smolenice 1963, Academia, Praha (1964), 37–41.
[16] M. Fiedler: Matrix inequalities. Numer. Math. 9 (1966), 109–119.
[17] M. Fiedler: Algebraic connectivity of graphs. Czechosl. Math. J. 23(98) (1973), 298–305.
[18] M. Fiedler: Eigenvectors of acyclic matrices. Czechosl. Math. J. 25(100) (1975), 607–618.
[19] M. Fiedler: A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czechosl. Math. J. 25(100) (1975), 619–633.


[20] M. Fiedler: Aggregation in graphs. In: Coll. Math. Soc. J. Bolyai, 18. Combinatorics. Keszthely (1976), 315–330.
[21] M. Fiedler: Laplacian of graphs and algebraic connectivity. In: Combinatorics and Graph Theory, Banach Center Publ. vol. 25, PWN, Warszawa (1989), 57–70.
[22] M. Fiedler: A geometric approach to the Laplacian matrix of a graph. In: Combinatorial and Graph-Theoretical Problems in Linear Algebra (R. A. Brualdi, S. Friedland, V. Klee, editors), Springer, New York (1993), 73–98.
[23] M. Fiedler: Structure ranks of matrices. Linear Algebra Appl. 179 (1993), 119–128.
[24] M. Fiedler: Elliptic matrices with zero diagonal. Linear Algebra Appl. 197, 198 (1994), 337–347.
[25] M. Fiedler: Moore–Penrose involutions in the classes of Laplacians and simplices. Linear Multilin. Algebra 39 (1995), 171–178.
[26] M. Fiedler: Some characterizations of symmetric inverse M-matrices. Linear Algebra Appl. 275–276 (1998), 179–187.
[27] M. Fiedler: Moore–Penrose biorthogonal systems in Euclidean spaces. Linear Algebra Appl. 362 (2003), 137–143.
[28] M. Fiedler: Special Matrices and Their Applications in Numerical Mathematics, 2nd edn, Dover Publ., Mineola, NY (2008).
[29] M. Fiedler, T. L. Markham: Rank-preserving diagonal completions of a matrix. Linear Algebra Appl. 85 (1987), 49–56.
[30] M. Fiedler, T. L. Markham: A characterization of the Moore–Penrose inverse. Linear Algebra Appl. 179 (1993), 129–134.
[31] R. A. Horn, C. R. Johnson: Matrix Analysis, Cambridge University Press, New York, NY (1985).
[32] D. J. H. Moore: A geometric theory for electrical networks. Ph.D. Thesis, Monash Univ., Australia (1968).


Index

acutely cyclic simplex, 101
adjoint matrix, 163
algebraic connectivity, 146
altitude hyperplane, 127
apolar, 187
Apollonius hypersphere, 111
arc of a graph, 175
autopolar simplex, 187
axis, 124, 131
barycentric coordinates, 5
basic angle, 131
basis, orthonormal, 168
biorthogonal bases, 2
bisimplex, 143
block matrix, 160
boundary hyperplane, 120
box, 66
bridge, 177
cathete, 66
Cayley–Menger determinant, 18
center of a quadric, 41
central angle, 131
central quadric, 41
centroid, 5
characteristic polynomial, 167
circuit, 177
circumcenter, 23
circumscribed circular cone, 124
circumscribed sphere, 23
circumscribed Steiner ellipsoid, 41
column vector, 159
complementary faces, 40
component, 177
conjugate cone, 121
connected graph, 177
convex hull, 4
covariance matrix, 118
cut-node, 177
cut-off simplex, 125
cut-set, 177
cycle, 175
cycle, simple, 175
cyclic simplex, 101
degree of a node, 177
determinant, 162
diagonal, 160
digraph, 175
  strongly connected, 175
dimension, 1
directed graph, 175
duality, 12
edge, 5
edge connectivity, 178
edge of a graph, 177
eigenvalue, 167
eigenvector, 167
elimination graph, 180
elliptic matrix, 18
end-node, 177
Euclidean distance, 1
Euclidean vector space, 168
extended Gramian, 18
face of a simplex, 5
flat simplex, 41
forest, 178
generalized biorthogonal system, 115
Gergonne point, 109
Gram matrix, 172
Gramian, 16
graph, 176


Hadamard product, 37
halfline, 1
Hankel matrix, 182
harmonic pair, 188
harmonically conjugate, 39
homogeneous barycentric coordinates, 5
hull
  linear, 166
hyperacute, 52
hyperacute cone, 126
hypernarrow cone, 126
hyperobtuse cone, 127
hyperplane, 3, 185
hyperwide cone, 126
hypotenuse, 66
identity matrix, 160
improper hyperplane, 12
improper point, 5
incident, 177, 185
independent systems, 189
inner product, 1, 168
inscribed circular cone, 124
interior, 6
inverse M-matrix, 182
inverse matrix, 162
inverse point system, 118
inverse simplex, 116
involution, 34
irreducible matrix, 176
isodynamical center, 111
isogonal correspondence, 33
isogonally conjugate, 34
isogonally conjugate halfline, 125
isolated node, 177
isotomically conjugate hyperplanes, 39
isotomy, 39
isotropic points, 31
Kronecker delta, 2
Laplacian eigenvalue, 146
Laplacian matrix, 145
left conjugate, 95
leg, 66
Lemoine point, 33
length of a vector, 168
length of a walk, 175
linear
  hull, 166
  subspace, 166
linearly independent points, 1
loop, 175, 177
main diagonal, 160
matrix, 159
  addition, 159
  block triangular, 163
  column, 159
  diagonal, 162
  entry, 159
  inverse, 162
  irreducible, 176
  lower triangular, 162
  M-matrix, 180
  multiplication, 159
  nonnegative, 179
  nonsingular, 162
  of type, 159
  orthogonal, 169
  P-matrix, 181
  positive, 179
  positive definite, 170
  positive semidefinite, 170
  reducible, 176
  row, 159
  strongly nonsingular, 164
  symmetric, 169
Menger matrix, 16
minor, 163
minor, principal, 163
M-matrix, 180
M0-matrix, 181
Moore–Penrose inverse, 114
multiple edge, 177
n-box, 66
nb-hyperplane, 39
nb-point, 33
needle, 144
negative of a signed graph, 50
node of a graph, 175
nonboundary point, 33
nonnegative matrix, 179
nonsingular matrix, 162
nonsingular quadric, 186
normal polygon, 94
normalized Gramian, 122
normalized outer normal, 14
obtusely cyclic simplex, 101
opening angle, 124
order, 160
ordered, 160
orthocentric line, 127
orthocentric normal polygon, 99
orthocentric ray, 130
orthogonal hyperplanes, 3


orthogonal matrix, 169
orthogonal vectors, 168
orthonormal basis, 168
orthonormal coordinate system, 2
orthonormal system, 168
outer normal, 13
parallel hyperplanes, 3
path, 175, 177
pending edge, 177
permutation, 162
perpendicular hyperplanes, 3
Perron–Frobenius theorem, 179
P-matrix, 181
point Euclidean space, 1
polar, 186
polar conjugate, 187
polar cone, 121
polar hyperplane, 186
polygon, 177
polynomial, characteristic, 167
positive definite matrix, 170
positive definite quadratic form, 171
positive matrix, 179
positive semidefinite matrix, 170
potency, 22
principal minor, 163
projective space, 182
proper orthocentric simplex, 77
proper point, 5
quadratic form, 171
quasiparallelogram, 38
rank, 166
ray, 1
reducible matrix, 176
reduction parameter, 123
redundant, 131
regular cone, 131
regular simplex, 112
right conjugate, 95
right cyclic simplex, 101
right simplex, 48
row vector, 159
scalar, 159
Schur complement, 164
sign of permutation, 162
signature, 169
signed graph, 179
signed graph of a simplex, 47
simple cycle, 175
simplex, 4
simplicial cone, 120
singular point, 187
singular quadric, 186
spanning tree, 178
spherical arc, 133
spherical coordinates, 120
spherical distance, 133
spherical triangle, 132
square matrix, 160
star, 178
Steiner ellipsoid, 41
strong component, 176
strongly connected digraph, 175
strongly nonsingular matrix, 164
subdeterminant, 163
subgraph, 177
submatrix, 163
subspace, linear, 166
Sylvester identity, 165
symmedian, 33
symmetric matrix, 169
thickness, 41
Toricelli point, 109
totally orthogonal, 122
trace, 167
transpose matrix, 161
transposition, 161
tree, 178
unit vector, 168
upper triangular, 162
usual cone, 127
vector, 159
vector space, Euclidean, 168
vertex halfline, 120
vertex of a cone, 120
vertex-cone, 125
walk, 175, 177
weight, 179
weighted graph, 178
well centered, 57
wheel, 178
zero matrix, 160
