Overcompleteness

From Wikipedia, the free encyclopedia

Contents

1 Orientation (vector space)
    1.1 Definition
        1.1.1 Zero-dimensional case
    1.2 Alternate viewpoints
        1.2.1 Multilinear algebra
        1.2.2 Lie group theory
        1.2.3 Geometric algebra
    1.3 Orientation on manifolds
    1.4 See also
    1.5 References
    1.6 External links

2 Orientation of a vector bundle
    2.1 Thom space
    2.2 References

3 Orthant
    3.1 See also
    3.2 Notes

4 Orthogonal basis
    4.1 As coordinates
    4.2 In functional analysis
    4.3 Extensions
    4.4 References
    4.5 External links

5 Orthogonal complement
    5.1 General bilinear forms
        5.1.1 Properties
    5.2 Example
    5.3 Inner product spaces
        5.3.1 Properties
        5.3.2 Finite dimensions
    5.4 Banach spaces
    5.5 See also
    5.6 References
    5.7 External links

6 Orthogonal diagonalization
    6.1 References

7 Orthogonal Procrustes problem
    7.1 Solution
    7.2 Proof
    7.3 Generalized/constrained Procrustes problems
    7.4 See also
    7.5 References

8 Orthogonal transformation
    8.1 See also
    8.2 References

9 Orthogonality
    9.1 Etymology
    9.2 Mathematics
        9.2.1 Definitions
        9.2.2 Euclidean vector spaces
        9.2.3 Orthogonal functions
        9.2.4 Examples
    9.3 Art
    9.4 Computer science
    9.5 Communications
    9.6 Statistics, econometrics, and economics
    9.7 Taxonomy
    9.8 Combinatorics
    9.9 Chemistry
    9.10 System reliability
    9.11 Neuroscience
    9.12 Gaming
    9.13 Other examples
    9.14 See also
    9.15 Notes
    9.16 References

10 Orthogonalization
    10.1 Orthogonalization algorithms
    10.2 See also

11 Orthographic projection
    11.1 Origin
    11.2 Geometry
    11.3 Multiview orthographic projections
    11.4 Pictorials
    11.5 Cartography
    11.6 See also
    11.7 References
    11.8 External links

12 Orthonormal basis
    12.1 Examples
    12.2 Basic formula
    12.3 Incomplete orthogonal sets
    12.4 Existence
    12.5 As a homogeneous space
    12.6 See also
    12.7 References

13 Orthonormal function system
    13.1 References

14 Orthonormality
    14.1 Intuitive overview
        14.1.1 Simple example
    14.2 Definition
    14.3 Significance
        14.3.1 Properties
        14.3.2 Existence
    14.4 Examples
        14.4.1 Standard basis
        14.4.2 Real-valued functions
        14.4.3 Fourier series
    14.5 See also
    14.6 References

15 Overdetermined system
    15.1 Systems of equations
        15.1.1 An example in two dimensions
        15.1.2 Matrix form
    15.2 Homogeneous case
    15.3 Non-homogeneous case
    15.4 Exact solutions
    15.5 Approximate solutions
    15.6 In general use
    15.7 See also
    15.8 References
    15.9 Text and image sources, contributors, and licenses
        15.9.1 Text
        15.9.2 Images
        15.9.3 Content license

Chapter 1

    Orientation (vector space)

See also: Orientation (geometry)

[Figure: The left-handed orientation is shown on the left, and the right-handed on the right.]

In mathematics, orientation is a geometric notion that in two dimensions allows one to say when a cycle goes around clockwise or counterclockwise, and in three dimensions when a figure is left-handed or right-handed. In linear algebra, the notion of orientation makes sense in arbitrary finite dimension. In this setting, the orientation of an ordered basis is a kind of asymmetry that makes a reflection impossible to replicate by means of a simple rotation. Thus, in three dimensions, it is impossible to make the left hand of a human figure into the right hand of the figure by applying a rotation alone, but it is possible to do so by reflecting the figure in a mirror. As a result, in the three-dimensional Euclidean space, the two possible basis orientations are called right-handed and left-handed (or right-chiral and left-chiral).

The orientation on a real vector space is the arbitrary choice of which ordered bases are positively oriented and which are negatively oriented. In the three-dimensional Euclidean space, right-handed bases are typically declared to be positively oriented, but the choice is arbitrary, as they may also be assigned a negative orientation. A vector space with an orientation selected is called an oriented vector space, while one not having an orientation selected is called unoriented.


1.1 Definition

Let V be a finite-dimensional real vector space and let b1 and b2 be two ordered bases for V. It is a standard result in linear algebra that there exists a unique linear transformation A : V → V that takes b1 to b2. The bases b1 and b2 are said to have the same orientation (or be consistently oriented) if A has positive determinant; otherwise they have opposite orientations. The property of having the same orientation defines an equivalence relation on the set of all ordered bases for V. If V is non-zero, there are precisely two equivalence classes determined by this relation. An orientation on V is an assignment of +1 to one equivalence class and −1 to the other.[1]

Every ordered basis lives in one equivalence class or another. Thus any choice of a privileged ordered basis for V determines an orientation: the orientation class of the privileged basis is declared to be positive. For example, the standard basis on R^n provides a standard orientation on R^n (in turn, the orientation of the standard basis depends on the orientation of the Cartesian coordinate system on which it is built). Any choice of a linear isomorphism between V and R^n will then provide an orientation on V.

The ordering of elements in a basis is crucial. Two bases with a different ordering will differ by some permutation. They will have the same/opposite orientations according to whether the signature of this permutation is ±1. This is because the determinant of a permutation matrix is equal to the signature of the associated permutation.

Similarly, let A be a nonsingular linear mapping of vector space R^n to R^n. This mapping is orientation-preserving if its determinant is positive.[2] For instance, in R^3 a rotation around the Z Cartesian axis by an angle θ is orientation-preserving:

    A_1 = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}

while a reflection by the XY Cartesian plane is not orientation-preserving:

    A_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}
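The determinant test above is easy to check numerically. A minimal sketch in Python (assuming NumPy; the helper name same_orientation is illustrative, not from the article):

    import numpy as np

    def same_orientation(b1, b2):
        """Return True if the ordered bases b1, b2 (columns are basis
        vectors) are consistently oriented, i.e. the unique linear map
        taking b1 to b2 has positive determinant."""
        A = np.linalg.solve(b1, b2)  # the change-of-basis map
        return np.linalg.det(A) > 0

    theta = 0.3
    rotation = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0,              0,             1]])
    reflection = np.diag([1.0, 1.0, -1.0])

    e = np.eye(3)  # standard (positively oriented) basis
    print(same_orientation(e, rotation @ e))    # True: rotations preserve orientation
    print(same_orientation(e, reflection @ e))  # False: reflections reverse it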

    1.1.1 Zero-dimensional case

The concept of orientation defined above did not quite apply to zero-dimensional vector spaces (as the only empty matrix is the identity, with determinant 1, so there will be only one equivalence class). However, it is useful to be able to assign different orientations to a point (e.g. orienting the boundary of a 1-dimensional manifold). A more general definition of orientation that works regardless of dimension is the following: an orientation on V is a map from the set of ordered bases of V to the set {±1} that is invariant under base changes with positive determinant and changes sign under base changes with negative determinant (it is equivariant with respect to the homomorphism GL_n → {±1}). The set of ordered bases of the zero-dimensional vector space has one element (the empty set), and so there are two maps from this set to {±1}.

A subtle point is that a zero-dimensional vector space is naturally (canonically) oriented, so we can talk about an orientation being positive (agreeing with the canonical orientation) or negative (disagreeing). An application is interpreting the fundamental theorem of calculus as a special case of Stokes' theorem.

Two ways of seeing this are:

• A zero-dimensional vector space is a point, and there is a unique map from a point to a point, so every zero-dimensional vector space is naturally identified with R^0, and thus is oriented.

• The 0th exterior power of a vector space is the ground field K, which here is R^1, which has an orientation (given by the standard basis).


    1.2 Alternate viewpoints

1.2.1 Multilinear algebra

For any n-dimensional real vector space V we can form the kth exterior power of V, denoted Λ^k V. This is a real vector space of dimension \binom{n}{k}. The vector space Λ^n V (called the top exterior power) therefore has dimension 1. That is, Λ^n V is just a real line. There is no a priori choice of which direction on this line is positive. An orientation is just such a choice. Any nonzero linear form ω on Λ^n V determines an orientation of V by declaring that x is in the positive direction when ω(x) > 0. To connect with the basis point of view we say that the positively oriented bases are those on which ω evaluates to a positive number (since ω is an n-form we can evaluate it on an ordered set of n vectors, giving an element of R). The form ω is called an orientation form. If {e_i} is a privileged basis for V and {e_i^*} is the dual basis, then the orientation form giving the standard orientation is e_1^* ∧ e_2^* ∧ … ∧ e_n^*.

The connection of this with the determinant point of view is: the determinant of an endomorphism T : V → V can be interpreted as the induced action on the top exterior power.

1.2.2 Lie group theory

Let B be the set of all ordered bases for V. Then the general linear group GL(V) acts freely and transitively on B. (In fancy language, B is a GL(V)-torsor.) This means that as a manifold, B is (noncanonically) homeomorphic to GL(V). Note that the group GL(V) is not connected, but rather has two connected components according to whether the determinant of the transformation is positive or negative (except for GL_0, which is the trivial group and thus has a single connected component; this corresponds to the canonical orientation on a zero-dimensional vector space). The identity component of GL(V) is denoted GL⁺(V) and consists of those transformations with positive determinant. The action of GL⁺(V) on B is not transitive: there are two orbits which correspond to the connected components of B. These orbits are precisely the equivalence classes referred to above. Since B does not have a distinguished element (i.e. a privileged basis), there is no natural choice of which component is positive. Contrast this with GL(V), which does have a privileged component: the component of the identity. A specific choice of homeomorphism between B and GL(V) is equivalent to a choice of a privileged basis and therefore determines an orientation.

More formally: π₀(GL(V)) = GL(V)/GL⁺(V) = {±1}, and the Stiefel manifold of n-frames in V is a GL(V)-torsor, so V_n(V)/GL⁺(V) is a torsor over {±1}; i.e., it has 2 points, and a choice of one of them is an orientation.

1.2.3 Geometric algebra

The various objects of geometric algebra are charged with three attributes or features: attitude, orientation, and magnitude.[4] For example, a vector has an attitude given by a straight line parallel to it, an orientation given by its sense (often indicated by an arrowhead) and a magnitude given by its length. Similarly, a bivector in three dimensions has an attitude given by the family of planes associated with it (possibly specified by the normal line common to these planes[5]), an orientation (sometimes denoted by a curved arrow in the plane) indicating a choice of sense of traversal of its boundary (its circulation), and a magnitude given by the area of the parallelogram defined by its two vectors.[6]

1.3 Orientation on manifolds

Main article: Orientability

One can also discuss orientation on manifolds. Each point p on an n-dimensional differentiable manifold has a tangent space T_pM which is an n-dimensional real vector space. One can assign to each of these vector spaces an orientation. However, one would like to know whether it is possible to choose the orientations so that they vary smoothly from point to point. Due to certain topological restrictions, there are situations when this is impossible. A manifold which admits a smooth choice of orientations for its tangent spaces is said to be orientable. See the article on orientability for more on orientations of manifolds.

1.4 See also

• Sign convention
• Rotation formalisms in three dimensions
• Chirality (mathematics)
• Right-hand rule
• Even and odd permutations
• Cartesian coordinate system
• Pseudovector – Pseudovectors are a consequence of oriented spaces.
• Orientability – Discussion about the possibility of having orientations in a space.

[Figure: Parallel plane segments with the same attitude, magnitude and orientation, all corresponding to the same bivector a ∧ b.[3]]

[Figure: The orientation of a volume may be determined by the orientation on its boundary, indicated by the circulating arrows.]

1.5 References

[1] Rowland, Todd. "Vector Space Orientation". From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/VectorSpaceOrientation.html

[2] Weisstein, Eric W. "Orientation-Preserving". From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Orientation-Preserving.html

[3] Leo Dorst, Daniel Fontijne, Stephen Mann (2009). Geometric Algebra for Computer Science: An Object-Oriented Approach to Geometry (2nd ed.). Morgan Kaufmann. p. 32. ISBN 0-12-374942-5. "The algebraic bivector is not specific on shape; geometrically it is an amount of oriented area in a specific plane, that's all."

[4] B. Jancewicz (1996). Tables 28.1 & 28.2 in section 28.3: "Forms and pseudoforms". In William Eric Baylis. Clifford (Geometric) Algebras with Applications to Physics, Mathematics, and Engineering. Springer. p. 397. ISBN 0-8176-3868-7.

[5] William Anthony Granville (1904). "§178 Normal line to a surface". Elements of the Differential and Integral Calculus. Ginn & Company. p. 275.

[6] David Hestenes (1999). New Foundations for Classical Mechanics (Fundamental Theories of Physics) (2nd ed.). Springer. p. 21. ISBN 0-7923-5302-1.

1.6 External links

• Hazewinkel, Michiel, ed. (2001), "Orientation", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Chapter 2

    Orientation of a vector bundle

In mathematics, an orientation of a real vector bundle is a generalization of an orientation of a vector space; thus, given a real vector bundle π : E → B, an orientation of E means: for each fiber E_x, there is an orientation of the vector space E_x, and one demands that each trivialization map (which is a bundle map)

    φ_U : π^{-1}(U) → U × R^n

is fiberwise orientation-preserving, where R^n is given the standard orientation. In more concise terms, this says that the structure group of the frame bundle of E, which is the real general linear group GL_n(R), can be reduced to the subgroup consisting of those with positive determinant.

A vector bundle together with an orientation is called an oriented bundle. Just as a real vector bundle is classified by the real infinite Grassmannian, oriented bundles are classified by the infinite Grassmannian of oriented real vector spaces.

The basic invariant of an oriented bundle is the Euler class. The multiplication (that is, cup product) by the Euler class of an oriented bundle gives rise to a Gysin sequence.

2.1 Thom space

Main article: Thom space

From the cohomological point of view, for any ring Λ, a Λ-orientation of a real vector bundle E of rank n means a choice (and existence) of a class

    u ∈ H^n(T(E); Λ)

in the cohomology ring of the Thom space T(E) such that u generates H̃^*(T(E); Λ) as a free H^*(E; Λ)-module globally and locally: i.e.,

    H^*(E; Λ) → H̃^*(T(E); Λ), x ↦ x ∪ u

is an isomorphism (called the Thom isomorphism), where tilde means reduced cohomology, that restricts to each isomorphism

    H^*(π^{-1}(U); Λ) → H̃^*(T(E|_U); Λ)

induced by the trivialization π^{-1}(U) ≃ U × R^n. One can show, with some work, that the usual notion of an orientation coincides with a Z-orientation.


2.2 References

• J. P. May, A Concise Course in Algebraic Topology. University of Chicago Press, 1999.
• Milnor, John Willard; Stasheff, James D. (1974), Characteristic Classes, Annals of Mathematics Studies 76, Princeton University Press; University of Tokyo Press, ISBN 978-0-691-08122-9

Chapter 3

    Orthant

[Figure: In two dimensions, there are 4 orthants (called quadrants).]

In geometry, an orthant[1] or hyperoctant[2] is the analogue in n-dimensional Euclidean space of a quadrant in the plane or an octant in three dimensions.

In general an orthant in n dimensions can be considered the intersection of n mutually orthogonal half-spaces. By permutations of half-space signs, there are 2^n orthants in n-dimensional space.

More specifically, a closed orthant in R^n is a subset defined by constraining each Cartesian coordinate to be nonnegative or nonpositive. Such a subset is defined by a system of inequalities:

    ε₁x₁ ≥ 0, ε₂x₂ ≥ 0, …, εₙxₙ ≥ 0,

where each εᵢ is +1 or −1.

Similarly, an open orthant in R^n is a subset defined by a system of strict inequalities

    ε₁x₁ > 0, ε₂x₂ > 0, …, εₙxₙ > 0,

where each εᵢ is +1 or −1.

By dimension:

1. In one dimension, an orthant is a ray.
2. In two dimensions, an orthant is a quadrant.
3. In three dimensions, an orthant is an octant.

John Conway defined the term n-orthoplex from "orthant complex" as a regular polytope in n dimensions with 2^n simplex facets, one per orthant.[3]
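Since an orthant is picked out by a choice of sign in each coordinate, classifying a point amounts to reading off its sign vector. A small illustrative sketch (assuming NumPy; the function name is hypothetical):

    import numpy as np

    def orthant(point):
        """Return the sign vector (each entry +1 or -1) identifying a
        closed orthant containing the point; a zero coordinate lies on
        a boundary and is reported as +1 here."""
        return tuple(1 if x >= 0 else -1 for x in point)

    print(orthant((3.0, -1.5, 2.0)))  # (1, -1, 1): one of the 2**3 = 8 octants
    # Random points in R^4 land in at most 2**4 = 16 distinct orthants:
    print(len({orthant(p) for p in np.random.randn(1000, 4)}))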

3.1 See also

• Cross polytope (or orthoplex) – a family of regular polytopes in n dimensions which can be constructed with one simplex facet in each orthant space.
• Measure polytope (or hypercube) – a family of regular polytopes in n dimensions which can be constructed with one vertex in each orthant space.
• Orthotope – generalization of a rectangle in n dimensions, with one vertex in each orthant.

3.2 Notes

[1] Advanced Linear Algebra by Steven Roman, Chapter 15

[2] Weisstein, Eric W., "Hyperoctant", MathWorld.

[3] J. H. Conway, N. J. A. Sloane, "The Cell Structures of Certain Lattices" (1991)

• The Facts on File: Geometry Handbook, Catherine A. Gorini, 2003, ISBN 0-8160-4875-4, p. 113

Chapter 4

    Orthogonal basis

In mathematics, particularly linear algebra, an orthogonal basis for an inner product space V is a basis for V whose vectors are mutually orthogonal. If the vectors of an orthogonal basis are normalized, the resulting basis is an orthonormal basis.

4.1 As coordinates

Any orthogonal basis can be used to define a system of orthogonal coordinates on V. Orthogonal (not necessarily orthonormal) bases are important due to their appearance from curvilinear orthogonal coordinates in Euclidean spaces, as well as in Riemannian and pseudo-Riemannian manifolds.

4.2 In functional analysis

In functional analysis, an orthogonal basis is any basis obtained from an orthonormal basis (or Hilbert basis) using multiplication by nonzero scalars.

4.3 Extensions

The concept of an orthogonal (but not of an orthonormal) basis is applicable to a vector space V (over any field) equipped with a symmetric bilinear form ⟨·,·⟩, where orthogonality of two vectors v and w means ⟨v, w⟩ = 0. For an orthogonal basis {e_k}:

    ⟨e_j, e_k⟩ = q(e_k) if j = k, and 0 if j ≠ k,

where q is the quadratic form associated with ⟨·,·⟩: q(v) = ⟨v, v⟩ (in an inner product space, q(v) = |v|²). Hence,

    ⟨v, w⟩ = Σ_k q(e_k) v_k w_k,

where v_k and w_k are the components of v and w in {e_k}.
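As a quick numerical illustration of these formulas with the standard dot product on R², the components relative to an orthogonal basis can be recovered as v_k = ⟨v, e_k⟩ / q(e_k). A minimal sketch assuming NumPy (the vectors chosen here are illustrative):

    import numpy as np

    # An orthogonal (not orthonormal) basis of R^2 under the dot product.
    e1 = np.array([2.0, 0.0])
    e2 = np.array([0.0, 0.5])
    v = np.array([3.0, 4.0])

    # Components relative to {e1, e2}: v_k = <v, e_k> / q(e_k), q(e) = <e, e>.
    v1 = v @ e1 / (e1 @ e1)
    v2 = v @ e2 / (e2 @ e2)
    print(v1, v2)                             # 1.5 8.0
    print(np.allclose(v1 * e1 + v2 * e2, v))  # True: the expansion recovers v

    # <v, w> computed through the expansion sum_k q(e_k) v_k w_k:
    w = np.array([1.0, -2.0])
    w1, w2 = w @ e1 / (e1 @ e1), w @ e2 / (e2 @ e2)
    print((e1 @ e1) * v1 * w1 + (e2 @ e2) * v2 * w2, v @ w)  # both -5.0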

4.4 References

• Lang, Serge (2004), Algebra, Graduate Texts in Mathematics 211 (corrected fourth printing, revised third ed.), New York: Springer-Verlag, pp. 572–585, ISBN 978-0-387-95385-4


• Milnor, J.; Husemoller, D. (1973). Symmetric Bilinear Forms. Ergebnisse der Mathematik und ihrer Grenzgebiete 73. Springer-Verlag. p. 6. ISBN 3-540-06009-X. Zbl 0292.10016.

4.5 External links

• Weisstein, Eric W., "Orthogonal Basis", MathWorld.

Chapter 5

    Orthogonal complement

In the mathematical fields of linear algebra and functional analysis, the orthogonal complement of a subspace W of a vector space V equipped with a bilinear form B is the set W^⊥ of all vectors in V that are orthogonal to every vector in W. Informally, it is called the perp, short for perpendicular complement. It is a subspace of V.

5.1 General bilinear forms

Let V be a vector space over a field F equipped with a bilinear form B. We define u to be left-orthogonal to v, and v to be right-orthogonal to u, when B(u, v) = 0. For a subset W of V we define the left orthogonal complement W^⊥ to be

    W^⊥ = {x ∈ V : B(x, y) = 0 for all y ∈ W}.

There is a corresponding definition of right orthogonal complement. For a reflexive bilinear form, where B(u, v) = 0 implies B(v, u) = 0 for all u and v in V, the left and right complements coincide. This will be the case if B is a symmetric or an alternating form.

The definition extends to a bilinear form on a free module over a commutative ring, and to a sesquilinear form extended to include any free module over a commutative ring with conjugation.[1]

5.1.1 Properties

• An orthogonal complement is a subspace of V;
• If X ⊆ Y, then Y^⊥ ⊆ X^⊥;
• The radical V^⊥ of V is a subspace of every orthogonal complement;
• W ⊆ (W^⊥)^⊥;
• If B is non-degenerate and V is finite-dimensional, then dim(W) + dim(W^⊥) = dim V.

5.2 Example

In special relativity the orthogonal complement is used to determine the simultaneous hyperplane at a point of a world line. The bilinear form η used in Minkowski space determines a pseudo-Euclidean space of events. The origin and all events on the light cone are self-orthogonal. When a time event and a space event evaluate to zero under the bilinear form, then they are hyperbolic-orthogonal. This terminology stems from the use of two conjugate hyperbolas in the pseudo-Euclidean plane: conjugate diameters of these hyperbolas are hyperbolic-orthogonal.


5.3 Inner product spaces

This section considers orthogonal complements in inner product spaces.[2]

5.3.1 Properties

The orthogonal complement is always closed in the metric topology. In finite-dimensional spaces, that is merely an instance of the fact that all subspaces of a vector space are closed. In infinite-dimensional Hilbert spaces, some subspaces are not closed, but all orthogonal complements are closed. In such spaces, the orthogonal complement of the orthogonal complement of W is the closure of W, i.e.,

    (W^⊥)^⊥ = W̄.

Some other useful properties that always hold are the following. Let H be a Hilbert space and let X and Y be its linear subspaces. Then:

• X^⊥ = X̄^⊥;
• if Y ⊆ X, then X^⊥ ⊆ Y^⊥;
• X ∩ X^⊥ = {0};
• X ⊆ (X^⊥)^⊥;
• if X is a closed linear subspace of H, then (X^⊥)^⊥ = X;
• if X is a closed linear subspace of H, then H = X ⊕ X^⊥, the (inner) direct sum.

The orthogonal complement generalizes to the annihilator, and gives a Galois connection on subsets of the inner product space, with associated closure operator the topological closure of the span.

5.3.2 Finite dimensions

For a finite-dimensional inner product space of dimension n, the orthogonal complement of a k-dimensional subspace is an (n − k)-dimensional subspace, and the double orthogonal complement is the original subspace:

    (W^⊥)^⊥ = W.

If A is an m × n matrix, where Row A, Col A, and Null A refer to the row space, column space, and null space of A (respectively), we have

    (Row A)^⊥ = Null A
    (Col A)^⊥ = Null A^T.
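These identities are easy to check numerically. A sketch assuming NumPy and SciPy (scipy.linalg.null_space returns an orthonormal basis of the null space):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

    N = null_space(A)   # basis for Null A (a 1-dimensional subspace here)
    print(np.allclose(A @ N, 0))  # True: every row of A is orthogonal to Null A

    # dim(Row A) + dim(Null A) = n, i.e. k + (n - k) = n:
    print(np.linalg.matrix_rank(A) + N.shape[1])  # 3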

5.4 Banach spaces

There is a natural analog of this notion in general Banach spaces. In this case one defines the orthogonal complement of W to be a subspace of the dual of V defined similarly as the annihilator

    W^⊥ = {x ∈ V^* : ∀y ∈ W, x(y) = 0}.

It is always a closed subspace of V^*. There is also an analog of the double complement property. W^⊥⊥ is now a subspace of V^** (which is not identical to V). However, the reflexive spaces have a natural isomorphism i between V and V^**. In this case we have

    i(W̄) = W^⊥⊥.

This is a rather straightforward consequence of the Hahn–Banach theorem.

5.5 See also

• Complemented lattice

5.6 References

[1] Adkins & Weintraub (1992) p. 359

[2] Adkins & Weintraub (1992) p. 272

• Adkins, William A.; Weintraub, Steven H. (1992), Algebra: An Approach via Module Theory, Graduate Texts in Mathematics 136, Springer-Verlag, ISBN 3-540-97839-9, Zbl 0768.00003
• Halmos, Paul R. (1974), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90093-3, Zbl 0288.15002
• Milnor, J.; Husemoller, D. (1973), Symmetric Bilinear Forms, Ergebnisse der Mathematik und ihrer Grenzgebiete 73, Springer-Verlag, ISBN 3-540-06009-X, Zbl 0292.10016

5.7 External links

• Instructional video describing orthogonal complements (Khan Academy)

Chapter 6

    Orthogonal diagonalization

In linear algebra, an orthogonal diagonalization of a symmetric matrix is a diagonalization by means of an orthogonal change of coordinates.

The following is an orthogonal diagonalization algorithm that diagonalizes a quadratic form q(x) on R^n by means of an orthogonal change of coordinates X = PY.[1]

• Step 1: find the symmetric matrix A which represents q and find its characteristic polynomial Δ(t).
• Step 2: find the eigenvalues of A, which are the roots of Δ(t).
• Step 3: for each eigenvalue of A found in step 2, find an orthogonal basis of its eigenspace.
• Step 4: normalize all the eigenvectors found in step 3, which then form an orthonormal basis of R^n.
• Step 5: let P be the matrix whose columns are the normalized eigenvectors found in step 4.

Then X = PY is the required orthogonal change of coordinates, and the diagonal entries of P^T A P will be the eigenvalues λ₁, …, λₙ which correspond to the columns of P.
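In practice the steps above are bundled into a symmetric eigendecomposition routine. A minimal sketch assuming NumPy (numpy.linalg.eigh handles steps 2 through 5 for a symmetric matrix; the example matrix is illustrative):

    import numpy as np

    # Symmetric matrix representing a quadratic form q on R^3.
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])

    # eigh returns the eigenvalues and an orthonormal matrix of eigenvectors.
    eigenvalues, P = np.linalg.eigh(A)

    print(np.allclose(P.T @ A @ P, np.diag(eigenvalues)))  # True: diagonalized
    print(np.allclose(P.T @ P, np.eye(3)))                 # True: P is orthogonal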

6.1 References

[1] Lipschutz, Seymour. 3000 Solved Problems in Linear Algebra.


Chapter 7

    Orthogonal Procrustes problem

The orthogonal Procrustes problem[1] is a matrix approximation problem in linear algebra. In its classical form, one is given two matrices A and B and asked to find an orthogonal matrix Ω which most closely maps A to B.[2] Specifically,

    R = argmin_Ω ‖AΩ − B‖_F   subject to   Ω^T Ω = I,

where ‖·‖_F denotes the Frobenius norm.

The name Procrustes refers to a bandit from Greek mythology who made his victims fit his bed by either stretching their limbs or cutting them off.

    7.1 Solution

This problem was originally solved by Peter Schönemann in a 1964 thesis; the solution was later published.[3] A proof is also given in [4].

This problem is equivalent to finding the nearest orthogonal matrix to a given matrix M = A^T B. To find this orthogonal matrix R, one uses the singular value decomposition

    M = U Σ V^T

to write

    R = U V^T.
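The SVD recipe is a few lines of code. A sketch assuming NumPy (the function name is illustrative; SciPy also provides scipy.linalg.orthogonal_procrustes for the same task):

    import numpy as np

    def orthogonal_procrustes(A, B):
        """Orthogonal Omega minimizing ||A @ Omega - B||_F, via the SVD
        of M = A^T B."""
        U, _, Vt = np.linalg.svd(A.T @ B)
        return U @ Vt

    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 3))
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal matrix
    B = A @ Q

    R = orthogonal_procrustes(A, B)
    print(np.allclose(R.T @ R, np.eye(3)))  # True: R is orthogonal
    print(np.linalg.norm(A @ R - B))        # ~0: the exact map is recovered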

    7.2 Proof

    One proof depends on basic properties of the standard matrix inner product that induces the Frobenius norm:


    R = argmin_Ω ‖AΩ − B‖²_F
      = argmin_Ω ⟨AΩ − B, AΩ − B⟩
      = argmin_Ω (‖A‖²_F + ‖B‖²_F − 2⟨AΩ, B⟩)
      = argmax_Ω ⟨Ω, A^T B⟩
      = argmax_Ω ⟨Ω, U Σ V^T⟩
      = argmax_Ω ⟨U^T Ω V, Σ⟩
      = U (argmax_{Ω′} ⟨Ω′, Σ⟩) V^T   (substituting Ω′ = U^T Ω V)
      = U V^T

7.3 Generalized/constrained Procrustes problems

There are a number of related problems to the classical orthogonal Procrustes problem. One might generalize it by seeking the closest matrix in which the columns are orthogonal, but not necessarily orthonormal.[5]

Alternately, one might constrain it by only allowing rotation matrices (i.e. orthogonal matrices with determinant 1, also known as special orthogonal matrices). In this case, one can write (using the above decomposition M = U Σ V^T)

    R = U Σ′ V^T,

where Σ′ is a modified Σ, with the smallest singular value replaced by sign(det(U V^T)) (+1 or −1), and the other singular values replaced by 1, so that the determinant of R is guaranteed to be positive.[6] For more information, see the Kabsch algorithm.
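A sketch of this determinant-constrained variant under the same assumptions as the previous example (NumPy; illustrative names):

    import numpy as np

    def rotation_procrustes(A, B):
        """Like the orthogonal case, but constrained to det(R) = +1,
        using the sign correction described above (cf. the Kabsch
        algorithm)."""
        U, _, Vt = np.linalg.svd(A.T @ B)
        d = np.sign(np.linalg.det(U @ Vt))
        # numpy sorts singular values in descending order, so the last
        # position corresponds to the smallest singular value.
        D = np.diag([1.0] * (U.shape[0] - 1) + [d])
        return U @ D @ Vt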

7.4 See also

• Procrustes analysis
• Procrustes transformation

7.5 References

[1] Gower, J.C.; Dijksterhuis, G.B. (2004), Procrustes Problems, Oxford University Press

[2] Hurley, J.R.; Cattell, R.B. (1962), "Producing direct rotation to test a hypothesized factor structure", Behavioral Science 7 (2): 258–262, doi:10.1002/bs.3830070216

[3] Schönemann, P.H. (1966), "A generalized solution of the orthogonal Procrustes problem" (PDF), Psychometrika 31: 1–10, doi:10.1007/BF02289451.

[4] Zhang, Z. (1998), "A Flexible New Technique for Camera Calibration" (PDF), MSR-TR-98-71, http://research.microsoft.com/en-us/um/people/zhang/Papers/TR98-71.pdf

[5] Everson, R. (1997), Orthogonal, but not Orthonormal, Procrustes Problems

[6] Eggert, DW; Lorusso, A; Fisher, RB (1997), "Estimating 3-D rigid body transformations: a comparison of four major algorithms", Machine Vision and Applications 9 (5): 272–290, doi:10.1007/s001380050048

Chapter 8

    Orthogonal transformation

In linear algebra, an orthogonal transformation is a linear transformation T : V → V on a vector space V that has a nondegenerate symmetric bilinear form such that T preserves the bilinear form. That is, for each pair u, v of elements of V, we have[1]

    ⟨u, v⟩ = ⟨Tu, Tv⟩.

Since the lengths of vectors and the angles between them are defined through the bilinear form, orthogonal transformations preserve lengths of vectors and angles between them. In particular, orthogonal transformations map orthonormal bases to orthonormal bases.

Orthogonal transformations in two- or three-dimensional Euclidean space are stiff rotations, reflections, or combinations of a rotation and a reflection (also known as improper rotations). Reflections are transformations that exchange left and right, similar to mirror images. The matrices corresponding to proper rotations (without reflection) have determinant +1. Transformations with reflection are represented by matrices with determinant −1. This allows the concept of rotation and reflection to be generalized to higher dimensions.

In finite-dimensional spaces, the matrix representation (with respect to an orthonormal basis) of an orthogonal transformation is an orthogonal matrix. Its rows are mutually orthogonal vectors with unit norm, so that the rows constitute an orthonormal basis of V. The columns of the matrix form another orthonormal basis of V.

The inverse of an orthogonal transformation is another orthogonal transformation. Its matrix representation is the transpose of the matrix representation of the original transformation.
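These invariances are straightforward to verify numerically. A sketch assuming NumPy (a QR factorization of a random matrix yields a random orthogonal Q):

    import numpy as np

    rng = np.random.default_rng(1)
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix

    u, v = rng.standard_normal(3), rng.standard_normal(3)
    print(np.allclose(u @ v, (Q @ u) @ (Q @ v)))                  # <u,v> = <Tu,Tv>
    print(np.allclose(np.linalg.norm(u), np.linalg.norm(Q @ u)))  # lengths preserved
    print(np.allclose(np.linalg.inv(Q), Q.T))                     # inverse = transpose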

8.1 See also

• Improper rotation
• Inner product
• Linear transformation
• Orthogonal matrix
• Unitary transformation

8.2 References

[1] Rowland, Todd. "Orthogonal Transformation". MathWorld. Retrieved 4 May 2012.


Chapter 9

    Orthogonality

"Orthogonal" redirects here. For the trilogy of novels by Greg Egan, see Orthogonal (novel).

[Figure: The line segments AB and CD are orthogonal to each other.]

In mathematics, orthogonality is the relation of two lines at right angles to one another (perpendicularity), and the generalization of this relation into n dimensions; and to a variety of mathematical relations thought of as describing non-overlapping, uncorrelated, or independent objects of some kind.

The concept of orthogonality has been broadly generalized in mathematics, science, and engineering, especially since the beginning of the 16th century. Much of the generalizing has taken place in the areas of mathematical functions, calculus and linear algebra.

9.1 Etymology

The word comes from the Greek ὀρθός (orthos), meaning "upright", and γωνία (gonia), meaning "angle". The ancient Greek ὀρθογώνιον (orthogōnion; < orthos 'upright'[1] + gōnia 'angle'[2]) and classical Latin orthogonium originally denoted a rectangle.[3] Later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle.[4]

    9.2 Mathematics

[Figure: Orthogonality and rotation of coordinate systems compared between left: Euclidean space through circular angle φ; right: Minkowski spacetime through hyperbolic angle φ (red lines labelled c denote the worldlines of a light signal; a vector is orthogonal to itself if it lies on this line).[5]]

9.2.1 Definitions

• In geometry, two Euclidean vectors are orthogonal if they are perpendicular, i.e., they form a right angle.
• Two vectors, x and y, in an inner product space, V, are orthogonal if their inner product ⟨x, y⟩ is zero.[6] This relationship is denoted x ⊥ y.
• Two vector subspaces, A and B, of an inner product space, V, are called orthogonal subspaces if each vector in A is orthogonal to each vector in B. The largest subspace of V that is orthogonal to a given subspace is its orthogonal complement.
• Given a module M and its dual M^*, an element m′ of M^* and an element m of M are orthogonal if their duality pairing is zero, i.e. ⟨m′, m⟩ = 0. Two sets S′ ⊆ M^* and S ⊆ M are orthogonal if each element of S′ is orthogonal to each element of S.[7]
• A term rewriting system is said to be orthogonal if it is left-linear and is non-ambiguous. Orthogonal term rewriting systems are confluent.

A set of vectors is called pairwise orthogonal if each pairing of them is orthogonal. Such a set is called an orthogonal set. Nonzero pairwise orthogonal vectors are always linearly independent.

In certain cases, the word normal is used to mean orthogonal, particularly in the geometric sense as in the normal to a surface. For example, the y-axis is normal to the curve y = x² at the origin. However, normal may also refer to the magnitude of a vector. In particular, a set is called orthonormal (orthogonal plus normal) if it is an orthogonal set of unit vectors. As a result, use of the term normal to mean "orthogonal" is often avoided. The word "normal" also has a different meaning in probability and statistics.

A vector space with a bilinear form generalizes the case of an inner product. When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality. In the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given φ.


9.2.2 Euclidean vector spaces

In 2-D or higher-dimensional Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90° (π/2 radians).[8] Hence orthogonality of vectors is an extension of the concept of perpendicular vectors into higher-dimensional spaces.

In terms of Euclidean subspaces, a subspace has an orthogonal complement such that every vector in the subspace is orthogonal to every vector in the complement. In three-dimensional Euclidean space, the orthogonal complement of a line is the plane perpendicular to it, and vice versa.[9]

Note however that there is no correspondence with regards to perpendicular planes, because vectors in subspaces start from the origin (by the definition of a linear subspace).

In four-dimensional Euclidean space, the orthogonal complement of a line is a hyperplane and vice versa, and that of a plane is a plane.[9]

9.2.3 Orthogonal functions

Main article: Orthogonal functions

By using integral calculus, it is common to use the following to define the inner product of two functions f and g:

    ⟨f, g⟩_w = ∫_a^b f(x) g(x) w(x) dx.

Here we introduce a nonnegative weight function w(x) in the definition of this inner product. In simple cases, w(x) = 1.

We say that these functions are orthogonal if that inner product is zero:

    ∫_a^b f(x) g(x) w(x) dx = 0.

We write the norms with respect to this inner product and the weight function as

    ‖f‖_w = √⟨f, f⟩_w

The members of a set of functions {f_i : i = 1, 2, 3, …} are:

• orthogonal on the closed interval [a, b] if

    ⟨f_i, f_j⟩ = ∫_a^b f_i(x) f_j(x) w(x) dx = ‖f_i‖² δ_{i,j} = ‖f_j‖² δ_{i,j}

• orthonormal on the interval [a, b] if

    ⟨f_i, f_j⟩ = ∫_a^b f_i(x) f_j(x) w(x) dx = δ_{i,j}

where

    δ_{i,j} = 1 if i = j, and 0 if i ≠ j

is the Kronecker delta. In other words, any two of them are orthogonal, and the norm of each is 1 in the case of the orthonormal sequence. See in particular the orthogonal polynomials.


9.2.4 Examples

• The vectors (1, 3, 2)^T, (3, −1, 0)^T, (1, 3, −5)^T are orthogonal to each other, since (1)(3) + (3)(−1) + (2)(0) = 0, (3)(1) + (−1)(3) + (0)(−5) = 0, and (1)(1) + (3)(3) + (2)(−5) = 0.

• The vectors (1, 0, 1, 0, ...)^T and (0, 1, 0, 1, ...)^T are orthogonal to each other. The dot product of these vectors is 0. We can then make the generalization to consider the vectors in Z₂ⁿ:

    v_k = Σ_{i=0}^{n/a − 1} e_{ai+k}
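A quick numerical verification of the first example (assuming NumPy):

    import numpy as np

    vectors = [np.array([1.0, 3.0, 2.0]),
               np.array([3.0, -1.0, 0.0]),
               np.array([1.0, 3.0, -5.0])]

    # Pairwise dot products vanish, so the set is pairwise orthogonal
    # and hence linearly independent.
    for i in range(3):
        for j in range(i + 1, 3):
            print(vectors[i] @ vectors[j])  # 0.0 each time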


    9.4 Computer science

Orthogonality in programming language design is the ability to use various language features in arbitrary combinations with consistent results.[10] This usage was introduced by van Wijngaarden in the design of Algol 68:

    The number of independent primitive concepts has been minimized in order that the language be easy to describe, to learn, and to implement. On the other hand, these concepts have been applied "orthogonally" in order to maximize the expressive power of the language while trying to avoid deleterious superfluities.[11]

Orthogonality is a system design property which guarantees that modifying the technical effect produced by a component of a system neither creates nor propagates side effects to other components of the system. Typically this is achieved through the separation of concerns and encapsulation, and it is essential for feasible and compact designs of complex systems. The emergent behavior of a system consisting of components should be controlled strictly by formal definitions of its logic and not by side effects resulting from poor integration, i.e., non-orthogonal design of modules and interfaces. Orthogonality reduces testing and development time because it is easier to verify designs that neither cause side effects nor depend on them.

An instruction set is said to be orthogonal if it lacks redundancy (i.e., there is only a single instruction that can be used to accomplish a given task)[12] and is designed such that instructions can use any register in any addressing mode. This terminology results from considering an instruction as a vector whose components are the instruction fields. One field identifies the registers to be operated upon and another specifies the addressing mode. An orthogonal instruction set uniquely encodes all combinations of registers and addressing modes.

    9.5 Communications

In communications, multiple-access schemes are orthogonal when an ideal receiver can completely reject arbitrarily strong unwanted signals from the desired signal using different basis functions. One such scheme is TDMA, where the orthogonal basis functions are nonoverlapping rectangular pulses ("time slots").

Another scheme is orthogonal frequency-division multiplexing (OFDM), which refers to the use, by a single transmitter, of a set of frequency multiplexed signals with the exact minimum frequency spacing needed to make them orthogonal so that they do not interfere with each other. Well known examples include the (a, g, and n) versions of 802.11 Wi-Fi; WiMAX; ITU-T G.hn; DVB-T, the terrestrial digital TV broadcast system used in most of the world outside North America; and DMT (Discrete Multi Tone), the standard form of ADSL.

In OFDM, the subcarrier frequencies are chosen so that the subcarriers are orthogonal to each other, meaning that crosstalk between the subchannels is eliminated and intercarrier guard bands are not required. This greatly simplifies the design of both the transmitter and the receiver. In conventional FDM, a separate filter for each subchannel is required.

    9.6 Statistics, econometrics, and economics

When performing statistical analysis, independent variables that affect a particular dependent variable are said to be orthogonal if they are uncorrelated,[13] since the covariance forms an inner product. In this case the same results are obtained for the effect of any of the independent variables upon the dependent variable, regardless of whether one models the effects of the variables individually with simple regression or simultaneously with multiple regression. If correlation is present, the factors are not orthogonal and different results are obtained by the two methods. This usage arises from the fact that if centered by subtracting the expected value (the mean), uncorrelated variables are orthogonal in the geometric sense discussed above, both as observed data (i.e., vectors) and as random variables (i.e., density functions). One econometric formalism that is alternative to the maximum likelihood framework, the Generalized Method of Moments, relies on orthogonality conditions. In particular, the Ordinary Least Squares estimator may be easily derived from an orthogonality condition between the explanatory variables and model residuals.
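The orthogonality condition mentioned last is easy to exhibit numerically: the OLS residuals are orthogonal to every column of the design matrix. A sketch assuming NumPy (the data are synthetic):

    import numpy as np

    rng = np.random.default_rng(2)
    X = np.column_stack([np.ones(100), rng.standard_normal((100, 2))])
    y = X @ np.array([1.0, 2.0, -0.5]) + rng.standard_normal(100)

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimate
    residuals = y - X @ beta
    print(X.T @ residuals)  # ~0 in every component: the orthogonality condition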


9.7 Taxonomy

In taxonomy, an orthogonal classification is one in which no item is a member of more than one group, that is, the classifications are mutually exclusive.

9.8 Combinatorics

In combinatorics, two n×n Latin squares are said to be orthogonal if their superimposition yields all possible n² combinations of entries.[14]

9.9 Chemistry

In synthetic organic chemistry orthogonal protection is a strategy allowing the deprotection of functional groups independently of each other. In supramolecular chemistry the notion of orthogonality refers to the possibility of two or more supramolecular, often non-covalent, interactions being compatible; reversibly forming without interference from the other.

9.10 System reliability

In the field of system reliability orthogonal redundancy is that form of redundancy where the form of backup device or method is completely different from the error-prone device or method. The failure mode of an orthogonally redundant back-up device or method does not intersect with and is completely different from the failure mode of the device or method in need of redundancy to safeguard the total system against catastrophic failure.

9.11 Neuroscience

In neuroscience, a sensory map in the brain which has overlapping stimulus coding (e.g. location and quality) is called an orthogonal map.

9.12 Gaming

In board games such as chess which feature a grid of squares, "orthogonal" is commonly used to mean "in the same row/'rank' or column/'file'". In this context "orthogonal" and "diagonal" are considered opposites.[15]

9.13 Other examples

Vinyl records from the 1960s encoded both the left and right stereo channels in a single groove. By making the groove a 90-degree cut into the vinyl, variation in one wall was independent of variations in the other wall. The cartridge senses the resultant motion of the stylus following the groove in two orthogonal directions: 45 degrees from vertical to either side.[16]

9.14 See also

• Imaginary number
• Isogonal
• Isogonal trajectory
• Orthogonal complement
• Orthogonal group
• Orthogonal matrix
• Orthogonal polynomials
• Orthogonalization
    • Gram–Schmidt process
• Orthonormal basis
• Orthonormality
• Orthogonal transform
• Pan-orthogonality occurs in coquaternions
• Surface normal

9.15 Notes

[1] Liddell and Scott, A Greek–English Lexicon s.v.

[2] Liddell and Scott, A Greek–English Lexicon s.v.

[3] Liddell and Scott, A Greek–English Lexicon s.v.

    [4] Oxford English Dictionary, Third Edition, September 2004, s.v. orthogonal

    [5] J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 58. ISBN 0-7167-0344-0.

    [6] Wolfram MathWorld.

    [7] Bourbaki, Algebra, ch. II 2.4, p.234

    [8] Trefethen, Lloyd N. & Bau, David (1997). Numerical linear algebra. SIAM. p. 13. ISBN 978-0-89871-361-9.

[9] R. Penrose (2007). The Road to Reality. Vintage Books. pp. 417–419. ISBN 0-679-77631-1.

    [10] Michael L. Scott, Programming Language Pragmatics, p. 228

[11] 1968, Adriaan van Wijngaarden et al., Revised Report on the Algorithmic Language ALGOL 68, section 0.1.2, "Orthogonal design"

    [12] Null, Linda & Lobur, Julia (2006). The essentials of computer organization and architecture (2nd ed.). Jones & BartlettLearning. p. 257. ISBN 978-0-7637-3769-6.

    [13] Athanasios Papoulis, S. Unnikrishna Pillai (2002). Probability, Random Variables and Stochastic Processes. McGraw-Hill.p. 211. ISBN 0-07-366011-6.

    [14] Hedayat, A. et al. (1999). Orthogonal arrays: theory and applications. Springer. p. 168. ISBN 978-0-387-98766-8.

    [15] chessvariants.org chess glossary.

    [16] For an illustration, see YouTube.

9.16 References

• Chapter 4 "Compactness and Orthogonality" in The Art of Unix Programming

Chapter 10

    Orthogonalization

In linear algebra, orthogonalization is the process of finding a set of orthogonal vectors that span a particular subspace. Formally, starting with a linearly independent set of vectors {v1, ..., vk} in an inner product space (most commonly the Euclidean space R^n), orthogonalization results in a set of orthogonal vectors {u1, ..., uk} that generate the same subspace as the vectors v1, ..., vk. Every vector in the new set is orthogonal to every other vector in the new set; and the new set and the old set have the same linear span.

In addition, if we want the resulting vectors to all be unit vectors, then the procedure is called orthonormalization.

Orthogonalization is also possible with respect to any symmetric bilinear form (not necessarily an inner product, not necessarily over real numbers), but standard algorithms may encounter division by zero in this more general setting.

10.1 Orthogonalization algorithms

Methods for performing orthogonalization include:

• Gram–Schmidt process, which uses projection
• Householder transformation, which uses reflection
• Givens rotation

When performing orthogonalization on a computer, the Householder transformation is usually preferred over the Gram–Schmidt process since it is more numerically stable, i.e. rounding errors tend to have less serious effects. On the other hand, the Gram–Schmidt process produces the jth orthogonalized vector after the jth iteration, while orthogonalization using Householder reflections produces all the vectors only at the end. This makes only the Gram–Schmidt process applicable for iterative methods like the Arnoldi iteration. The Givens rotation is more easily parallelized than Householder transformations.
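For concreteness, a compact sketch of the (modified) Gram–Schmidt process assuming NumPy; production code would normally use a Householder-based QR such as numpy.linalg.qr, for the stability reasons noted above:

    import numpy as np

    def gram_schmidt(V):
        """Modified Gram-Schmidt: orthogonalize the columns of V by
        repeatedly subtracting projections onto the vectors already
        produced (assumes the columns are linearly independent)."""
        U = np.zeros_like(V, dtype=float)
        for j in range(V.shape[1]):
            u = V[:, j].astype(float)
            for i in range(j):
                u -= (u @ U[:, i]) / (U[:, i] @ U[:, i]) * U[:, i]
            U[:, j] = u
        return U

    V = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
    U = gram_schmidt(V)
    # Off-diagonal entries of U^T U vanish: the columns are orthogonal.
    print(np.allclose(U.T @ U, np.diag(np.diag(U.T @ U))))  # True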

10.2 See also

• Orthogonality
• Biorthogonal system
• Orthogonal basis


Chapter 11

    Orthographic projection

    For the orthographic projection as a map projection, see Orthographic projection in cartography.

Orthographic projection (or orthogonal projection) is a means of representing a three-dimensional object in two dimensions. It is a form of parallel projection, where all the projection lines are orthogonal to the projection plane,[1] resulting in every plane of the scene appearing in affine transformation on the viewing surface. It is further divided into multiview orthographic projections and axonometric projections. A lens providing an orthographic projection is known as an (object-space) telecentric lens.

The term orthographic is also sometimes reserved specifically for depictions of objects where the axis or plane of the object is also parallel with the projection plane,[1] as in multiview orthographic projections.

    11.1 OriginThe orthographic projection has been known since antiquity, with its cartographic uses being well documented.Hipparchus used the projection in the 2nd century BC to determine the places of star-rise and star-set. In about 14 BC,Roman engineer Marcus Vitruvius Pollio used the projection to construct sundials and to compute sun positions.[2]

    Vitruvius also seems to have devised the term orthographic (from the Greek orthos (= straight) and graph (=drawing) for the projection. However, the name analemma, which also meant a sundial showing latitude andlongitude, was the common name until Franois d'Aguilon of Antwerp promoted its present name in 1613.[2]

The earliest surviving maps on the projection appear as woodcut drawings of terrestrial globes of 1509 (anonymous), 1533 and 1551 (Johannes Schöner), and 1524 and 1551 (Apian).[2]

11.2 Geometry

A simple orthographic projection onto the plane z = 0 can be defined by the following matrix:

P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}

For each point v = (v_x, v_y, v_z), the transformed point would be

Pv = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} = \begin{bmatrix} v_x \\ v_y \\ 0 \end{bmatrix}

Often, it is more useful to use homogeneous coordinates. The transformation above can be represented for homogeneous coordinates as


P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

For each homogeneous vector v = (v_x, v_y, v_z, 1), the transformed vector would be

Pv = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v_x \\ v_y \\ v_z \\ 1 \end{bmatrix} = \begin{bmatrix} v_x \\ v_y \\ 0 \\ 1 \end{bmatrix}

In computer graphics, one of the most common matrices used for orthographic projection is defined by a 6-tuple (left, right, bottom, top, near, far), which defines the clipping planes. These planes form a box with the minimum corner at (left, bottom, −near) and the maximum corner at (right, top, −far). The box is translated so that its center is at the origin, then it is scaled to the unit cube, which is defined by a minimum corner at (−1, −1, −1) and a maximum corner at (1, 1, 1). The orthographic transform can be given by the following matrix:

P = \begin{bmatrix} \frac{2}{right-left} & 0 & 0 & -\frac{right+left}{right-left} \\ 0 & \frac{2}{top-bottom} & 0 & -\frac{top+bottom}{top-bottom} \\ 0 & 0 & \frac{-2}{far-near} & -\frac{far+near}{far-near} \\ 0 & 0 & 0 & 1 \end{bmatrix}

which can be given as a scaling S followed by a translation T of the form

P = ST = \begin{bmatrix} \frac{2}{right-left} & 0 & 0 & 0 \\ 0 & \frac{2}{top-bottom} & 0 & 0 \\ 0 & 0 & \frac{-2}{far-near} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & -\frac{left+right}{2} \\ 0 & 1 & 0 & -\frac{top+bottom}{2} \\ 0 & 0 & 1 & \frac{far+near}{2} \\ 0 & 0 & 0 & 1 \end{bmatrix}

The inverse of the projection matrix, which can be used as an unprojection matrix, is defined by:

P^{-1} = \begin{bmatrix} \frac{right-left}{2} & 0 & 0 & \frac{left+right}{2} \\ 0 & \frac{top-bottom}{2} & 0 & \frac{top+bottom}{2} \\ 0 & 0 & -\frac{far-near}{2} & -\frac{far+near}{2} \\ 0 & 0 & 0 & 1 \end{bmatrix}
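As a sanity check of the clipping-box matrix above, here is a small Python/NumPy sketch; the function name is illustrative, and the convention is the glOrtho-style one just described:

import numpy as np

def orthographic_matrix(left, right, bottom, top, near, far):
    # 4x4 orthographic projection matrix mapping the clipping box
    # to the unit cube [-1, 1]^3, as described above.
    return np.array([
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ])

P = orthographic_matrix(-2.0, 2.0, -1.0, 1.0, 0.1, 100.0)
corner = np.array([2.0, 1.0, -100.0, 1.0])  # (right, top, -far), homogeneous
print(P @ corner)                           # [1. 1. 1. 1.]: the unit-cube corner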

11.3 Multiview orthographic projections

Main article: Multiview orthographic projection

With multiview orthographic projections, up to six pictures of an object are produced, with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a six-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a 3D object. These views are known as front view, top view and end view.

11.4 Pictorials

Main article: Axonometric projection


Symbols used to define whether a projection is either Third Angle (right) or First Angle (left).

Within orthographic projection there is the subcategory known as pictorials. Axonometric pictorials show an image of an object as viewed from a skew direction in order to reveal all three directions (axes) of space in a single picture.[3] Orthographic pictorial instrument drawings are often used to approximate graphical perspective projections, but there is attendant distortion in the approximation. Because pictorial projections inherently have this distortion, in the instrument drawing of pictorials, great liberties may then be taken for economy of effort and best effect. Orthographic pictorials rely on the technique of axonometric projection ("to measure along axes").

11.5 Cartography

Main article: Orthographic projection in cartography

An orthographic projection map is a map projection of cartography. Like the stereographic projection and gnomonic projection, orthographic projection is a perspective (or azimuthal) projection, in which the sphere is projected onto a tangent plane or secant plane. The point of perspective for the orthographic projection is at infinite distance. It depicts a hemisphere of the globe as it appears from outer space, where the horizon is a great circle. The shapes and areas are distorted, particularly near the edges.[4][2]

    11.6 See also

    Graphical projection

    Multiview orthographic projection

    Telecentric lens

    Telephoto lens

11.7 References

[1] Maynard, Patric (2005). Drawing Distinctions: The Varieties of Graphic Expression. Cornell University Press. p. 22. ISBN 0-8014-7280-6.

[2] Snyder, John P. (1993). Flattening the Earth: Two Thousand Years of Map Projections. pp. 16–18. Chicago and London: The University of Chicago Press. ISBN 0-226-76746-9.

[3] Mitchell, William; McCullough, Malcolm (1994). Digital Design Media. John Wiley and Sons. p. 169. ISBN 0-471-28666-4.

[4] Snyder, J. P. (1987). Map Projections – A Working Manual (US Geological Survey Professional Paper 1395). Washington, D.C.: US Government Printing Office. pp. 145–153.


11.8 External links

• Normale (orthogonale) Axonometrie (in German)
• Orthographic Projection Video and mathematics

  • Chapter 12

    Orthonormal basis

In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other.[1][2][3] For example, the standard basis for a Euclidean space Rn is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for Rn arises in this fashion.

For a general inner product space V, an orthonormal basis can be used to define normalized orthogonal coordinates on V. Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of a finite-dimensional inner product space to the study of Rn under the dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary basis using the Gram–Schmidt process.

In functional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional) inner product spaces (or pre-Hilbert spaces).[4] Given a pre-Hilbert space H, an orthonormal basis for H is an orthonormal set of vectors with the property that every vector in H can be written as an infinite linear combination of the vectors in the basis. In this case, the orthonormal basis is sometimes called a Hilbert basis for H. Note that an orthonormal basis in this sense is not generally a Hamel basis, since infinite linear combinations are required. Specifically, the linear span of the basis must be dense in H, but it may not be the entire space.

12.1 Examples

• The set of vectors {e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)} (the standard basis) forms an orthonormal basis of R3.

Proof: A straightforward computation shows that the inner products of these vectors equal zero, ⟨e1, e2⟩ = ⟨e1, e3⟩ = ⟨e2, e3⟩ = 0, and that each of their magnitudes equals one, ||e1|| = ||e2|| = ||e3|| = 1. This means that {e1, e2, e3} is an orthonormal set. All vectors (x, y, z) in R3 can be expressed as a sum of the basis vectors scaled: (x, y, z) = x e1 + y e2 + z e3, so {e1, e2, e3} spans R3 and hence must be a basis. It may also be shown that the standard basis rotated about an axis through the origin or reflected in a plane through the origin forms an orthonormal basis of R3.

• The set {f_n : n ∈ Z} with f_n(x) = exp(2πinx) forms an orthonormal basis of the space of functions with finite Lebesgue integrals, L²([0,1]), with respect to the 2-norm. This is fundamental to the study of Fourier series.

• The set {e_b : b ∈ B} with e_b(c) = 1 if b = c and 0 otherwise forms an orthonormal basis of ℓ²(B).

• Eigenfunctions of a Sturm–Liouville eigenproblem.

• An orthogonal matrix is a matrix whose column vectors form an orthonormal set.


12.2 Basic formula

If B is an orthogonal basis of H, then every element x of H may be written as

x = \sum_{b \in B} \frac{\langle x, b \rangle}{\|b\|^2} b.

When B is orthonormal, this simplifies to

x = \sum_{b \in B} \langle x, b \rangle b

and the square of the norm of x can be given by

\|x\|^2 = \sum_{b \in B} |\langle x, b \rangle|^2.

Even if B is uncountable, only countably many terms in this sum will be non-zero, and the expression is therefore well-defined. This sum is also called the Fourier expansion of x, and the formula is usually known as Parseval's identity. See also Generalized Fourier series.

If B is an orthonormal basis of H, then H is isomorphic to ℓ²(B) in the following sense: there exists a bijective linear map Φ : H → ℓ²(B) such that

\langle \Phi(x), \Phi(y) \rangle = \langle x, y \rangle

for all x and y in H.
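A minimal numerical illustration of these formulas in R^3, assuming NumPy; the orthonormal basis here is a generic one obtained from a QR factorization:

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # columns: an orthonormal basis

x = np.array([1.0, 2.0, 3.0])
coeffs = Q.T @ x               # the coefficients <x, b> for each basis vector b
x_rebuilt = Q @ coeffs         # x = sum over b of <x, b> b

print(np.allclose(x, x_rebuilt))             # True: the Fourier expansion recovers x
print(np.isclose(np.sum(coeffs**2), x @ x))  # True: Parseval's identity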

12.3 Incomplete orthogonal sets

Given a Hilbert space H and a set S of mutually orthogonal vectors in H, we can take the smallest closed linear subspace V of H containing S. Then S will be an orthogonal basis of V, which may of course be smaller than H itself, being an incomplete orthogonal set, or be H, when it is a complete orthogonal set.

12.4 Existence

Using Zorn's lemma and the Gram–Schmidt process (or more simply well-ordering and transfinite recursion), one can show that every Hilbert space admits a basis and thus an orthonormal basis; furthermore, any two orthonormal bases of the same space have the same cardinality (this can be proven in a manner akin to that of the proof of the usual dimension theorem for vector spaces, with separate cases depending on whether the larger basis candidate is countable or not). A Hilbert space is separable if and only if it admits a countable orthonormal basis. (One can prove this last statement without using the axiom of choice.)

12.5 As a homogeneous space

Main article: Stiefel manifold

The set of orthonormal bases for a space is a principal homogeneous space for the orthogonal group O(n), and is called the Stiefel manifold Vn(Rn) of orthonormal n-frames.


In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given an orthogonal space, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthogonal basis to any other orthogonal basis.

The other Stiefel manifolds Vk(Rn) for k < n of incomplete orthonormal bases (orthonormal k-frames) are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any k-frame can be taken to any other k-frame by an orthogonal map, but this map is not uniquely determined.

12.6 See also

• Basis (linear algebra)
• Schauder basis
• Total set

12.7 References

[1] Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.

[2] Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.

[3] Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.

[4] Rudin, Walter (1987). Real & Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1.

  • Chapter 13

    Orthonormal function system

    An orthonormal function system (ONS) is an orthonormal basis in a vector space of functions.[1]

    See basis (linear algebra), Fourier analysis, square-integrable, Hilbert space for more.

13.1 References

[1] Melzak, Z. A. (2012). Companion to Concrete Mathematics. Dover Books on Mathematics. Courier Dover Corporation. p. 138. ISBN 9780486135816.


  • Chapter 14

    Orthonormality

In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and unit vectors. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis.

14.1 Intuitive overview

The construction of orthogonality of vectors is motivated by a desire to extend the intuitive notion of perpendicular vectors to higher-dimensional spaces. In the Cartesian plane, two vectors are said to be perpendicular if the angle between them is 90° (i.e. if they form a right angle). This definition can be formalized in Cartesian space by defining the dot product and specifying that two vectors in the plane are orthogonal if their dot product is zero.

Similarly, the construction of the norm of a vector is motivated by a desire to extend the intuitive notion of the length of a vector to higher-dimensional spaces. In Cartesian space, the norm of a vector is the square root of the vector dotted with itself. That is,

\|x\| = \sqrt{x \cdot x}

Many important results in linear algebra deal with collections of two or more orthogonal vectors. But often, it is easier to deal with vectors of unit length. That is, it often simplifies things to consider only vectors whose norm equals 1. The notion of restricting orthogonal pairs of vectors to only those of unit length is important enough to be given a special name. Two vectors which are orthogonal and of length 1 are said to be orthonormal.

14.1.1 Simple example

What does a pair of orthonormal vectors in 2-D Euclidean space look like?

Let u = (x1, y1) and v = (x2, y2). Consider the restrictions on x1, x2, y1, y2 required to make u and v form an orthonormal pair.

• From the orthogonality restriction, u · v = 0.
• From the unit length restriction on u, ||u|| = 1.
• From the unit length restriction on v, ||v|| = 1.

Expanding these terms gives 3 equations:

1. x_1 x_2 + y_1 y_2 = 0
2. \sqrt{x_1^2 + y_1^2} = 1
3. \sqrt{x_2^2 + y_2^2} = 1

Converting from Cartesian to polar coordinates, and considering Equation (2) and Equation (3), immediately gives the result r1 = r2 = 1. In other words, requiring the vectors be of unit length restricts the vectors to lie on the unit circle.

After substitution, Equation (1) becomes cos θ1 cos θ2 + sin θ1 sin θ2 = 0. Rearranging gives tan θ1 = −cot θ2. Using a trigonometric identity to convert the cotangent term gives

\tan(\theta_1) = \tan\left(\theta_2 + \frac{\pi}{2}\right) \;\Rightarrow\; \theta_1 = \theta_2 + \frac{\pi}{2}

It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals 90°.
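This conclusion is easy to check numerically; a quick NumPy sketch with an arbitrary angle:

import numpy as np

theta = 0.7                                   # any angle; illustrative value
u = np.array([np.cos(theta), np.sin(theta)])
v = np.array([np.cos(theta + np.pi / 2), np.sin(theta + np.pi / 2)])

print(np.isclose(u @ v, 0.0))                 # True: orthogonal
print(np.isclose(np.linalg.norm(u), 1.0))     # True: unit length
print(np.isclose(np.linalg.norm(v), 1.0))     # True: unit length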

14.2 Definition

Let V be an inner-product space. A set of vectors

\{u_1, u_2, \ldots, u_n, \ldots\} \subseteq V

is called orthonormal if and only if

\forall i, j : \langle u_i, u_j \rangle = \delta_{ij}

where δ_ij is the Kronecker delta and ⟨·, ·⟩ is the inner product defined over V.

14.3 Significance

Orthonormal sets are not especially significant on their own. However, they display certain features that make them fundamental in exploring the notion of diagonalizability of certain operators on vector spaces.

14.3.1 Properties

Orthonormal sets have certain very appealing properties, which make them particularly easy to work with.

Theorem. If {e1, e2, ..., en} is an orthonormal list of vectors, then

\|a_1 e_1 + a_2 e_2 + \cdots + a_n e_n\|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2

Theorem. Every orthonormal list of vectors is linearly independent.

14.3.2 Existence

Gram–Schmidt theorem. If {v1, v2, ..., vn} is a linearly independent list of vectors in an inner-product space V, then there exists an orthonormal list {e1, e2, ..., en} of vectors in V such that span(e1, e2, ..., en) = span(v1, v2, ..., vn).


Proof of the Gram–Schmidt theorem is constructive, and discussed at length elsewhere. The Gram–Schmidt theorem, together with the axiom of choice, guarantees that every vector space admits an orthonormal basis. This is possibly the most significant use of orthonormality, as this fact permits operators on inner-product spaces to be discussed in terms of their action on the space's orthonormal basis vectors. What results is a deep relationship between the diagonalizability of an operator and how it acts on the orthonormal basis vectors. This relationship is characterized by the Spectral Theorem.

    14.4 Examples

14.4.1 Standard basis

The standard basis for the coordinate space F^n is {e1, e2, ..., en}, where the basis vector e_i has 1 in its i-th coordinate and 0 elsewhere.

Any two vectors e_i, e_j with i ≠ j are orthogonal, and all vectors are clearly of unit length. So {e1, e2, ..., en} forms an orthonormal basis.

14.4.2 Real-valued functions

When referring to real-valued functions, usually the L² inner product is assumed unless otherwise stated. Two functions φ(x) and ψ(x) are orthonormal over the interval [a, b] if

(1) \langle \phi(x), \psi(x) \rangle = \int_a^b \phi(x)\,\psi(x)\,dx = 0, and

(2) \|\phi(x)\|_2 = \|\psi(x)\|_2 = \left[ \int_a^b |\phi(x)|^2\,dx \right]^{1/2} = \left[ \int_a^b |\psi(x)|^2\,dx \right]^{1/2} = 1.

14.4.3 Fourier series

The Fourier series is a method of expressing a periodic function in terms of sinusoidal basis functions. Taking C[−π, π] to be the space of all real-valued functions continuous on the interval [−π, π] and taking the inner product to be

\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)\,g(x)\,dx

it can be shown that

\left\{ \frac{1}{\sqrt{2\pi}},\; \frac{\sin(x)}{\sqrt{\pi}},\; \frac{\sin(2x)}{\sqrt{\pi}},\; \ldots,\; \frac{\sin(nx)}{\sqrt{\pi}},\; \frac{\cos(x)}{\sqrt{\pi}},\; \frac{\cos(2x)}{\sqrt{\pi}},\; \ldots,\; \frac{\cos(nx)}{\sqrt{\pi}} \right\}, \quad n \in \mathbb{N}

forms an orthonormal set.

However, this is of little consequence, because C[−π, π] is infinite-dimensional, and a finite set of vectors cannot span it. But, removing the restriction that n be finite makes the set dense in C[−π, π] and therefore an orthonormal basis of C[−π, π].
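A rough numerical verification of this orthonormality, sketched with a simple Riemann-sum approximation of the inner product (the grid size, the truncation n ≤ 3, and the tolerance are arbitrary choices):

import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]

# The first few members of the orthonormal set above.
basis = [np.full_like(x, 1.0 / np.sqrt(2.0 * np.pi))]
for n in range(1, 4):
    basis.append(np.sin(n * x) / np.sqrt(np.pi))
    basis.append(np.cos(n * x) / np.sqrt(np.pi))

# Gram matrix of approximate inner products <f, g>; should be the identity.
gram = np.array([[np.sum(f * g) * dx for g in basis] for f in basis])
print(np.allclose(gram, np.eye(len(basis)), atol=1e-3))  # True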

14.5 See also

• Orthogonalization



  • Chapter 15

    Overdetermined system

    For the philosophical term, see overdetermination.

In mathematics, a system of equations is considered overdetermined if there are more equations than unknowns.[1] An overdetermined system is almost always inconsistent (it has no solution) when constructed with random coefficients. However, an overdetermined system will have solutions in some cases, for example if some equation occurs several times in the system, or if some equations are linear combinations of the others.

The terminology can be described in terms of the concept of constraint counting. Each unknown can be seen as an available degree of freedom. Each equation introduced into the system can be viewed as a constraint that restricts one degree of freedom. Therefore, the critical case occurs when the number of equations and the number of free variables are equal. For every variable giving a degree of freedom, there exists a corresponding constraint. The overdetermined case occurs when the system has been overconstrained, that is, when the equations outnumber the unknowns. In contrast, the underdetermined case occurs when the system has been underconstrained, that is, when the number of equations is fewer than the number of unknowns.

    15.1 Systems of equations

15.1.1 An example in two dimensions

Consider the system of 3 equations and 2 unknowns (x_1 and x_2), which is overdetermined because 3 > 2, and which corresponds to Diagram #1:

2x_1 + x_2 = -1
-3x_1 + x_2 = -2
-x_1 + x_2 = 1

There is one solution for each pair of linear equations: for the first and second equations (0.2, −1.4), for the first and third (−2/3, 1/3), and for the second and third (1.5, 2.5). However, there is no solution that satisfies all three simultaneously. Diagrams #2 and #3 show other configurations that are inconsistent because no point is on all of the lines. Systems of this variety are deemed inconsistent.

The only cases where the overdetermined system does in fact have a solution are demonstrated in Diagrams #4, #5, and #6. These exceptions can occur only when the overdetermined system contains enough linearly dependent equations that the number of independent equations does not exceed the number of unknowns. Linear dependence means that some equations can be obtained from linearly combining other equations. For example, y = x + 1 and 2y = 2x + 2 are linearly dependent equations because the second one can be obtained by taking twice the first one.

15.1.2 Matrix form

Any system of linear equations can be written as a matrix equation. The previous system of equations can be written as follows:


Diagram #1: A system of three linearly independent equations, three lines, three intersections.

\begin{bmatrix} 2 & 1 \\ -3 & 1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -1 \\ -2 \\ 1 \end{bmatrix}

Notice that the rows of the coefficient matrix (corresponding to equations) outnumber the columns (corresponding to unknowns), meaning that the system is overdetermined. In linear algebra the concepts of row space, column space and null space are important for determining the properties of matrices. The informal discussion of constraints and degrees of freedom above relates directly to these more formal concepts.

15.2 Homogeneous case

The homogeneous case (in which all constant terms are zero) is always consistent (because there is a trivial, all-zero solution). There are two cases, depending on the number of linearly dependent equations: either there is just the trivial solution, or there is the trivial solution plus an infinite set of other solutions.

Consider the system of linear equations L_i = 0 for 1 ≤ i ≤ M, and variables X_1, X_2, ..., X_N, where each L_i is a weighted sum of the X_i's. Then X_1 = X_2 = ... = X_N = 0 is always a solution. When M < N the system is underdetermined and there are always an infinitude of further solutions. In fact the dimension of the space of solutions is always at least N − M.

For M ≥ N, there may be no solution other than all values being 0. There will be an infinitude of other solutions only when the system of equations has enough dependencies (linearly dependent equations) that the number of independent equations is at most N − 1. But with M ≥ N the number of independent equations could be as high as N, in which case the trivial solution is the only one.


Diagram #2: A system of three linearly independent equations, three lines (two parallel), two intersections.

15.3 Non-homogeneous case

In systems of linear equations, L_i = c_i for 1 ≤ i ≤ M, in variables X_1, X_2, ..., X_N, the equations are sometimes linearly dependent; in fact the number of linearly independent equations cannot exceed N+1. We have the following possible cases for an overdetermined system with N unknowns and M equations (M > N).

• M = N+1 and all M equations are linearly independent. This case yields no solution. Example: x = 1, x = 2.

• M > N but only K equations (K < M and K ≤ N+1) are linearly independent. There exist three possible sub-cases of this:

  • K = N+1. This case yields no solutions. Example: 2x = 2, x = 1, x = 2.

  • K = N. This case yields either a single solution or no solution, the latter occurring when the coefficient vector of one equation can be replicated by a weighted sum of the coefficient vectors of the other equations, but that weighted sum applied to the constant terms of the other equations does not replicate the one equation's constant term. Example with one solution: 2x = 2, x = 1. Example with no solution: 2x + 2y = 2, x + y = 1, x + y = 3.

  • K < N. This case yields either infinitely many solutions or no solution, the latter occurring as in the previous sub-case. Example with infinitely many solutions: 3x + 3y = 3, 2x + 2y = 2, x + y = 1. Example with no solution: 3x + 3y + 3z = 3, 2x + 2y + 2z = 2, x + y + z = 1, x + y + z = 4.


Diagram #3: A system of three linearly independent equations, three lines (three parallel), no intersections.

These results may be easier to understand by putting the augmented matrix of the coefficients of the system in row echelon form by using Gaussian elimination. This row echelon form is the augmented matrix of a system of equations that is equivalent to the given system (it has exactly the same solutions). The number of independent equations in the original system is the number of non-zero rows in the echelon form. The system is inconsistent (no solution) if and only if the last non-zero row in echelon form has only one non-zero entry that is in the last column (giving an equation 0 = c where c is a non-zero constant). Otherwise, there is exactly one solution when the number of non-zero rows in echelon form is equal to the number of unknowns, and there are infinitely many solutions when the number of non-zero rows is lower than the number of variables.

Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank; hence in such a case there are an infinitude of solutions.
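As an illustration of the Rouché–Capelli test, here is a NumPy sketch applied to the three-equation example from Section 15.1.1:

import numpy as np

A = np.array([[2.0, 1.0], [-3.0, 1.0], [-1.0, 1.0]])  # coefficient matrix
b = np.array([-1.0, -2.0, 1.0])                       # constant terms
augmented = np.column_stack([A, b])

rank_A = np.linalg.matrix_rank(A)            # 2
rank_aug = np.linalg.matrix_rank(augmented)  # 3
print(rank_aug > rank_A)                     # True: the system is inconsistent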

    15.4 Exact solutions

    All exact solutions can be obtained, or it can be shown that none exist, using matrix algebra. See System of linearequations#Matrix solution.


Diagram #4: A system of three equations (one equation linearly dependent on the others), three lines (two collinear), one intersection.

15.5 Approximate solutions

The method of ordinary least squares can be used to find an approximate solution to overdetermined systems. For the system Ax = b, the least squares formula is obtained from the problem

\min_x \|Ax - b\|,

    the solution of which can be written with the normal equations,[2]

x = (A^{\mathrm{T}} A)^{-1} A^{\mathrm{T}} b,

where T indicates a matrix transpose, provided (A^T A)^{-1} exists (that is, provided A has full column rank). With this formula an approximate solution is found when no exact solution exists, and it gives an exact solution when one does exist. However, to achieve good numerical accuracy, using the QR factorization of A to solve the least squares problem is preferred.[3]
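A short Python/NumPy sketch of both approaches, applied to the example system from earlier in this chapter; np.linalg.lstsq (an SVD-based solver) serves as a reference:

import numpy as np

A = np.array([[2.0, 1.0], [-3.0, 1.0], [-1.0, 1.0]])
b = np.array([-1.0, -2.0, 1.0])

# Normal equations: fine here, but squares the condition number of A.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# QR factorization: solve R x = Q^T b, the numerically preferred route.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_normal, x_qr))                              # True
print(np.allclose(x_qr, np.linalg.lstsq(A, b, rcond=None)[0]))  # True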

15.6 In general use

The concept can also be applied to more general systems of equations, such as systems of polynomial equations or partial differential equations. In the case of systems of polynomial equations, it may happen that an overdetermined system has a solution, but that no one equation is a consequence of the others and that, when removing any equation,


Diagram #5: A system of three equations (one equation linearly dependent on the others), three lines, one intersection.

the new system has more solutions. For example, (x − 1)(x − 2) = 0, (x − 1)(x − 3) = 0 has the single solution x = 1, but each equation by itself has two solutions.

15.7 See also

• Underdetermined system

• Rouché–Capelli (or Rouché–Frobenius) theorem

• Integrability condition

• Least squares

• Consistency proof

• Compressed sensing

• Moore–Penrose pseudoinverse


Diagram #6: A system of three equations (two equations linearly dependent on the other), three lines, an infinitude of intersections.

15.8 References

[1] PlanetMath, Overdetermined.

[2] Anton, Howard; Rorres, Chris (2005). Elementary Linear Algebra (9th ed.). John Wiley and Sons, Inc. ISBN 978-0-471-66959-3.

[3] Trefethen, Lloyd N.; Bau, David, III (1997). Numerical Linear Algebra. SIAM. ISBN 978-0898713619.
