Square Matrices


  • Square matrices

    Main article: Square matrix

    A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is

    known as a square matrix of order n. Any two square matrices of the same order can be added

    and multiplied. The entries aii form the main diagonal of a square matrix. They lie on the

    imaginary line which runs from the top left corner to the bottom right corner of the matrix.

    Main types

        Name                      Example with n = 3
        Diagonal matrix           zero entries everywhere off the main diagonal
        Lower triangular matrix   zero entries everywhere above the main diagonal
        Upper triangular matrix   zero entries everywhere below the main diagonal

    Diagonal and triangular matrices[edit]

    If all entries of A below the main diagonal are zero, A is called an upper triangular matrix.
    Similarly, if all entries of A above the main diagonal are zero, A is called a lower triangular
    matrix. If all entries outside the main diagonal are zero, A is called a diagonal matrix.

    Identity matrix

    The identity matrix In of size n is the n-by-n matrix in which all the elements on the main
    diagonal are equal to 1 and all other elements are equal to 0, e.g.

        I3 = [ 1 0 0 ]
             [ 0 1 0 ]
             [ 0 0 1 ]

    It is a square matrix of order n, and also a special kind of diagonal matrix. It is called
    the identity matrix because multiplication with it leaves a matrix unchanged:

    A In = Im A = A for any m-by-n matrix A.
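    This identity is easy to check numerically. The following NumPy sketch (an illustration added here, not part of the original article) multiplies an m-by-n matrix by the identity matrices of the matching orders:

    ```python
    import numpy as np

    # A is a 2-by-3 matrix, so Im = I2 on the left and In = I3 on the right.
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
    I2 = np.eye(2)   # identity matrix of order 2
    I3 = np.eye(3)   # identity matrix of order 3

    assert np.allclose(A @ I3, A)   # A In = A
    assert np.allclose(I2 @ A, A)   # Im A = A
    ```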


  • Symmetric or skew-symmetric matrix

    A square matrix A that is equal to its transpose, i.e., A = AT, is a symmetric matrix. If
    instead A is equal to the negative of its transpose, i.e., A = −AT, then A is a skew-
    symmetric matrix. In complex matrices, symmetry is often replaced by the concept
    of Hermitian matrices, which satisfy A* = A, where the star or asterisk denotes
    the conjugate transpose of the matrix, i.e., the transpose of the complex
    conjugate of A.

    By the spectral theorem, real symmetric matrices and complex Hermitian matrices

    have an eigenbasis; i.e., every vector is expressible as a linear combination of

    eigenvectors. In both cases, all eigenvalues are real.[29] This theorem can be

    generalized to infinite-dimensional situations related to matrices with infinitely many

    rows and columns; see below.
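    The real-eigenvalue claim of the spectral theorem can be verified numerically; a small NumPy sketch (added for illustration, not from the original text):

    ```python
    import numpy as np

    # A real symmetric matrix: by the spectral theorem its eigenvalues are real
    # and numpy.linalg.eigh returns an orthonormal eigenbasis V.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigenvalues, V = np.linalg.eigh(A)

    assert np.allclose(eigenvalues.imag, 0.0)              # all eigenvalues real
    assert np.allclose(V @ np.diag(eigenvalues) @ V.T, A)  # A = V D V^T
    ```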

    Invertible matrix and its inverse

    A square matrix A is called invertible or non-singular if there exists a matrix B such

    that

    AB = BA = In.[30][31]

    If B exists, it is unique and is called the inverse matrix of A, denoted A−1.
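    A NumPy sketch of this definition (an added illustration, not in the original article), using np.linalg.inv to compute the inverse and checking both products against the identity:

    ```python
    import numpy as np

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    B = np.linalg.inv(A)   # the (unique) inverse A^-1

    I = np.eye(2)
    assert np.allclose(A @ B, I)   # AB = In
    assert np.allclose(B @ A, I)   # BA = In
    ```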

    Definite matrix

    Positive definite matrix: Q(x, y) = 1/4 x^2 + y^2; the points with Q(x, y) = 1 form an ellipse.

    Indefinite matrix: Q(x, y) = 1/4 x^2 - 1/4 y^2; the points with Q(x, y) = 1 form a hyperbola.

    A symmetric n×n matrix is called positive-definite (respectively negative-definite;
    indefinite), if for all nonzero vectors x ∈ Rn the associated quadratic form given by

    Q(x) = xTAx


  • takes only positive values (respectively only negative values; both some

    negative and some positive values).[32] If the quadratic form takes only non-

    negative (respectively only non-positive) values, the symmetric matrix is

    called positive-semidefinite (respectively negative-semidefinite); hence the

    matrix is indefinite precisely when it is neither positive-semidefinite nor

    negative-semidefinite.

    A symmetric matrix is positive-definite if and only if all its eigenvalues are

    positive, i.e., the matrix is positive-semidefinite and it is invertible.[33] The
    table above shows two possibilities for 2-by-2 matrices.

    Allowing as input two different vectors instead yields the bilinear
    form associated to A:

    BA(x, y) = xTAy.[34]
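    The eigenvalue characterization of definiteness can be sketched in NumPy (an added illustration; the helper name `definiteness` is ours, not from the article). The two assertions use the diagonal matrices behind the 2-by-2 examples above:

    ```python
    import numpy as np

    def definiteness(A):
        """Classify a symmetric matrix by the signs of its eigenvalues."""
        w = np.linalg.eigvalsh(A)
        if np.all(w > 0):
            return "positive-definite"
        if np.all(w < 0):
            return "negative-definite"
        if np.all(w >= 0):
            return "positive-semidefinite"
        if np.all(w <= 0):
            return "negative-semidefinite"
        return "indefinite"

    # Q(x, y) = 1/4 x^2 + y^2 and Q(x, y) = 1/4 x^2 - 1/4 y^2 correspond
    # to these diagonal matrices.
    assert definiteness(np.diag([0.25, 1.0])) == "positive-definite"
    assert definiteness(np.diag([0.25, -0.25])) == "indefinite"
    ```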

    Orthogonal matrix

    An orthogonal matrix is a square matrix with real entries whose columns

    and rows are orthogonal unit vectors (i.e., orthonormal vectors).

    Equivalently, a matrix A is orthogonal if its transpose is equal to
    its inverse:

    AT = A−1,

    which entails

    ATA = AAT = I,

    where I is the identity matrix.

    An orthogonal matrix A is necessarily invertible (with
    inverse A−1 = AT), unitary (A−1 = A*), and normal (A*A = AA*).
    The determinant of any orthogonal matrix is either +1 or −1.

    A special orthogonal matrix is an orthogonal matrix
    with determinant +1. As a linear transformation, every
    orthogonal matrix with determinant +1 is a pure rotation, while
    every orthogonal matrix with determinant −1 is either a
    pure reflection, or a composition of a reflection and a rotation.

    The complex analogue of an orthogonal matrix is a unitary

    matrix.
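    A NumPy sketch of these properties (added illustration, not from the article), using a rotation matrix and a reflection matrix:

    ```python
    import numpy as np

    # A rotation by angle t is an orthogonal matrix with determinant +1.
    t = 0.7
    Q = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])

    assert np.allclose(Q.T @ Q, np.eye(2))      # columns are orthonormal
    assert np.allclose(Q.T, np.linalg.inv(Q))   # transpose equals inverse
    assert np.isclose(np.linalg.det(Q), 1.0)    # pure rotation: det = +1

    # A reflection across the x-axis has determinant -1.
    R = np.array([[1.0, 0.0],
                  [0.0, -1.0]])
    assert np.isclose(np.linalg.det(R), -1.0)
    ```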

    Main operations

    Trace

    The trace, tr(A), of a square matrix A is the sum of its diagonal

    entries. While matrix multiplication is not commutative as

    mentioned above, the trace of the product of two matrices is

    independent of the order of the factors:


  • tr(AB) = tr(BA).

    This is immediate from the definition of matrix multiplication:

    tr(AB) = Σi Σj aij bji = Σj Σi bji aij = tr(BA).

    Also, the trace of a matrix is equal to that of its

    transpose, i.e.,

    tr(A) = tr(AT).
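    Both trace identities can be checked numerically; a NumPy sketch (added illustration). Note that AB and BA need not even have the same size for the traces to agree:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 3))
    B = rng.standard_normal((3, 2))

    # AB is 2-by-2 and BA is 3-by-3, yet their traces agree.
    assert np.isclose(np.trace(A @ B), np.trace(B @ A))
    # The trace is also invariant under transposition.
    assert np.isclose(np.trace(A @ B), np.trace((A @ B).T))
    ```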

    Determinant

    Main article: Determinant

    A linear transformation on R2 given by the indicated
    matrix. The determinant of this matrix is −1, as the
    area of the green parallelogram at the right is 1, but
    the map reverses the orientation, since it turns the
    counterclockwise orientation of the vectors to a
    clockwise one.

    The determinant det(A) or |A| of a square

    matrix A is a number encoding certain properties of

    the matrix. A matrix is invertible if and only if its

    determinant is nonzero. Its absolute value equals

    the area (in R2) or volume (in R3) of the image of the

    unit square (or cube), while its sign corresponds to

    the orientation of the corresponding linear map: the

    determinant is positive if and only if the orientation

    is preserved.

    The determinant of 2-by-2 matrices is given by

        det [ a  b ]  =  ad - bc.
            [ c  d ]

    The determinant of 3-by-3 matrices involves 6

    terms (rule of Sarrus). The more lengthy Leibniz

    formula generalises these two formulae to all

    dimensions.[35]
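    The 2-by-2 formula can be checked against a general determinant routine; a NumPy sketch (added illustration):

    ```python
    import numpy as np

    # For a 2-by-2 matrix the determinant is ad - bc.
    a, b, c, d = 3.0, 8.0, 4.0, 6.0
    A = np.array([[a, b],
                  [c, d]])
    assert np.isclose(np.linalg.det(A), a * d - b * c)   # 18 - 32 = -14
    ```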


  • The determinant of a product of square

    matrices equals the product of their

    determinants:

    det(AB) = det(A) det(B).[36]

    Adding a multiple of any row to another

    row, or a multiple of any column to another

    column, does not change the determinant.

    Interchanging two rows or two columns

    affects the determinant by multiplying it by

    −1.[37] Using these operations, any matrix

    can be transformed to a lower (or upper)

    triangular matrix, and for such matrices the

    determinant equals the product of the

    entries on the main diagonal; this provides

    a method to calculate the determinant of

    any matrix. Finally, the Laplace

    expansion expresses the determinant in

    terms of minors, i.e., determinants of

    smaller matrices.[38] This expansion can be

    used for a recursive definition of

    determinants (taking as starting case the

    determinant of a 1-by-1 matrix, which is its

    unique entry, or even the determinant of a

    0-by-0 matrix, which is 1), that can be seen

    to be equivalent to the Leibniz formula.

    Determinants can be used to solve linear

    systems using Cramer's rule, where the

    division of the determinants of two related

    square matrices equates to the value of

    each of the system's variables.[39]
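    Cramer's rule can be sketched directly from this description (a NumPy illustration added here; the variable names are ours). Each unknown is a quotient of two determinants, where the i-th column of A is replaced by the right-hand side b:

    ```python
    import numpy as np

    # Solve A x = b for a 2-by-2 system by Cramer's rule.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    det_A = np.linalg.det(A)
    x = np.empty(2)
    for i in range(2):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / det_A

    assert np.allclose(A @ x, b)                  # x solves the system
    assert np.allclose(x, np.linalg.solve(A, b))  # agrees with direct solve
    ```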

    Eigenvalues and eigenvectors

    Main article: Eigenvalues and eigenvectors

    A number λ and a non-zero vector v satisfying

    Av = λv

    are called an eigenvalue and

    an eigenvector of A, respectively.[nb 1][40]

    The number λ is an eigenvalue of


  • an n×n matrix A if and only if A − λIn is

    not invertible, which is equivalent to

    det(A − λIn) = 0.[41]

    The polynomial pA in

    an indeterminate X given by

    evaluating the determinant

    det(XIn − A) is called

    the characteristic polynomial of A.

    It is a monic

    polynomial of degree n. Therefore

    the polynomial equation pA(λ) = 0

    has at most n different solutions,

    i.e., eigenvalues of the

    matrix.[42] They may be complex

    even if the entries of A are real.

    According to the Cayley–Hamilton

    theorem, pA(A) = 0, that is, the

    result of substituting the matrix

    itself into its own characteristic

    polynomial yields the zero matrix.
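    For a 2-by-2 matrix the characteristic polynomial is pA(X) = X^2 − tr(A)X + det(A), which lets both claims be checked numerically; a NumPy sketch (added illustration):

    ```python
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    # p_A(X) = X^2 - tr(A) X + det(A) for a 2-by-2 matrix.
    tr, det = np.trace(A), np.linalg.det(A)
    eigenvalues = np.linalg.eigvals(A)

    # The eigenvalues are exactly the roots of p_A.
    for lam in eigenvalues:
        assert np.isclose(lam**2 - tr * lam + det, 0.0)

    # Cayley-Hamilton: substituting A itself yields the zero matrix.
    assert np.allclose(A @ A - tr * A + det * np.eye(2), np.zeros((2, 2)))
    ```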

    Computational aspects

    Matrix calculations can often be

    performed with different

    techniques. Many problems can be

    solved by direct algorithms or

    iterative approaches. For example,

    the eigenvectors of a square matrix

    can be obtained by finding

    a sequence of

    vectors xn converging to an

    eigenvector when n tends

    to infinity.[43]
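    One such iterative approach is power iteration, sketched below in NumPy (an added illustration; the helper name `power_iteration` is ours). Repeatedly applying A and renormalising drives the iterate toward an eigenvector of the dominant eigenvalue:

    ```python
    import numpy as np

    def power_iteration(A, steps=100):
        """Iteratively approximate an eigenvector of the dominant eigenvalue."""
        x = np.ones(A.shape[0])
        for _ in range(steps):
            x = A @ x
            x = x / np.linalg.norm(x)   # renormalise to avoid overflow
        return x

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    v = power_iteration(A)
    lam = v @ A @ v                      # Rayleigh quotient estimate

    assert np.isclose(lam, 3.0)          # dominant eigenvalue of this A is 3
    assert np.allclose(A @ v, lam * v)   # v is (approximately) an eigenvector
    ```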

    To be able to choose the most

    appropriate algorithm for each

    specific problem, it is important to

    determine both the effectiveness

    and precision of all the available


  • algorithms. The domain studying

    these matters is called numerical

    linear algebra.[44] As with other

    numerical situations, two main

    aspects are the complexity of

    algorithms and their numerical

    stability.

    Determining the complexity of an

    algorithm means finding upper

    bounds or estimates of how many

    elementary operations such as

    additions and multiplications of

    scalars are necessary to perform

    some algorithm, e.g., multiplication

    of matrices. For example,

    calculating the matrix product of

    two n-by-n matrices using the

    definition given above

    needs n^3 multiplications, since for

    any of the n^2 entries of the

    product, n multiplications are

    necessary. The Strassen

    algorithm outperforms this "naive"

    algorithm; it needs

    only n^2.807 multiplications.[45] A refined

    approach also incorporates specific

    features of the computing devices.

    In many practical situations

    additional information about the

    matrices involved is known. An

    important case is that of sparse

    matrices, i.e., matrices most of

    whose entries are zero. There are

    specifically adapted algorithms for,

    say, solving linear

    systems Ax = b for sparse

    matrices A, such as the conjugate

    gradient method.[46]
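    A SciPy sketch of the conjugate gradient method on a sparse system (added illustration; the tridiagonal discrete Laplacian used here is a standard test case, not an example from the article):

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    # A sparse, symmetric positive-definite tridiagonal system.
    n = 50
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = cg(A, b)                 # conjugate gradient iteration
    assert info == 0                   # info == 0 signals convergence
    # The residual is small relative to the right-hand side.
    assert np.linalg.norm(A @ x - b) <= 1e-4 * np.linalg.norm(b)
    ```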

    An algorithm is, roughly speaking,

    numerically stable if small

    deviations in the input values do

    not lead to large deviations in the

    result. For example, calculating the

    inverse of a matrix via Laplace's

    formula (Adj(A) denotes

    the adjugate matrix of A)

    A−1 = Adj(A) / det(A)

    may lead to significant

    rounding errors if the

    determinant of the matrix is

    very small. The norm of a

    matrix can be used to capture

    the conditioning of linear

    algebraic problems, such as

    computing a matrix's inverse.[47]

    Although most computer

    languages are not designed

    with commands or libraries for

    matrices, as early as the

    1970s, some engineering

    desktop computers such as

    the HP 9830 had ROM

    cartridges to add BASIC

    commands for matrices. Some

    computer languages such

    as APL were designed to

    manipulate matrices,

    and various mathematical

    programs can be used to aid

    computing with matrices.[48]

    Decomposition

    Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination and Montante's method

    There are several methods to

    render matrices into a more


  • easily accessible form. They

    are generally referred to

    as matrix

    decomposition or matrix

    factorization techniques. The

    interest of all these techniques

    is that they preserve certain

    properties of the matrices in

    question, such as determinant,

    rank or inverse, so that these

    quantities can be calculated

    after applying the

    transformation, or that certain

    matrix operations are

    algorithmically easier to carry

    out for some types of matrices.

    The LU decomposition factors

    matrices as a product of a lower

    triangular matrix (L) and an upper

    triangular matrix (U).[49] Once this

    decomposition is calculated,

    linear systems can be solved

    more efficiently, by a simple

    technique called forward and

    back substitution. Likewise,

    inverses of triangular matrices

    are algorithmically easier to

    calculate. Gaussian

    elimination is a similar

    algorithm; it transforms any

    matrix to row echelon

    form.[50] Both methods proceed

    by multiplying the matrix by

    suitable elementary matrices,

    which correspond to permuting

    rows or columns and adding

    multiples of one row to another

    row. Singular value

    decomposition expresses any

    matrix A as a product UDV*,

    where U and V are unitary


  • matrices and D is a diagonal

    matrix.
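    The LU factorization and the forward/back substitution it enables can be sketched with SciPy (an added illustration, not from the article):

    ```python
    import numpy as np
    from scipy.linalg import lu, solve_triangular

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    b = np.array([10.0, 12.0])

    # Factor A = P L U with P a permutation, L lower and U upper triangular.
    P, L, U = lu(A)
    assert np.allclose(P @ L @ U, A)
    assert np.allclose(np.tril(L), L)   # L is lower triangular
    assert np.allclose(np.triu(U), U)   # U is upper triangular

    # Solve A x = b by forward substitution (L) and back substitution (U).
    y = solve_triangular(L, P.T @ b, lower=True)
    x = solve_triangular(U, y)
    assert np.allclose(A @ x, b)
    ```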

    An example of a matrix in

    Jordan normal form. The grey

    blocks are called Jordan

    blocks.

    The eigendecomposition or diagonalization

    expresses A as a

    product VDV−1, where D is a

    diagonal matrix and V is a

    suitable invertible

    matrix.[51] If A can be written in

    this form, it is

    called diagonalizable. More

    generally, and applicable to all

    matrices, the Jordan

    decomposition transforms a

    matrix into Jordan normal form,

    that is to say matrices whose

    only nonzero entries are the

    eigenvalues λ1 to λn of A,

    placed on the main diagonal

    and possibly entries equal to

    one directly above the main

    diagonal, as described

    above.[52] Given the

    eigendecomposition,

    the nth power of A (i.e., n-fold

    iterated matrix multiplication)

    can be calculated via


  • A^n = (V D V^-1)^n = V D V^-1 · V D V^-1 · ... · V D V^-1 = V D^n V^-1

    and the power of a

    diagonal matrix can be

    calculated by taking the

    corresponding powers of

    the diagonal entries, which

    is much easier than doing

    the exponentiation

    for A instead. This can be

    used to compute the matrix

    exponential eA, a need

    frequently arising in

    solving linear differential

    equations, matrix

    logarithms and square

    roots of matrices.[53] To

    avoid numerically ill-

    conditioned situations,

    further algorithms such as

    the Schur

    decomposition can be

    employed.[54]
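    Computing a matrix power through the eigendecomposition, as described above, can be sketched in NumPy (added illustration; eigh is used since the example matrix is symmetric):

    ```python
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Eigendecomposition A = V D V^-1 (V is orthogonal here, so V^-1 = V^T).
    w, V = np.linalg.eigh(A)
    assert np.allclose(V @ np.diag(w) @ V.T, A)

    # A^5 via the decomposition: only the diagonal entries are exponentiated.
    A5 = V @ np.diag(w**5) @ V.T
    assert np.allclose(A5, np.linalg.matrix_power(A, 5))
    ```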

    Abstract algebraic aspects and generalizations

    Matrices can be

    generalized in different

    ways. Abstract algebra

    uses matrices with entries

    in more general fields or

    even rings, while linear

    algebra codifies properties

    of matrices in the notion of

    linear maps. It is possible

    to consider matrices with

    infinitely many columns

    and rows. Another


  • extension is tensors,

    which can be seen as

    higher-dimensional arrays

    of numbers, as opposed to

    vectors, which can often

    be realised as sequences

    of numbers, while matrices

    are rectangular or two-dimensional

    arrays of

    numbers.[55] Matrices,

    subject to certain

    requirements, tend to

    form groups known as

    matrix groups.

    Matrices with more general entries

    This article focuses on

    matrices whose entries are

    real or complex

    numbers. However,

    matrices can be

    considered with much

    more general types of

    entries than real or

    complex numbers. As a

    first step of generalization,

    any field, i.e.,

    a set where addition,

    subtraction, multiplication

    and division operations are

    defined and well-behaved,

    may be used instead

    of R or C, for

    example rational

    numbers or finite fields.

    For example, coding

    theory makes use of

    matrices over finite fields.

    Wherever eigenvalues are

    considered, as these are

    roots of a polynomial they


  • may exist only in a larger

    field than that of the entries

    of the matrix; for instance

    they may be complex in

    case of a matrix with real

    entries. The possibility to

    reinterpret the entries of a

    matrix as elements of a

    larger field (e.g., to view a

    real matrix as a complex

    matrix whose entries

    happen to be all real) then

    allows considering each

    square matrix to possess a

    full set of eigenvalues.

    Alternatively one can

    consider only matrices with

    entries in an algebraically

    closed field, such as C,

    from the outset.

    More generally, abstract

    algebra makes great use

    of matrices with entries in

    a ring R.[56] Rings are a

    more general notion than

    fields in that a division

    operation need not exist.

    The very same addition

    and multiplication

    operations of matrices

    extend to this setting, too.

    The set M(n, R) of all

    square n-by-n matrices

    over R is a ring

    called matrix ring,

    isomorphic to

    the endomorphism ring of

    the left R-module Rn.[57] If

    the ring R is commutative,

    i.e., its multiplication is

    commutative, then M(n, R)


  • is a unitary

    noncommutative

    (unless n = 1) associative

    algebra over R.

    The determinant of square

    matrices over a

    commutative ring R can

    still be defined using

    the Leibniz formula; such a

    matrix is invertible if and

    only if its determinant

    is invertible in R,

    generalising the situation

    over a field F, where every

    nonzero element is

    invertible.[58] Matrices

    over superrings are

    called supermatrices.[59]

    Matrices do not always

    have all their entries in the

    same ring or even in any

    ring at all. One special but

    common case is block

    matrices, which may be

    considered as matrices

    whose entries themselves

    are matrices. The entries

    need not be square

    matrices, and thus need

    not be members of any

    ordinary ring; but their

    sizes must fulfil certain

    compatibility conditions.

    Relationship to linear maps

    Linear maps Rn → Rm are

    equivalent to m-by-

    n matrices, as

    described above. More

    generally, any linear


  • map f: V W between

    finite-dimensional vector

    spaces can be described

    by a matrix A = (aij), after

    choosing bases v1,

    ..., vn of V, and w1,

    ..., wm of W (so n is the

    dimension of V and m is

    the dimension of W), which

    is such that

    In other words,

    column j of A expresse

    s the image of vj in

    terms of the basis

    vectors wi of W; thus

    this relation uniquely

    determines the entries

    of the matrix A. Note

    that the matrix

    depends on the choice

    of the bases: different

    choices of bases give

    rise to different,

    but equivalent

    matrices.[60] Many of

    the above concrete

    notions can be

    reinterpreted in this

    light, for example, the

    transpose

    matrix AT describes

    the transpose of the

    linear map given by A,

    with respect to the dual

    bases.[61]

    These properties can

    be restated in a more

    natural way:

    http://en.wikipedia.org/wiki/Hamel_dimensionhttp://en.wikipedia.org/wiki/Vector_spacehttp://en.wikipedia.org/wiki/Vector_spacehttp://en.wikipedia.org/wiki/Basis_(linear_algebra)http://en.wikipedia.org/wiki/Matrix_equivalencehttp://en.wikipedia.org/wiki/Matrix_equivalencehttp://en.wikipedia.org/wiki/Matrix_(mathematics)#cite_note-61http://en.wikipedia.org/wiki/Transpose_of_a_linear_maphttp://en.wikipedia.org/wiki/Transpose_of_a_linear_maphttp://en.wikipedia.org/wiki/Dual_spacehttp://en.wikipedia.org/wiki/Dual_spacehttp://en.wikipedia.org/wiki/Matrix_(mathematics)#cite_note-62

the category of all matrices with entries in a field, with multiplication as composition, is equivalent to the category of finite-dimensional vector spaces and linear maps over this field. More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules Rm and Rn for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of Rn.
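The correspondence above can be made concrete: the j-th column of the matrix holds the coordinates of f(vj) in the chosen basis of W. A short NumPy sketch (the map and the choice of standard bases are made-up examples):

```python
import numpy as np

# A linear map f: R^2 -> R^2, here a rotation by 90 degrees,
# defined only by its action on vectors.
def f(v):
    x, y = v
    return np.array([-y, x])

# Standard bases chosen for both the source and target space.
basis_V = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Column j of A is the image of the j-th basis vector of V,
# expressed in the (standard) basis of W.
A = np.column_stack([f(v) for v in basis_V])
print(A)   # the rotation matrix [[0, -1], [1, 0]]

# Applying the matrix now agrees with applying the map directly.
v = np.array([3.0, 4.0])
assert np.allclose(A @ v, f(v))
```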

    Matrix groups[edit]

Main article: Matrix group

A group is a mathematical structure consisting of a set of objects together with a binary operation, i.e., an operation combining any two objects to a third, subject to certain requirements.[62] A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group.[nb 2][63] Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.

Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (i.e., a smaller group contained in) their general linear group, called a special linear group.[64] Orthogonal matrices, determined by the condition MTM = I, form the orthogonal group.[65] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called the special orthogonal group.

Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group.[66] General groups can be studied using matrix groups, which are comparatively well understood, by means of representation theory.[67]
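The defining condition MTM = I and the determinant-±1 property can be verified numerically; a sketch with a rotation and a reflection, two standard members of the orthogonal group:

```python
import numpy as np

def is_orthogonal(M, tol=1e-12):
    """M is orthogonal when its transpose is its inverse: M^T M = I."""
    n = M.shape[0]
    return np.allclose(M.T @ M, np.eye(n), atol=tol)

theta = 0.3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0, 0.0],
                       [0.0, -1.0]])

for M in (rotation, reflection):
    assert is_orthogonal(M)
    # Every orthogonal matrix has determinant +1 or -1; those with +1
    # form the special orthogonal group (here: the rotation).
    print(np.linalg.det(M))   # close to +1 for the rotation, -1 for the reflection
```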

    Infinite matrices[edit]

It is also possible to consider matrices with infinitely many rows and/or columns,[68] even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however, matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.

If R is any ring with unity, then the ring of endomorphisms of M = ⊕i∈I R as a right R-module is isomorphic to the ring of column-finite matrices whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous object, the row-finite matrices, whose rows each have only finitely many nonzero entries.

If infinite matrices are used to describe linear maps, then only those matrices all of whose columns have but a finite number of nonzero entries can be used, for the following reason. For a matrix A to describe a linear map f: V → W, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector v of coefficients, only finitely many entries vi are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A, however: in the product Av there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. Products of two matrices of the given type are also well defined (provided as usual that the column-index and row-index sets match), are again of the same type, and correspond to the composition of linear maps.
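The column-finiteness argument can be mirrored in code: store only the finitely many nonzero entries of each column, and the product Av is then a finite linear combination of finitely many columns. A dict-based sketch (the index set here is the natural numbers and the particular matrix is made up, purely for illustration):

```python
from collections import defaultdict

# An "infinite" matrix stored column by column; each column holds only
# its finitely many nonzero entries, as a dict {row_index: value}.
# Here column j has entries A[j][j] = 1 and A[j+1][j] = 1.
def column(j):
    return {j: 1.0, j + 1: 1.0}

def apply(v):
    """Compute Av for a finitely supported vector v = {index: value}.

    Only the columns at v's finitely many nonzero indices take part,
    so the result is again finitely supported."""
    result = defaultdict(float)
    for j, vj in v.items():
        for i, aij in column(j).items():
            result[i] += aij * vj
    return dict(result)

v = {0: 2.0, 5: 3.0}          # nonzero only at indices 0 and 5
print(apply(v))               # {0: 2.0, 1: 2.0, 5: 3.0, 6: 3.0}
```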

If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring.


In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,[nb 3] and the abstract and more powerful tools of functional analysis can be used instead.

    Empty matrices[edit]

An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[69][70] Empty matrices help in dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
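NumPy is one system whose behaviour matches these conventions; a quick check (assuming NumPy's handling of zero-sized arrays, which follows the text here):

```python
import numpy as np

A = np.zeros((3, 0))   # a 3-by-0 matrix
B = np.zeros((0, 3))   # a 0-by-3 matrix

AB = A @ B             # the 3-by-3 zero matrix (the null map on R^3)
BA = B @ A             # a 0-by-0 matrix

assert AB.shape == (3, 3) and not AB.any()
assert BA.shape == (0, 0)

# The determinant of the 0-by-0 matrix is the empty product, i.e. 1.
print(np.linalg.det(BA))
```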

    Applications[edit]


There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose.[71] Text mining and automated thesaurus compilation makes use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.[72]

Complex numbers can be represented by particular real 2-by-2 matrices via the correspondence

a + bi ↔ [[a, −b], [b, a]],

under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions[73] and Clifford algebras in general.
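The correspondence a + bi ↔ [[a, −b], [b, a]] can be checked directly: matrix addition and multiplication mirror complex addition and multiplication. A short NumPy sketch:

```python
import numpy as np

def as_matrix(z):
    """Represent the complex number z = a + bi as the real 2-by-2
    matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 1 + 2j, 3 - 1j

# Addition and multiplication of complex numbers and of their
# matrix representatives correspond to each other.
assert np.allclose(as_matrix(z) + as_matrix(w), as_matrix(z + w))
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))

# A unit complex number e^{it} maps to a 2-by-2 rotation matrix.
t = 0.7
print(as_matrix(np.exp(1j * t)))   # [[cos t, -sin t], [sin t, cos t]]
```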

Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break.[74] Computer graphics uses matrices both to represent objects and to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation.[75] Matrices over a polynomial ring are important in the study of control theory.

Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.

    Graph theory[edit]

An undirected graph with adjacency matrix

The adjacency matrix of a finite graph is a basic notion of graph theory.[76] It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning for example "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about distances of the edges.[77] These concepts can be applied to websites connected by hyperlinks, or cities connected by roads etc., in which case (unless the road network is extremely dense) the matrices tend to be sparse, i.e., contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
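Beyond recording edges, powers of the adjacency matrix have a combinatorial meaning: entry (i, j) of A^k counts the walks of length k from vertex i to vertex j. A sketch on a small made-up graph:

```python
import numpy as np

# Adjacency matrix of an undirected triangle graph on vertices 0, 1, 2:
# every pair of distinct vertices is joined by an edge.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# Entry (i, j) of A^k counts walks of length k from i to j.
A2 = np.linalg.matrix_power(A, 2)
print(A2)

# Two walks of length 2 from 0 back to 0: 0-1-0 and 0-2-0.
assert A2[0, 0] == 2
# One walk of length 2 between distinct vertices, e.g. 0-2-1.
assert A2[0, 1] == 1
```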

    Analysis and geometry[edit]

The Hessian matrix of a differentiable function f: Rn → R consists of the second derivatives of f with respect to the several coordinate directions, i.e.,[78]

H(f)i,j = ∂²f / (∂xi ∂xj).

At the saddle point (x = 0, y = 0) (red) of the function f(x, y) = x² − y², the Hessian matrix is indefinite.

It encodes information about the local growth behaviour of the function: given a critical point x = (x1, ..., xn), i.e., a point where the first partial derivatives of f vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).[79]
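For the saddle example f(x, y) = x² − y², the Hessian is constant and its eigenvalues have mixed signs, so the matrix is indefinite and the origin is neither a minimum nor a maximum. A numerical check, using eigenvalue signs as the definiteness test:

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2: the second partials are constant.
H = np.array([[2.0, 0.0],    # d2f/dx2,   d2f/dxdy
              [0.0, -2.0]])  # d2f/dydx,  d2f/dy2

eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)           # one negative and one positive eigenvalue

# Positive definite would mean all eigenvalues > 0 (local minimum);
# mixed signs mean the Hessian is indefinite: a saddle point.
assert eigenvalues.min() < 0 < eigenvalues.max()
```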

Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: Rn → Rm. If f1, ..., fm denote the components of f, then the Jacobi matrix is defined as[80]

J(f)i,j = ∂fi / ∂xj.

If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.[81]
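The Jacobi matrix and its rank can be estimated numerically with finite differences; a sketch for a made-up differentiable map f: R² → R² (the step size and sample point are arbitrary choices):

```python
import numpy as np

def f(x):
    # A differentiable map f: R^2 -> R^2.
    return np.array([x[0] ** 2 - x[1], x[0] * x[1]])

def jacobian(f, x, h=1e-6):
    """Central finite-difference approximation of the Jacobi matrix at x:
    column j holds the partial derivatives with respect to x_j."""
    x = np.asarray(x, dtype=float)
    cols = []
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        cols.append((f(x + e) - f(x - e)) / (2 * h))
    return np.column_stack(cols)

J = jacobian(f, [1.0, 2.0])
print(np.round(J, 3))        # approximately [[2, -1], [2, 1]]

# Full rank here, so f is locally invertible near (1, 2).
assert np.linalg.matrix_rank(J, tol=1e-4) == 2
```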

Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has decisive influence on the set of possible solutions of the equation in question.[82]

The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[83]

    Probability theory and statistics[edit]


Two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state "2". Both limiting values can be determined from the transition matrices, which are given by (red) and (black).

Stochastic matrices are square matrices whose rows are probability vectors, i.e., whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.[84] A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain, like absorbing states, i.e., states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.[85]
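The long-run behaviour of such a chain is governed by the eigenvectors of the transition matrix: a stationary distribution is a probability vector fixed by the chain. A sketch with a small made-up row-stochastic matrix:

```python
import numpy as np

# A row-stochastic transition matrix: entry (i, j) is the probability
# of moving from state i to state j; each row sums to one.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# A stationary distribution pi satisfies pi P = pi, i.e. pi is a left
# eigenvector of P for the eigenvalue 1.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigenvalues - 1.0))
pi = np.real(eigenvectors[:, k])
pi /= pi.sum()                      # normalise into a probability vector
print(pi)                          # approximately [5/6, 1/6]

# Iterating the chain from any start converges to this distribution.
assert np.allclose(np.linalg.matrix_power(P, 50)[0], pi)
```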

Statistics also makes use of matrices in many different forms.[86] Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.[87]

    Anoth

    er

    techni

    que

    using

    matric

    http://en.wikipedia.org/wiki/Data_matrix_(multivariate_statistics)http://en.wikipedia.org/wiki/Data_matrix_(multivariate_statistics)http://en.wikipedia.org/wiki/Data_matrix_(multivariate_statistics)http://en.wikipedia.org/wiki/Data_matrix_(multivariate_statistics)http://en.wikipedia.org/wiki/Dimensionality_reductionhttp://en.wikipedia.org/wiki/Dimensionality_reductionhttp://en.wikipedia.org/wiki/Dimensionality_reductionhttp://en.wikipedia.org/wiki/Dimensionality_reductionhttp://en.wikipedia.org/wiki/Dimensionality_reductionhttp://en.wikipedia.org/wiki/Covariance_matrixhttp://en.wikipedia.org/wiki/Covariance_matrixhttp://en.wikipedia.org/wiki/Covariance_matrixhttp://en.wikipedia.org/wiki/Covariance_matrixhttp://en.wikipedia.org/wiki/Variancehttp://en.wikipedia.org/wiki/Variancehttp://en.wikipedia.org/wiki/Random_variablehttp://en.wikipedia.org/wiki/Random_variablehttp://en.wikipedia.org/wiki/Random_variablehttp://en.wikipedia.org/wiki/Random_variablehttp://en.wikipedia.org/wiki/Matrix_(mathematics)#cite_note-90

  • es

    are lin

    ear

    least

    squar

    es, a

    metho

    d that

    appro

    ximate

    s a

    finite

    set of

    pairs

    (x1, y1)

    ,

    (x2, y2)

    , ...,

    (xN, yN

    ), by a

    linear

    functio

    n

    yi axi + b, i = 1, ..., N

    w

    hi

    ch

    ca

    n

    be

    fo

    r

    m

    ul

    at

    ed

    in

    te

    r

    m

    s

    http://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)http://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)http://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)http://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)http://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)

  • of

    m

    at

    ric

    es

    ,

    rel

    at

    ed

    to

    th

    e

    si

    ng

    ul

    ar

    va

    lu

    e

    de

    co

    m

    po

    sit

    io

    n

    of

    m

    at

    ric

    es

    .[88

    ]
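The matrix formulation of this fit can be sketched in NumPy, whose `lstsq` solver computes the solution via the singular value decomposition mentioned above. The data points below are invented for illustration.

```python
import numpy as np

# Hypothetical data points (xi, yi), invented for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.9, 5.1, 7.0])

# Matrix form of the fit yi ≈ a*xi + b: each row of A is [xi, 1], so
# A @ [a, b] should approximate the vector y.
A = np.column_stack([x, np.ones_like(x)])

# lstsq minimizes ||A @ p - y||^2; NumPy computes the minimizer via the
# singular value decomposition of A.
(a, b), residuals, rank, sing_vals = np.linalg.lstsq(A, y, rcond=None)

print(a, b)
```

Stacking the constant 1 next to each xi is what turns the affine fit yi ≈ axi + b into a single matrix-vector problem.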

Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.[89][90]
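As a small illustrative sketch (not from the article), one can sample such a random matrix and inspect its spectrum; the symmetrized Gaussian matrix below is a standard Wigner-type example, whose rescaled eigenvalues concentrate on the interval [-2, 2] for large n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Matrix with independent standard normal entries.
g = rng.standard_normal((n, n))

# Symmetrize to obtain a Wigner-type random matrix; a real symmetric
# matrix has real eigenvalues.
h = (g + g.T) / np.sqrt(2)

eigvals = np.linalg.eigvalsh(h)

# Rescaled by sqrt(n), the spectrum approximately fills [-2, 2]
# (Wigner's semicircle law).
scaled = eigvals / np.sqrt(n)
print(scaled.min(), scaled.max())
```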

Symmetries and transformations in physics[edit]

Further information: Symmetry in physics

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[91]
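The Pauli matrices are concrete 2-by-2 matrices, so the algebraic relations behind the spin group can be checked directly; the sketch below, using the standard conventions, verifies the su(2) commutation relation.

```python
import numpy as np

# The three Pauli matrices in their standard 2x2 representation,
# used for spin-1/2 particles.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Commutation relation [s1, s2] = 2i * s3: the su(2) algebra that
# underlies the spin group.
comm = s1 @ s2 - s2 @ s1
assert np.allclose(comm, 2j * s3)

# Each Pauli matrix squares to the 2x2 identity.
for s in (s1, s2, s3):
    assert np.allclose(s @ s, np.eye(2))

print("Pauli relations verified")
```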

For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo-Kobayashi-Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses.[92]
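A few of the Gell-Mann matrices can be written down explicitly and their standard normalization checked numerically. The sketch below shows only four of the eight generators, under the usual conventions; the first three simply embed the Pauli matrices in the upper-left 2x2 block.

```python
import numpy as np

# Four of the eight Gell-Mann matrices (standard conventions assumed).
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3)

for a in (l1, l2, l3, l8):
    # Generators of SU(3) are traceless ...
    assert abs(np.trace(a)) < 1e-12
    # ... and satisfy the normalization Tr(la @ lb) = 2 * delta_ab.
    assert abs(np.trace(a @ a) - 2) < 1e-12

# Distinct generators are trace-orthogonal.
assert abs(np.trace(l1 @ l2)) < 1e-12

print("Gell-Mann normalization verified")
```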

Linear combinations of quantum states[edit]

The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.[93] This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.[94]
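The density-matrix description can be illustrated with a minimal two-state (qubit) example, invented here for illustration: a weighted mixture of two pure states yields a matrix with unit trace whose purity Tr(rho^2) is strictly below 1.

```python
import numpy as np

# Two orthonormal "pure" states of a qubit, as column vectors.
up = np.array([[1], [0]], dtype=complex)
down = np.array([[0], [1]], dtype=complex)

# A "mixed" state: a linear combination of the projectors onto the
# pure states, with weights summing to 1.
rho = 0.75 * (up @ up.conj().T) + 0.25 * (down @ down.conj().T)

# Every density matrix has trace 1.
print(np.trace(rho).real)  # 1.0

# Purity Tr(rho^2) equals 1 only for a pure state; here it is
# 0.75**2 + 0.25**2 = 0.625, revealing a genuine mixture.
purity = np.trace(rho @ rho).real
print(purity)  # 0.625
```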

Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination