2. Tensor Algebra & Solution Jan 2013
Transcript of 2. Tensor Algebra & Solution Jan 2013
Department of Systems Engineering, University of Lagos 1
1. For any tensor , show that .
2. Gurtin 2.6.1
3. Show that if the tensor is invertible, for any vector , automatically means that .
4. Show that if the vectors and are independent and is invertible, then the vectors and are also independent.
5. Show that and that .
6. Gurtin 2.8.5
7. Gurtin 2.9.1
8. Gurtin 2.9.2
9. Gurtin 2.9.4
Due January 28, 2013
[email protected] 12/30/2012
Homework 2.1
11. Gurtin 2.11.1 d & e
12. Gurtin 2.11.3
13. Gurtin 2.11.4
14. Gurtin 2.11.5
15. Let be a rotation. For any pair of independent vectors , show that .
16. For a proper orthogonal tensor , show that the eigenvalue equation always yields an eigenvalue of +1.
17. For an arbitrary unit vector and the tensor , where is the skew tensor whose component is , show that .
18. For an arbitrary unit vector and the tensor defined as above, show for an arbitrary vector that has the same magnitude as .
Due Feb 4, 2013
Homework 2.2
1. The easiest proof is to observe that is the identity tensor. Consequently, . Alternatively, a longer route is to obtain the components of this result in the basis. Recall that this is simply the inner product of this with the dual basis:
Consequently, we may express in component form as,
Solutions 2.1
2. The tensor Similarly, and . Clearly,
3. First note that if the tensor is invertible, for any vector ,
automatically means that . The proof is easy: all we need to do is contract with the inverse of the tensor,
as required.
4. Next we can show that if the vectors and are independent and is invertible, then the vectors and are also independent. We provide a reductio ad absurdum proof of this assertion: imagine that our conditions are satisfied, and yet and are not independent. This would mean that there exist scalars and , not all of which are zero, such that,
This means that,
where, in this case, must now necessarily vanish since is invertible. Obviously, this shows that and are not independent. This contradicts our premise! In a similar way, we may prove that the independence of the vectors and implies the independence of the vectors and .
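As a numerical spot-check of this result (a NumPy sketch, not part of the original slides; the particular matrix and vectors are made up for illustration), an invertible tensor maps an independent pair to an independent pair:

```python
import numpy as np

# Hypothetical invertible tensor T and two independent vectors a, b.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.0, 1.0])

assert abs(np.linalg.det(T)) > 1e-12                       # T is invertible
# a, b independent  <=>  the 3x2 matrix [a b] has rank 2
assert np.linalg.matrix_rank(np.column_stack([a, b])) == 2
Ta, Tb = T @ a, T @ b
# Ta, Tb must then also be independent
assert np.linalg.matrix_rank(np.column_stack([Ta, Tb])) == 2
```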
5. We show immediately that by writing one covariantly and the other in contravariant components:
We show that
On account of the symmetry and antisymmetry in and
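The vanishing of such a contraction follows from a standard index argument, which I reconstruct here in LaTeX (the symbols S and A stand for the symmetric and antisymmetric factors that appear, in elided form, above):

```latex
% If S is symmetric (S_{ij} = S_{ji}) and A is antisymmetric (A^{ij} = -A^{ji}),
% relabelling the dummy indices and using both properties gives
S_{ij} A^{ij} = S_{ji} A^{ji} = -S_{ji} A^{ij} = -S_{ij} A^{ij}
\quad\Longrightarrow\quad S_{ij} A^{ij} = 0 .
```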
6. Gurtin 2.8.5 Show that for any two vectors and , the inner product . Hence show that , . Hence,
The rest of the result follows by setting
7. Multiplying the two tensors, we find,
And since this product gives the identity, the two tensors are inverses of each other.
9. But . We may therefore write that,
1. Gurtin 2.13.1
2. If the reciprocal relationship, , is satisfied, what relationship is there between the tensor bases (1) and , and (2) and ?
3. Gurtin 2.14.1
4. Gurtin 2.14.2
5. Gurtin 2.14.3
6. Gurtin 2.14.4
7. Gurtin 2.14.5
8. Gurtin 2.15 1-3a, 3b, 3c
9. Gurtin 2.16 1-8
Due Feb 11, 2013
Homework 2.3
For a given tensor and its transpose , write out expressions for:
1. the symmetric part
2. the skew part
3. the spherical part
4. the deviatoric part.
What is the magnitude of ?
Quiz
Tensor AlgebraTensors as Linear Mappings
No | Topic | From Slide | Date
0 | Home Work & Due Dates & Quiz | 1 |
1 | Definitions, Special Tensors | 7 | 21-Jan
2 | Scalar Functions or Invariants | 17 |
3 | Inner Product, Euclidean Tensors | 26 |
4 | The Tensor Product | 29 |
5 | Tensor Basis & Component Representation | 40 |
6 | The Vector Cross, Axial Vectors | 60 |
7 | The Cofactor | 68 | 28-Jan
8 | Orthogonal Tensors | 88 |
9 | Eigenvalue Problem, Spectral Decomposition & Cayley-Hamilton | 100 | 4-Feb
Jan 21, 28 to Feb 4, 2013
A second-order tensor is a linear mapping from a vector space to itself. Given the mapping,
states that such that,
Every other definition of a second-order tensor can be derived from this simple definition. The tensor character of an object can be established by observing its action on a vector.
Second Order Tensor
The mapping is linear. This means that if we have two runs of the process, we first input and later input . The outcomes and , added, would have been the same as if we had added the inputs and first and supplied the sum of the vectors as input. More compactly, this means,
Linearity
Linearity further means that, for any scalar and tensor
The two properties can be added so that, given , and , then
Since we can think of a tensor as a process that takes an input and produces an output, two tensors are equal only if they produce the same outputs when supplied with the same input. The sum of two tensors is the tensor that will give an output which will be the sum of the outputs of the two tensors when each is given that input.
Linearity
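The two linearity properties described in words above can be summarized by the standard equations (a LaTeX reconstruction of the elided formulas; the symbols T, u, v, α, β are my notation):

```latex
% Additivity and homogeneity of the tensor mapping T:
T(u + v) = Tu + Tv, \qquad T(\alpha u) = \alpha\, Tu,
% or, combining both properties,
\qquad T(\alpha u + \beta v) = \alpha\, Tu + \beta\, Tv .
```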
In general, , and
With the definition above, the set of tensors constitute a vector space with its rules of addition and multiplication by a scalar. It will become obvious later that it also constitutes a Euclidean vector space with its own rule of the inner product.
Vector Space
Notation. It is customary to write the tensor mapping without the parentheses. Hence, we can write,
for the mapping of the tensor on the vector variable, and dispense with the parentheses except when needed.
Special Tensors
The annihilator is defined as the tensor that maps all vectors to the zero vector, :
Zero Tensor or Annihilator
The identity tensor is the tensor that leaves every vector unaltered. ,
Furthermore, , the tensor , is called a spherical tensor. The identity tensor induces the concept of the inverse of a tensor. Given that if and , the mapping produces a vector.
The Identity
Consider a linear mapping that, operating on , produces our original argument, , if we can find it:
As a linear mapping, operating on a vector, clearly, is a tensor. It is called the inverse of because,
So that the composition , the identity mapping. For this reason, we write,
The Inverse
It is easy to show that if , then . HW: Show this. The set of invertible tensors is closed under composition. It is also closed under inversion. It forms a group with the identity tensor as the group’s neutral element.
Inverse
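The group structure can be spot-checked numerically (a NumPy sketch, not from the slides; in particular this illustrates the classic reversal rule for the inverse of a composition, which is presumably the elided HW result):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two generically invertible tensors (diagonal shift keeps them well-conditioned).
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)

lhs = np.linalg.inv(A @ B)                 # inverse of the composition
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # composition of inverses, reversed
assert np.allclose(lhs, rhs)
assert np.allclose(A @ B @ lhs, np.eye(3)) # composition with its inverse is I
```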
Given , the tensor satisfying
is called the transpose of . A tensor indistinguishable from its transpose is said to be symmetric.
Transposition of Tensors
There are certain mappings from the space of tensors to the real space. Such mappings are called Invariants of the Tensor. Three of these, called Principal invariants play key roles in the application of tensors to continuum mechanics. We shall define them shortly. The definition given here is free of any association with a coordinate system. It is a good practice to derive any other definitions from these fundamental ones:
Invariants
If we write
where and are arbitrary vectors. For any second order tensor , and linearly independent and , the linear mapping
is independent of the choice of the basis vectors and . It is called the First Principal Invariant of , or the Trace of .
The Trace
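The basis-free definition elided above is, as I read it, the standard one in terms of the scalar triple product (a LaTeX reconstruction; u, v, w are the arbitrary linearly independent vectors of the text):

```latex
% Coordinate-free trace of T via the scalar triple product [u, v, w]:
\operatorname{tr}\mathbf{T}
= \frac{[\mathbf{T}u,\, v,\, w] + [u,\, \mathbf{T}v,\, w] + [u,\, v,\, \mathbf{T}w]}
       {[u,\, v,\, w]} .
```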
The trace is a linear mapping. It is easily shown that , and
HW: Show this by appealing to the linearity of the vector space. While the trace of a tensor is linear, the other two principal invariants are nonlinear. We now proceed to define them.
The Trace
The second principal invariant is related to the trace. In fact, you may come across books that define it so. However, the most common definition is that
Independently of the trace, we can also define the second principal invariant as,
Square of the trace
The Second Principal Invariant, , using the same notation as above, is
that is, half the square of the trace minus the trace of the square of , which is the second principal invariant. This quantity remains unchanged for any
arbitrary selection of basis vectors and .
Second Principal Invariant
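The two elided characterizations described in words above are, I believe, the standard ones (a LaTeX reconstruction using the same triple-product notation as for the trace):

```latex
% Basis-free second principal invariant, and its trace form:
I_2(\mathbf{T})
= \frac{[\mathbf{T}u,\, \mathbf{T}v,\, w] + [u,\, \mathbf{T}v,\, \mathbf{T}w]
      + [\mathbf{T}u,\, v,\, \mathbf{T}w]}{[u,\, v,\, w]}
= \tfrac{1}{2}\!\left[(\operatorname{tr}\mathbf{T})^2
      - \operatorname{tr}(\mathbf{T}^2)\right].
```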
The third mapping from tensors to the real space underlying the tensor is the determinant of the tensor. While you may be familiar with that operation and can easily extract a determinant from a matrix, it is important to understand the definition for a tensor that is independent of the component expression. The latter remains relevant even when we have not expressed the tensor in terms of its components in a particular coordinate system.
The Determinant
As before, for any second-order tensor and any linearly independent vectors and , the determinant of the tensor is
(in the special case when the basis vectors are orthonormal, the denominator is unity).
The Determinant
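The component-free definition referred to above is, as I understand it, the triple-product ratio (a LaTeX reconstruction consistent with the trace and second-invariant definitions):

```latex
% Coordinate-free determinant of T:
\det\mathbf{T}
= \frac{[\mathbf{T}u,\, \mathbf{T}v,\, \mathbf{T}w]}{[u,\, v,\, w]},
\qquad [u, v, w] \neq 0 .
```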
It is good to note that there are other principal invariants that can be defined. The ones we defined here are the ones you are most likely to find in other texts.
An invariant is a scalar derived from a tensor that remains unchanged in any coordinate system. Mathematically, it is a mapping from the tensor space to the real space. Or simply a scalar valued function of the tensor.
Other Principal Invariants
When the trace of a tensor is zero, the tensor is said to be traceless. A traceless tensor is also called a deviatoric tensor.
Given any tensor , a deviatoric tensor may be created from it by the following process:
So that ; is called the spherical part, and , as defined here, is called the deviatoric part of . Every tensor thus admits the decomposition,
Deviatoric Tensors
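The spherical/deviatoric split can be checked numerically (a NumPy sketch with a made-up tensor; the formulas sph T = (tr T / 3) I and dev T = T − sph T follow the process described above):

```python
import numpy as np

T = np.array([[4.0, 2.0, 0.0],
              [1.0, 5.0, 3.0],
              [0.0, 1.0, 6.0]])

sph = (np.trace(T) / 3.0) * np.eye(3)   # spherical part
dev = T - sph                           # deviatoric part

assert np.isclose(np.trace(dev), 0.0)   # the deviator is traceless
assert np.allclose(sph + dev, T)        # the decomposition recovers T
```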
The trace provides a simple way to define the inner product of two second-order tensors. Given , the trace
is a scalar, independent of the coordinate system chosen to represent the tensors. This is defined as the inner or scalar product of the tensors and . That is,
Inner Product of Tensors
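Assuming the elided definition is the usual one, A : B = tr(A Bᵀ), it coincides with the component-wise sum of products, which we can spot-check (NumPy sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

inner = np.trace(A @ B.T)                # A : B = tr(A B^T)
assert np.isclose(inner, np.sum(A * B))  # equals the sum of A_ij B_ij
```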
The trace automatically induces the concept of the norm of a tensor (note: this is not the determinant!). The square root of the scalar product of a tensor with itself is the norm, magnitude or length of the tensor:
Attributes of a Euclidean Space
Furthermore, both the distance between two tensors and the angle between them are defined. The scalar distance between tensors and is:
And the angle ,
Distance and angles
A product mapping from two vector spaces to is defined as the tensor product. It has the following properties:
It is an ordered pair of vectors. It acts on any other vector by creating a new vector in the direction of its first vector as shown above. This product of two vectors is called a tensor product or a simple dyad.
The Tensor Product
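The defining action of a dyad, (a ⊗ b) v = (b · v) a, can be illustrated numerically (a NumPy sketch with made-up vectors; `np.outer` builds the dyad as a matrix):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 3.0, 1.0])
v = np.array([2.0, 1.0, 1.0])

D = np.outer(a, b)                  # the dyad a (x) b as a matrix
# (a (x) b) v = (b . v) a : the output is along the first factor a
assert np.allclose(D @ v, np.dot(b, v) * a)
```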
It is very easily shown that the transposition of a dyad is simply a reversal of its order (shown below). The tensor product is linear in its two factors. Based on the obvious fact that for any tensor and , it is clear that
Show this neatly by operating either side on a vector. Furthermore, the contraction,
A fact that can be established by operating each side on the same vector.
Dyad Properties
Recall that for , the tensor satisfying
is called the transpose of . Now let be a dyad.
So that , showing that the transpose of a dyad is simply a reversal of its factors.
Transpose of a Dyad
If is the unit normal to a given plane, show that the tensor is such that is the projection of the vector onto the plane in question. Consider the fact that
The above vector equation shows that is what remains after we have subtracted the projection onto the normal. Obviously, this is the projection onto the plane itself. , as we shall see later, is called a tensor projector.
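The projector described here, presumably P = I − n ⊗ n, can be verified numerically (NumPy sketch; the normal and vector are made up):

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])       # unit normal to the plane
P = np.eye(3) - np.outer(n, n)      # the tensor projector I - n (x) n
v = np.array([3.0, -2.0, 5.0])

p = P @ v                           # projection of v onto the plane
assert np.isclose(np.dot(p, n), 0.0)          # p lies in the plane
assert np.allclose(p + np.dot(v, n) * n, v)   # v = p + (v . n) n
```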
Consider a contravariant vector component let us take a product of this with the Kronecker Delta:
which gives us a third-order object. Let us now perform a contraction across (by taking the superscript index from and the subscript from ) to arrive at,
Observe that the only free index remaining is the superscript; the other indices have been contracted out in the implied summation. Let us now expand the RHS above; we find,
Substitution Operation
Note the following cases: if , we have ; if , we have ; if , we have . This
leads us to conclude that the contraction . This indicates that the Kronecker Delta, in a contraction, merely substitutes its own other symbol for the symbol on the vector with which it was contracted. This general fact earned the Kronecker Delta the alias of “Substitution Operator”.
Substitution
Operate on the vector and let . On the LHS,
On the RHS, we have:
Since the contents on both sides of the dot are vectors, and the dot product of vectors is commutative, clearly,
follows from the definition of transposition. Hence,
Composition with Tensors
For We can show that the dyad composition,
Again, the proof is to show that both sides produce the same result when they act on the same vector. Let ; then the LHS on yields:
which is obviously the result from the RHS also. This therefore makes it straightforward to contract dyads by breaking and joining, as seen above.
Dyad on Dyad Composition
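The break-and-join rule for composing dyads, (a ⊗ b)(c ⊗ d) = (b · c) a ⊗ d, checks out numerically (NumPy sketch with random vectors):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = (rng.standard_normal(3) for _ in range(4))

lhs = np.outer(a, b) @ np.outer(c, d)   # (a (x) b)(c (x) d)
rhs = np.dot(b, c) * np.outer(a, d)     # = (b . c) a (x) d
assert np.allclose(lhs, rhs)
```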
Show that the trace of the tensor product is . Given any three independent vectors (no loss of generality in letting the three independent vectors be the curvilinear basis vectors and ), and using the above definition of the trace, we can write that,
Trace of a Dyad
It is easy to show that for a tensor product
HW: Show that this is so. We proved earlier that . Furthermore, if , then,
Other Invariants of a Dyad
Given , for any basis vectors
by the law of tensor mapping. We proceed to find the components of on this same basis. Its covariant components, just as for any other vector, are the scalars,
Specifically, these components are
Tensor Bases & Component Representation
We can dispense with the parentheses and write that
So that the vector
The components can be found by taking the dot product of the above equation with :
Tensor Components
The component is simply the result of the inner product of the tensor with the tensor product . These are the components of on the dual of this particular product base. This is a general result and applies to all product bases. It is straightforward to prove the results in the following table:
Tensor Components
Components of
Derivation Full Representation
Tensor Components
Components of
Derivation Full Representation
Identity Tensor Components
It is easily verified from the definition of the identity tensor and the inner product that: (HW Verify this)
This shows that the Kronecker deltas are the components of the identity tensor in certain (not all) coordinate bases.
The above table shows the interesting relationship between the metric components and the Kronecker deltas.
Obviously, they are the same tensor under different basis vectors.
Kronecker and Metric Tensors
It is easy to show that the above tables of component representations are valid. For any , and ,
Expanding the vector in contravariant components, we have,
Component Representation
The transpose of is . If is symmetric, then,
Clearly, in this case,
It is straightforward to establish the same for contravariant components. This result is impossible to establish for mixed tensor components:
Symmetry
For mixed tensor components,
The transpose,
While symmetry implies that,
We are not able to exploit the dummy indices to bring the two sides to a common product basis. Hence the symmetry is not expressible in terms of these components.
Symmetry
A tensor is antisymmetric if its transpose is its negative. In product bases that are either covariant or contravariant, antisymmetry, like symmetry can be expressed in terms of the components:
The transpose of is . If is antisymmetric, then,
Clearly, in this case,
It is straightforward to establish the same for contravariant components. Antisymmetric tensors are also said to be skew-symmetric.
Antisymmetry
For any tensor , define the symmetric and skew parts , and It is easy to show the following:
We can also write that,
and
Symmetric & Skew Parts of Tensors
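The symmetric/skew decomposition, sym T = (T + Tᵀ)/2 and skw T = (T − Tᵀ)/2, can be verified numerically (NumPy sketch; the orthogonality of the two parts under the trace inner product is the extra check):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3))

sym = 0.5 * (T + T.T)               # symmetric part
skw = 0.5 * (T - T.T)               # skew part

assert np.allclose(sym, sym.T)      # symmetric
assert np.allclose(skw, -skw.T)     # antisymmetric
assert np.allclose(sym + skw, T)    # the decomposition recovers T
assert np.isclose(np.trace(sym @ skw.T), 0.0)  # the parts are orthogonal
```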
Composition of tensors in component form follows the rule of the composition of dyads.
Composition
Addition of two tensors of the same order is the addition of their components, provided they are referred to the same product basis.
Addition
Component Addition
Invoking the definition of the three principal invariants, we now find expressions for these in terms of the components of tensors in various product bases.
First note that for , the triple product,
Recall that
Component Representation of Invariants
The Trace of the Tensor
The Trace
[Proof not very illuminating; ignorable]
Changing the roles of dummy variables, we can write,
Second Invariant
The last equality can be verified in the following way. Contracting the coefficient
with
Second Invariant
Similarly, contracting with , we have,
Hence
which is half the difference between the square of the trace and the trace of the square of the tensor .
The invariant,
From which
Determinant
Given a vector , the tensor
is called a vector cross. The following relation is easily established between the vector cross and its associated vector:
The vector cross is traceless and antisymmetric (HW: Show this). Traceless tensors are also called deviatoric or deviator tensors.
The Vector Cross
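The vector cross can be built and its stated properties checked numerically (NumPy sketch; the sign convention in the component layout is the common one, W v = u × v):

```python
import numpy as np

def vector_cross(u):
    """Skew tensor W with W v = u x v for every v."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

u = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 4.0, -1.0])
W = vector_cross(u)

assert np.allclose(W @ v, np.cross(u, v))   # reproduces the cross product
assert np.allclose(W, -W.T)                 # antisymmetric
assert np.isclose(np.trace(W), 0.0)         # traceless (deviatoric)
```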
For any antisymmetric tensor , the vector such that
which can always be found, is called the axial vector of the skew tensor. It can be proved that
(HW: Prove it by contracting both sides of with while noting that )
Axial Vector
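Going the other way, the axial vector can be read off the components of a skew tensor, and the defining relation W v = w × v checked (NumPy sketch; the component convention used here is an assumption consistent with the vector-cross layout above):

```python
import numpy as np

# A skew tensor W and its axial vector w, so that W v = w x v.
W = np.array([[0.0, -3.0, 2.0],
              [3.0, 0.0, -1.0],
              [-2.0, 1.0, 0.0]])
# Read the axial vector off the components: w = (W[2,1], W[0,2], W[1,0])
w = np.array([W[2, 1], W[0, 2], W[1, 0]])

v = np.array([-1.0, 0.5, 2.0])
assert np.allclose(W @ v, np.cross(w, v))
```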
Gurtin 2.8.5: Show that for any two vectors and , the inner product . Hence show that and . Hence,
The rest of the result follows by setting . HW: Redo this proof using the contravariant alternating tensor components and .
Examples
For vectors and , show that .The tensor Similarly, and . Clearly,
These two quantities turn out to be fundamentally important to any space that either of these two sets of basis vectors can span. They are called the covariant and contravariant metric tensors. They are the quantities that metrize the space, in the sense that any measurement of lengths, angles, areas, etc. depends on them.
Index Raising & Lowering
Now we start with the fact that and are, respectively, the contravariant and covariant components of a vector . We can express the vector with respect to the reciprocal basis as
Consequently,
The effect of contracting with is to raise and substitute its index.
Index Raising & Lowering
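Index raising and lowering with the metric can be spot-checked numerically (a NumPy sketch; the oblique basis is made up for illustration, and the metrics follow g_ij = g_i · g_j with gⁱʲ its inverse):

```python
import numpy as np

# A hypothetical oblique basis g1, g2, g3, stored as the rows of G.
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])
g_cov = G @ G.T                 # covariant metric g_ij = g_i . g_j
g_con = np.linalg.inv(g_cov)    # contravariant metric g^ij

a_con = np.array([2.0, -1.0, 3.0])        # contravariant components a^j
a_cov = g_cov @ a_con                     # lowering: a_i = g_ij a^j
assert np.allclose(g_con @ a_cov, a_con)  # raising recovers a^i
```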
With similar arguments, it is easily demonstrated that,
so that , in a contraction, lowers and substitutes the index. This rule is a general one. These two metric tensors are able to raise or lower indices in tensors of higher orders as well. They are called index-raising and index-lowering operators.
Index Raising & Lowering
Tensor components such as and , related through the index-raising and index-lowering metric tensors as on the previous slide, are called associated vectors. In higher-order quantities, they are associated tensors. Note that associated tensors, so called, are mere components of the same tensor in different bases.
Associated Tensors
We will define the cofactor of a tensor as,
and proceed to show that, for any pair of independent vectors and the cofactor satisfies,
We will further find an invariant component representation for the cofactor tensor. Lastly, in this section, we will find an important relationship between the trace of the cofactor and second invariant of the tensor itself:
Cofactor Definition
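The defining property of the cofactor can be checked numerically for an invertible tensor, using the closed form cof T = det(T) T⁻ᵀ (a NumPy sketch with made-up data):

```python
import numpy as np

T = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 2.0]])
cofT = np.linalg.det(T) * np.linalg.inv(T).T   # cof T = det(T) T^{-T}

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 1.0])
# Defining property: cof T maps u x v to Tu x Tv.
assert np.allclose(cofT @ np.cross(u, v), np.cross(T @ u, T @ v))
```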
First note that if is invertible, the independence of the vectors and implies the independence of vectors and . Consequently we can define the non-vanishing
It follows that must be perpendicular to both and . Therefore,
We can also take a transpose and write,
showing that the vector is perpendicular to both and . It follows that there is a scalar in R such that
Transformed Basis
Therefore, let so that and are linearly independent; then we can take a scalar product of the above equation and obtain,
The LHS is also . In the equation , it is clear that
We therefore have that,
Cofactor Transformation
We therefore have that,
This quantity, is the cofactor of . If we write,
we can see that the cofactor satisfies . We now express the cofactor in its general components.
Cofactor Tensor
The scalar in brackets,
Inserting this above, we therefore have, in invariant component form,
Cofactor Components
For any invertible tensor, show that the trace of the cofactor is the second principal invariant of the original tensor:
Trace of the Cofactor
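This relationship, tr(cof T) = I₂(T), is easy to confirm numerically (NumPy sketch; cof T is computed as det(T) T⁻ᵀ and I₂ from the trace formula):

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3)) + 2 * np.eye(3)   # generically invertible

cofT = np.linalg.det(T) * np.linalg.inv(T).T
I2 = 0.5 * (np.trace(T) ** 2 - np.trace(T @ T))   # second principal invariant
assert np.isclose(np.trace(cofT), I2)
```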
Show that the determinant of a product is the product of the determinants
so that the determinant of in component form is,
If is the inverse of , then becomes the identity tensor. Hence the above also proves that the determinant of an inverse is the inverse of the determinant.
Determinants
For any invertible tensor , we show that the inverse of the tensor is .
Let the scalar . We can see clearly that,
Taking the determinant of this equation, we have,
as the transpose operation has no effect on the value of a determinant. Noting that the determinant of an inverse is the inverse of the determinant, we have,
Show that Ans
Show that Ans.
Cofactor
(d) Show that Ans.
Consequently,
(e) Show that Ans.
So that,
as required.
3. Show that for any invertible tensor and any vector ,
where and are the cofactor and inverse of respectively.
By definition,
We are to prove that,
or that,
On the RHS, the contravariant component of is
which is exactly the same as writing, in the invariant form.
Similarly, so that its transpose . We may therefore write,
We now turn to the LHS;
Now, so that its transpose, so that
Show that . The LHS in invariant component form can be written as:
where so that
Consequently,
On the RHS, . We can therefore write,
which, on closer inspection, is exactly the same as the LHS, so that , as required.
4. Let be skew with axial vector . Given vectors and , show that , and hence conclude that
But by definition, the cofactor must satisfy,
which compared with the previous equation yields the desired result that
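The conclusion here, which I read as cof W = w ⊗ w for a skew tensor W with axial vector w, can be spot-checked against the defining cofactor property (NumPy sketch; W itself is singular, so the det·inverse formula does not apply and the property must be tested directly):

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])          # axial vector
W = np.array([[0.0, -w[2], w[1]],       # the associated skew tensor
              [w[2], 0.0, -w[0]],
              [-w[1], w[0], 0.0]])

u = np.array([3.0, 1.0, 0.0])
v = np.array([0.0, 2.0, 1.0])
# The cofactor property, with cof W taken to be w (x) w:
assert np.allclose(np.outer(w, w) @ np.cross(u, v), np.cross(W @ u, W @ v))
```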
5. Show that the cofactor of a tensor can be written as
even if is not invertible. Here and are the first two invariants of . Ans: The above equation can be written more explicitly as,
In the invariant component form, this is easily seen to be,
But we know that the cofactor can be obtained directly from the equation,
Using the above, Show that the cofactor of a vector cross is
But from previous result,
Show that
In component form,
So that
Clearly,
and
Given a Euclidean Vector Space E, a tensor is said to be orthogonal if,
Specifically, we can allow so that
Or
In which case the mapping leaves the magnitude unaltered.
Orthogonal Tensors
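The defining properties of an orthogonal tensor can be demonstrated numerically (a NumPy sketch; QR factorization of a random matrix is just a convenient way to manufacture an orthogonal Q):

```python
import numpy as np

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal tensor

u = rng.standard_normal(3)
v = rng.standard_normal(3)
assert np.allclose(Q.T @ Q, np.eye(3))                       # Q^T Q = I
assert np.isclose(np.dot(Q @ u, Q @ v), np.dot(u, v))        # inner products preserved
assert np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u))  # magnitudes preserved
```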
Let
By definition of the transpose, we have that,
Clearly, . A condition necessary and sufficient for a tensor to be orthogonal is that it be invertible with its inverse equal to its transpose.
Orthogonal Tensors
Upon noting that the determinant of a product is the product of the determinants and that transposition does not alter a determinant, it is easy to conclude that,
which clearly shows that
When the determinant of an orthogonal tensor is strictly positive (that is, +1), it is called “proper orthogonal”.
Orthogonal
A rotation is a proper orthogonal tensor while a reflection is not.
Rotation & Reflection
Let be a rotation. For any pair of vectors , show that
This question is the same as showing that the cofactor of is itself; that is, a rotation is its own cofactor. We can write that
where
Now, since is a rotation, , and
This implies that and consequently,
Rotation
For a proper orthogonal tensor Q, show that the eigenvalue equation always yields an eigenvalue of +1. This means that there is always a solution for the equation,
For any invertible tensor,
For a proper orthogonal tensor , It therefore follows that,
It is easily shown that (HW: Show this; Romano 26). The characteristic equation for is,
or,
which is obviously satisfied by the eigenvalue +1.
If, for an arbitrary unit vector e, the tensor Q is defined by
Q = cos θ I + (1 − cos θ) e ⊗ e + sin θ W,
where W is the skew tensor whose ij component is ε_ikj e_k (so that Wu = e × u for every vector u), show that Qe = e.
Direct computation gives Qe = cos θ e + (1 − cos θ)(e · e) e + sin θ We. The last term vanishes immediately on account of the fact that (We)_i = ε_ikj e_k e_j contracts the antisymmetric ε_ikj with the symmetric tensor e_k e_j. We therefore have,
Qe = cos θ e + (1 − cos θ)(e · e) e,
which, since e · e = 1 for a unit vector, again means that
Qe = cos θ e + (1 − cos θ) e = e,
as required.
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 103
If, for an arbitrary unit vector e, the tensor Q = cos θ I + (1 − cos θ) e ⊗ e + sin θ W, where W is the skew tensor whose ij component is ε_ikj e_k, show for an arbitrary vector u that Qu has the same magnitude as u.
Given an arbitrary vector u, compute the vector Qu = cos θ u + (1 − cos θ)(e · u) e + sin θ (e × u). Clearly, e × u is orthogonal to both u and e, and |e × u|^2 = |u|^2 − (e · u)^2.
The square of the magnitude of Qu is
|Qu|^2 = cos^2 θ |u|^2 + sin^2 θ (|u|^2 − (e · u)^2) + [(1 − cos θ)^2 + 2 cos θ (1 − cos θ)](e · u)^2
= |u|^2 + [(1 − cos θ)(1 + cos θ) − sin^2 θ](e · u)^2 = |u|^2.
The operation of the tensor Q on e is independent of θ and does not change the magnitude. Furthermore, it is also easy to show that the projection of an arbitrary vector u on e as well as that of its image Qu are the same: e · Qu = e · u. The axis of rotation is therefore in the direction of e.
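These properties of the rotation about e can be verified numerically (a NumPy sketch; the particular unit vector, angle, and test vector are arbitrary choices, not from the slides):

```python
import numpy as np

e = np.array([1.0, 2.0, 2.0]) / 3.0          # a unit vector (|e| = 1)
W = np.array([[0.0, -e[2], e[1]],            # skew tensor: W u = e x u
              [e[2], 0.0, -e[0]],
              [-e[1], e[0], 0.0]])
t = np.radians(40.0)
Q = np.cos(t)*np.eye(3) + (1.0 - np.cos(t))*np.outer(e, e) + np.sin(t)*W

assert np.allclose(W @ e, np.zeros(3))       # W e = 0: the skew term drops
assert np.allclose(Q @ e, e)                 # the axis e is left fixed

u = np.array([0.3, -1.2, 2.5])               # arbitrary test vector
assert np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u))  # |Qu| = |u|
assert np.isclose(e @ (Q @ u), e @ u)        # projection on e preserved
```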
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 104
If, for an arbitrary unit vector e, W is the skew tensor whose ij component is ε_ikj e_k, show that, for an arbitrary angle θ, the tensor Q above can be written as
Q = I + sin θ W + (1 − cos θ) W^2.
It is convenient to write the product in terms of its components. The ij component of W^2 is
(W^2)_ij = W_im W_mj = ε_ikm e_k ε_mlj e_l = (δ_il δ_kj − δ_ij δ_kl) e_k e_l = e_i e_j − δ_ij,
upon using the ε-δ identity and the fact that e is a unit vector, e_k e_k = 1. Hence W^2 = e ⊗ e − I. Consequently, we can write,
Q = cos θ I + (1 − cos θ) e ⊗ e + sin θ W = I + sin θ W + (1 − cos θ) W^2.
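The identity W^2 = e ⊗ e − I and the equivalence of the two forms of Q can be checked numerically (a NumPy sketch with arbitrary e and θ, not part of the original slides):

```python
import numpy as np

e = np.array([2.0, -1.0, 2.0]) / 3.0         # a unit vector
W = np.array([[0.0, -e[2], e[1]],
              [e[2], 0.0, -e[0]],
              [-e[1], e[0], 0.0]])

# W^2 = e (x) e - I, by the epsilon-delta identity
assert np.allclose(W @ W, np.outer(e, e) - np.eye(3))

# hence the two forms of the rotation tensor coincide:
# Q = cos(t) I + (1 - cos(t)) e (x) e + sin(t) W = I + sin(t) W + (1 - cos(t)) W^2
t = np.radians(25.0)
Q1 = np.eye(3) + np.sin(t)*W + (1.0 - np.cos(t))*(W @ W)
Q2 = np.cos(t)*np.eye(3) + (1.0 - np.cos(t))*np.outer(e, e) + np.sin(t)*W
assert np.allclose(Q1, Q2)
```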
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 105
Use the results of 52 and 55 above to show that the tensor Q is periodic with a period of 2π.
From 55 we can write that Q(θ) = I + sin θ W + (1 − cos θ) W^2. But sine and cosine are each periodic with period 2π. We therefore have that,
Q(θ + 2π) = I + sin(θ + 2π) W + (1 − cos(θ + 2π)) W^2 = Q(θ),
which completes the proof. The above results show that Q is a rotation about the unit vector e through an angle θ.
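The periodicity is immediate to check numerically (a NumPy sketch; the axis and angle are arbitrary, not from the slides):

```python
import numpy as np

def rot(t, e):
    """Rotation through angle t about the unit vector e (Rodrigues form)."""
    W = np.array([[0.0, -e[2], e[1]],
                  [e[2], 0.0, -e[0]],
                  [-e[1], e[0], 0.0]])
    return np.cos(t)*np.eye(3) + (1.0 - np.cos(t))*np.outer(e, e) + np.sin(t)*W

e = np.array([0.0, 0.6, 0.8])   # a unit vector
t = 1.234                       # arbitrary angle in radians

# Q is periodic with period 2*pi, exactly as sine and cosine are
assert np.allclose(rot(t, e), rot(t + 2.0*np.pi, e))
```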
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 106
Define D as the set of all tensors with a positive determinant. Show that D is invariant under G, where G is the proper orthogonal group of all rotations, in the sense that QTQ^T ∈ D for any tensor T ∈ D and every Q ∈ G. (G285)
Since we are given that T ∈ D, the determinant of T is positive. Consider S = QTQ^T. We observe the fact that the determinant of a product of tensors is the product of their determinants (proved above). We see clearly that,
det S = det Q det T det Q^T.
Since Q is a rotation, det Q = det Q^T = 1. Consequently we see that,
det S = det T > 0.
Hence the determinant of QTQ^T is also positive and therefore QTQ^T ∈ D.
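A numerical illustration (a NumPy sketch, not from the slides; the positive-determinant tensor is constructed from a random seed, and the rotation angle is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = A @ A.T + np.eye(3)           # construction guarantees det T > 0

t = np.radians(33.0)
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

S = Q @ T @ Q.T
# det S = det Q * det T * det Q^T = det T, so S stays in the set
assert np.isclose(np.linalg.det(S), np.linalg.det(T))
assert np.linalg.det(S) > 0.0
```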
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 107
Define Sym as the set of all symmetric tensors. Show that Sym is invariant under G, where G is the proper orthogonal group of all rotations, in the sense that QTQ^T ∈ Sym for any tensor T ∈ Sym and every Q ∈ G. (G285)
Since we are given that T = T^T, we inspect the tensor QTQ^T. Its transpose is,
(QTQ^T)^T = QT^TQ^T = QTQ^T,
so that QTQ^T is symmetric and therefore QTQ^T ∈ Sym. The set is invariant under the transformation.
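A numerical illustration (a NumPy sketch, not from the slides; the symmetric tensor and rotation angle are arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, -1.0],
              [0.5, -1.0, 4.0]])       # a symmetric tensor
assert np.allclose(A, A.T)

t = np.radians(60.0)
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

B = Q @ A @ Q.T
# (Q A Q^T)^T = Q A^T Q^T = Q A Q^T: symmetry is preserved
assert np.allclose(B, B.T)
```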
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 108
Central to the usefulness of tensors in Continuum Mechanics is the Eigenvalue Problem and its consequences.
• These issues lead to the mathematical representation of such physical properties as principal stresses, principal strains, principal stretches, principal planes, natural frequencies, normal modes, characteristic values, resonance, equivalent stresses, theories of yielding, failure analyses, Von Mises stresses, etc.
• As we can see, these seemingly unrelated issues are all centered around the eigenvalue problem of tensors. Symmetry groups and many other constructs that simplify analyses cannot be understood without a thorough understanding of the eigenvalue problem.
• At this stage of our study of Tensor Algebra, we shall go through a simplified study of the eigenvalue problem. This study will reward any diligent effort; conversely, a superficial understanding of the eigenvalue problem will cost you dearly.
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 109
Recall that a tensor T is a linear transformation on the vector space E: for each u ∈ E, it assigns an image v ∈ E such that,
v = Tu.
Generally, u and its image Tu are independent vectors for an arbitrary tensor T. The eigenvalue problem considers the special case when there is a linear dependence between u and Tu.
The Eigenvalue Problem
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 110
Here the image is a scalar multiple of the original vector: Tu = λu, where λ is a scalar. The vector u, if it can be found, that satisfies this equation is called an eigenvector, while the scalar λ is its corresponding eigenvalue. The eigenvalue problem examines the existence of the eigenvalue and the corresponding eigenvector as well as their consequences.
Eigenvalue Problem
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 111
In order to obtain such solutions, it is useful to write the equation Tu = λu in its component form:
T_ij u_j = λ u_i,
so that,
(T_ij − λ δ_ij) u_j = 0_i,
the zero vector. Each component must vanish identically, so that we obtain three homogeneous linear equations for the components u_j.
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 112
These homogeneous equations can have a nontrivial solution u only if the determinant,
det(T − λI),
vanishes identically. Written out in full, this yields,
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 113
Expanding, we have,
det(T − λI) = −λ^3 + I_1 λ^2 − I_2 λ + I_3 = 0,
or,
λ^3 − I_1 λ^2 + I_2 λ − I_3 = 0.
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 114
This is the characteristic equation for the tensor T. From here we are able, in the best cases, to find the three eigenvalues. Each of these can be substituted back into (T − λI)u = 0 to obtain the corresponding eigenvector.
The above coefficients I_1, I_2 and I_3 are the same principal invariants we have seen earlier!
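The connection between the invariants and the characteristic equation can be checked numerically (a NumPy sketch with an arbitrary symmetric tensor, not from the slides):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# principal invariants of T
I1 = np.trace(T)
I2 = 0.5*(np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# every eigenvalue solves l^3 - I1 l^2 + I2 l - I3 = 0
residuals = [lam**3 - I1*lam**2 + I2*lam - I3 for lam in np.linalg.eigvals(T)]
assert np.allclose(residuals, 0.0)
```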
Principal Invariants Again
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 115
A tensor T is positive definite if, for all u ≠ o,
u · Tu > 0.
It is easy to show that the eigenvalues of a symmetric, positive definite tensor are all greater than zero. (HW: Show this, and its converse: that a symmetric tensor whose eigenvalues are all greater than zero is positive definite. Hint: use the spectral decomposition.)
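A numerical illustration of the equivalence (a NumPy sketch, not from the slides; the positive definite tensor is built from a seeded random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
S = A @ A.T + np.eye(3)        # symmetric and positive definite by construction

# u . S u > 0 for a sample of nonzero vectors ...
for _ in range(100):
    u = rng.standard_normal(3)
    assert u @ S @ u > 0.0

# ... and, equivalently, every eigenvalue is strictly positive
assert np.all(np.linalg.eigvalsh(S) > 0.0)
```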
Positive Definite Tensors
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 116
We now state without proof (see Dill for proof) the important Cayley-Hamilton theorem: every tensor satisfies its own characteristic equation. That is, the characteristic equation not only applies to the eigenvalues but must be satisfied by the tensor itself. This means,
T^3 − I_1 T^2 + I_2 T − I_3 I = 0
is also valid. This fact is used in continuum mechanics to obtain the spectral decomposition of important material and spatial tensors.
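The theorem is easy to verify numerically for any particular tensor (a NumPy sketch with an arbitrary, non-symmetric example, not from the slides):

```python
import numpy as np

T = np.array([[1.0, 2.0, 0.0],
              [0.5, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

I1 = np.trace(T)
I2 = 0.5*(np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# T satisfies its own characteristic equation:
# T^3 - I1 T^2 + I2 T - I3 I = 0
CH = T @ T @ T - I1*(T @ T) + I2*T - I3*np.eye(3)
assert np.allclose(CH, np.zeros((3, 3)))
```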
Cayley-Hamilton Theorem
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 117
It is easy to show that when the tensor S is symmetric, its three eigenvalues are all real. When they are distinct, the corresponding eigenvectors are orthogonal. It is therefore possible to create a basis for the tensor with an orthonormal system based on the normalized eigenvectors. This leads to what is called a spectral decomposition of a symmetric tensor in terms of a coordinate system formed by its eigenvectors:
S = Σ_i λ_i n_i ⊗ n_i,
where n_i is the normalized eigenvector corresponding to the eigenvalue λ_i.
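The decomposition can be reconstructed numerically (a NumPy sketch with an arbitrary symmetric tensor, not from the slides):

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # symmetric: real eigenvalues

lam, n = np.linalg.eigh(S)          # columns of n are orthonormal eigenvectors

# spectral decomposition: S = sum_i lam_i n_i (x) n_i
S_rebuilt = sum(lam[i]*np.outer(n[:, i], n[:, i]) for i in range(3))
assert np.allclose(S, S_rebuilt)
```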
Spectral Decomposition
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 118
The above spectral decomposition is a special case in which the eigenbasis forms an orthonormal basis. Clearly, all symmetric tensors are diagonalizable.
Multiplicity of roots, when it occurs, robs this representation of its uniqueness because two or more coefficients of the eigenbasis are then the same.
The uniqueness is recoverable by the ingenious device of eigenprojection.
Multiplicity of Roots
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 119
Case 1: All roots equal, λ_1 = λ_2 = λ_3 = λ. The three orthonormal eigenvectors in an ONB obviously constitute the identity tensor, n_1 ⊗ n_1 + n_2 ⊗ n_2 + n_3 ⊗ n_3 = I. The unique spectral representation therefore becomes
S = λ I,
since all three eigenvalues coincide in this case.
Eigenprojectors
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 120
Case 2: Two roots equal: λ_1 is unique while λ_2 = λ_3 = λ. In this case,
S = λ_1 n_1 ⊗ n_1 + λ (I − n_1 ⊗ n_1),
since n_2 ⊗ n_2 + n_3 ⊗ n_3 = I − n_1 ⊗ n_1 in this case. The eigenspace of the tensor is made up of the projectors:
P_1 = n_1 ⊗ n_1 and P_2 = I − n_1 ⊗ n_1.
Eigenprojectors
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 121
The eigenprojectors in all cases are based on the normalized eigenvectors of the tensor. They constitute the eigenspace even in the case of repeated roots. They can easily be shown to be:
1. Idempotent: P_i P_i = P_i (no sums)
2. Orthogonal: P_i P_j = 0 for i ≠ j (the annihilator)
3. Complete: Σ_i P_i = I (the identity)
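The three properties can be checked numerically (a NumPy sketch with a symmetric tensor having distinct eigenvalues, not from the slides):

```python
import numpy as np

S = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 5.0]])      # symmetric, with distinct eigenvalues

lam, n = np.linalg.eigh(S)
P = [np.outer(n[:, i], n[:, i]) for i in range(3)]  # eigenprojectors

for i in range(3):
    assert np.allclose(P[i] @ P[i], P[i])           # idempotent (no sums)
    for j in range(3):
        if i != j:
            assert np.allclose(P[i] @ P[j], np.zeros((3, 3)))  # orthogonal
assert np.allclose(sum(P), np.eye(3))               # complete: sum is I
```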
Eigenprojectors
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 122
For symmetric tensors (with real eigenvalues and, consequently, a defined spectral form in all cases), the tensor equivalents of real functions can easily be defined.
Transcendental as well as other functions of tensors are defined by the map f: S ↦ f(S), which maps a symmetric tensor into a symmetric tensor. The latter has the spectral form,
f(S) = Σ_i f(λ_i) n_i ⊗ n_i,
Tensor Functions
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 123
where f(λ_i) is the value of the relevant real function at the ith eigenvalue of the tensor S.
Whenever the tensor is symmetric, any such map, as defined above, yields a tensor function that is defined uniquely through its spectral representation.
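This construction is easy to implement (a NumPy sketch, not from the slides; `tensor_func` is an illustrative helper name, and the square root of a stretch-like tensor is the classic continuum-mechanics use):

```python
import numpy as np

def tensor_func(f, S):
    """f(S) = sum_i f(lam_i) n_i (x) n_i for a symmetric tensor S."""
    lam, n = np.linalg.eigh(S)
    return sum(f(lam[i])*np.outer(n[:, i], n[:, i]) for i in range(len(lam)))

C = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 0.0],
              [0.0, 0.0, 9.0]])      # symmetric positive definite

U = tensor_func(np.sqrt, C)          # tensor square root via the spectral form
assert np.allclose(U @ U, C)         # U^2 recovers C

# exp of the zero tensor is the identity, as for the scalar function
assert np.allclose(tensor_func(np.exp, np.zeros((3, 3))), np.eye(3))
```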
Tensor Functions
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 124
Show that the principal invariants of a tensor T satisfy I_k(QTQ^T) = I_k(T), k = 1, 2, 3; that is, rotations and orthogonal transformations do not change the invariants.
Using the cyclic property of the trace and the product rule for determinants,
tr(QTQ^T) = tr(TQ^TQ) = tr T,
tr[(QTQ^T)^2] = tr(QT^2Q^T) = tr(T^2), and
det(QTQ^T) = det Q det T det Q^T = det T.
Hence I_1, I_2 = ½[(tr T)^2 − tr(T^2)], and I_3 are all unchanged.
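A numerical illustration (a NumPy sketch, not from the slides; the tensor and rotation angle are arbitrary choices, and `invariants` is an illustrative helper):

```python
import numpy as np

def invariants(T):
    """Principal invariants (I1, I2, I3) of a 3x3 tensor."""
    I1 = np.trace(T)
    I2 = 0.5*(np.trace(T)**2 - np.trace(T @ T))
    I3 = np.linalg.det(T)
    return I1, I2, I3

T = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
t = np.radians(50.0)
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# the invariants of Q T Q^T match those of T
assert np.allclose(invariants(Q @ T @ Q.T), invariants(T))
```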
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 125
Show that, for any tensor T, tr(T^2) = I_1^2 − 2I_2 and det T = (1/3)[tr(T^3) − I_1 tr(T^2) + I_2 tr T].
From the definition, I_2 = ½[(tr T)^2 − tr(T^2)],
so that,
tr(T^2) = I_1^2 − 2I_2.
By the Cayley-Hamilton theorem,
T^3 − I_1 T^2 + I_2 T − I_3 I = 0.
Taking the trace of the above equation, we can write that,
tr(T^3) − I_1 tr(T^2) + I_2 tr T − 3I_3 = 0,
so that,
I_3 = det T = (1/3)[tr(T^3) − I_1 tr(T^2) + I_2 tr T],
as required.
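Both trace identities can be confirmed numerically (a NumPy sketch with an arbitrary tensor, not from the slides):

```python
import numpy as np

T = np.array([[2.0, -1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 1.0]])

I1 = np.trace(T)
I2 = 0.5*(I1**2 - np.trace(T @ T))

# tr(T^2) = I1^2 - 2 I2
assert np.isclose(np.trace(T @ T), I1**2 - 2.0*I2)

# trace of Cayley-Hamilton: I3 = (tr T^3 - I1 tr T^2 + I2 tr T)/3 = det T
I3 = (np.trace(T @ T @ T) - I1*np.trace(T @ T) + I2*I1) / 3.0
assert np.isclose(I3, np.linalg.det(T))
```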
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 126
Suppose that C and U are symmetric, positive-definite tensors with C = U^2. Write the invariants of C in terms of those of U.
The first and third invariants follow directly:
I_1(C) = tr(U^2) = I_1^2(U) − 2I_2(U), and I_3(C) = det(U^2) = I_3^2(U).
For the second invariant, I_2(C) = ½{[tr(U^2)]^2 − tr(U^4)}. By the Cayley-Hamilton theorem,
U^3 − I_1 U^2 + I_2 U − I_3 I = 0,
which, contracted with U, gives,
U^4 = I_1 U^3 − I_2 U^2 + I_3 U,
so that,
tr(U^4) = I_1 tr(U^3) − I_2 tr(U^2) + I_3 tr U,
and, from the trace of the Cayley-Hamilton equation itself,
tr(U^3) = I_1 tr(U^2) − I_2 tr U + 3I_3.
[email protected] 12/30/2012
Department of Systems Engineering, University of Lagos 127
But tr U = I_1 and tr(U^2) = I_1^2 − 2I_2, so that tr(U^3) = I_1^3 − 3I_1I_2 + 3I_3 and tr(U^4) = I_1^4 − 4I_1^2 I_2 + 2I_2^2 + 4I_1I_3. Substituting into I_2(C) = ½{[tr(U^2)]^2 − tr(U^4)}, the boxed items cancel out so that,
I_2(C) = I_2^2(U) − 2I_1(U) I_3(U),
as required.
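All three relations can be checked numerically (a NumPy sketch, not from the slides; U is a seeded random symmetric positive-definite tensor and `invariants` is an illustrative helper):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
U = A @ A.T + np.eye(3)             # symmetric positive definite
C = U @ U                           # C = U^2

def invariants(T):
    """Principal invariants (I1, I2, I3) of a 3x3 tensor."""
    return (np.trace(T),
            0.5*(np.trace(T)**2 - np.trace(T @ T)),
            np.linalg.det(T))

I1, I2, I3 = invariants(U)
J1, J2, J3 = invariants(C)

assert np.isclose(J1, I1**2 - 2.0*I2)        # I1(C) = I1^2 - 2 I2
assert np.isclose(J2, I2**2 - 2.0*I1*I3)     # I2(C) = I2^2 - 2 I1 I3
assert np.isclose(J3, I3**2)                 # I3(C) = I3^2
```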
[email protected] 12/30/2012