681 Appendix Math

MecE 681-Elasticity (Dr. C.Q. Ru) : Mathematical Preliminaries

1. Matrices

A matrix can be introduced as the coefficient matrix of a system of linear algebraic equations. Coefficient matrices are denoted by bold capital letters A, B, ...; their components carry two indices i and j (row i, column j): for an m×n matrix, i runs from 1 to m and j runs from 1 to n. When n=1, we have a column. When m=n, we have a square matrix.

Symmetric square matrices : Aij = Aji for all i and j.
Anti-symmetric square matrices : Aij = -Aji for all i and j (so the diagonal is zero).

The transpose of a matrix A, denoted by A^T : (A^T)ij = Aji. Other concepts : zero (null) matrix, diagonal matrix, ...

Example : any square matrix can be written uniquely as the sum of a symmetric matrix and an anti-symmetric matrix:

A = (A + A^T)/2 + (A - A^T)/2.

Uniqueness : let A = S + L with S symmetric and L anti-symmetric; then S - (A + A^T)/2 = (A - A^T)/2 - L, and since the only matrix which is both symmetric and anti-symmetric is the zero matrix, S = (A + A^T)/2 and L = (A - A^T)/2.

Product of two matrices : (AB)ij = Aik Bkj (summation convention on the repeated index k). The product AB is well-defined only if the number of columns of the prefactor matrix A equals the number of rows of the postfactor matrix B. Special cases: the product of a square matrix and a column; the product of two square matrices A and B. Remark : AB is not necessarily equal to BA !

Identity matrix I (the Kronecker delta δij; symmetric and diagonal) : 1) IA = A; 2) AI = A.

Some important basic properties : 1) (AB)C = A(BC); 2) (AB)^T = B^T A^T.

Determinant of a square matrix A : det A. A minor of det A is the determinant of the smaller matrix Mij obtained by deleting the i-th row and the j-th column of A. The cofactor of the element Aij of A is defined by
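The symmetric/anti-symmetric split and the non-commutativity remark can be checked numerically. A minimal NumPy sketch (the matrices are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

S = (A + A.T) / 2          # symmetric part: S == S.T
W = (A - A.T) / 2          # anti-symmetric part: W == -W.T (zero diagonal)

assert np.allclose(S, S.T)
assert np.allclose(W, -W.T)
assert np.allclose(S + W, A)       # the decomposition recovers A

# Matrix products generally do not commute:
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(np.allclose(A @ B, B @ A))   # False for these matrices
```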

A^c_ij = (-1)^(i+j) |Mij|,  i, j = 1, 2, ..., n.

Thus, det A = the sum of all elements on either a row or a column multiplied by their corresponding cofactors. For instance, if det A is expanded by the i-th row, we have (for any given i)

det A = Ai1 A^c_i1 + Ai2 A^c_i2 + ... + Ain A^c_in   (summation over the n entries of row i).
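The cofactor (Laplace) expansion can be sketched directly in code; `minor` and `det_cofactor` below are illustrative helper names, and the matrix is arbitrary:

```python
import numpy as np

def minor(A, i, j):
    """Delete row i and column j of A."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def det_cofactor(A):
    """det A by cofactor expansion along the first row (i = 0)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    # sum of A[0, j] * (-1)^(0+j) * |M_0j| over the first row
    return sum((-1) ** j * A[0, j] * det_cofactor(minor(A, 0, j))
               for j in range(n))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_cofactor(A), np.linalg.det(A))  # both ~8.0
```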


On the other hand, if det A is expanded by the j-th column, we have (for any given j)

det A = A1j A^c_1j + A2j A^c_2j + ... + Anj A^c_nj   (summation over the n entries of column j).

Some properties of determinants : 1) interchanging any two rows (or columns) of a determinant changes only the sign of its value (the absolute value remains unchanged); therefore det A = 0 if any two rows or two columns are identical. 2) det(AB) = det(BA) = det A det B. 3) det A = det(A^T).

Inverse matrix A^-1 of A : A^-1 A = A A^-1 = I. Existence condition : det A is non-zero. Remark : the right inverse is equal to the left inverse : if LA = I and AR = I, then L = R.
Properties : 1) (AB)^-1 = B^-1 A^-1; 2) (A^T)^-1 = (A^-1)^T (for a proof, see I = (A A^-1)^T = (A^-1)^T A^T).

Orthogonal matrix : a matrix A is called orthogonal if its inverse is equal to its transpose : A^-1 = A^T. Remark : any symmetric orthogonal matrix is its own inverse. Two basic properties of orthogonal matrices : 1) det A = +1 or -1 (because A A^T = A A^-1 = I); 2) the product of two orthogonal matrices is orthogonal (proof : AB(AB)^T = A B B^T A^T = I).

2. Coordinate Transformation

Vector : its representation in a rectangular coordinate system is a column. Although a vector (such as a force, or a velocity) as a whole is independent of the coordinate system, its components depend on the chosen system, and change when the coordinate system is changed to another one.

Rotation of rectangular coordinate systems : consider two rectangular right-hand systems O x1 x2 x3 and O x1' x2' x3' with the common origin O, one of which can be obtained from the other through a rotation and/or a reflection. Thus, the unit base vectors of the latter can be expressed by the base vectors of the former system as

e'_i = M_ij e_j,  i.e.  (e'_1, e'_2, e'_3)^T = M (e_1, e_2, e_3)^T,



where all components of the transformation matrix M are direction cosines, and it can be verified that M M^T = M^T M = I, that is, M is an orthogonal matrix. Example : for a reflection about the x2-x3 plane, M = diag(-1, 1, 1), and det M = -1.

Transformation of the components of a vector : the components of any vector v in these two systems are related by

v'_i = M_ij v_j,  i.e.  (v'_1, v'_2, v'_3)^T = M (v_1, v_2, v_3)^T.

Remark : if a right-hand system is changed to another right-hand system, the orthogonal matrix M is called “proper”, and is characterized by det M = +1 (not -1). Note : the inverse transformation is given by

v_i = M_ji v'_j,  i.e.  v = M^T v'.
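These transformation rules can be verified numerically. A sketch with an arbitrary rotation about the x3-axis, plus the reflection example above (NumPy assumed):

```python
import numpy as np

t = np.pi / 6   # arbitrary rotation angle about the x3-axis
M = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])
M_ref = np.diag([-1.0, 1.0, 1.0])        # reflection about the x2-x3 plane

# Both matrices are orthogonal: M M^T = I
for Q in (M, M_ref):
    assert np.allclose(Q @ Q.T, np.eye(3))
print(np.linalg.det(M), np.linalg.det(M_ref))   # +1.0 (proper), -1.0

# Component transformation v' = M v and its inverse v = M^T v'
v = np.array([1.0, 2.0, 3.0])
v_new = M @ v
v_back = M.T @ v_new
assert np.allclose(v_back, v)
# The length of the vector is frame-independent:
print(np.linalg.norm(v), np.linalg.norm(v_new))
```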

3. Cartesian Tensors (in Rectangular Coordinate Systems)

Apart from scalars (such as temperature or density) and vectors (such as velocities or forces), some physical quantities require two vectors for their representation. For instance, the stress state assigns a traction vector to any given directional vector. Such quantities are called (second-order) tensors. A tensor can be represented by a square matrix in any given coordinate system, and tensors are defined through the transformation properties of their components between different coordinate systems.

Second-order tensor as a linear transformation of vectors. Transformation of second-order tensors : let T be defined as a vector-transformation by

u = T v  (in the original system),  u' = T' v'  (in the primed system).

Note that

u' = M u,  v' = M v.


Substituting u' = M u = M T v and u' = T' v' = T' M v, it is found that

T' M v = M T v  for every vector v.

This means that

T' = M T M^T,  T = M^T T' M,

where M is the transformation matrix of the two coordinate systems. This can be understood as the definition of a second-order tensor ! As will be seen below, the stress satisfies this transformation property and is therefore a second-order tensor. Remarks : 1) any symmetric (or anti-symmetric) tensor remains symmetric (or anti-symmetric) under any rotation of the coordinate system; 2) any tensor can be decomposed uniquely into a symmetric part and an anti-symmetric part; 3) isotropic tensors : T = pI (where p is a scalar).

4. Eigenvalues and Eigenvectors

The eigenvalue problem of a square matrix A is defined by

A x = λ x,  i.e.  (A - λI) x = 0.

Our problem is to find the special (real or complex) values of λ for which the above linear algebraic equations have a non-zero solution. According to a basic result of linear algebra, a non-zero solution x exists iff det(A - λI) = 0, which is a polynomial equation of degree n in the unknown λ; it has n roots (called the eigenvalues of A), with n associated non-zero eigenvectors x (each determined to within a scalar multiplier).

Real symmetric matrices :
1) All eigenvalues of a real symmetric matrix must be real (and then the eigenvectors are real too). Proof : from Ax = λx, conjugation and transposition give (conj x)^T A = (conj λ)(conj x)^T. Multiplying the first relation by (conj x)^T and the second by x and subtracting leads to (λ - conj λ)(conj x)^T x = 0; the second factor is positive for non-zero x, hence λ = conj λ, i.e. λ is real.
2) The eigenvectors associated with any two distinct eigenvalues of a symmetric (not necessarily real) matrix are orthogonal (in other words, their scalar product = 0).
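Both properties can be checked numerically; a sketch using an arbitrary real symmetric matrix (`numpy.linalg.eigh` is specialized to symmetric matrices and returns real eigenvalues with orthonormal eigenvectors):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # arbitrary real symmetric matrix

vals, vecs = np.linalg.eigh(A)   # eigenvalues ascending, eigenvectors in columns
print(vals)                      # three real eigenvalues

# Columns of `vecs` are mutually orthogonal unit eigenvectors:
assert np.allclose(vecs.T @ vecs, np.eye(3))
# Each column satisfies A x = lambda x:
assert np.allclose(A @ vecs, vecs @ np.diag(vals))
```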



Proof : let A x1 = λ1 x1 and A x2 = λ2 x2 with λ1 ≠ λ2. Transposing the second relation and multiplying the two relations by x2^T and x1, respectively, the difference leads to (λ1 - λ2) x2^T x1 = 0, hence x2^T x1 = 0.

Diagonalization of a real symmetric matrix A through change of the coordinate system : choose three normalized (unit) eigenvectors of A : 1) x^T x = 1 (not unique, because the direction is not fixed yet !); 2) x(1), x(2) and x(3) can be arranged as a right-hand system. Thus, for a 3×3 real symmetric matrix A, if we define P by P^T = (x(1), x(2), x(3)), then it is easy to verify that P P^T = I, that is, P is a real orthogonal matrix, and further

A P^T = (λ1 x(1), λ2 x(2), λ3 x(3)),  P A P^T = diag(λ1, λ2, λ3).
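A numerical sketch of this diagonalization (the symmetric matrix is arbitrary; rows of P are the normalized eigenvectors):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # arbitrary real symmetric matrix

vals, vecs = np.linalg.eigh(A)   # columns of `vecs` are unit eigenvectors
P = vecs.T                       # so P^T = (x(1), x(2), x(3))

assert np.allclose(P @ P.T, np.eye(3))   # P is orthogonal
print(np.round(P @ A @ P.T, 10))         # diagonal matrix of eigenvalues
```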

In other words, in the new system defined by P, the matrix A has a diagonal form.

Principal values and invariants of a second-order tensor : since a tensor defines a matrix, for a given tensor T a similar eigenvalue problem is defined by T v = λv, where the eigenvalues λ are called the principal values of T, and the associated vectors v are called the principal directions (with undetermined sign and magnitude). It is easily verified that the principal values remain unchanged under any rotation (defined by an orthogonal matrix M) of the coordinate system. For a given coordinate system, the eigen-equation is given by

det(T - λI) = -λ^3 + I1 λ^2 - I2 λ + I3 = 0,

where

I1 = tr T = λ1 + λ2 + λ3,  I2 = (1/2)[(tr T)^2 - tr(T^2)] = λ1λ2 + λ2λ3 + λ3λ1,  I3 = det T = λ1λ2λ3.

Remark : all principal directions of any real symmetric tensor are perpendicular to each other (see the related properties of real symmetric matrices above). Invariants : the coefficients of the above eigen-equation are independent of the coordinate system and are therefore called the invariants of the second-order tensor T. Note : tr T = trace of a square matrix (the sum of all elements on the diagonal). Remark : a basic property of the trace is tr(M T M^T) = tr(T). Proof : since M^T M = I, and

tr(AB) = Aij Bji,  tr(ABC) = Aij Bjk Cki,



then

tr(M T M^T) = Mij Tjk Mik = (M^T M)jk Tjk = δjk Tjk = tr(T),

and tr(T) is an invariant ! Similarly we can prove that tr(T^2) and tr(T^3) are also invariant. Remark : all invariants can also be proved directly from the transformation properties of a second-order tensor. Another set of (three) independent invariants is :

tr T = λ1 + λ2 + λ3,  tr(T^2) = λ1^2 + λ2^2 + λ3^2,  tr(T^3) = λ1^3 + λ2^3 + λ3^3.
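The invariance of these traces (and of det T) under the tensor transformation rule T' = M T M^T can be checked numerically; the rotation angle and tensor below are arbitrary:

```python
import numpy as np

t = 1.1   # arbitrary rotation angle about the x3-axis
M = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])
T = np.array([[5.0, 2.0, 0.0],
              [2.0, 1.0, 1.0],
              [0.0, 1.0, 3.0]])   # arbitrary symmetric tensor

T_new = M @ T @ M.T              # components in the rotated frame
assert np.allclose(T_new, T_new.T)   # symmetry is preserved

# tr T, tr T^2, tr T^3 and det T are frame-independent:
for k in (1, 2, 3):
    assert np.isclose(np.trace(np.linalg.matrix_power(T, k)),
                      np.trace(np.linalg.matrix_power(T_new, k)))
print(np.linalg.det(T), np.linalg.det(T_new))   # equal values
```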

Notes : the three eigenvalues are basic invariants of a real symmetric tensor, in the sense that any other invariant can be expressed as a function of these three invariants.

5. Vector and Tensor Calculus

Gradient (vector) of a 3D scalar field λ(x1, x2, x3), defined by

grad λ = (λ,1 , λ,2 , λ,3),  where λ,i denotes ∂λ/∂xi.

Divergence of a vector field v = (v1, v2, v3) : div v = v1,1 + v2,2 + v3,3.

Curl of a vector field :

           | e1      e2      e3     |
curl v = det | ∂/∂x1   ∂/∂x2   ∂/∂x3  |
           | v1      v2      v3     |

where the determinant is expanded by the first row. Obviously,

div(curl v) = 0,  curl(grad λ) = 0.

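The identity div(curl v) = 0 can be checked numerically with central differences; the smooth field v and the evaluation point x0 below are arbitrary choices:

```python
import numpy as np

h = 1e-4   # finite-difference step

def v(x):
    """Arbitrary smooth vector field."""
    x1, x2, x3 = x
    return np.array([x2 * x3, x1 ** 2, np.sin(x1) + x2 * x3 ** 2])

def curl(f, x):
    J = np.zeros((3, 3))                    # J[i, j] = d f_i / d x_j
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2],
                     J[0, 2] - J[2, 0],
                     J[1, 0] - J[0, 1]])

def div(f, x):
    return sum((f(x + np.eye(3)[j] * h)[j] - f(x - np.eye(3)[j] * h)[j])
               / (2 * h) for j in range(3))

x0 = np.array([0.3, -0.7, 1.2])
print(div(lambda x: curl(v, x), x0))        # ~0 up to discretization error
```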


Gauss’s theorem and Stokes’ theorem

1) Gauss (divergence) theorem (for any scalar field λ and vector field v) :

∫_V λ,i dV = ∮_S λ ni dS,  ∫_V div v dV = ∮_S v · dS,

where V and S are any volume and its (closed) surface, and dS = n dS where n is the outward normal vector of S. In particular, for a tensor Tjk, we have

∫_V Tjk,k dV = ∮_S Tjk nk dS.

2) Stokes’ theorem (for any vector field v) :

∫_S (curl v) · dS = ∮_C v · t dl,

where C is a closed curve and S an open surface (a “cap”) bounded by the curve C; the tangent t of C is directed so that (n, t) forms a right-hand system. Note : for two different surfaces S bounded by the same curve C, the surface integral on the LHS is the same, because div(curl v) = 0 for any vector field (namely, curl gives a divergence-free vector field).

Remark : for a 2D vector field v = P i + Q j + 0 k, the 2D Green theorem for any area S bounded by a curve C is given by

∫∫_S (∂Q/∂x - ∂P/∂y) dx dy = ∮_C (P dx + Q dy).
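Green's theorem can be verified numerically on a simple case; a sketch on the unit disk with P = -y, Q = x, for which both sides equal 2 × (disk area) = 2π:

```python
import numpy as np

n = 20000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y = np.cos(theta), np.sin(theta)          # the unit circle C
dx, dy = -np.sin(theta), np.cos(theta)       # d(x, y)/d(theta)

# Line integral  ∮_C (P dx + Q dy)  with P = -y, Q = x:
line = np.sum(-y * dx + x * dy) * (2.0 * np.pi / n)

# Area integral of (Q_x - P_y) = 2 over the unit disk:
area = 2.0 * np.pi

print(line, area)   # both ~6.2832
```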

6. Complex Variable Functions

Analytic function of a complex variable z : a complex-valued function Ω(z) of the complex variable z = x + iy = re^(iθ) (r ≥ 0) is analytic around a point z if the following limit exists (independently of the direction of Δz) around that point :

Ω'(z) = lim_{|Δz|→0} [Ω(z + Δz) - Ω(z)] / Δz.



The limit value is called the derivative of Ω(z). The condition for the existence of the derivative of Ω(z) is given by the Cauchy-Riemann conditions : writing

Ω(z) = φ(x, y) + i ψ(x, y),  φ = Re[Ω],  ψ = Im[Ω],

the derivative exists iff

φ,x = ψ,y,  φ,y = -ψ,x.
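The Cauchy-Riemann conditions can be checked numerically for a concrete analytic function; a sketch for f(z) = z², whose parts are φ = x² - y² and ψ = 2xy (the sample point is arbitrary):

```python
import numpy as np

h = 1e-6
x0, y0 = 0.8, -0.4                      # arbitrary point

phi = lambda x, y: x * x - y * y        # Re f
psi = lambda x, y: 2.0 * x * y          # Im f

# Central-difference partial derivatives:
phi_x = (phi(x0 + h, y0) - phi(x0 - h, y0)) / (2 * h)
phi_y = (phi(x0, y0 + h) - phi(x0, y0 - h)) / (2 * h)
psi_x = (psi(x0 + h, y0) - psi(x0 - h, y0)) / (2 * h)
psi_y = (psi(x0, y0 + h) - psi(x0, y0 - h)) / (2 * h)

print(phi_x - psi_y, phi_y + psi_x)     # both ~0
```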

Remark : For a given harmonic function, the Cauchy-Riemann equations can be used to construct its conjugate harmonic function and the corresponding analytic function. Remark : The existence of the complex derivative, defined by the above Cauchy-Riemann equations, has many very strong consequences, such as : 1) the real and imaginary parts, φ(x,y) and ψ(x,y), of Ω(z) must be harmonic functions; 2) if the derivative exists, then the derivative itself is also analytic; 3) analytic continuation : if a complex function is analytic on both sides of a smooth curve, respectively, and continuous at every point of the curve, then this function can be analytically continued across the curve and becomes analytic around the curve, including every point of the curve. Furthermore, if Ω(z) is an analytic function in a domain S+, then for some special domains S+ we can define a new function that is analytic in the image domain of S+. For example :

1) If Ω(z) is an analytic function in the upper half-plane S+, the new function defined by

Ω̄(z) = conj[Ω(conj z)],  z in S-,

is well-defined and analytic in the lower half-plane S-.

2) If Ω(z) is an analytic function in a circular domain S+ centered at the origin and bounded by a radius R, the new function given by

Ω̄(R²/z) = conj[Ω(R²/conj z)],  |z| ≥ R,

is well-defined and analytic in the exterior S- of the circular domain S+.

Laurent series : if Ω(z) is analytic in an annulus centered at z0 and bounded by radii R1 and R2, then it can be expressed as a unique convergent series (Laurent series) in that annulus :

Ω(z) = Σ_{k=-∞}^{+∞} a_k (z - z0)^k,  R1 < |z - z0| < R2.



From this property, we have Liouville’s theorem : any function which is analytic in the entire plane, including the origin and the point at infinity, must be identically a constant.

Elementary complex functions (log = ln denotes the natural logarithm) :

exp z = e^(x+iy) = e^x (cos y + i sin y);
cos z = (e^(iz) + e^(-iz))/2,  sin z = (e^(iz) - e^(-iz))/(2i);
log z = log r + iθ  (z = re^(iθ));  z^c = exp[c log z]  (c a complex constant).

Multiple-valued functions have important applications in 2D elasticity.

1) log[z] is a basic multiple-valued function which may take different values at the same point z = re^(iθ) of the z-plane because of the different possible values of the angle θ. A single-valued branch can be obtained by, say, cutting along the positive or the negative real axis and defining

log z = log r + iθ,  0 ≤ θ < 2π  (cut along the positive real axis);
log z = log r + iθ,  -π < θ ≤ π  (cut along the negative real axis);

in either case the jump across the cut is [log z] = 2πi.

Obviously, these single-valued branches are analytic in the cut plane (except at the origin and at infinity), but carry a straight line of singularity across which they suffer a constant discontinuity.

2) Another basic multiple-valued function is defined around a point [y = 0, x = c] of the real axis, in the z-plane cut along the real axis to the left of c (the negative real axis of z - c). Note that, although this function suffers a discontinuity along the cut line, the sum of its upper and lower limits along the cut line is zero !

√(z - c) = √|z - c| exp[(i/2) Arg(z - c)],  -π < Arg(z - c) ≤ π.

3) Furthermore, for the following function (on which Westergaard’s functions are based) :

√(z² - a²) = √(z - a) √(z + a),



we have

√(z² - a²) = √|z - a| √|z + a| exp[(i/2)(Arg(z - a) + Arg(z + a))],  -π < Arg(z ∓ a) ≤ π.

On the real axis (y = 0±), the phase factor exp[(i/2)(Arg(z - a) + Arg(z + a))] takes the values

1   for x > a;   ±i   for -a < x < a  (upper/lower side of the cut);   -1   for x < -a.

In summary, √[z² - a²] has the following basic properties : a) it is continuous across the real axis except on the cut line [-a, a]; b) the sum of its upper limit and lower limit on the cut line [-a, a] is zero.
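Both branch-cut properties can be checked numerically; a sketch (a = 1 and the sample points are arbitrary) that builds the branch from principal-value arguments exactly as above:

```python
import numpy as np

a = 1.0
eps = 1e-9   # small offset above/below the real axis

def f(z):
    """The branch sqrt(z^2 - a^2) built from principal arguments."""
    return (np.sqrt(np.abs(z - a)) * np.sqrt(np.abs(z + a))
            * np.exp(0.5j * (np.angle(z - a) + np.angle(z + a))))

# b) on the cut (-a, a), upper and lower limits are opposite:
x = 0.3
upper, lower = f(x + 1j * eps), f(x - 1j * eps)
print(upper + lower)                             # ~0

# a) outside [-a, a] the function is continuous across the real axis:
x_out = 2.0
print(f(x_out + 1j * eps) - f(x_out - 1j * eps))  # ~0
```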

Cauchy-Goursat theorem : if f(z) is analytic inside and on a closed curve C, then

∮_C f(z) dz = 0.

Remark : This theorem ensures that, for any analytic function f(z), the variable-upper-limit integral

F(z) = ∫_{z0}^{z} f(t) dt

defines a new analytic function F(z) such that F'(z) = f(z).



Remark : A basic result is that, for any circle C of radius r centered at a point z0 (such as the origin),

∮_C (z - z0)^n dz = ∫_0^{2π} r^n e^(inθ) · i r e^(iθ) dθ = 2πi  if n = -1,  and 0 for any other integer n.
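A numerical sketch of this result (circle center, radius, and the sampled exponents are arbitrary):

```python
import numpy as np

z0, r, m = 0.5 + 0.2j, 2.0, 4096
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
z = z0 + r * np.exp(1j * theta)
dz = 1j * r * np.exp(1j * theta) * (2.0 * np.pi / m)   # dz = i r e^{i t} dt

def loop_integral(n):
    """Discretized ∮_C (z - z0)^n dz over the circle C."""
    return np.sum((z - z0) ** n * dz)

for n in (-2, -1, 0, 3):
    print(n, loop_integral(n))   # ~2*pi*i only for n = -1, else ~0
```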

Cauchy integral along a closed curve C : based on this result and the Cauchy-Goursat theorem, it is easy to prove that, for a finite domain S+ enclosed by a curve C (traversed anticlockwise), if Ω(z) is an analytic function in S+ and continuous on C, we have (S- is the exterior of the curve C)

(1/2πi) ∮_C Ω(t)/(t - z) dt = Ω(z)  for z in S+;  = 0  for z in S-.

Actually, the first relation is the necessary and sufficient condition for a continuous function Ω(t) defined on C to be the boundary value of a function Ω(z) which is analytic in S+. Similarly, if Ω(z) is an analytic function in S- (including infinity) and continuous on C, we have

(1/2πi) ∮_C Ω(t)/(t - z) dt = -Ω(z) + Ω(∞)  for z in S-;  = Ω(∞)  for z in S+.

Actually, the second relation is the necessary and sufficient condition for a continuous function Ω(t) defined on C to be the boundary value of a function Ω(z) which is analytic in S-.

Cauchy integral defined on an open arc L : a very useful analytic function of the point z, defined by a function f(t) given on L (of given direction), as follows :

F(z) = (1/2πi) ∫_L f(t)/(t - z) dt,

where t represents a point on L. The Cauchy integral is well defined for any function f(t) meeting moderate continuity conditions (such as Hölder continuity) on L, and is analytic for any z which is not on L. In particular, F(z) exhibits a logarithmic singularity around the ends of L. One basic property of the Cauchy integral concerns its limit values (“+” from the left side, and “-” from the right side) as z approaches any point t0 on L (not its ends), given by the well-known Plemelj formulas (the integral on the RHS is the so-called “principal value” integral) :

F+(t0) - F-(t0) = f(t0),  F+(t0) + F-(t0) = (1/πi) ∫_L f(t)/(t - t0) dt.



Riemann-Hilbert problem on an open arc L : for a given function f(t) defined on L and a constant κ, find a function F(z) that is analytic in the entire z-plane except on L and has a finite degree at infinity, to meet the condition on L :

F+(t) + κ F-(t) = f(t),  t on L.

Homogeneous solution X(z) : the homogeneous equation can be re-written as

X+(t) + κ X-(t) = 0,  i.e.  log X+(t) - log X-(t) = log(-κ),  t on L.

Let the end points of L be a and b; the solution is given by the Plemelj function X(z) :

X(z) = (z - a)^(-γ) (z - b)^(γ-1),  where γ is fixed by e^(2πiγ) = -κ, i.e. γ = (1/2πi) log(-κ)

(for κ = 1, γ = 1/2 and X(z) = [(z - a)(z - b)]^(-1/2)), and the general homogeneous solution is F(z) = P(z) X(z) with P(z) a polynomial.

In particular, P(z) is a constant if F(z) is required to be zero at infinity. The non-homogeneous equation can be solved, on using the Plemelj function X(z), by noting that F+/X+ - F-/X- = f/X+ on L, so that

F(z) = [X(z)/2πi] ∫_L f(t) / [X+(t)(t - z)] dt + P(z) X(z).

Conformal mapping techniques offer a powerful method to solve 2D plane elasticity problems with non-circular boundaries. For example, for an ellipse centered at the origin with semi-axes a and b < a, the foci lie at ±d with d = √(a² - b²), and the following mapping function

z = ω(ξ) = (d/2) [R ξ + 1/(R ξ)],  R = √[(a + b)/(a - b)],

maps the exterior of the ellipse in the z-plane to the exterior of the unit circle in the ξ-plane. END.
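As a closing illustration, this mapping can be checked numerically: every image of the unit circle |ξ| = 1 should satisfy the ellipse equation (x/a)² + (y/b)² = 1 (the semi-axes a, b below are arbitrary):

```python
import numpy as np

a, b = 2.0, 1.0
d = np.sqrt(a * a - b * b)            # focal half-distance
R = np.sqrt((a + b) / (a - b))

theta = np.linspace(0.0, 2.0 * np.pi, 1000)
xi = np.exp(1j * theta)               # the unit circle |xi| = 1
z = (d / 2.0) * (R * xi + 1.0 / (R * xi))

# Maximum deviation from the ellipse equation over all image points:
err = np.max(np.abs((z.real / a) ** 2 + (z.imag / b) ** 2 - 1.0))
print(err)   # ~0
```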
