Linear Algebra and Matrices Methods for Dummies FIL November 2011


Page 1: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Linear Algebra and Matrices

Methods for Dummies, FIL

November 2011

Narges Bazargani and Sarah Jensen

Page 2: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

ONLINE SOURCES

Web Guides

– http://mathworld.wolfram.com/LinearAlgebra.html

– http://www.maths.surrey.ac.uk/explore/emmaspages/option1.html

– http://www.inf.ed.ac.uk/teaching/courses/fmcs1/

Online introduction:
– http://www.khanacademy.org/video/introduction-to-matrices?playlist=Linear+Algebra

Page 3: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

MATLAB = MATrix LABoratory

Typical uses include:
• Math and computation
• Algorithm development
• Modelling, simulation, and prototyping
• Data analysis, exploration, and visualization
• Scientific and engineering graphics
• Application development, including Graphical User Interface building

What is MATLAB? And why learn about matrices?

Page 4: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Everything in MATLAB is a matrix!

Zero-dimensional matrix: a scalar - a single number, e.g. 4, is really a 1 x 1 matrix in MATLAB!

One-dimensional matrix: a vector is a 1 x n matrix with 1 row, e.g. [1 2 3].

A matrix is an m x n array of numbers, e.g. [2 7 4; 3 8 9] has m = 2 rows and n = 3 columns.

Even a picture is a matrix!

Page 5: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Building matrices in MATLAB with [ ]:

A = [2 7 4]          % a 1 x 3 row vector
A = [2; 7; 4]        % a 3 x 1 column vector
A = [2 7 4; 3 8 9]   % a 2 x 3 matrix

; separates the different rows
, (or a space) separates the columns within a row
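A quick MATLAB check of these shapes (this sketch simply re-uses the example values above; size reports the number of rows and columns):

A = [2 7 4];          % 1 x 3 row vector
B = [2; 7; 4];        % 3 x 1 column vector
C = [2 7 4; 3 8 9];   % 2 x 3 matrix
size(C)               % returns [2 3], i.e. m = 2 rows, n = 3 columns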

Page 6: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Subscripting – each element of a matrix can be addressed with a pair of numbers; row first, column second.

For X = [1 2 3; 4 5 6; 7 8 9]:

X(2,3) = 6
X(3,:) = (7 8 9)
X([2 3], 2) = (5; 8)

Matrix formation and submatrices in MATLAB
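For example, in MATLAB (the last line, showing a larger submatrix, is our own addition):

X = [1 2 3; 4 5 6; 7 8 9];
X(2,3)          % single element: 6
X(3,:)          % the whole third row: 7 8 9
X([2 3], 2)     % rows 2 and 3 of column 2: [5; 8]
X(2:end, 1:2)   % a submatrix: rows 2 to 3, columns 1 and 2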

Page 7: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Addition and subtraction are carried out element by element.

NB: only matrices of the same size can be added and subtracted.

Matrix addition and subtraction
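In MATLAB these are simply + and - applied to same-sized matrices (the matrices below are illustrative):

A = [1 2; 3 4];
B = [5 6; 7 8];
A + B   % element-by-element sums: [6 8; 10 12]
A - B   % element-by-element differences: [-4 -4; -4 -4]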

Page 8: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Scalar multiplication: each element of the matrix is multiplied by the scalar.

MATLAB does all this for you: 3 * A

Matrix Multiplication I

Different kinds of multiplication in MATLAB
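For example (illustrative values):

A = [1 2; 3 4];
3 * A   % multiplies every element by 3: [3 6; 9 12]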

Page 9: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Matrix multiplication II: each entry of the product is the sum of products of the corresponding row of the first matrix and column of the second.

MATLAB does all this for you: C = A * B

Matrix multiplication rule: for A of size m x n and B of size k x l, the product A x B is only viable if n = k.

For example, a 4 x 3 matrix (a_ij) times a 3 x 2 matrix (b_ij) is defined (n = k = 3) and gives a 4 x 2 result; reversing the order, a 3 x 2 matrix times a 4 x 3 matrix is not defined (2 ≠ 4).
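A small MATLAB illustration of the dimension rule (the random matrices are just placeholders):

A = rand(4,3);   % m x n = 4 x 3
B = rand(3,2);   % k x l = 3 x 2
C = A * B;       % defined because n = k = 3
size(C)          % returns [4 2]
% B * A would raise an error: the inner dimensions (2 and 4) do not agree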

Page 10: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Elementwise multiplication

MATLAB does all this for you: A .* B

Elementwise multiplication rule: the matrices need exactly the same m and n. For example, a 4 x 3 matrix cannot be elementwise-multiplied by a 3 x 2 matrix, but two matrices of the same size can.
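For example (two illustrative matrices of the same size):

A = [1 2; 3 4];
B = [10 20; 30 40];
A .* B   % element-by-element products: [10 40; 90 160]
% A .* rand(3,2) would fail, because the sizes do not match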

Page 11: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Columns → rows, rows → columns.

In MATLAB: A^T = A'

Transposition – reorganising matrices
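For example:

A = [2 7 4; 3 8 9];   % a 2 x 3 matrix
A'                    % its transpose, a 3 x 2 matrix: [2 3; 7 8; 4 9]
% (for real-valued matrices A' and A.' give the same result)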

Page 12: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Identity matrices

A tool for solving equations: the identity matrix plays a similar role to the number 1 in ordinary multiplication.

Worked example, A In = A, for a 3 x 3 matrix:

( 1 2 3 )   ( 1 0 0 )   ( 1+0+0  0+2+0  0+0+3 )   ( 1 2 3 )
( 4 5 6 ) x ( 0 1 0 ) = ( 4+0+0  0+5+0  0+0+6 ) = ( 4 5 6 )
( 7 8 9 )   ( 0 0 1 )   ( 7+0+0  0+8+0  0+0+9 )   ( 7 8 9 )

In MATLAB: eye(r, c) produces an r x c identity matrix (ones on the main diagonal, zeros elsewhere).
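Reproducing the worked example in MATLAB:

A = [1 2 3; 4 5 6; 7 8 9];
I = eye(3);   % 3 x 3 identity matrix
A * I         % returns A unchanged
eye(2,3)      % a 2 x 3 matrix with ones on the main diagonal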

Page 13: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Definition: matrix A is invertible if there exists a matrix B such that AB = BA = I (the identity matrix).

Notation for the inverse of a matrix A is A^-1.

If A is invertible, A^-1 is also invertible, and A is the inverse matrix of A^-1.

Worked example:

(  1  1 )   ( 2/3  -1/3 )   (  2/3 + 1/3    -1/3 + 1/3 )   ( 1  0 )
( -1  2 ) x ( 1/3   1/3 ) = ( -2/3 + 2/3     1/3 + 2/3 ) = ( 0  1 )

In MATLAB: A^-1 = inv(A)

Inverse matrices
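Checking the worked example in MATLAB:

A = [1 1; -1 2];
B = inv(A)   % returns [2/3 -1/3; 1/3 1/3], as in the worked example
A * B        % the 2 x 2 identity matrix (up to rounding error)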

Page 14: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Determinants

The determinant is a function that assigns a scalar value to a square matrix.

A matrix A has an inverse matrix (A^-1) if and only if det(A) ≠ 0.

In MATLAB: det(A)
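For example, for the matrix from the previous slide:

A = [1 1; -1 2];
det(A)   % (1 x 2) - (1 x -1) = 3, which is non-zero, so A is invertible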

Page 15: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

We can use the solution of the single equation ax = b (namely x = a^-1 b) to solve systems with more than 1 equation and more than 1 unknown.

For example:

2x1 + 3x2 = 5
 x1 - 2x2 = 1

In matrix form, AX = B:

( 2  3 ) ( x1 )   ( 5 )
( 1 -2 ) ( x2 ) = ( 1 )

With more than 1 equation and more than 1 unknown

Page 16: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

For a 2 x 2 matrix A = (a b; c d), the determinant is det(A) = ad - bc, and the inverse is A^-1 = 1/(ad - bc) (d -b; -c a).

Need to find the determinant of matrix A (because X = A^-1 B).

From earlier, A = (2 3; 1 -2), so det(A) = (2 x -2) - (3 x 1) = -4 - 3 = -7. So the determinant is -7.

To find A^-1:

A^-1 = -(1/7) (-2 -3; -1 2) = (2/7 3/7; 1/7 -2/7)

So X = A^-1 B gives the solution of the system.
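Using the example system as reconstructed above, the same steps can be checked in MATLAB (the backslash operator is the standard way to solve such systems):

A = [2 3; 1 -2];
B = [5; 1];
det(A)          % -7, so A is invertible
X = inv(A) * B  % solves AX = B
X = A \ B       % same solution, computed more efficiently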

Page 17: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Scalars, vectors and matrices in SPM

Scalar: a variable described by a single number, e.g. the intensity of a voxel in an MRI scan.

Vector: in physics a vector is a variable described by magnitude and direction; here we mean a column of numbers, e.g. the intensity of one voxel at different times, or of different voxels at the same time:

(x1; x2; ...; xn)

Matrix: a rectangular array of vectors, defined by its number of rows and columns:

(x11 x12 ... x1n; ... ; xn1 ... xnn)

Page 18: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Vector space and matrix rank

Vector space: a space that contains a set of vectors and all those that can be obtained by multiplying the vectors by real numbers and adding them (linear combinations). In other words, because each column of a matrix can be represented by a vector, the ensemble of its n column vectors defines a vector space for the matrix.

Rank of a matrix: the number of vectors (rows or columns) that are linearly independent of each other. So, if there is a linear relationship between the rows or columns of a matrix, the matrix is rank-deficient (and its determinant is zero). For example, in the figure below there is a linear relationship between x1 and x2, the determinant is zero, and the vector space defined has only 1 dimension.

[Figure: the vectors x1 and x2 plotted in the (x, y) plane; x2 is a scalar multiple of x1, so together they span only one dimension.]
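For example, a matrix whose second column is twice its first (our own illustrative choice):

A = [1 2; 2 4];   % column 2 = 2 * column 1
rank(A)           % 1: only one linearly independent column
det(A)            % 0: the matrix is rank-deficient and has no inverse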

Page 19: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Eigenvalues and eigenvectors

Eigenvalues are multipliers: numbers that represent how much stretching has taken place under a linear transformation. An eigenvalue of a square matrix is a scalar, usually denoted by the Greek letter λ (lambda).

Eigenvectors of a square matrix are non-zero vectors that, after being multiplied by the matrix, remain parallel to the original vector. For each eigenvector, the corresponding eigenvalue is the factor by which the eigenvector is scaled when multiplied by the matrix.

All eigenvalues and eigenvectors satisfy the equation Ax = λx for a given square matrix A, i.e. matrix A acts by stretching the vector x without changing its direction, so x is an eigenvector of A. For a symmetric matrix, the eigenvectors can be chosen as a set of orthogonal vectors representing the different dimensions of the original matrix A (important in Principal Component Analysis, PCA).
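For example, with an illustrative symmetric matrix:

A = [2 1; 1 2];
[V, D] = eig(A)   % columns of V are eigenvectors, diag(D) the eigenvalues (1 and 3)
A * V(:,1)        % equals D(1,1) * V(:,1): same direction, scaled by the eigenvalue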

Page 20: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Matrix Representations of Neural Connections

We can create a mathematical model of the connections in a neural system.

Connections are either excitatory or inhibitory.

[Diagram: an excitatory connection and an inhibitory connection, each from an input neuron to an output neuron.]

Page 21: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Matrix Representations of Neural Connections

[Diagram: neurons #1 and #2 both project to neuron #3; the connection from #1 has weight +1 (excitatory) and the connection from #2 has weight -1 (inhibitory).]

Excitatory = makes it easier for the postsynaptic cell to fire
Inhibitory = makes it harder for the postsynaptic cell to fire

We can translate this information into a set of vectors (1-row matrices):

Input vector = (1 1)    relates to activity (#1 #2)
Weight vector = (1 -1)  relates to connection weight (#1 #2)

Activity of Neuron 3 = input x weight = (1 x 1) + (1 x -1) = 1 - 1 = 0

Cancels out! But it is more complicated than this!
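The same calculation as a vector product in MATLAB (mirroring the numbers above):

act = [1 1];    % activity of neurons #1 and #2
w   = [1 -1];   % connection weights onto neuron #3
act * w'        % (1 x 1) + (1 x -1) = 0: the two inputs cancel out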

Page 22: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

How are matrices relevant to fMRI data? Basics of MR physics

Angular momentum: neutrons, protons and electrons spin about their axes. The spinning of the nuclear particles produces angular momentum.

Certain nuclei exhibit magnetic properties. Because a proton has mass, a positive charge, and spin, it produces a small magnetic field. This small magnetic field is referred to as the magnetic moment: a vector quantity with magnitude and direction, oriented in the same direction as the angular momentum.

Under normal circumstances these magnetic moments have no fixed orientation (so there is no overall magnetic field). However, when exposed to an external magnetic field (B0), nuclei begin to align. To detect a net magnetisation signal, a second magnetic field (B1) is introduced, applied perpendicular to B0 at the resonant frequency.

Page 23: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

How are matrices relevant to fMRI data?

Y = X . β + ε

Observed = Predictors * Parameters + Error
BOLD = Design Matrix * Betas + Error

Y is a matrix of BOLD signals: each column represents a single voxel sampled at successive time points. Each voxel is considered as an independent observation, so we analyse individual voxels over time, not groups of voxels over space.

[Figure: the data matrix Y, with time running down the rows and voxel intensity across the columns.]

Page 24: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Response variable: a single voxel sampled at successive time points. Each voxel is considered as an independent observation.

Y = X . β + ε
(observed data vector = design matrix x parameter vector + error vector)

Explanatory variables: the columns of the design matrix X. These are assumed to be measured without error, and may be continuous or may indicate the levels of an experimental factor.

Solving the equation for β tells us how much of the BOLD signal is explained by X.

Page 25: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Pseudoinverse

In SPM, design matrices are NOT square (they have more rows than columns, especially for fMRI).

So the design matrix cannot simply be inverted: in general there is no unique exact solution to the equation.

SPM therefore uses a mathematical tool called the pseudoinverse, which gives an approximate solution – the one that minimises the error (and, among equally good solutions, the one with the smallest parameter values).
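A minimal MATLAB sketch of this idea, using a toy simulated data set rather than real SPM output (all names and values below are illustrative):

nScans   = 100;                                   % number of time points
X        = [(1:nScans)'/nScans, ones(nScans,1)];  % design matrix: one regressor plus a constant
trueBeta = [2; 5];                                % 'true' parameters used to simulate the data
Y        = X * trueBeta + 0.1*randn(nScans,1);    % simulated BOLD signal with noise
betaHat  = pinv(X) * Y                            % pseudoinverse (least-squares) estimate of beta
% betaHat is close to trueBeta; X \ Y gives the same least-squares estimate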

Page 26: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

How are matrices relevant to fMRI?

[Figure: the standard SPM analysis pipeline – image time-series → realignment → smoothing (spatial filter) → normalisation (anatomical reference) → general linear model (design matrix) → parameter estimates → statistical parametric map → statistical inference (RFT, p < 0.05).]

Page 27: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

In practice

Estimate the MAGNITUDE of signal changes and the MR INTENSITY levels for each voxel at various time points.

The relationship between the experiment and the voxel changes is then established.

These calculations require linear algebra and matrix manipulations.

SPM builds up the data as a matrix, and manipulation of matrices enables the unknown values to be calculated.

Page 28: Linear Algebra and Matrices  Methods for Dummies FIL November 2011

Thank you!

Questions?