Transcript of ECE 650 – Lecture #6, D. van Alphen
ECE 650 – Lecture #6
Random Vectors: 2nd Moment Theory
and the Noise Coloring Problem
D. van Alphen
(Based in part on EE 562a Lecture Notes by Dr. Robert Scholtz, University of Southern California)
Lecture Overview: Random Vectors
and Second-Moment Descriptions
• 2nd Order R Vectors and Their Second Moment Descriptions
• Linear Transformations of R Vectors
(Effect of Engineering Systems on Random Vector Inputs)
• Definition of a White Noise Vector
• Coloring White Noise
Random Vectors
and Second-Moment Descriptions
• Assume: all vectors are real column vectors
• Defn: A second-order random vector (R Vector) is one in
which each RV component has finite mean and finite variance.
• Second-Order Theory: applies when 1st and 2nd moments of
the component RV’s are known
– Often the complete description (joint pdf for all component
RV’s) is not known, or just difficult to work with
– 2nd-order statistics can always be estimated if not known
2nd Moment Description, continued
• Recall:
– Correlation matrices and covariance matrices are non-negative definite;
– Correlation matrix: RX = E[XXT]
• Element Rij = E[Xi Xj]
• Diagonal elements Rii = E[Xi²], the 2nd moment of Xi
– Covariance matrix: CX = E[(X − mX)(X − mX)T]
• Element Cij = E[(Xi − mXi)(Xj − mXj)]
• Diagonal elements Cii = E[(Xi − mXi)²] = var(Xi)
• Note: CX = RX − mX mXT
• A 2nd moment description of the real R Vector X is given by: {ηX, RX} or {ηX, CX}, where ηX = mX = E[X] is the mean vector.
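Since 2nd-order statistics can always be estimated from data (as noted above), here is a minimal MATLAB sketch of that estimation; the data, dimensions, and variable names below are our own illustrative choices, not from the lecture:

% Estimate the 2nd moment description {eta_X, R_X} (or {eta_X, C_X})
% from N observed sample vectors, stored as the columns of X (n x N).
N = 10000;
X = randn(3, N) + [1; 2; 3];   % hypothetical 3-D data; implicit expansion adds the mean to each column
eta_hat = mean(X, 2);          % sample mean vector (n x 1)
R_hat = (X * X') / N;          % sample correlation matrix, R_X = E[X X^T]
C_hat = cov(X');               % sample covariance (cov expects rows = observations)
% Check the identity C_X = R_X - m_X m_X^T on the estimates:
norm(C_hat - (R_hat - eta_hat * eta_hat'))   % small; sampling error shrinks as N grows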
Linear Transformation of R Vectors
• Say we define a new R Vector Y = HX (in terms of some other R Vector X), so in expanded form we have:

$$\mathbf{Y} = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_m \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1n} \\ h_{21} & h_{22} & \cdots & h_{2n} \\ \vdots & & & \vdots \\ h_{m1} & h_{m2} & \cdots & h_{mn} \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}$$

• Note: Each component of R Vector Y is a weighted sum of the R Vector X components, with weights coming from the corresponding row of H:

$$Y_i = \sum_{k=1}^{n} h_{ik} X_k, \qquad i = 1, 2, \ldots, m$$
Linear Transformations of R Vectors,
continued
• Now find the mean of R Vector Y, one component at a time:

$$E[Y_i] = E\left[\sum_{k=1}^{n} h_{ik} X_k\right] = \sum_{k=1}^{n} h_{ik}\,\eta_k, \qquad i = 1, 2, \ldots, m$$

(E acts on X, not on H; here ηk = E[Xk].) In matrix form: E{Y} = H E{X}
Note: Expectation (E) commutes with constant matrices (e.g., H).
• Similarly for the correlation matrix: RY = E{YYT}
= E{(HX)(HX)T} = E{(HX)(XT HT)} = H E{XXT} HT
so: RY = H RX HT
Linear Transformations of R Vectors,
continued
• For arbitrary R vector X, define the centered equivalent of X
as:
X0 = X - mX
• Note: CX = E{X0 X0T} = RX0
• If Y = HX, then Y0 = Y − mY = H(X − mX) = H X0, so:
CY = RY0 = H RX0 HT = H CX HT
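As a quick Monte Carlo check of the last two results (a sketch only; the transform H, mean etaX, and covariance CX below are hypothetical values of our choosing):

% Verify E{Y} = H*E{X} and C_Y = H*C_X*H' for Y = HX, by simulation.
N = 1e5;
H = [1 2; 0 1; 3 -1];                  % hypothetical 3x2 transform
etaX = [1; -1];
CX = diag([1 4]);                      % independent components, variances 1 and 4
X = diag([1 2]) * randn(2, N) + etaX;  % samples with mean etaX, covariance CX
Y = H * X;
norm(mean(Y, 2) - H * etaX)            % small: confirms E{Y} = H*E{X}
norm(cov(Y') - H * CX * H')            % small: confirms C_Y = H*C_X*H'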
White Noise Vectors
• Real R Vector W is white if it is composed of 0-mean, uncorrelated RV's with equal variance; i.e., real
W = [W1 W2 … Wn]T
is white iff:
E[W] = [0 0 … 0]T and E[Wi Wj] = σ² for i = j, 0 for i ≠ j
or: Rij = σ² δ(i − j), i, j = 1, …, n
so that RW = σ² In, where In is the n×n identity matrix.
• If σ² = 1, we say the vector is elementary white.
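MATLAB's randn generates exactly this kind of vector; a minimal sketch (n and N are our choices) confirming that RW ≈ In:

% Each column of W is one (approximately) elementary white vector.
n = 4;  N = 1e5;
W = randn(n, N);             % 0-mean, unit-variance, uncorrelated entries
R_hat = (W * W') / N         % sample R_W: close to eye(n) for large N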
Noise “Coloring” Problem
• Problem: Most software packages, including MATLAB, will generate (via random number generators, or RNG's) “white” random vectors, W.
– This is good if you are trying to simulate “white noise.”
• Occasionally we wish to generate some specific form of “colored noise” for use in some simulation.
• Suppose (specifically) that we want to generate (for use in some simulation) random vector Z with mean vector ηZ and covariance CZ, using a linear transformation of the white noise vector W that is readily available in software packages:
Z = HW + C
Problem: find the required H, C
Noise Coloring Problem, continued
• Assume W: elementary white; transform: Z = HW + C
• Note: CZ = cov(Z) = cov(HW) = H RW HT = H In HT = H HT
• Block Diagram of Coloring Process:
[Computer/Software RNG → W (ηW = 0, CW = RW = I, elementary white) → gain block H → Z0 (ηZ0 = 0, CZ0 = H HT) → summer Σ adding bias C → Z (ηZ = C, CZ = H HT)]
• In this problem, we are given the desired CZ, and the desired offset or bias is clearly C = ηZ; the question becomes:
• Find H such that H HT = CZ
Covariance Matrix Factorization
• Repeating the problem: Find H such that H HT = CZ
• This is a linear algebra problem (called covariance matrix factorization) with a well-known solution.
Solution to Factorization for Covariance Matrix CZ
• Let ei be a unit-length eigenvector of CZ, with eigenvalue λi. Then:
CZ ei = λi ei    (defining equation for e-values, e-vectors)
• Combining the equations above for all e-value/e-vector pairs, in matrix form, we obtain:

$$C_Z \underbrace{\begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix}}_{E} = \underbrace{\begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix}}_{E} \underbrace{\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}}_{\Lambda}$$
Covariance Matrix Factorization
• Repeating, in matrix form: CZ E = E Λ
• Post-multiplying both sides by ET = E−1 gives: CZ = E Λ ET
• Notes:
– Matrix E has orthogonal, unit-length columns (making it an “orthogonal matrix”); hence, ET = E−1.
– Matrix Λ can be factored (as shown on the next page).
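Both notes are easy to confirm numerically; a minimal sketch, using a small hypothetical covariance matrix of our own:

% For symmetric CZ, eig returns unit-length, mutually orthogonal
% eigenvectors, so E is an orthogonal matrix and CZ = E*Lambda*E'.
CZ = [2 1; 1 2];              % hypothetical 2x2 covariance matrix
[E, Lambda] = eig(CZ);
norm(E' * E - eye(2))         % ~0: E'*E = I, i.e., E^T = E^-1
norm(E * Lambda * E' - CZ)    % ~0: CZ = E*Lambda*E'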
Covariance Matrix Factorization, continued
• Factoring Λ:

$$\underbrace{\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}}_{\Lambda} = \underbrace{\begin{bmatrix} \sqrt{\lambda_1} & 0 & \cdots & 0 \\ 0 & \sqrt{\lambda_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sqrt{\lambda_n} \end{bmatrix}}_{\Lambda^{1/2}} \underbrace{\begin{bmatrix} \sqrt{\lambda_1} & 0 & \cdots & 0 \\ 0 & \sqrt{\lambda_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sqrt{\lambda_n} \end{bmatrix}}_{\Lambda^{1/2}}$$

(The square roots are real because CZ, a covariance matrix, is non-negative definite, so each λi ≥ 0.)
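Since Λ is diagonal, MATLAB's elementwise sqrt is exactly the Λ1/2 factor above; a one-line check, using the eigenvalues from the example that follows:

% sqrt acts elementwise; for a diagonal matrix this is Lambda^(1/2).
Lambda = diag([0 1.5 1.5]);                  % eigenvalues from the example below
norm(sqrt(Lambda) * sqrt(Lambda) - Lambda)   % ~0: Lambda^(1/2)*Lambda^(1/2) = Lambda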
Covariance Matrix Factorization, continued
• So (from the factorization above): CZ = E Λ ET = (E Λ1/2)(Λ1/2 ET)
• Since Λ1/2 is symmetric, Λ1/2 ET = (E Λ1/2)T, so: CZ = (E Λ1/2)(E Λ1/2)T = H HT
• Generating Colored Noise – Summary: To generate colored noise vector Z with mean vector ηZ and covariance CZ, starting with elementary white noise vector W:
– Perform the linear transform Z = HW + C (a sketch of the full recipe follows this list),
• where C = ηZ;
• where H = E Λ1/2;
– where the columns of E are the unit-length eigenvectors of CZ;
– and where Λ is diagonal, with the eigenvalues of CZ on the diagonal.
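The whole recipe can be collected into one short MATLAB function. This is a sketch of our own (the function name color_noise, the round-off guard, and the argument N are our choices, not from the notes); save it as color_noise.m:

function Z = color_noise(CZ, etaZ, N)
% Return N colored sample vectors (the columns of Z) with mean etaZ and
% covariance CZ, by transforming elementary white noise: Z = H*W + C.
    [E, Lambda] = eig(CZ);        % unit-length eigenvectors / eigenvalues of CZ
    Lambda = max(Lambda, 0);      % guard: clip tiny negative round-off eigenvalues
    H = E * sqrt(Lambda);         % H = E*Lambda^(1/2), so H*H' = CZ
    W = randn(size(CZ, 1), N);    % columns of W: elementary white vectors
    Z = H * W + etaZ;             % bias C = etaZ sets the mean
end

For instance, color_noise(CY, [0; 0; 0], 1e5) would generate samples for the example worked on the next slides.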
Generating Colored Noise: An Example
Say we want to simulate a 0-mean vector Y with covariance matrix

$$C_Y = \begin{bmatrix} 1 & -0.5 & -0.5 \\ -0.5 & 1 & -0.5 \\ -0.5 & -0.5 & 1 \end{bmatrix}$$

Assume that we have access to an elementary white vector W. Find the required H such that Y = HW. (Note: bias C = 0.)
Solution: Find H such that CY = H HT; i.e., H = E Λ1/2
• Step 1: Find the eigenvalues of CY: det(CY − λI) = 0
λ1 = 0, λ2 = 3/2, λ3 = 3/2
(Note that CY is non-negative definite, as required.)
Generating Colored Noise:
Example, continued
• Step 2: Find the eigenvectors ei for each eigenvalue λi, solving for each: (CY − λi In) ei = 0
1. For λ1 = 0: eigenvector e1 has 3 equal entries, so the unit-length eigenvector is:

$$e_1 = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

2. For λ2 = 3/2: (CY − (3/2) In) e2 = 0 gives −e21 = e22 + e23; a possible unit-length solution:

$$e_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$$
Generating Colored Noise:
Example, continued
• Step 2: Continuing
3. For λ3 = 3/2 (repeated eigenvalue): find the unit-length vector orthogonal* to e1 and e2:

$$e_3 = \sqrt{\tfrac{2}{3}}\begin{bmatrix} 1/2 \\ 1/2 \\ -1 \end{bmatrix}$$

• Step 3: H = E Λ1/2:

$$H = \underbrace{\begin{bmatrix} 1/\sqrt{3} & 1/\sqrt{2} & 1/\sqrt{6} \\ 1/\sqrt{3} & -1/\sqrt{2} & 1/\sqrt{6} \\ 1/\sqrt{3} & 0 & -2/\sqrt{6} \end{bmatrix}}_{E} \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & \sqrt{3/2} & 0 \\ 0 & 0 & \sqrt{3/2} \end{bmatrix}}_{\Lambda^{1/2}} = \begin{bmatrix} 0 & \sqrt{3}/2 & 1/2 \\ 0 & -\sqrt{3}/2 & 1/2 \\ 0 & 0 & -1 \end{bmatrix}$$

*Symmetric or Hermitian symmetric matrices (and therefore covariance matrices) always have a complete set of orthogonal eigenvectors.
Generating Colored Noise:
Example, continued
• Checking the answer with MATLAB:
– Verifying that H HT = CY
• MATLAB Code:
>> Our_H = [0 sqrt(3)/2 1/2; 0 -sqrt(3)/2 1/2; 0 0 -1]
>> Our_H * Our_H'
ans =
1.0000 -0.5000 -0.5000
-0.5000 1.0000 -0.5000
-0.5000 -0.5000 1.0000
which matches CY, as required.
Same Example:
Solved Completely with MATLAB
>> CY = [1 -.5 -.5; -.5 1 -.5; -.5 -.5 1];
>> [E Lambda] = eig(CY)
E =
0.5774 0.2673 0.7715
0.5774 -0.8018 -0.1543
0.5774 0.5345 -0.6172
Lambda =
-0.0000 0 0
0 1.5000 0
0 0 1.5000
>> H = E * sqrt(Lambda)
H =
-0.0000 0.3273 0.9449
-0.0000 -0.9820 -0.1890
-0.0000 0.6547 -0.7559
Note that the eigenvectors are not the same as the ones we found manually; however, they still meet the conditions:
−e21 = e22 + e23
−e31 = e32 + e33
(A different H, but it still meets the requirement: H H' = CY.)
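As a final sanity check (a sketch; N is our choice, and real() discards the negligible imaginary part that sqrt of the −0.0000 round-off eigenvalue can introduce), we can generate samples with this H and confirm that the sample covariance approaches CY:

N = 1e5;
W = randn(3, N);        % columns: elementary white vectors
Y = real(H) * W;        % colored, 0-mean samples (bias C = 0)
cov(Y')                 % approaches CY as N grows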