ECE 544 Basic Probability and Random Processes
J. V. Krogmeier
August 26, 2014
Contents

1 Probability [1]
  1.1 Discrete Distributions
  1.2 Continuous Distributions
  1.3 Gaussian Properties
  1.4 Useful Theorems
2 Random Processes [2]
  2.1 Second Order RPs
  2.2 And LTI Systems
    2.2.1 Special Lemma on Correlation
    2.2.2 The Standard Correlation Formula
  2.3 Wide Sense Stationary RPs
  2.4 WSS and LTI Systems
  2.5 Theorem on Modulation
    2.5.1 Gaussian RPs
    2.5.2 AWGN
3 Communication Link Parameters [3]
  3.1 Nyquist's Noisy R Model
  3.2 Communication System Component Models
    3.2.1 Noiseless Components
    3.2.2 Noisy Components
    3.2.3 Combining Basic Blocks
    3.2.4 Cascades of Blocks
  3.3 Signal Power via Friis Equation
4 Linear Analog Communications in AWGN [4]
  4.1 Performance Metric
  4.2 Message and Noise
  4.3 Baseband Transmission
  4.4 Generic Passband Transmission
  4.5 SNR at IF for Specific Passband Modulations
    4.5.1 AM DSB-SC
    4.5.2 AM-LC
  4.6 Coherent Demodulators for Linear Modulations
    4.6.1 AM-DSB
    4.6.2 AM-LC
A Basic Math
  A.1 Trig. Identities
  A.2 Expansions/Sums
  A.3 Taylor Series
  A.4 Integration by Parts
  A.5 Convolution
B Spectral Analysis of Continuous Time Signals
  B.1 Fourier Series
  B.2 Continuous Time Fourier Transform (CTFT)
C Deterministic Autocorrelation and Power Spectral Density
  C.1 Energy Signals
  C.2 Power Signals
D Hilbert Transform
1 Probability [1]
1.1 Discrete Distributions
• Bernoulli: A random variable (r.v.) X is said to be a Bernoulli r.v. with parameter p (0 ≤ p ≤ 1) if it takes only the two values 0 and 1 and its probability mass function (pmf) is of the form
pX(1) = Pr(X = 1) = p
pX(0) = Pr(X = 0) = 1− p.
• Binomial: A r.v. X is said to be a Binomial r.v. with parameters (N, p), where N is a positive integer and 0 ≤ p ≤ 1, if its pmf is of the form
pX(k) = (N choose k) p^k (1 − p)^(N−k)
for k = 0, 1, 2, . . . , N . For such a r.v. X
E(X) = Np
Var(X) = Np(1− p).
• Poisson: A r.v. X is said to be a Poisson r.v. with parameter λ > 0 if its pmf is of the form

pX(k) = e^(−λ) λ^k / k!
for k = 0, 1, 2, . . .. For such a r.v. X
E(X) = λ
Var(X) = λ.
1.2 Continuous Distributions
• Uniform: A r.v. X is said to be uniform on an interval a ≤ x ≤ b if its probability density function (pdf) is

fX(x) = 1/(b − a) for a ≤ x ≤ b, and fX(x) = 0 for x < a or x > b.
For such a r.v.
E(X) = (a+ b)/2
Var(X) = (b− a)2/12.
• Exponential: A r.v. X is said to be an exponential r.v. with parameter λ > 0 if its pdf is

fX(x) = λ e^(−λx) for x ≥ 0, and fX(x) = 0 for x < 0.
For such a r.v.
E(X) = 1/λ
Var(X) = 1/λ2.
• Rayleigh: A r.v. R is said to be Rayleigh distributed with parameter σ if its pdf is

fR(r) = (r/σ²) e^(−r²/2σ²) for r ≥ 0, and fR(r) = 0 for r < 0.
For such a r.v.
E(R) = σ √(π/2)

Var(R) = (4 − π) σ² / 2.
• Gaussian (single variate): A r.v. X is said to be a normal (or Gaussian) r.v. with parameters (µ, σ²), written X ∼ N(µ, σ²), if its pdf is

fX(x) = (1/(√(2π) σ)) e^(−(x−µ)²/2σ²).
For such a r.v.
E(X) = µ
Var(X) = σ2.
The Gaussian Q function gives the tail probability of a N(0, 1) r.v.,

Q(x) = (1/√(2π)) ∫_x^∞ e^(−z²/2) dz.

Note that Q(−x) = 1 − Q(x). A table of values of the Q function is given on the next page.
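Q has no closed form, but it can be written in terms of the complementary error function as Q(x) = 0.5 erfc(x/√2), which gives a convenient numerical evaluation (a minimal sketch using only the standard library):

```python
from math import erfc, sqrt

def Q(x):
    # Tail probability of N(0,1): Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

print(Q(0.0))            # 0.5
print(Q(1.0))            # ≈ 0.1587
print(Q(-1.0) + Q(1.0))  # 1.0, since Q(-x) = 1 - Q(x)
```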
• Gaussian (two variable): Two r.v.s X and Y are said to be bivariate normal (or Gaussian) if their joint pdf is

fX,Y(x, y) = 1/(2π σx σy √(1 − ρ²)) · exp[ −F(x, y)/(2(1 − ρ²)) ]

where F(x, y) is the quadratic form

F(x, y) = ((x − µx)/σx)² + ((y − µy)/σy)² − 2ρ(x − µx)(y − µy)/(σx σy).
For such r.v.s
E(X) = µx
E(Y ) = µy
Var(X) = σ2x
Var(Y ) = σ2y
Cov(X,Y ) = ρσxσy
where −1 ≤ ρ ≤ +1. (The bivariate normal distribution can be generalized to an arbitrary number of random variables. Such are called jointly Gaussian r.v.s.)
1.3 Gaussian Properties
• Jointly Gaussian r.v.s X and Y are statistically independent if and only if (iff) they are uncorrelated, i.e., ρ = 0.
• A linear combination of an arbitrary number of jointly Gaussian r.v.s is a Gaussian r.v.
• Conditional Gaussian: Let r.v.s X and Y be jointly Gaussian with the pdf given in the previous bullet. Then the conditional pdf of X given Y = y is a single variable Gaussian pdf with

E(X | Y = y) = µx + ρ (σx/σy)(y − µy)

Var(X | Y = y) = σx² (1 − ρ²).
• Gaussian Moments: Let X be a Gaussian random variable with mean zero and variance σ², i.e., N(0, σ²). Then

E(X^(2n)) = 1 × 3 × ··· × (2n − 1) σ^(2n)

and

E(X^(2n−1)) = 0

for n = 1, 2, 3, . . ..
• Connection between Gaussian and Rayleigh: Let X and Y be zero mean jointly Gaussian r.v.s with equal variances σ² and ρ = 0 (i.e., they are statistically independent). Then the derived r.v.s R = √(X² + Y²) and Θ = arctan(Y/X) (four quadrant inverse tangent) are themselves statistically independent; R is Rayleigh with parameter σ and Θ is uniform on [0, 2π).
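This connection is easy to verify by simulation. The sketch below draws independent N(0, σ²) pairs and checks the Rayleigh mean E(R) = σ√(π/2); the sample size and σ are arbitrary choices:

```python
import math
import random

random.seed(0)
sigma = 2.0
n = 200_000

r_sum = 0.0
for _ in range(n):
    # independent zero-mean Gaussians with equal variance sigma^2
    x = random.gauss(0.0, sigma)
    y = random.gauss(0.0, sigma)
    r_sum += math.sqrt(x * x + y * y)   # R = sqrt(X^2 + Y^2)

r_mean = r_sum / n
theory = sigma * math.sqrt(math.pi / 2)  # E(R) = sigma * sqrt(pi/2)
print(r_mean, theory)
```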
1.4 Useful Theorems
• Markov’s Inequalitiy: X a r.v. taking only non-negative values. Then for any a > 0
PrX ≥ a ≤ E[X]
a.
• Chebyshev’s Inequalitiy: X a r.v. with finite
mean µ and variance σ2, then for any value k > 0
Pr|X − µ| ≥ k ≤ σ2
k2.
• Weak Law of Large Numbers: X1, X2, . . . a sequence of independent and identically distributed (i.i.d.) r.v.s, each having a finite mean E[Xi] = µ. Then, for any ε > 0

Pr( |(X1 + ··· + Xn)/n − µ| > ε ) → 0

as n → ∞. (Sample mean converges to true mean in probability.)
• Central Limit Theorem: X1, X2, . . . a sequence of i.i.d. r.v.s, each having mean µ and variance σ². Then the cdf of

(X1 + ··· + Xn − nµ) / (σ√n)

tends to the cdf of the standard unit normal as n → ∞. (Convergence in distribution.)
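The theorem can be illustrated numerically: standardized sums of i.i.d. uniforms should have tail probabilities matching the normal cdf. A minimal sketch (n = 30 terms per sum and 100000 trials are arbitrary choices):

```python
import random

random.seed(1)
n, trials = 30, 100_000
mu, var = 0.5, 1.0 / 12.0            # mean and variance of Uniform(0,1)

count = 0
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    z = (s - n * mu) / ((var * n) ** 0.5)   # standardized sum
    if z <= 1.0:
        count += 1

p_hat = count / trials
print(p_hat)  # ≈ 0.8413 = 1 - Q(1), the standard normal cdf at 1
```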
• Strong Law of Large Numbers: X1, X2, . . . a sequence of independent and identically distributed (i.i.d.) r.v.s, each having a finite mean E[Xi] = µ. Then

Pr( lim_{n→∞} (X1 + ··· + Xn)/n = µ ) = 1

(i.e., the sample mean converges to the true mean with probability one.)
2 Random Processes [2]
2.1 Second Order RPs
Assume all signals, impulse responses, and random processes X(t), Y(t) are real-valued in this section. Assume that all random variables have finite variance (hence also finite means). Define moment functions:
• Mean: µX(t) = E[X(t)].
• Cross-Correlation: RX,Y (t, s) = E[X(t)Y (s)].
• Cross-covariance
CX,Y (t, s) = RX,Y (t, s)− µX(t)µY (s).
• We get auto-correlation RX,X(t, s) and auto-covariance CX,X(t, s) when Y ≡ X in the definitions above.
2.2 And LTI Systems . . .
Let the impulse response of an LTI system be BIBO stable. Then if a second order rp is input to the system, the output is also second order:

X(t) → [ LTI: h(t) ↔ H(f) ] → Y(t)

The mean of the output rp is equal to the result of passing the input mean through the LTI system:

µX(t) → [ LTI: h(t) ↔ H(f) ] → µY(t)

The cross-correlation of input and output and the auto-correlation of the output can be computed via application of the LTI filter as well. First, we give a general lemma.
2.2.1 Special Lemma on Correlation
Let A(t) and B(t) be 2nd order rps. Let h1(t) and h2(t) be BIBO stable impulse responses. Generate two additional rps via:

C(t) = h1 ∗ A(t)

D(t) = h2 ∗ B(t).

Then the cross-correlation of the outputs is obtained by filtering RA,B(t, s) with h1 in the first time argument and with h2 in the second:

RC,B(t, s) = ∫ h1(u) RA,B(t − u, s) du

RC,D(t, s) = ∫ h2(v) RC,B(t, s − v) dv.
2.2.2 The Standard Correlation Formula
Let A ≡ B := X and h1 ≡ h2 := h, so that Y = h ∗ X. Then the correlation formula for the standard case reduces to:

RX,Y(t, s) = ∫ h(v) RX,X(t, s − v) dv

RY,Y(t, s) = ∫ h(u) RX,Y(t − u, s) du.
2.3 Wide Sense Stationary RPs
To the assumption of finite variance used in the previous sections we here add the assumption that the mean functions are independent of time and that correlations and cross-correlations depend only upon the time difference or time lag. A single process with this property is called wide sense stationary (WSS); for a pair of rps we use the term jointly wide sense stationary (JWSS).
In symbols, rps X(·), Y(·) are JWSS if µX(t) ≡ µX, µY(t) ≡ µY for all times t ∈ R and

RX,Y(t, s) = RX,Y(t + λ, s + λ)

for all times t, s, λ ∈ R. This means that the correlation functions really only depend upon the time lag τ = s − t between the two time samples.

When JWSS one typically redefines the notation as shown below:
• Mean: µX = E[X(t)].
• Autocorrelation: RX(τ) = E[X(t)X(t+ τ)].
• Autocovariance: CX(τ) = RX(τ) − µX².
• Cross-correlation RX,Y (τ) = E[X(t)Y (t+ τ)].
• Cross-covariance CX,Y (τ) = RX,Y (τ)− µXµY .
Then the following definitions make sense:
• Power:

power[X(t)] := RX(0) = CX(0) + µX² = ac power + dc power

• Power spectral density (SX(f)):

RX(τ) ↔ SX(f);

power[X(t)] = RX(0) = ∫_{−∞}^{∞} SX(f) df.
2.4 WSS and LTI Systems
WSS X(t) input to LTI system with h(t) ↔ H(f). Then the output Y(t) = X ∗ h(t) is WSS, X(t) and Y(t) are jointly WSS, and:

• µY = µX ∫_{−∞}^{∞} h(t) dt = µX H(0).

• Cross-correlation / cross spectral density:

RX,Y(τ) = h ∗ RX(τ) ↔ SX,Y(f) = H(f) SX(f).

• Autocorrelation / psds:

RY(τ) = h ∗ h̃ ∗ RX(τ) ↔ SY(f) = |H(f)|² SX(f)

where h̃(t) := h(−t), so that h̃(t) ↔ H*(f).
[Block diagram: X(t) → LTI h(t) ↔ H(f) → Y(t). Correspondingly, RX(τ), SX(f) pass through a filter h(t) ↔ H(f) to give RX,Y(τ), SX,Y(f), and then through a filter h̃(t) ↔ H*(f) to give RY(τ), SY(f).]
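A discrete-time analog of SY(f) = |H(f)|² SX(f) is easy to check numerically: filter unit-variance white noise with an FIR impulse response h[k]; by Parseval the output power RY(0) should equal σ² Σ h[k]². A minimal sketch (the tap values are arbitrary):

```python
import random

random.seed(2)
h = [0.5, 0.3, 0.2]                   # arbitrary FIR impulse response
n_samples = 200_000
x = [random.gauss(0.0, 1.0) for _ in range(n_samples)]  # white, sigma^2 = 1

# y[n] = sum_k h[k] x[n-k]
y = [sum(h[k] * x[n - k] for k in range(len(h)))
     for n in range(len(h), n_samples)]

power_y = sum(v * v for v in y) / len(y)
theory = sum(c * c for c in h)        # sigma^2 * sum_k h[k]^2 = 0.38
print(power_y, theory)
```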
2.5 Theorem on Modulation
A(t), B(t) jointly WSS. Θ a r.v. uniform on [0, 2π), statistically independent of A(t), B(t):
Thm Part A: Then X(t) = A(t) cos(2πfc t + Θ) is WSS with µX = 0 and

RX(τ) = 0.5 RA(τ) cos(2πfc τ)
↔
SX(f) = 0.25 [SA(f − fc) + SA(f + fc)].
Thm Part B: If RA(τ) = RB(τ) and RA,B(τ) = −RB,A(τ), then

X(t) = A(t) cos(2πfc t) − B(t) sin(2πfc t)

has

RX(τ) = RA(τ) cos(2πfc τ) − RA,B(τ) sin(2πfc τ)
↔
SX(f) = 0.5 [SA(f − fc) + SA(f + fc)] + j0.5 [SA,B(f − fc) − SA,B(f + fc)].
Moreover, if A(t), B(t) are zero mean, then X(t) haszero mean and is WSS.
Thm Part C: Then

X(t) = A(t) cos(2πfc t + Θ) − B(t) sin(2πfc t + Θ)

is zero mean, WSS with

RX(τ) = 0.5 [RA(τ) + RB(τ)] cos(2πfc τ) − 0.5 [RA,B(τ) − RA,B(−τ)] sin(2πfc τ)
↔
SX(f) = 0.25 [SA(f − fc) + SB(f − fc) + SA(f + fc) + SB(f + fc)]
      + j0.25 [SA,B(f − fc) − SA,B(f + fc) − SA,B(−f + fc) + SA,B(−f − fc)].
2.5.1 Gaussian RPs
• X(t) is a Gaussian r.p. if any finite collection of time samples from the process, X(t1), X(t2), . . . , X(tN), is a set of jointly Gaussian random variables.

• If the input to an LTI system is a WSS Gaussian r.p., then the output is WSS Gaussian. Moreover, input and output are jointly Gaussian r.p.s.

• X(t) WSS and Gaussian. If CX(τ*) = 0, then X(t) and X(t + τ*) are statistically independent for any t.
• X(t), Y(t) jointly WSS and jointly Gaussian. If CX,Y(τ*) = 0, then X(t) and Y(t + τ*) are statistically independent for any t.
2.5.2 AWGN
• A WSS Gaussian r.p. N(t) with zero mean and autocorrelation / psd

RN(τ) = (N0/2) δ(τ)
↔
SN(f) = N0/2 for −∞ < f < ∞

is said to be a white Gaussian noise (WGN). When N(t) appears in a problem added to a desired signal, we call it additive white Gaussian noise (AWGN).
3 Communication Link Parameters [3]
Noise arises in communications systems from two main sources: 1) thermal noise¹ associated with the random motion of electrons in a resistor or other lossy devices, and 2) shot noise associated with the discrete nature of charge carriers in tubes and semiconductors. Thermal noise voltages are proportional to the square root of temperature while shot noise is independent of temperature. Both are wideband and often modeled as white.
3.1 Nyquist’s Noisy R Model
• A resistor of value R Ω at temperature T K has across its terminals a random noise voltage with Gaussian distribution, zero mean, and flat psd. Model: the noisy R at temperature T is equivalent to a noiseless R in series with a noise voltage source v(t) that is

  * WSS and zero-mean, with

  * Sv(f) = 2kTR for −∞ < f < ∞, where k = 1.38 × 10⁻²³ J/K.

¹ Also called Johnson or Nyquist noise.
• The maximum power from a noisy resistor is delivered to a matched load, and the maximum power that can be delivered over a one-sided bandwidth B is

Pmax = (kT/2) · 2B = kTB.
• Therefore, the two-sided available power spectral density (psd) of a resistive noise source is flat with height kT/2, depending only on the physical temperature T.
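For example, the available thermal noise power from a matched resistive source at T = 290 K over B = 1 MHz is kTB ≈ 4 × 10⁻¹⁵ W, about −114 dBm. A quick check:

```python
import math

k = 1.38e-23          # Boltzmann's constant, J/K
T = 290.0             # standard temperature, K
B = 1e6               # one-sided bandwidth, Hz

P = k * T * B                          # available noise power, W
P_dBm = 10 * math.log10(P / 1e-3)      # same power expressed in dBm
print(P, P_dBm)  # ≈ 4.0e-15 W, ≈ -114 dBm
```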
3.2 Communication System Component Models
Important components in communications systems include antennas, filters, oscillators, noise sources, mixers, amplifiers, and attenuators. These blocks are used to implement modulators, demodulators, and detectors. They are used to process signals in the presence of noise with the goal of minimizing noise related impairments. However, since the blocks are constructed from electronics and lossy elements, they introduce noise in their own right.

The tables below show the basic blocks. The first contains the fundamental noiseless components and the second contains the fundamental noisy components. Basic parameters are given.

All noise psds in the equations of this section will be assumed to be flat (white) over the frequency band of interest. All filters are assumed to be ideal and all impedances are assumed to be matched².
3.2.1 Noiseless Components
Component        Parameters
Antenna          Gant
Noiseless LPF    fH
Noiseless BPF    fL, fH
Noiseless HPF    fL
Oscillator       Posc, fosc

• Antenna. An ideal noiseless antenna is characterized by its gain, Gant, which is the peak of its power gain pattern vs. direction. A secondary parameter is the effective area, Aant, related to gain via

Gant = 4π Aant / λ²

where λ is the operating wavelength.

² Non-ideal filters can be accommodated under this assumption provided that bandwidths are defined as noise equivalent bandwidths.
• Filters. Ideal noiseless filters have symmetric, brickwall responses and unity passband gain. Therefore, the only parameters are the passband cutoff frequencies. [Figure: brickwall magnitude responses of height 1.0; the LPF passes 0 ≤ f ≤ fH, the BPF passes fL ≤ f ≤ fH.]
• Oscillators. An ideal sinusoidal oscillator produces a signal

vosc(t) = √(2 Posc) cos(2π fosc t + Θ)

where Θ may be modeled as either a deterministic or random phase offset.
3.2.2 Noisy Components
Component      Parameters
Noise Source   Te
Amplifier      Te, Gamp
Attenuator     T, L
Mixer          Te

• Noise Sources. Modeled as though they were noisy resistors, i.e., they are zero mean, white, Gaussian random processes with two-sided power spectral density

Sv(f) = kTe/2, −∞ < f < ∞.

The only parameter is the equivalent noise temperature, Te, given in K. Boltzmann's constant is k = 1.38 × 10⁻²³ J/K.
• Amplifiers. Parameters are power gain Gamp and equivalent noise temperature Te. Ideal amplifiers are assumed to have infinite bandwidth.

  * Voltage gain is Vamp = √Gamp.
  * The effect of internal noise sources is modeled by placing an additive white noise source nw(t) with Snw(f) = kTe/2 at the input of a noiseless amplifier of gain Gamp. The parameter is the equivalent noise temperature Te. Then the noise power in the amplifier output in a one-sided BW B due to internal sources is

    Pout,internal = kTe Gamp B.
  * The standard noise figure is defined by comparing output noise power due to internal sources to output noise power due to an external noise source at standard temperature T0 = 290 K. That is, in a band of one-sided BW B:

    Pout = kT0 Gamp B (external) + kTe Gamp B (internal).

    Then

    F = Pout / Pout,external = 1 + Pout,internal / Pout,external = 1 + Te/T0 ≥ 1.

    Also Te = T0 (F − 1).
  * May also interpret noise figure and equivalent noise temperature in terms of the degradation in signal-to-noise ratio occurring due to the addition of internal noise. In the figure below,

    SNRin = Ps / Pn

    SNRout = Gamp Ps / (Gamp (kTe B + Pn))

    assuming that the system does not filter or reduce the bandwidth of either the input signal or input noise. Can show that in a band of one-sided BW B:

    SNRin / SNRout = 1 + kTe B / Pn ≥ 1.
[Figure: noisy amplifier model with input signal s(t) of power Ps and input noise n(t) of power Pn, input-referred noise source nw(t) with Snw(f) = kTe/2, and noiseless gain Gamp.]
• Attenuators. Can be treated in the same way as amplifiers with the exception that the two independent parameters are the power loss Latten = 1/Gatten and the physical temperature of the attenuator, T. In these terms

Te = (Latten − 1) T

F = 1 + (Latten − 1) T/T0.
• Mixers. From the random process Modulation Theorem (Part A), the mixer in the block diagram below looks to a white input as a simple power gain Gmixer = 1/2. Hence, there is only one fundamental parameter, Te. Also F = 1 + Te/T0. [Figure: noisy mixer model driven by cos(2πf0 t + Θ), with input-referred noise source nw(t), Snw(f) = kTe/2.]
3.2.3 Combining Basic Blocks
• More complex blocks can be created from the above by combining them with noiseless blocks.

• Any noisy block can be made noiseless by setting Te = 0 K in the model.
• Noisy bandpass amplifier: the cascade of the noisy amplifier model (input-referred noise source nw(t) with Snw(f) = kTe/2, followed by noiseless gain Gamp) and a noiseless BPF with unity gain and passband fL ≤ f ≤ fH.
• Lossy Filter: This can be made using the noisy bandlimited amplifier block. Here, however, Te = T, the physical temperature of the filter, and Gamp < 1 since a passive filter attenuates an input signal.
• Receiver Antenna: Reciprocity holds. Therefore, antenna properties such as pattern, gain, impedance, and loss must be the same for an antenna whether it is used in receive or transmit mode. [Figure: Rx antenna model — ideal antenna of gain Gant followed by an output-referenced noise source nw(t), Snw(f) = kTant/2.]
  * However, since signal powers input to an antenna in receive mode are typically many orders of magnitude (i.e., many tens of dB) smaller than signal powers output from an antenna in transmit mode, thermal noise associated with antenna losses is only significant in receive mode.

  * Furthermore, in receive mode an antenna will pick up background noise³ from the external environment along with any desired signal.

  * The noisy receive antenna model is a cascade of an ideal antenna with a noise source referenced to the output of the receive antenna. If Tant is the antenna noise temperature, then

    kTant B = power input to the rest of the Rx

    in a one-sided BW of B Hz.

  * If background black-body noise is combined with atmospheric attenuation effects⁴, then the noise temperature Tant in the model approximately follows the figure below from Pozar [3]. The angle parameter is the elevation of the antenna above the horizon.

³ Sources of natural and man-made background noise include: cosmic remnants of the "big bang", sun and stars, thermal noise from the ground, lightning, high voltage lines, interference from electronics and lighting, and other undesired radio transmissions.

⁴ Note that if we wish also to model the attenuation of the signal due to atmospheric loss, then a noiseless attenuator should follow the noisy antenna model. Attenuation would depend strongly on frequency, particularly near water and oxygen resonances, and would also depend weakly on elevation.
[Figure: sky noise temperature vs. frequency, from Pozar, Microwave and RF Design of Wireless Systems, p. 127. Low elevation angles and high frequencies are disadvantaged relative to frequencies below 10 GHz. The microwave window (Sklar, p. 225) is defined as the band of frequencies lying above galactic noise and below absorption noise.]
3.2.4 Cascades of Blocks
• For a cascade of input-output blocks the overall noise figure and equivalent noise temperature are:

F = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 G2) + ···

Te = Te1 + Te2/G1 + Te3/(G1 G2) + ···
3.3 Signal Power via Friis Equation
PR = GT GR (λ / (4πD))² PT.

[Figure: transmit antenna with gain GT and transmit power PT, receive antenna with gain GR and received power PR, separated by distance D.]
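The Friis equation in code, a minimal sketch (the example link parameters are hypothetical):

```python
import math

def friis_pr(pt_w, gt, gr, f_hz, d_m):
    """Received power: PR = GT * GR * (lambda / (4 pi D))^2 * PT."""
    lam = 3.0e8 / f_hz                 # wavelength, m
    return pt_w * gt * gr * (lam / (4.0 * math.pi * d_m)) ** 2

# hypothetical link: 1 W transmit at 2.4 GHz, unity-gain antennas, 1 km range
pr = friis_pr(1.0, 1.0, 1.0, 2.4e9, 1000.0)
pr_dBm = 10 * math.log10(pr / 1e-3)
print(pr, pr_dBm)  # free-space path loss at 2.4 GHz over 1 km is about 100 dB
```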
4 Linear Analog Communications in AWGN [4]
[Block diagram: m(t) → modulator → x(t) → channel Hchan(f) with nw(t) added → xr(t) → demodulator → yD(t).]
4.1 Performance Metric
• Assume decomposition: (demodulator output equals the sum of components due to message and noise)

yD(t) = yD(t|m) + yD(t|nw).

• Possibilities: mean-squared error (MSE), signal-to-noise ratio (SNR), etc.

• We choose SNR; the goal is to maximize

SNRD := power[yD(t|m)] / power[yD(t|nw)].
• Issues with SNR as metric:

  * Decomposition of yD(t) into a sum of message and noise components must be unambiguous (true for linear demodulators, an approximation for non-linear demodulators).

  * Must maximize SNR under a constraint of bounded transmit power.
4.2 Message and Noise
• Model the message m(t) as a WSS random process with Rm(τ) ↔ Sm(f) and bandlimited:

Sm(f) = 0 for |f| > W.
• nw(t) modeled as AWGN with psd height N0/2.
• Message and noise are assumed statistically in-dependent.
4.3 Baseband Transmission
• Corresponds to the analog comm. model with x(t) = m(t), Hchan(f) = 1, and the demodulator an ideal LPF of bandwidth W. Then

xr(t) = m(t) + nw(t)

yD(t) = hlpf ∗ m(t) + hlpf ∗ nw(t) = m(t) + n(t)

(assume the LPF passes m(t) without distortion).
• Transmitted signal power PT = Pm.
• Received signal component: identical to the transmitted signal component ⇒ power in the received signal component = PT.
• Noise component n(t): Assume an ideal LPF with gain 1.0 and bandwidth set to the minimum needed to pass the signal undistorted. Then the psd Sn(f) is flat with height N0/2 over −W ≤ f ≤ W, which has power N0W.
• Therefore SNRD = Pm/(N0W ) = PT /(N0W ).
4.4 Generic Passband Transmission
• Zero SNR: For simplicity set Hchan(f) = 1. Then xr(t) = x(t) + nw(t). The SNR at this point,

SNRr := power[x(t)] / power[nw(t)],

is always zero since white Gaussian noise has infinite power.
• Nonzero SNR: In order to have a non-zero SNR at passband we need to apply a predetection BPF which passes the signal component and limits the power in the noise component. [Block diagram: as before, but the demodulator is preceded by a BPF HBP(f) of unity gain, centered at ±f0 with bandwidth 2B, producing xif(t) from xr(t).]
• Now SNRif > 0 since the noise component in xif(t) will have finite power. In fact

xif(t) = x ∗ hBP(t) + nw ∗ hBP(t) = x(t) + n(t)

where we assume the BPF does not distort the signal and n(t) is a bandpass Gaussian noise with zero mean and psd of height N0/2 and shape identical to that of HBP(f). Then

SNRif := power[x(t)] / power[n(t)] = Rx(0) / (2N0B).
4.5 SNR at IF for Specific Passband Modulations
In order to make x(t) WSS, let m(t) be the message with baseband BW = W. Assume it is a WSS r.p. with zero mean and power Pm = Rm(0). Also assume that the r.v. Θ below is uniform on [0, 2π) and statistically independent of all other r.v.s.
4.5.1 AM DSB-SC
x(t) = Ac m(t) cos(2πfc t + Θ).
• Autocorrelation:

Rx(τ) = 0.5 Ac² Rm(τ) cos(2πfc τ).

• Power spectrum:

Sx(f) = 0.25 Ac² [Sm(f − fc) + Sm(f + fc)].
• Computing SNR at IF:
  * power[x(t)] = Rx(0) = 0.5 Ac² Pm.

  * Transmission BW = 2W ⇒ minimum noise power = 2N0W.

  * Ratio:

    SNRif = Ac² Pm / (4N0W) = PT / (2N0W).

[Figure: Sm(f) of height M0 for |f| ≤ W; Sx(f) of height 0.25 M0 Ac² in bands of width 2W centered at ±fc.]
4.5.2 AM-LC
x(t) = Ac [1 + ka m(t)] cos(2πfc t + Θ).
• Autocorrelation:

Rx(τ) = 0.5 Ac² [1 + ka² Rm(τ)] cos(2πfc τ).

• Power spectrum:

Sx(f) = 0.25 Ac² [δ(f − fc) + δ(f + fc)] + 0.25 ka² Ac² [Sm(f − fc) + Sm(f + fc)].
• Computing SNR at IF:
  * power[x(t)] = Rx(0) = 0.5 Ac² (1 + ka² Pm).

  * Transmission BW = 2W ⇒ minimum noise power = 2N0W.

  * Ratio:

    SNRif = Ac² (1 + ka² Pm) / (4N0W) = PT / (2N0W).

[Figure: Sm(f) of height M0 for |f| ≤ W; Sx(f) of height 0.25 M0 ka² Ac² in bands of width 2W centered at ±fc, plus carrier impulses of weight 0.25 Ac² at ±fc.]
4.6 Coherent Demodulators for Linear Modulations
The coherent demodulator for linear modulations is a mixer followed by a low pass filter. With xif(t) = x(t) + n(t):

[Block diagram: xif(t) → × cos(2πfc t + Θ) → HLP(f) → yD(t).]
4.6.1 AM-DSB
x(t) = Ac m(t) cos(2πfc t + Θ).
• Signal component of yD:

Ac m(t) cos²(2πfc t + Θ) = 0.5 Ac m(t) + 0.5 Ac m(t) cos(2π(2fc)t + 2Θ).
  * Passes LPF ⇒ 0.5 Ac m(t).

  * Power in signal component: 0.25 Ac² Pm.
• Noise component of yD:
n(t) cos(2πfct+ Θ)
  * The above is WSS with

    R(τ) = 0.5 Rn(τ) cos(2πfc τ) ↔ S(f) = 0.25 Sn(f − fc) + 0.25 Sn(f + fc).

  * Power in noise component: 0.5 N0W.
• Demodulated SNR:

SNRD = 0.25 Ac² Pm / (0.5 N0W) = PT / (N0W).
[Figure: Sn(f) of height N0/2 in bands of width 2W centered at ±fc; after mixing, components of height N0/4 at baseband (width 2W; passes the LPF to the output) and N0/8 at ±2fc.]
4.6.2 AM-LC
x(t) = Ac [1 + ka m(t)] cos(2πfc t + Θ).
• Signal component of yD:

  * Passes LPF ⇒ 0.5 Ac [1 + ka m(t)].

  * Passes dc block ⇒ 0.5 Ac ka m(t).

  * Power in desired signal component: 0.25 Ac² ka² Pm.
• Noise component of yD:
* Identical to DSB case.
* Power in noise component: 0.5N0W .
• Demodulated SNR:

SNRD = 0.25 Ac² ka² Pm / (0.5 N0W) = [ka² Pm / (1 + ka² Pm)] · PT / (N0W).
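Comparing the two demodulated SNRs: AM-LC pays an efficiency penalty ka² Pm / (1 + ka² Pm) relative to DSB-SC at the same transmit power. A small sketch (PT, N0, W, and ka² Pm are arbitrary illustrative values):

```python
def snr_d_dsb(PT, N0, W):
    # Coherent DSB-SC: SNR_D = PT / (N0 W)
    return PT / (N0 * W)

def snr_d_lc(PT, N0, W, ka2Pm):
    # AM-LC: efficiency factor ka^2 Pm / (1 + ka^2 Pm) times PT / (N0 W)
    return (ka2Pm / (1.0 + ka2Pm)) * PT / (N0 * W)

PT, N0, W = 1.0, 1e-3, 100.0
print(snr_d_dsb(PT, N0, W))       # 10.0
print(snr_d_lc(PT, N0, W, 1.0))   # 5.0: 3 dB worse than DSB-SC when ka^2 Pm = 1
```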
A Basic Math
A.1 Trig. Identities
e^(jα) = cos(α) + j sin(α)

cos(α) = (1/2)(e^(jα) + e^(−jα))

sin(α) = (1/2j)(e^(jα) − e^(−jα))

sin(α + β) = sin(α) cos(β) + cos(α) sin(β)

cos(α + β) = cos(α) cos(β) − sin(α) sin(β)

sin(α) sin(β) = (1/2) cos(α − β) − (1/2) cos(α + β)

cos(α) cos(β) = (1/2) cos(α − β) + (1/2) cos(α + β)

sin(α) cos(β) = (1/2) sin(α − β) + (1/2) sin(α + β)

sin²(α) = (1/2)[1 − cos(2α)]

cos²(α) = (1/2)[1 + cos(2α)]
A.2 Expansions/Sums
exp(x) = 1 + x + x²/2! + x³/3! + ···

cos(x) = 1 − x²/2! + x⁴/4! − ···

sin(x) = x − x³/3! + x⁵/5! − ···

Σ_{n=0}^{∞} aⁿ = 1/(1 − a) if |a| < 1

Σ_{n=0}^{N} aⁿ = (1 − a^(N+1))/(1 − a) if a ≠ 1

Σ_{k=0}^{N} k = N(N + 1)/2
A.3 Taylor Series
f(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + ··· + (f⁽ⁿ⁾(a)/n!)(x − a)ⁿ + ···
A.4 Integration by Parts
∫ u dv = uv − ∫ v du
A.5 Convolution
x ∗ y(t) = ∫_{−∞}^{+∞} x(τ) y(t − τ) dτ
B Spectral Analysis of Continuous Time Signals
B.1 Fourier Series
x(t) periodic, i.e., x(t) = x(t + T0) for all t, and f0 := 1/T0. Then

x(t) = Σ_{n=−∞}^{∞} Xn e^(j2πn f0 t)
↔
Xn = (1/T0) ∫_0^{T0} x(t) e^(−j2πn f0 t) dt
B.2 Continuous Time Fourier Transform (CTFT)
x(t) = ∫_{−∞}^{∞} X(f) e^(j2πft) df
↔
X(f) = ∫_{−∞}^{∞} x(t) e^(−j2πft) dt
• CTFT properties in Figure 1.
• CTFT pairs in Figure 2.
• x(t), periodic in time with Fourier Series Xk, has a CTFT which is a weighted impulse train in frequency: X(f) = Σ_k Xk δ(f − k f0).
C Deterministic Autocorrelation and Power Spectral Density
C.1 Energy Signals
• x(t)↔ X(f) of finite energy.
• Autocorrelation: Rx(τ) := ∫_{−∞}^{∞} x(t) x(t + τ) dt.
• Energy Density Spectrum: Sx(f) = |X(f)|2.
• Fact: Rx(τ)↔ Sx(f).
Figure 1: Some Properties of Fourier Transforms. (From [5])
Figure 2: Some Fourier Transform Pairs (From [5])
m(t) → [ hHilbert(t) = 1/(πt) ↔ HHilbert(f) = −j sgn(f) ] → m̂(t)

Figure 3: Hilbert transform as LTI system.
C.2 Power Signals
• If CTFT exists denote it: x(t)↔ X(f).
• Time Average Operator: For an arbitrary function f(t)

⟨f(t)⟩ := lim_{T→∞} (1/2T) ∫_{−T}^{T} f(t) dt

(⟨f(t)⟩ is not a function of t; the notation is used only to show the averaging variable).
• Autocorrelation: Rx(τ) := ⟨x(t) x(t + τ)⟩.
• Properties of autocorrelation:
  * Rx(0) ≥ |Rx(τ)|.

  * Rx(τ) = Rx(−τ).
  * lim_{|τ|→∞} Rx(τ) = ⟨x(t)⟩² if x(t) does not contain periodic components.
* If x(t) is periodic with period T0 then so isRx(τ).
* CTFT of Rx(τ) is non-negative for all f .
• Power Density Spectrum: Defined to be theCTFT of the autocorrelation:
Rx(τ)↔ Sx(f).
D Hilbert Transform
The Hilbert Transform of a real-valued finite energy signal m(t) is the output of the LTI system shown in Fig. 3 below (which is a −π/2 phase shifter).
Notes:
• m(t) and m̂(t) have equal energy.

• m(t) and m̂(t) are orthogonal, i.e.,

∫_{−∞}^{∞} m(t) m̂(t) dt = 0.

• The complex-valued signal m(t) + j m̂(t) has a Fourier Transform that is identically zero on the negative frequencies. Such signals are called analytic signals.
• HT can also be defined for power signals.
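For a sampled signal the Hilbert transform can be approximated in the DFT domain by mirroring HHilbert(f) = −j sgn(f): multiply positive-frequency bins by −j and negative-frequency bins by +j. A minimal standard-library sketch (O(N²) DFT, fine for small N); for a pure cosine on the grid it returns the sine, and the orthogonality and equal-energy properties hold:

```python
import cmath
import math

def hilbert(x):
    """Discrete Hilbert transform via the DFT: H(f) = -j*sgn(f)."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    for k in range(N):
        if k == 0 or (N % 2 == 0 and k == N // 2):
            X[k] = 0.0            # dc and Nyquist bins: sgn = 0
        elif k < (N + 1) // 2:
            X[k] = -1j * X[k]     # positive frequencies
        else:
            X[k] = 1j * X[k]      # negative frequencies
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
m = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
mh = hilbert(m)                   # ≈ sin(2*pi*3*n/N): a -pi/2 phase shift
ortho = sum(a * b for a, b in zip(m, mh))
print(ortho)  # ≈ 0: m and its Hilbert transform are orthogonal
```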
References
[1] S. M. Ross. A First Course in Probability. Prentice-Hall, Upper Saddle River, NJ, fifth edition, 1998.

[2] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 1965.

[3] D. M. Pozar. Microwave and RF Design of Wireless Systems. Wiley, New York, 2001.

[4] R. E. Ziemer and W. H. Tranter. Principles of Communications. Wiley, Hoboken, NJ, sixth edition, 2009.

[5] W. M. Siebert. Circuits, Signals, and Systems. The MIT Press, Cambridge, MA, 1986.