The Generation Of Correlated Rayleigh Random Variates By
Discrete Fourier Transform and
Quality Measures for Random
Variate Generation
David J. Young
A thesis submitted to the Department of Electrical and Computer Engineering in conformity with the requirements for the degree of
Master of Science (Engineering)
QUEEN'S UNIVERSITY, KINGSTON, CANADA
September 1997
© David J. Young, 1997
National Library of Canada, Acquisitions and Bibliographic Services, 395 Wellington Street, Ottawa ON K1A 0N4, Canada
The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.

The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.
Abstract
The fading caused by multipath propagation in wireless systems is well modelled in some practical cases by the Rayleigh distribution function. It is well known that discrete-time samples of a realistic Rayleigh fading process must necessarily be correlated, and thus digital simulation of the signal fading requires efficient generation of correlated Rayleigh random variates.
There is also a need for a well-defined and meaningful measure of the quality of these generated random variates. The lack of such a measure complicates the design and comparison of generation algorithms for communication system simulation applications.
This thesis addresses the problem of generation of correlated Rayleigh random variates as well as the problem of evaluating the quality of generated variates. First considered is an algorithm due to Smith [1] which has gained wide application in digital simulation of wireless systems. A modification of Smith's algorithm is presented. The new method is superior to the method of [1] in that it requires exactly one-half of the number of IDFT operations and roughly two-thirds of the computer memory required by the latter. A second contribution of this thesis is the provision of an analysis of the statistical properties of IDFT-based methods for correlated Rayleigh sample generation. Such an analysis is not given in [1], though it is needed to understand and justify the statistical properties of IDFT methods.
Also presented in this thesis is a finite impulse response (FIR) filter specification exactly realizing the important J0(·) autocorrelation as the number of filter taps goes to infinity. This filter can be used in an alternate Rayleigh fading generator to produce correlated Rayleigh variates matching the J0(·) autocorrelation with arbitrary accuracy.
In addressing the problem of assessing the statistical quality of random variates, quantitative quality measures of random variate generation are proposed that are, in particular, meaningful and useful for digital communication system simulation. Comparisons are provided between the IDFT method, the FIR method, and a sum-of-sinusoids method for generating correlated Rayleigh variates, and this evaluation shows the IDFT method to compare very favorably for equivalent computational effort.
Acknowledgments
I would like to express my appreciation to the supervisor of this work, Dr. Norman C. Beaulieu, for his guidance and encouragement.

The financial support of the Telecommunications Research Institute of Ontario, under the research thrust of Dr. Peter McLane, and the School of Graduate Studies, Queen's University, is gratefully acknowledged.

I would also like to thank my wife Leah for her patience and support during the completion of this thesis.
Contents
Abstract
Acknowledgments
List of Figures
List of Tables
List of Abbreviations

1 Introduction
  1.1 Background and Motivation
  1.2 Outline

2 Literature Review
  2.1 Introduction
  2.2 The IDFT Method and Related Simulators
  2.3 Other Rayleigh Fading Simulators
  2.4 Quality Measures for Random Variate Generation
    2.4.1 Techniques Employed for Comparison of Random Variate Generators
    2.4.2 Techniques for Comparison of Multivariate Gaussian Probability Density Functions

3 Statistical Analysis of Smith Algorithm
  3.1 Introduction
  3.2 Mathematical Description of the FFT Output Sequence
  3.3 Statistical Properties of the FFT Output Sequence
    3.3.1 Means of the Real and Imaginary Parts
    3.3.2 Autocorrelation of the Real Part
    3.3.3 Autocorrelation of the Imaginary Part
    3.3.4 Cross-Correlation Between Real and Imaginary Parts
    3.3.5 Ergodicity of the Sequences
  3.4 Distribution of the Amplitude
    3.4.1 Obtaining a Rayleigh Distribution
    3.4.2 Obtaining a Ricean Distribution
  3.5 Summary

4 Modification of the Algorithm to Use a Single FFT Call
  4.1 Introduction
  4.2 The Modified Filter Coefficients
  4.3 Advantages to the New Approach
  4.4 The Algorithm Using Real-Sequence FFT's
  4.5 Summary

5 Determination of Filter Coefficients for the IDFT-Based Methods
  5.1 Introduction
  5.2 The Discrete-Time Problem
  5.3 The Filter Given in the Algorithm of Smith
  5.4 Autocorrelation Derivative Continuity
  5.5 Improvement in Calculation of Last Filter Point
  5.6 Summary

6 A Finite Impulse Response Digital Filtering Approach to the Generation of Correlated Rayleigh Random Variates
  6.1 Introduction
  6.2 Forming the Theoretical Filter Frequency Response
  6.3 The Impulse Response of the Continuous-Time Filter
  6.4 Forming the Discrete-Time Filter Response
  6.5 Some Comparisons with the IDFT Method

7 A Goodness-Of-Fit Test to Quantitatively Assess the Quality of Random Variate Generation
  7.1 Introduction
  7.2 Definition of the Goodness-Of-Fit Problem
  7.3 Development of the Test: The One-Dimensional Case
  7.4 Defining the Multivariate Power Margin
  7.5 Discussion of the Multivariate Power Margin
  7.6 Definition of the Measures
    7.6.1 A Transformation to Represent the Components of X and X̂ as Linear Combinations of Independent Variates
    7.6.2 The Power Margins of the Independent Variates
    7.6.3 The Power Margins in the Original Basis
  7.7 An Example
  7.8 A Note on the Computation of ĈX

8 A Quantitative Evaluation of Output Sample Sequences from Rayleigh Fading Simulator Routines
  8.1 Introduction
  8.2 The Routines Under Consideration
    8.2.1 The IDFT Method
    8.2.2 The FIR Filtering Method
    8.2.3 A Method Based on Superposition of Sinusoids
  8.3 Comparison of the Routines Based on Execution Time
  8.4 Comparison Based on Floating-Point Operations
  8.5 Comparison Based on Quality Measures
  8.6 Conclusions from the Comparisons

9 Conclusions

A Program Code in C for Generation of Correlated Rayleigh Random Variates by Inverse Discrete Fourier Transform

B Program Code in MATLAB for Generation of Correlated Rayleigh Random Variates by Inverse Discrete Fourier Transform

References

Vita
List of Figures
3-1 Block diagram of the analog multipath fading simulator for mobile radio published by Arredondo, Chriss, and Walker, as given in reference [2].

3-2 Block diagram of the algorithm of Smith [1] to generate correlated Rayleigh samples.

4-1 A block diagram of the improved algorithm using a single complex FFT to generate correlated Rayleigh samples.

4-2 (a) The filter frequency response coefficients given in the Smith routine. (b) The conjugate-symmetric part of the filter coefficients. (c) The conjugate-antisymmetric part of the filter coefficients.

4-3 Normalized experimental autocorrelation of the real part of {z[n]}, plotted with J0(2πfm d), for Doppler frequency fm = 0.05 per sample. The number of samples used in the experimental case is 2^16.

4-4 Normalized experimental cross-correlation between real and imaginary parts of {z[n]}, the output of a single FFT operation, for both the original Smith method and the modified method presented in this chapter. The maximum Doppler frequency is fm = 0.05 per sample, and 2^16 samples were used to obtain each curve.

4-5 Run times for the original and the modified FFT algorithms.

4-6 Block diagram of routine to generate correlated Rayleigh samples using two real-sequence FFT operations.

5-1 The theoretical autocorrelation function J0(2πfm τ) plotted against (2πfm τ).

5-2 Theoretical power spectrum at carrier frequency fc with maximum Doppler frequency fm.

5-3 An illustration of time aliasing. The infinite-time autocorrelation function is shown (solid line) along with a time-shifted and overlapping autocorrelation function (dotted line) from an adjacent period of the IDFT.

5-4 An illustration of the periodicity of a length-1000 autocorrelation sequence obtained by inverse DFT.

5-5 (a) The theoretical autocorrelation function in the case of a continuous first derivative (fm = 0.01 per sample). (b) The corresponding power spectra, obtained by (i) taking the DFT of the correlation function, and (ii) squaring the Smith filter.

5-6 (a) The theoretical autocorrelation function in the case of a discontinuous first derivative (fm = 0.01 per sample). (b) The corresponding power spectra, obtained by (i) taking the DFT of the correlation function, and (ii) squaring the Smith filter.

5-7 (i) The magnitude of the difference between the power spectrum obtained as the square of the Smith filter coefficients and the power spectrum obtained as the DFT of the truncated autocorrelation function, relative to the sum of the coefficients of either spectrum. The maximum Doppler frequency is fm = 0.01, and the first derivative of the autocorrelation is continuous. (ii) The power spectrum obtained as the DFT of the truncated autocorrelation function, plotted for reference.

6-1 Block diagram of simulator to generate correlated Rayleigh samples by FIR filtering of white Gaussian noise samples.

7-1 Theoretical bit error rates for BPSK and coherent FSK digital transmission systems.

7-2 The use of the transformations of Section 7.6.1 to form power margin measures. (a) The observed and reference variates expressed as a linear operation on independent variates. (b) The effect of these linear operations on the covariance matrix of each distribution. (c) The application of the congruence transformation to the power margin matrix.

7-3 Histograms showing the number of states having observed power margin within the indicated 0.03 dB intervals. The spacing between correlation function samples satisfies (a) ωm τ = 0.3, (b) ωm τ = 0.33, and (c) ωm τ = 0.1.

7-4 The singular values of the theoretical covariance matrix Cx in the case of J0(·) autocorrelation (5.3) with normalized maximum Doppler fm = 0.05.

8-1 The time to generate 2^16 complex samples using different generation methods.

8-2 The time to generate 2^21 complex samples using different generation methods.

8-3 The number of floating-point operations to generate samples using the direct FIR filtering method, plotted with the number of floating-point operations to generate samples using the IFFT method, as a function of the number of samples generated. The number of points in the IFFT method case is always a power of two.

8-4 The number of floating-point operations to generate samples using the overlap-add FIR filtering method via fftfilt.m, plotted with the number of floating-point operations to generate samples using the IFFT method, as a function of the number of samples generated. The number of points in the IFFT method case is always a power of two.

8-5 The number of floating-point operations to generate samples using the SOS method, plotted with the number of floating-point operations to generate samples using the IFFT method, as a function of the number of samples generated. The number of points in the IFFT method case is always a power of two.

8-6 The computed quality measure in dB as a function of filter length for the FIR filtering method. A theoretically determined autocorrelation sequence length of L = 200 was used.
List of Tables
8.1 A comparison between the IFFT method, a sum-of-sinusoids method and an FIR filtering method using the developed quality measures, for covariance sequence length 200.
List of Abbreviations
CAS     Conjugate-antisymmetric
CS      Conjugate-symmetric
dB      Decibels
DFT     Discrete Fourier Transform
FIR     Finite impulse response
FFT     Fast Fourier Transform
IDFT    Inverse Discrete Fourier Transform
IFFT    Inverse Fast Fourier Transform
i.i.d.  Independent and identically distributed
IIR     Infinite impulse response
For every
Time average
Largest integer less than or equal to x
Gamma function
Partition width in Riemann sum
Dirac delta function
Realized maximum Doppler frequency in Hertz
Eigenvalue matrix of subscripted vector
Deterministic continuous-time filter autocorrelation
Variance
Sampling rate of discrete-time system
Analog frequency in Hertz
Maximum Doppler frequency in Hertz
L-dimensional region of integration
Analog radian frequency
Maximum Doppler frequency in radians
Vector of independent zero-mean Gaussian variates
Area under power spectrum from zero frequency to ω
Vector of independent zero-mean Gaussian variates
Continuous-time autocovariance function
Covariance matrix of subscripted vector
Diagonal matrix of singular values
ith singular value
Sample lag
Expected value, or ensemble average
Transmitted power
Eigenvector matrix of subscripted vector
Filter frequency coefficients
Complex filter frequency coefficients
Imaginary part of Fc[k]
Real part of Fc[k]
Filter coefficients of Smith algorithm
Filter coefficients of modified algorithm
Carrier frequency, normalized by sample rate
Maximum Doppler frequency, normalized by sample rate
Joint probability density function of X1 and X2
Probability density function of X
Power margin
Discrete power spectrum coefficients
Imaginary part of g[d]
Real part of g[d]
IDFT coefficients of discrete power spectrum
Filter frequency response
FIR filter coefficients
Infinite-length filter impulse response coefficients
Continuous-time filter impulse response
Zero-order modified Bessel function of the first kind
Imaginary part
νth-order Bessel function of the first kind
Index of sample at realized maximum Doppler frequency
Length of test random vector
FIR filter length
Ensemble average of imaginary part
Time average of imaginary part
Ensemble average of real part
Time average of real part
Number of generated variates
Order of time complexity
The Q-function
Rayleigh random variable
Real part
Continuous-time autocorrelation
Discrete autocorrelation of imaginary part
Discrete cross-correlation between real and imaginary parts
Discrete autocorrelation of real part
Power spectral density
Analog power spectrum with realized maximum Doppler
Noncentrality parameter of Rice distribution
Length of signal record in seconds
Transformation matrix
Reference random vector
Random vector under test
Discrete Fourier Transform coefficients
Complex Gaussian random variable
Real Gaussian random variable
Imaginary part of IDFT output
Real part of IDFT output
Complex sequence of correlated variates
Real sequence of correlated variates at output of ith branch
Complex IDFT output for ith branch
Chapter 1
Introduction
1.1 Background and Motivation
Digital computer simulation is widely used to design and develop wireless transmission systems and the components of wireless transmission systems. Receiver demodulator structures and error correcting codes are examples of components often developed or verified using simulation. System performances such as coverage and outage are also frequently assessed by computer simulation. The fading caused by multipath propagation in wireless systems is well modelled in some practical cases by the Rayleigh distribution function. It is well known that discrete-time samples of a realistic Rayleigh fading process must necessarily be correlated, and that the correlation function is dependent upon the Doppler frequency corresponding to the relative motion of the receiver and transmitter as well as other factors such as antenna characteristics. Digital simulation of the signal fading thus requires the generation of correlated Rayleigh random variates.
Rayleigh-distributed variates are obtained as the magnitude of variates having a zero-mean complex Gaussian distribution. Many simulation packages and designs require the input of random variates having a multivariate Gaussian distribution or random variates having a multivariate distribution that derives from the multivariate Gaussian distribution. While in this thesis we consider Rayleigh random variates needed in simulations of land mobile and macrocellular systems, Ricean and Nakagami-m random variates are required for simulation of microcellular and indoor propagation environments, and these are also derived from multivariate Gaussian distributions. There is a lack of a well-defined and meaningful measure of the quality of these generated random variates, which complicates the design and comparison of generation algorithms for communication system simulation applications. Conventional goodness-of-fit tests are dominated by events of high probability and do not sufficiently weight important events (for example, error events) which typically have small probabilities of occurrence. In evaluations of various variate generation methods, qualitative visual inspection of random sample autocorrelation functions, first-order empirical distribution functions, or Rayleigh time sequences is often employed. Quantitative measures, when used, are not consistent in the literature, and often the relationship of these measures to system simulation accuracy is unclear or misleading. A quantitative quality measure relevant to communication system design would provide much more specific information about the available variate generation methods.
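The first point above, that Rayleigh variates arise as the magnitude of zero-mean complex Gaussian variates, can be illustrated directly. The following is a minimal NumPy sketch; the sample size, seed, and variable names are illustrative and not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0                       # per-component standard deviation

# Zero-mean complex Gaussian variates: independent real and imaginary parts.
z = rng.normal(0.0, sigma, 10_000) + 1j * rng.normal(0.0, sigma, 10_000)

# The magnitude of each complex Gaussian variate is Rayleigh distributed,
# with theoretical mean sigma * sqrt(pi / 2) and variance (2 - pi/2) * sigma^2.
r = np.abs(z)
```

These variates are uncorrelated; the methods discussed in this thesis additionally impose the required correlation structure on the underlying Gaussian process.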
This thesis addresses the problem of generation of correlated Rayleigh random variates as well as the problem of evaluating the quality of generated Rayleigh variates or other variates derived from the multivariate Gaussian distribution. We first consider an algorithm due to Smith [1] for the generation of correlated Rayleigh random variates. This algorithm has gained wide application in digital simulation of wireless systems [3]-[7]. The method is based on using an Inverse Discrete Fourier Transform (IDFT) on independent Gaussian input variates to generate two independent vectors of correlated Gaussian samples, which are then combined in quadrature to produce Rayleigh samples. The discrete correlation corresponding to a specified fading spectrum is achieved by multiplication in the frequency domain by appropriate filter weights prior to the IDFT operation. In this thesis, a modification of Smith's algorithm for the generation of correlated Rayleigh random variates is presented. The new method is superior to the method of [1] in that it requires exactly one-half of the number of IDFT operations and roughly two-thirds of the computer memory required by the latter. This thesis also provides an analysis of the statistical properties of IDFT-based methods for correlated Rayleigh sample generation. Such an analysis is not given in [1], though it is needed to understand and justify the statistical properties of IDFT methods.
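The IDFT-based generation idea just described can be sketched numerically. The sketch below is a simplified illustration, not Smith's exact routine: it takes the filter weights directly as the square root of a sampled Jakes Doppler spectrum, zeroes the bins at and beyond the maximum Doppler frequency (the edge-bin handling analyzed in Chapter 5 is ignored), and the function name and parameters are hypothetical.

```python
import numpy as np

def rayleigh_idft(n, fm, rng):
    """Simplified sketch of an IDFT-based correlated Rayleigh generator.

    n  : number of samples (a power of two is efficient for the FFT)
    fm : maximum Doppler frequency, normalized by the sample rate
    """
    # DFT bin frequencies in cycles/sample, in [-1/2, 1/2).
    f = np.fft.fftfreq(n)

    # Sampled Jakes Doppler spectrum S(f) = 1 / sqrt(1 - (f/fm)^2) for |f| < fm;
    # bins at or beyond fm are simply set to zero in this sketch.
    S = np.zeros(n)
    inside = np.abs(f) < fm
    S[inside] = 1.0 / np.sqrt(1.0 - (f[inside] / fm) ** 2)

    # Independent complex Gaussian variates, shaped in the frequency domain by
    # the filter weights sqrt(S[k]), then transformed by a single IFFT.
    g = rng.normal(size=n) + 1j * rng.normal(size=n)
    x = np.fft.ifft(np.sqrt(S) * g)

    return np.abs(x)   # correlated, Rayleigh-distributed samples
```

The real and imaginary parts of the IFFT output play the role of the two quadrature Gaussian sequences; their magnitude gives the Rayleigh envelope.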
Another contribution of this thesis is the provision of a finite impulse response (FIR) digital filter specification which, as the number of filter taps tends to infinity, exactly realizes the discrete autocorrelation function of a vertical monopole antenna in isotropic scattering. This autocorrelation function, given by the zero-order Bessel function J0(·), is often used in the multipath fading channel model. Comparison is made between this FIR approach and the IDFT method.
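One plausible numerical realization of such an FIR generator is sketched below. It assumes the taps may be approximated by sampling the square root of the Jakes spectrum on a dense frequency grid, inverse transforming, and truncating to L centered taps; the exact tap specification, and what the truncation costs, are the subject of Chapter 6. All names and parameter choices here are illustrative.

```python
import numpy as np

def fir_taps(fm, L, grid=8192):
    """Sketch: FIR taps whose output autocorrelation approximates J0(2*pi*fm*k).

    Taps come from the inverse DFT of the square root of the Jakes Doppler
    spectrum sampled on a dense grid, keeping the L taps around the origin.
    """
    f = np.fft.fftfreq(grid)
    H = np.zeros(grid)
    inside = np.abs(f) < fm
    H[inside] = (1.0 - (f[inside] / fm) ** 2) ** -0.25   # sqrt of Jakes PSD
    h = np.fft.ifft(H).real                              # symmetric, real taps
    h = np.concatenate([h[-(L // 2):], h[: L - L // 2]]) # center the peak
    return h / np.sqrt(np.sum(h ** 2))                   # unit output power

def rayleigh_fir(n, fm, L, rng):
    # Filter complex white Gaussian noise, then take the envelope.
    h = fir_taps(fm, L)
    g = rng.normal(size=n + L - 1) + 1j * rng.normal(size=n + L - 1)
    x = np.convolve(g, h, mode="valid")   # correlated complex Gaussian
    return np.abs(x)                      # correlated Rayleigh variates
```

Unlike the IDFT method, this generator is streaming: it can produce an arbitrary number of samples without fixing the sequence length in advance.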
To address the problem of assessing the statistical quality of a sequence of random variates, quantitative quality measures of random variate generation are proposed that are, in particular, meaningful and useful for digital communication system simulation. These measures are applied to the IDFT method of generating correlated Rayleigh variates, the FIR filtering method, and a method based on superposition of complex sinusoids. This evaluation shows that the IDFT method compares very favorably for equivalent computational effort.
1.2 Outline
This thesis has the following structure. Chapter 2 reviews some of the published literature on the topics of Rayleigh variate generation and quality measures for random variate generation. Chapter 3 presents an explanation of the Smith algorithm, and a complete analysis of the statistical properties of this algorithm. Chapter 4 gives the modification to the algorithm reducing the required computer resources. Chapter 5 presents a detailed explanation of the filter coefficients used in the Smith routine, and investigates the possibility of improving the filter. The FIR filter approach allowing generation of Rayleigh variates with arbitrary autocorrelation accuracy is presented in Chapter 6. Chapter 7 derives quality measures to be used in the evaluation of random variate generation algorithms. Following this derivation, the application of these measures to routines generating correlated Rayleigh variates is given in Chapter 8, along with comparisons of computational effort. Finally, Chapter 9 presents some concluding remarks.
Chapter 2
Literature Review
2.1 Introduction
Key references on the general theory of the Rayleigh multipath fading channel considered in this thesis and related fading channels include references [8]-[12]. Other relevant literature includes articles which use or refer to the IDFT routine by Smith found in [1]; references discussing other methods of generating Rayleigh random variates; and literature which addresses the issue of evaluating the quality of the distribution of Gaussian random sequences.
2.2 The IDFT Method and Related Simulators
Smith's routine [1] is based on an analog hardware design of Arredondo, Chriss and Walker [2], which will be outlined in Chapter 3. This analog hardware simulator has been widely accepted. The hardware design has been used as a basis for simulations in several papers [13]-[15]. A hardware simulator using this design and based on the 8085 chip is given in [16]. Karim [17] used the simulator in [2] to provide fading statistics for analytical results. The general method of [2] has also been used as a basis for analytical results by Hattori and Hirade [18].

Software simulators based on the hardware simulator of [2] other than the IDFT method of reference [1] are also found in the published literature. Chung [19] gives results of simulations based on the model given in [2], but no routine is given. Karim [20] provides a different software simulator, but the method does not well approximate the autocorrelation and the implementation is inefficient.

Smith's routine itself is referenced in a number of papers as the basis of Rayleigh fading channel simulations. These include references [3]-[7]. Surprisingly, a statistical analysis of the algorithm, absent from [1], has not been found in the published literature despite the widespread use of the method.
2.3 Other Rayleigh Fading Simulators
Other solutions to the problem of simulating the Rayleigh multipath fading channel have been suggested. Reference [8] contains the popular approach due to Jakes (see also the correction provided in [21]), which is a deterministic model that simulates the channel by a superposition of the outputs from sinusoidal generators, the amplitudes and frequencies of which are chosen such that the output signal models the correct fading statistics.
Recently, other generators based on the superposition-of-sinusoids technique have been published. A random model is given in reference [22] (credit for the method is given to Schulze [23]). In this model the frequency of each sinusoid is initialized at the start of the routine, randomly distributed according to the desired power spectrum, and the amplitude of each wave is constant. The paper gives some evaluation in terms of statistical accuracy and computational effort. However, the comparison between the method and other generation methods is not clear due to the fact that computational effort is evaluated with only 10 sinusoids, while statistical accuracy is evaluated (by plotting the two-dimensional frequency-selective fading scattering function) using 500 sinusoids. No comparison is made with the DFT algorithm in this paper. An evaluation of this method for modelling flat Rayleigh fading channels is given in Chapter 8.
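The random sum-of-sinusoids model just described can be sketched as follows. The inverse-CDF expression used here to draw Doppler frequencies from the Jakes spectrum is a standard result, but the function name, the 1/sqrt(N) normalization, and the parameter choices are illustrative, not those of [22] or [23].

```python
import numpy as np

def rayleigh_sos(n, fm, num_sin, rng):
    """Sketch of a random sum-of-sinusoids Rayleigh fading generator.

    Each sinusoid has constant amplitude, a uniform random phase, and a
    Doppler frequency drawn from the Jakes spectrum by inverse-CDF sampling:
    for S(f) ~ 1/sqrt(1 - (f/fm)^2), the CDF inverts to fm*sin(pi*(u - 1/2)).
    """
    u = rng.uniform(size=(num_sin, 1))
    freqs = fm * np.sin(np.pi * (u - 0.5))              # Doppler frequencies
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(num_sin, 1))
    t = np.arange(n)

    # Superpose the sinusoids; 1/sqrt(num_sin) keeps the output power fixed,
    # and the central limit theorem makes the sum approximately Gaussian.
    x = np.exp(1j * (2.0 * np.pi * freqs * t + phases)).sum(axis=0)
    x /= np.sqrt(num_sin)
    return np.abs(x)                                    # approximately Rayleigh
```

The trade-off noted above is visible here: statistical accuracy improves with the number of sinusoids, while the cost per sample grows linearly with it.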
Alternate deterministic methods based on superposition of sinusoids have been investigated in [24]. No consideration of generation efficiency is made in the paper. Performance evaluation of the sum-of-sinusoids methods is made by considering the frequency average of the power spectrum, the frequency variance of the power spectrum, and the mean-square error of the autocorrelation function over a specified interval, in addition to plotting empirical autocorrelation sequences, empirical first-order probability density functions, and realized Rayleigh output.
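A mean-square autocorrelation error of the kind used in [24] is straightforward to compute. The sketch below is self-contained (J0 is evaluated from its integral representation rather than a special-function library); the lag normalization and function names are simplifications of my own, not the exact criterion of [24].

```python
import numpy as np

def bessel_j0(x):
    # J0(x) = (1/pi) * integral_0^pi cos(x sin(theta)) d(theta), midpoint rule.
    theta = (np.arange(4000) + 0.5) * (np.pi / 4000)
    return np.cos(np.multiply.outer(x, np.sin(theta))).mean(axis=-1)

def autocorr_mse(x, fm, max_lag):
    """Mean-square error between the normalized empirical autocorrelation of a
    complex fading sequence x and the theoretical J0(2*pi*fm*k), lags 0..max_lag."""
    x = x - x.mean()
    lags = np.arange(max_lag + 1)
    # Empirical autocorrelation, normalized by the zero-lag power.
    emp = np.array([np.real(np.vdot(x[:len(x) - k], x[k:])) /
                    np.real(np.vdot(x, x)) for k in lags])
    theory = bessel_j0(2.0 * np.pi * fm * lags)
    return np.mean((emp - theory) ** 2)
```

A white (uncorrelated) sequence scores poorly under this metric, since its empirical autocorrelation misses the slowly decaying J0 shape entirely.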
Reference [25] presents a method of generating the Rayleigh process based on realizing a harmonic representation of the desired complex Gaussian process using an IDFT operation. Interpolation is used between selected elements of the transformed sequence to form the channel samples. Use of interpolation is based on the fact that the maximum Doppler frequency is usually much less than the sample rate, and, in theory, the fading process can be reconstructed from samples taken at twice the maximum Doppler frequency. Interpolation can be used with any method which accurately generates the Rayleigh samples, including the methods of Chapters 3-6, when the maximum Doppler rate is low compared to the sample rate. References [26]-[27] also utilize interpolation. To compute the samples to be interpolated, the method of [25] in general requires a larger size IDFT operation than the methods of Chapter 3 and Chapter 4. For example, the authors of [25] present a case in which a particular random fading sequence is formed by interpolating between 200 samples contained in an IDFT output sequence. To produce 200 samples, the IDFT method of Chapter 4 requires one IDFT operation on a sequence of length 200. In practice, this IDFT would be computed on a length-256 sequence to take advantage of the greater computational efficiency of the inverse Fast Fourier Transform (IFFT) for sequence lengths which are powers of two. The method of [25], on the other hand, using the parameters given in the paper, requires an IDFT of length 6296 (which would be increased to length 2^13 = 8192), from which the 200 samples are taken for interpolation. One advantage of the method of [25] is the ability to control the accuracy of the approximation using a parameter. This parameter determines the upper bound of (2.1), where g(t) is the theoretical channel response and ĝ(t) is the realized channel response. Since the most severe fading phenomena are observed when |g(t)| is small, and the relative error in large g(t) is emphasized in (2.1), the value of this measure for ascertaining the quality of the generated samples is in doubt. Nonetheless, the method does offer control over the approximation accuracy, at the expense of increased IFFT sequence lengths. No comparison is made to the algorithm of Smith in the paper.
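The band-limited interpolation underlying these methods can be illustrated with DFT zero-padding. This is a generic sketch of the interpolation idea for a process whose energy sits well below the Nyquist frequency, not the specific scheme of [25]; the function name and the even-length spectrum split are simplifications.

```python
import numpy as np

def upsample_fft(x, factor):
    """Band-limited interpolation of a periodic complex sequence by inserting
    zeros between the positive- and negative-frequency halves of its DFT."""
    n = len(x)
    X = np.fft.fft(x)
    half = n // 2
    # Zero-pad the spectrum; the scaling by `factor` preserves amplitude.
    Xpad = np.concatenate([X[:half],
                           np.zeros((factor - 1) * n, dtype=complex),
                           X[half:]])
    return np.fft.ifft(Xpad) * factor
```

For a fading process with fm much less than the sample rate, a generator can thus produce a short low-rate sequence and interpolate it up to the simulation rate, which is the efficiency argument made for interpolation-based methods.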
Wickert and Jacobsmeyer [27] simulate the mobile channel by filtering a white Gaussian noise source using two cascaded infinite impulse response (IIR) filters, utilizing interpolation between generated samples. The autocorrelation properties of the simulator output presented in the paper are unimpressive, however.
Reference [28] explores the use of autoregressive-moving average (ARMA) modelling to obtain IIR filter coefficients which achieve more accurate results than conventional IIR filter approximations while maintaining good computational efficiency. However, redetermination of the filter coefficients for an autocorrelation sequence other than that modelled in the paper is nontrivial. This other work is ongoing, and IIR filter designs are not considered further in this thesis.
Some recent literature has proposed modelling the Rayleigh channel by a Markov process [29]-[31]. The Markov approach is examined in detail in [32].
Much literature exists proposing simulators that model mobile channels other than the Rayleigh fading channel [33]-[36]. Yet, the Rayleigh fading channel model has remained in common use, in part because channel simulation routines for the Rayleigh channel both have widespread availability and are computationally practical, a situation which is not true for all channel models.
2.4 Quality Measures for Random Variate Generation
We will examine methods commonly used to evaluate Rayleigh fading generators, and also techniques in the statistical literature for comparison of multivariate Gaussian distributions.
2.4.1 Techniques Employed for Comparison of Random Variate Generators
Evaluations of and comparisons between random variate generation methods often are provided using qualitative methods. Such qualitative techniques, in addition to being imprecise, often yield information that does not necessarily have a direct relationship to the quality of the variates.
For example, references [8],[21],[24],[31] appeal to qualitative visual inspection of random sample autocorrelation functions. However, the relative significance of error in the autocorrelation function close to the zero-lag point (that is, at closely spaced samples) compared to error at large sample lags (that is, at widely spaced samples) has not been well defined. Also, error near the zero-crossings of the autocorrelation function has different importance than error near the maxima and minima of this function. These effects are not obvious from qualitative observation.
As another example, references [8],[24],[37] compare plots of first-order empirical cumulative distribution functions or first-order empirical probability density functions, in addition to autocorrelation function comparisons. This approach also does not offer a complete assessment of the quality of the samples. Errors in the cumulative distribution function in regions of low probability density may be more significant than errors in regions of high probability, since in communication system applications we typically are concerned with events of low probability, such as error events. However, the distribution in these regions can be difficult to estimate with finite sample sizes. Furthermore, if linear axis scaling is used (as in [24]), differences in the tails of the distribution are not apparent in the plotted function. Also, since qualitative analysis of higher-order distribution functions is not possible, correlation properties are ignored by this method.
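The difficulty of estimating tail probabilities from finite samples, noted above, can be illustrated with a quick sketch (the thresholds, sample size, and helper name below are illustrative, not taken from the thesis): with 10,000 samples, a tail event of probability near 3 × 10⁻⁷ is essentially never observed, so empirical distribution plots say nothing about that region.

```python
import math
import random

random.seed(1)

def gaussian_tail(t):
    """Exact P(X > t) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(t / math.sqrt(2.0))

n = 10_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

for t in (1.0, 3.0, 5.0):
    empirical = sum(1 for x in samples if x > t) / n
    print(f"t={t}: exact={gaussian_tail(t):.2e} empirical={empirical:.2e}")
```

At t = 5 the exact tail probability is about 2.9 × 10⁻⁷, far below the resolution of the empirical estimate.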
References [1],[24] plot actual generator output for direct inspection. Use of the plot of a single realization of the random process to evaluate the quality of the process provides only a rough estimate that the observed output matches the specified random process, and is of limited value in the comparison of the better variate generation methods.
Quantitative techniques have been used in the published literature. These also often do not have a clear relationship to the quality of variates for simulation applications. For example, in reference [24] the mean-square error of the autocorrelation function is computed over a specified interval, but it is not demonstrated that the absolute autocorrelation error is equally significant over its entire domain with respect to variate quality, or that absolute autocorrelation error is more significant than the relative autocorrelation error at a particular sample lag. Reducing to a single number the difference between the autocorrelation function of the generator output and a theoretical autocorrelation function does not provide a meaningful measure of variate quality unless the uniform weighting of autocorrelation error, which ignores important differences between autocorrelation functions evident even in the qualitative comparison, can be justified.
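The uniform weighting objected to above can be made concrete: a single mean-square number assigns the same score to error concentrated near zero lag and to error concentrated at large lags. A toy sketch with invented sequences (the function and data are ours, not the measure of any cited reference):

```python
import math

def autocorr_mse(r_est, r_theory):
    """Unweighted mean-square error between two autocorrelation
    sequences over a common range of lags; the single-number summary
    discussed in the text."""
    assert len(r_est) == len(r_theory)
    return sum((a - b) ** 2 for a, b in zip(r_est, r_theory)) / len(r_est)

# Two estimates with identical MSE but errors in different lag regions
# are judged identical by this measure.
r_theory = [math.cos(0.2 * k) for k in range(50)]
err_near = [r + (0.1 if k < 5 else 0.0) for k, r in enumerate(r_theory)]
err_far = [r + (0.1 if k >= 45 else 0.0) for k, r in enumerate(r_theory)]
print(autocorr_mse(err_near, r_theory), autocorr_mse(err_far, r_theory))
```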
Reference [25] uses as a measure 10 log of a weighted mean-square covariance error, where r_t^(ij) is the element at the ith row and jth column of the theoretical covariance matrix, and r_e^(ij) is the corresponding element in an estimated covariance matrix. This is a weighted mean square autocorrelation error, placing greater emphasis on autocorrelation error for closely-spaced samples. The use of this measure is not justified in [25], and, like the unweighted mean square autocorrelation error measure, the relationship to variate quality is not evident.
2.4.2 Techniques for Comparison of Multivariate Gaussian Probability Density Functions
It is necessary also to consider techniques existing in the statistical literature to compare multivariate Gaussian probability distributions, which have not been applied to variate generation problems in the published literature. Techniques available for estimation and comparison of probability distributions and densities can be divided into parametric and nonparametric methods. In parametric methods, an assumption is made that the probability density function is well modelled by a particular family of densities which depend on certain parameters. For example, most common in parametric modelling is the assumption that data have a Gaussian distribution, which is fully specified by the parameters of mean μ and variance σ² in the univariate case, or by the mean vector μ and covariance matrix Σ in the multivariate case. Estimation of the multivariate Gaussian probability density function is equivalent to estimation of the parameters μ and Σ, and two densities can be compared by comparison of μ and Σ.
Nonparametric methods, on the other hand, make no assumptions about the distribution of the data. Every point in the probability density function or cumulative distribution function of a particular set of data must therefore be estimated, and comparison of two density functions or distribution functions requires comparison over the entire domain of each function.
Parametric models are preferred when the model is known to be sufficiently accurate. However, the error in using a poorly specified parametric model can be severe, and in many applications accurate specification of a model is difficult. This is especially true in social science and psychology applications, for example, where the processes underlying certain observations are unknown and the observations may be limited or incomplete. Many of the published techniques for estimating and comparing probability densities are thus of the nonparametric type. These include multivariate formulations (such as found in [38]) of univariate tests such as the Kolmogorov-Smirnov statistic [39]-[40], the Cramér-von Mises statistic [40], and the Shapiro-Wilk test for normality [41]. Other nonparametric tests can be found in reference [42].
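As an illustration of the nonparametric approach, a minimal two-sample Kolmogorov-Smirnov statistic (the maximum gap between the two empirical CDFs) can be sketched as follows; this is a generic textbook formulation, not the multivariate version of [38]:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs, evaluated at all
    observed points."""
    a = sorted(sample_a)
    b = sorted(sample_b)
    d = 0.0
    for t in a + b:
        fa = sum(1 for x in a if x <= t) / len(a)
        fb = sum(1 for x in b if x <= t) / len(b)
        d = max(d, abs(fa - fb))
    return d

print(ks_statistic([1, 2, 3, 4], [1, 2, 3, 4]))  # identical samples -> 0.0
print(ks_statistic([0, 0, 0], [1, 1, 1]))        # disjoint samples -> 1.0
```

Note that such a statistic summarizes the whole empirical CDF, so it says little about the sparsely sampled tails, which is precisely the weakness discussed in the following paragraphs.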
However, in cases where a parametric model can be well specified, parametric methods are to be preferred. Since in a parametric model the probability density function of a set of data is precisely determined over its entire domain by a relatively small number of parameters, it is possible to accurately compare two probability density functions over their entire domain by comparing this small number of parameters. Also, nonparametric models often do not well define the "tails" of the density, the probability density of rare-occurring events. These small probabilities are difficult to estimate with acceptable certainty due to the infrequent nature of the events. Specifying the parameters of a Gaussian noise distribution, for example, allows accurate assessment of error events occurring with low probability, say, 10⁻¹⁰. Nonparametric methods require a very large amount of data to specify such an event probability with acceptable certainty. Consequently, nonparametric methods of comparing two distribution functions tend not to emphasize adequately the differences between the probability densities in regions of low probability. Yet, in many communication system applications, this is precisely the region of the probability density function that is most important to system performance. Therefore a parametric approach is to be favored, in contrast to many published goodness-of-fit tests. The choice of a parametric approach is very reasonable in the case of evaluating the output of a random variate generator, since we have access to the underlying process generating the variates, and thus can be assured of accurate model selection.
In Chapter 7 we propose quality measures for random variate generation. The quality measures assume that the generated variates have a joint normal distribution, or a distribution related to the joint normal distribution. Further, the mean vector is assumed to be zero. Thus, the probability density function of the output of the random variate generator can be compared to a reference probability density function by a comparison of the parameters in the covariance matrix of each distribution.
Two tests for comparing two multivariate normal probability density functions using the covariance matrix are found in the classic textbook by Anderson [43, Section 10.6]. These both take the form of hypothesis tests. That is, the purpose of the test is to determine whether the two distributions are the same (the null hypothesis) or different (the alternate hypothesis), based on the value of a computed criterion. This criterion is defined in terms of the covariance matrix parameters in the case where the mean vector is known, and the result is a number which can be compared with a threshold to determine whether the hypothesis is true or false. Other hypothesis tests for covariance matrix equality are found in reference [41].
One property of these test criteria which affects their usefulness as quality measures is the invariance of the tests to transformation of both random vectors by the same known affine transformation. Such transformations are of the form

X' = TX + μ'

and

x' = Tx + μ',

where X and x are respectively the reference and test L×1 random vectors, X' and x' are the corresponding transformed L×1 vectors, T is a known nonsingular L×L matrix, and μ' is a known L×1 vector. Since X and x are equal if and only if X' and x' are also equal, the affine transformation does not alter the hypothesis, and thus the invariance property of the hypothesis test is a typical requirement for hypothesis tests on the covariance matrix. However, such a transformation does not necessarily leave the quality of the vector x unaltered. That is, the inaccuracy of system simulation results or specific event probabilities resulting from use of the distribution of the vector x' instead of the perfect distribution of X' may be more or less severe than the inaccuracy due to use of x instead of X in a different simulation situation. This will be illustrated with a simple example.
Suppose the vector X = (X₁, X₂)ᵀ is a two-dimensional vector of independent zero-mean normal random variates, distributed according to some desired specification. The vector x = (x₁, x₂)ᵀ, at the output of a random variate generator, is a two-dimensional vector of independent zero-mean normal variates, the variances of which differ somewhat from the perfect case represented by X. Now, a particular simulation requires as input a random vector Y = (Y₁, Y₂)ᵀ, where

Y₁ = X₁ + 20X₂

and

Y₂ = X₁ − 20X₂.

A second simulation requires as input a random vector Z = (Z₁, Z₂)ᵀ, where

Z₁ = X₁ + 10⁻²X₂

and

Z₂ = X₁ − 10⁻²X₂.

Vectors Y and Z represent two different affine transformations of the vector X. Invariant tests performed on vectors Y and Z, against the respective vectors y and z formed by the same transformations on x, should yield the same result as a test directly performed on X and x. The hypothesis that Y = y is equivalent to the hypothesis that Z = z, and equivalent to the hypothesis that X = x, and thus this invariance property is desirable.
Yet, the quality of the variates is not invariant to the transformation. Suppose the distribution of the generated variate x₁ is extremely accurate, while the distribution of x₂ contains some error in the variance parameter. The vector y, heavily dependent on the distribution of x₂, may yield very inaccurate results in a simulation application. On the other hand, the vector z, with small dependence on x₂, may yield results that, while imperfect, deviate only slightly from the true results. Neither y nor z represents an ideally-distributed vector, but the quality of z can be considered to be better than the quality of y in a typical application. This quality is not well measured by conventional hypothesis test criteria for covariance matrix equality, due to their invariance properties, and hence the need for a new measure to quantitatively assess this quality for generated variates.
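The example can be checked with a few lines of arithmetic. Since Var(X₁ + cX₂) = Var(X₁) + c² Var(X₂) for independent components, an error in the variance of X₂ propagates strongly through the first transformation (c = 20) and negligibly through the second (c = 0.01). The function and the 50% illustrative variance error below are ours:

```python
def rel_error(c, var_err):
    """Relative error in Var(X1 + c*X2) when Var(X2) is (1 + var_err)
    instead of 1; X1, X2 independent, zero mean, unit nominal variance."""
    v_ideal = 1.0 + c * c
    v_test = 1.0 + c * c * (1.0 + var_err)
    return abs(v_test - v_ideal) / v_ideal

# X2's variance is 50% too large at the generator output.
for c in (20.0, 0.01):   # Y-like and Z-like weightings of X2
    print(f"c = {c}: relative variance error = {rel_error(c, 0.5):.2e}")
```

The first case shows a relative error near 0.5; the second, on the order of 10⁻⁵, although the invariant hypothesis tests treat both cases identically.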
When testing variate generation methods, it is often known before any test is applied that equality of the sample covariance matrix with the reference matrix does not exist. The primary purpose of the test is therefore to provide a quantitative measure of the degree to which the difference between the distributions affects a typical simulation result. Unlike some criteria for hypothesis testing, the measures proposed in Chapter 7 directly quantify this difference. Moreover, the measures relate closely to a univariate measure often used to compare performance among communication systems, and thus the impact of using the approximate probability density function of generated samples in a typical simulation application is quantitatively expressed in a form intuitive to designers of communication systems. Such measures are believed to be lacking in the published literature, and are needed to make meaningful evaluation of competing variate generation methods.
Chapter 3
Statistical Analysis of Smith Algorithm
3.1 Introduction
As indicated in Chapter 2, the algorithm due to Smith has been widely used for digital communication system simulation. In the original paper [1], the algorithm is presented in FORTRAN code without explanation or justification of its statistical properties. We present in this chapter both a full description of the method and an analysis of the distribution of the output samples from the routine. In the original paper the algorithm specifically models multipath fading due to isotropic scattering with a vertical monopole antenna at the receiver; this is accomplished by the particular definition of the filter coefficients in the algorithm. We will postpone discussion of this specific fading channel model, and the filter coefficient sequence definition, until Chapter 5. The algorithm as discussed in this chapter and Chapter 4, subject to the limitations of the algorithm itself which will be noted, is general with regard to the autocorrelation of the output samples.
We begin with a description of the algorithm. The desired output is a Rayleigh-distributed sequence with specified correlation properties. This Rayleigh-distributed output sequence is formed in the Smith algorithm by taking the magnitude of a zero-mean complex Gaussian sequence. A complex Gaussian sequence has elements of the form

X_c = X₁ + jX₂,

where X₁, the real part denoted Re{X_c}, and X₂, the imaginary part denoted Im{X_c}, are jointly normal random variates. In the case in which X₁ and X₂ are independent and identically distributed (i.i.d.) with zero mean and variance σ², the joint probability density function of the two random variates is given by [45]

f(x₁, x₂) = (1/(2πσ²)) exp(−(x₁² + x₂²)/(2σ²)).   (3.1)

The magnitude of X_c is the random variable

R = |X_c| = √(X₁² + X₂²),

and with X₁ and X₂ having density (3.1), the probability density function for R = |X_c| is given by

f_R(r) = (r/σ²) exp(−r²/(2σ²)),  r ≥ 0,

the Rayleigh density function [46].
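A quick numerical sanity check of this construction (the value of σ and the sample size below are illustrative): the magnitude of a zero-mean complex Gaussian sample with i.i.d. parts of variance σ² should have the Rayleigh mean σ√(π/2).

```python
import math
import random

random.seed(3)
sigma = 2.0
n = 100_000

# |X1 + jX2| with X1, X2 i.i.d. N(0, sigma^2) is Rayleigh distributed.
r = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
     for _ in range(n)]

mean_est = sum(r) / n
mean_theory = sigma * math.sqrt(math.pi / 2.0)   # Rayleigh mean
print(mean_est, mean_theory)
```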
To generate N correlated Rayleigh-distributed variates, the Smith algorithm generates two sequences {x₁[n]} and {x₂[n]}, where the range of n is [0, N−1]. The elements of each sequence have a joint normal distribution, with zero mean and specified correlation properties (discussed in detail in Section 3.3). The two sequences are independent. Therefore, the two sequences are also uncorrelated; that is,

E{x₁[m]x₂[n]} = 0

for every m and n in the range [0, N−1], where E{·} is the statistical expectation operator. This independence is assured because {x₁[n]} and {x₂[n]} are formed from distinct output sequences of an independent random number generator. Thus, the output sequence is formed as {|x_c[n]|}, where

x_c[n] = x₁[n] + jx₂[n],

and the quality of the distribution of the Rayleigh output samples depends completely on the quality of the Gaussian sequences {x₁[n]} and {x₂[n]}. To accomplish the analysis of the statistical properties of the Rayleigh output samples, therefore, we will show that the sequences {x₁[n]} and {x₂[n]} have a common probability density function that closely approximates the desired one.
Smith used as a basis for his algorithm a continuous-time multipath fading simulator for mobile radio [2], and we briefly look at the theory behind this analog simulator before moving to the discrete-time computer algorithm of Smith. A block diagram of the analog fading simulator, as provided in [2], is given in Figure 3-1.

Figure 3-1. Block diagram of the analog multipath fading simulator for mobile radio published by Arredondo, Chriss, and Walker, as given in reference [2].

The output of this system is a fading signal centered at the carrier frequency produced by the diagram block "RF source". The output of the lower shaping filter is the quadrature component of the baseband fading signal, while the output of the upper shaping filter is the in-phase component of the baseband fading signal. The output of Smith's algorithm is, rather, a sequence of samples of the baseband signal itself, with the in-phase component as the real part of the output sequence and the quadrature component as the imaginary part of the output sequence.
We consider in detail one branch of Figure 3-1, say the branch producing the in-phase fading signal. We denote the power spectral density of the output from the white Gaussian noise source S(ω), where

S(ω) = N₀/2.

The time correlation of the noise generator output, r(τ), is therefore

r(τ) = (N₀/2) δ(τ),

where δ(τ) is the Dirac delta function and τ denotes the separation between observation times. We denote the frequency response of the shaping filter as H(jω), and the impulse response of the filter as h(t). The power spectral density at the output of the filter will therefore be given by [6, equation (10-138)]

S_out(ω) = (N₀/2) |H(jω)|².

In the time domain, the autocorrelation of the random process at the output of the filter is given by

r_out(τ) = (N₀/2) ρ(τ),

where

ρ(τ) = ∫ h(t) h(t + τ) dt

is the deterministic autocorrelation of the filter response. The last expression indicates that the function |H(jω)|² is the Fourier transform of the function ρ(τ), which is proportional to the correlation of the shaping filter output. Thus, to achieve a fading output with a particular autocorrelation using this system, we may find the Fourier transform of this correlation, take the square root, and attempt to design a filter with the resulting frequency response. In most cases, including that of [2], this is not possible and the frequency response can only be approximated by the analog filter.
There is more than one way to transfer these concepts to the digital domain. An obvious way is to insert a sampler after the white Gaussian noise source in the block diagram of Figure 3-1, or equivalently generate the white Gaussian noise in discrete time, and implement the shaping filter as a digital filter. The continuous-time filter used in [2] could readily be realized as an infinite impulse response digital filter (using the methods of [47, Section 7.1], for example) and the discrete-time system would be complete. However, the analog filter of [2] is a poor match to the square root of the theoretical power spectrum of the fading signal (this specific spectrum will be presented in Chapter 5). The desired power spectrum is non-rational, so a method to realize the ideal power spectrum exactly in an infinite-impulse response design does not exist.
The shaping filter can also be implemented as a finite impulse response (FIR) design. A suggestion for this design is presented in Chapter 6. As the filter length goes to infinity this filter design approaches the ideal filter response. However, the computational requirements for time-domain implementations of such a filter are impractical. The ideal correlation function decays slowly, and consequently FIR filters of short length do not result in good approximations to the ideal correlation function.
The algorithm in Smith's paper presents a third way to perform the filtering, without explanation or justification, however. The filtering operation in the continuous-time case can be represented as a multiplication of the Fourier transform of the input signal by the frequency response of the filter, followed by an inverse Fourier transform to obtain the time-domain output. This is an exact representation. In discrete time, with continuous frequency spectrum and continuous filter frequency response, this representation also holds, assuming both signals are bandlimited to less than half the sampling frequency. This bandlimiting requires an infinite-length input signal record [48]. When we introduce the restriction that the input signal is of finite duration, as must be the case in a computer simulation, this representation becomes an approximation to the infinite-time case, because the bandlimited assumption no longer holds.
The continuous spectrum associated with the discrete-time signal will be periodic, with period equal to the sampling rate. If we take one such period, say that from zero frequency to the sampling rate, and sample this in frequency, we will obtain the discrete Fourier transform (DFT). The DFT of a data sequence {x[n]}, n = 0, 1, …, N−1, is defined as

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j2πkn/N},  k = 0, 1, …, N−1,

while the inverse DFT (IDFT) operation on a sequence {X[k]}, k = 0, 1, …, N−1, is given by

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j2πkn/N},  n = 0, 1, …, N−1.
The DFT operation transforms a discrete time-domain sequence into a discrete frequency-domain sequence. The IDFT operation can be implemented on a digital computer as the inverse Fast Fourier Transform (IFFT), so we are in a position to realize the frequency-domain filtering process as multiplication of the DFT of the input signal with the DFT of the filter impulse response, followed by an inverse DFT. It should be noted that multiplication of the discrete Fourier transforms of the input signal and the filter impulse response does not yield the linear convolution of the input and the filter response, but rather the circular convolution of the two signals. In other words, we obtain the linear convolution of periodic time-domain sequences (the time sequences repeated with period equal to the number of samples), and we will observe aliasing in the time domain. However, the magnitude of the filter impulse response elements in the typical filter will be small near the endpoints of the sequence, so we expect that with suitable signal parameters it is possible to keep this error small, and indeed we will observe in Chapter 8 that the approximation in the case of Smith's algorithm is very good.
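The circular (rather than linear) convolution produced by DFT-product filtering can be seen directly with a small sketch using direct O(N²) DFT sums (the toy sequences are ours; note the wrap-around aliasing in the first output sample, which would be 1.0 under linear convolution):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def circular_conv_direct(x, h):
    """Circular convolution computed directly, with indices taken mod N."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 1.0, 0.0, 0.0]

via_dft = idft([a * b for a, b in zip(dft(x), dft(h))])
direct = circular_conv_direct(x, h)
print([round(v.real, 6) for v in via_dft])  # matches the direct result
print(direct)
```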
It must be further noted that the frequency-domain input signal must be complex in this method, as a consequence of a well-known property of the IDFT. It will be shown in Section 4.2 that the inverse DFT of a purely real sequence is conjugate-symmetric, meaning

x[n] = x*[N − n].

Due to this symmetry, if the sequence {X[k]} is purely real, the real part of {x[n]} will contain fewer than N unique random variates (rather, only N/2 + 1 unique variates¹). Hence, the frequency-domain sequence used as input to the IDFT must be complex. This point will be made clearer in Section 3.3, where a full statistical analysis of the output samples is presented.
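The conjugate-symmetry property can be verified numerically: for a purely real {X[k]}, the IDFT output satisfies x[n] = x*[N − n] (indices mod N), so the real part of {x[n]} contains at most N/2 + 1 distinct values. A sketch with an illustrative N:

```python
import cmath
import random

random.seed(5)
N = 8

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

# Purely real frequency-domain sequence.
X = [random.gauss(0.0, 1.0) for _ in range(N)]
x = idft(X)

# Conjugate symmetry: x[n] = conj(x[N - n]), indices taken mod N.
for n in range(N):
    assert abs(x[n] - x[(N - n) % N].conjugate()) < 1e-9

# Hence the real part of x repeats: at most N//2 + 1 distinct values.
distinct = {round(x[n].real, 9) for n in range(N)}
print(len(distinct))
```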
Since the filtering operation is performed in the frequency domain, there is no need to specify a time-domain filter impulse response, and hence Smith specifies only the discrete frequency response of the filter (which is computationally much easier). Further, a time-domain sequence of uncorrelated zero-mean complex Gaussian variates corresponds to a frequency-domain sequence of uncorrelated zero-mean complex Gaussian variates with a different variance parameter. Hence, the input sequence can be directly generated in the discrete frequency domain also. (This will not be shown here, since the statistical analysis in Section 3.3 is sufficient to prove the validity of the algorithm.)
The Smith algorithm is presented as a block diagram in Figure 3-2. We start with a sequence of N complex Gaussian variates, formed from two sequences which we will denote {A[k]} and {B[k]}, k = 0, 1, …, N−1, each composed of N independent real normal random variates. Each real variate has a mean of zero,

¹N is assumed even. We note that x[0] and x[N/2] are purely real in a conjugate-symmetric sequence. The imaginary part thus contains N/2 − 1 unique variates.
Figure 3-2. Block diagram of the algorithm of Smith [1] to generate correlated Rayleigh samples.
and variance σ², given by

E{A²[k]} = E{B²[k]} = σ².

The independence between variates implies

E{A[k]A[l]} = E{B[k]B[l]} = 0,  k ≠ l,

and

E{A[k]B[l]} = 0 for all k and l.

Several well-tested routines to generate such variates are found in the literature [49]-[51].
The real variates are used to form the complex Gaussian sequence {A[k] + jB[k]}. This sequence of uncorrelated complex Gaussian variates is multiplied with the sequence of filter frequency coefficients, and then an inverse DFT is taken of this complex sequence to form complex time samples, which for the in-phase branch will here be denoted {x⁽¹⁾[n]}. We will see in Section 3.3 that the statistics of the real and imaginary parts of the complex sequence are identical, and each approximates the real or imaginary part of the baseband fading signal. The two parts at the output of the FFT are correlated, however, so they cannot both be used to form a sequence of Rayleigh variates. The real part, denoted {x₁[n]}, is taken and the imaginary part discarded. This real sequence {x₁[n]} is added in quadrature with the real part from a second identical and independent branch, {x₂[n]}, thus producing complex samples which model the fading channel accurately. Smith takes the magnitude of these complex samples, expressing the result in decibels at the output of the routine.
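The procedure just described can be sketched as follows, under stated assumptions: the filter sequence F[k] below is an arbitrary placeholder (the actual coefficients modelling the fading spectrum are defined in Chapter 5), the direct O(N²) IDFT stands in for the IFFT of the routine, and the 20 log₁₀ decibel conversion is our assumption about the output scale.

```python
import cmath
import math
import random

random.seed(11)
N = 64
sigma = 1.0

# Placeholder filter coefficients; NOT the Chapter 5 sequence.
F = [math.exp(-0.05 * min(k, N - k)) for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def branch():
    """One branch of Figure 3-2: shaped complex Gaussian spectrum,
    IDFT, real part kept (imaginary part discarded)."""
    A = [random.gauss(0.0, sigma) for _ in range(N)]
    B = [random.gauss(0.0, sigma) for _ in range(N)]
    X = [F[k] * (A[k] + 1j * B[k]) for k in range(N)]  # sign of jB does not affect the statistics
    return [v.real for v in idft(X)]

x1 = branch()                                     # in-phase branch
x2 = branch()                                     # independent quadrature branch
r = [math.hypot(a, b) for a, b in zip(x1, x2)]    # Rayleigh samples
r_db = [20.0 * math.log10(v) for v in r]          # in decibels
print(r_db[:4])
```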
In the computer routine, the sequence {x₁[n]} is generated first, stored, and then the identical set of operations is repeated to generate {x₂[n]}. The complex FFT output sequence {x⁽¹⁾[n]} in the in-phase branch is statistically identical to that in the quadrature branch, denoted {x⁽²⁾[n]}. Since

x₁[n] = Re{x⁽¹⁾[n]}

and

x₂[n] = Re{x⁽²⁾[n]},

the statistics of {x₁[n]} and {x₂[n]} are also identical. To justify the use of Smith's algorithm, we must find the probability density function of the sequences {x₁[n]} and {x₂[n]} and compare this to the correct probability density function. Each sequence is normally distributed, and thus the distribution is fully specified by the mean and autocorrelation of each sequence.
In Chapter 4 we will show that a proper redefinition of the filter coefficients will allow the formation of both sequences simultaneously with a single IDFT operation. The remainder of this chapter presents the statistical analysis of Smith's original algorithm. We will defer discussion of the filter coefficients until Chapter 5, leaving the results in this chapter applicable to any correlation function specification.
3.2 Mathematical Description of the FFT Output Sequence
The sequences at the output of the in-phase and quadrature branch complex FFTs, {x⁽¹⁾[n]} and {x⁽²⁾[n]} respectively, are statistically identical, so we will use the notation {x[n]} to represent either sequence, since it is not necessary to distinguish the particular branch in question. Considering the description of this sequence {x[n]}, therefore, we start by forming its discrete Fourier transform sequence, denoted {X[k]}, k = 0, 1, …, N−1, where

X[k] = F[k]A[k] − jF[k]B[k].   (3.12)

The elements of sequences {A[k]} and {B[k]} are independent, identically-distributed zero-mean σ²-variance normal variates, meaning they satisfy the properties given in equations (3.8) to (3.11). The elements of sequence {F[k]} are the real-valued filter coefficients, the values of which will be specified in Chapter 5.
We note that restricting the filter to be real does not make the analysis less general. In the case of real filter coefficients, the real part of each X[k], F[k]A[k], is a multiple of a zero-mean normal variate. Therefore the real part of X[k] will have a zero-mean normal distribution and variance given by

E{(F[k]A[k])²} = σ²F²[k].

Now, if the filter coefficients are complex, with real part F_R[k] and imaginary part F_I[k], the real part of each X[k] will instead be given by

F_R[k]A[k] + F_I[k]B[k].   (3.13)

As a linear combination of zero-mean normal variates, (3.13) will have a normal distribution with zero mean, as in the real filter case. However, the variance of this quantity in the complex filter case becomes

E{(F_R[k]A[k] + F_I[k]B[k])²}.

Using equations (3.9) and (3.11), this variance reduces to

σ²(F_R²[k] + F_I²[k]).

The imaginary part can be shown in the same manner to be zero-mean normal with variance σ²(F_R²[k] + F_I²[k]). Hence, the complex filter coefficients F_c[k] are exactly equivalent in the algorithm to real filter coefficients given by

F[k] = √(F_R²[k] + F_I²[k]),

and the analysis of the algorithm remains unchanged.
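The equivalence between a complex filter coefficient and the real coefficient √(F_R² + F_I²) can be confirmed by computing the two variances directly (the values of σ² and the coefficient parts below are illustrative):

```python
import math

sigma2 = 2.0        # variance of A[k] and B[k]
FR, FI = 0.8, -0.6  # illustrative real and imaginary coefficient parts

# Var(FR*A + FI*B) for independent zero-mean A, B of variance sigma2:
var_complex = sigma2 * (FR * FR + FI * FI)

# Equivalent real coefficient, per the text:
F_eq = math.sqrt(FR * FR + FI * FI)
var_real = sigma2 * F_eq * F_eq

print(var_complex, var_real)  # the two variances agree
```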
Returning to the description of the algorithm: after formation of the frequency-domain sequence (3.12), an inverse discrete Fourier transform (IDFT) is taken to form

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j2πkn/N}.

Expanding and separating the summand into real and imaginary components, we obtain

x[n] = (1/N) Σ_{k=0}^{N−1} (F[k]A[k] − jF[k]B[k]) (cos(2πkn/N) + j sin(2πkn/N))
     = (1/N) Σ_{k=0}^{N−1} [(F[k]A[k] cos(2πkn/N) + F[k]B[k] sin(2πkn/N)) + j (F[k]A[k] sin(2πkn/N) − F[k]B[k] cos(2πkn/N))].

The output of the IDFT can now be expressed as x[n] = x_R[n] + jx_I[n], where x_R[n] = Re{x[n]} and x_I[n] = Im{x[n]}. Thus,

x_R[n] = (1/N) Σ_{k=0}^{N−1} [F[k]A[k] cos(2πkn/N) + F[k]B[k] sin(2πkn/N)]   (3.15)

and

x_I[n] = (1/N) Σ_{k=0}^{N−1} [F[k]A[k] sin(2πkn/N) − F[k]B[k] cos(2πkn/N)].   (3.16)

We wish to examine the joint statistical properties of these two sequences.
3.3 Statistical Properties of the FFT Output Sequence
Observing x_R[n] and x_I[n] in (3.15) and (3.16), we see each is composed of a weighted sum of 2N jointly Gaussian random variables. Therefore, x_R[n] and x_I[n] each also have a joint Gaussian distribution. This is a consequence of the fact that a linear operation, such as the IDFT, performed on jointly Gaussian samples will yield samples that also have a joint Gaussian distribution. It is therefore sufficient to find the means of each sequence,

m_R = E{x_R[n]}

and

m_I = E{x_I[n]},

the autocorrelations of each sequence,

r_RR[m, n] = E{x_R[m]x_R[n]}

and

r_II[m, n] = E{x_I[m]x_I[n]},

and the cross-correlation between the sequences,

r_RI[m, n] = E{x_R[m]x_I[n]},

to fully determine the joint distribution of {x_R[n]} and {x_I[n]}. Each of these properties will be considered in turn.
3.3.1 Means of the Real and Imaginary Parts
Taking the ensemble average of the real part {x_R[n]}, we observe

m_R = (1/N) Σ_{k=0}^{N−1} F[k] [E{A[k]} cos(2πkn/N) + E{B[k]} sin(2πkn/N)] = 0,

since E{A[k]} and E{B[k]} are both zero from (3.8). The ensemble average of the imaginary part {x_I[n]} is also zero, which can be shown similarly.
3.3.2 Autocorrelation of the Real Part

Considering first the autocorrelation of the real part, we write this quantity as

$$r_{RR}[m,n] = E\left\{\left[\frac{1}{N}\sum_{k=0}^{N-1}F[k]A[k]\cos\frac{2\pi km}{N} + \frac{1}{N}\sum_{k=0}^{N-1}F[k]B[k]\sin\frac{2\pi km}{N}\right]\right.$$
$$\left.\times\left[\frac{1}{N}\sum_{l=0}^{N-1}F[l]A[l]\cos\frac{2\pi ln}{N} + \frac{1}{N}\sum_{l=0}^{N-1}F[l]B[l]\sin\frac{2\pi ln}{N}\right]\right\}.$$

Expanding this product, and making use of the fact that the expectation operator is linear, we obtain a sum of four double-sum terms over the index pairs $(A,A)$, $(A,B)$, $(B,A)$, and $(B,B)$; we label this expanded expression (3.17).

Let us consider the first of these terms. All terms in this product for which $k \neq l$ will be zero, since, from (3.18), $E\{A[k]A[l]\} = \sigma^2\delta[k-l]$. The remaining terms can be expressed as

$$\frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\cos\left(\frac{2\pi km}{N}\right)\cos\left(\frac{2\pi kn}{N}\right).$$

The last of the terms in (3.17) is simplified similarly. Since, from (3.19), $E\{B[k]B[l]\} = \sigma^2\delta[k-l]$, all terms for which $k \neq l$ are zero, and the remaining terms can be expressed as

$$\frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\sin\left(\frac{2\pi km}{N}\right)\sin\left(\frac{2\pi kn}{N}\right).$$

Both of the middle terms in (3.17) are zero, since $E\{A[k]B[l]\} = 0$ for every $k$ and $l$, from (3.20).
If we define

$$C_k \equiv \cos\frac{2\pi km}{N}\cos\frac{2\pi kn}{N}$$

and

$$S_k \equiv \sin\frac{2\pi km}{N}\sin\frac{2\pi kn}{N},$$

then the autocorrelation sequence (3.17) can be expressed as

$$r_{RR}[m,n] = \frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\left(C_k + S_k\right).$$

Now, using well-known trigonometric identities [52, (5.65), (5.66)] we can rewrite $C_k$ as

$$C_k = \frac{1}{2}\left[\cos\frac{2\pi k(m-n)}{N} + \cos\frac{2\pi k(m+n)}{N}\right]$$

and $S_k$ as

$$S_k = \frac{1}{2}\left[\cos\frac{2\pi k(m-n)}{N} - \cos\frac{2\pi k(m+n)}{N}\right].$$

So the quantity $(C_k + S_k)$ becomes

$$C_k + S_k = \cos\frac{2\pi k(m-n)}{N}.$$

We define the sample lag as the distance between samples,

$$d \equiv m - n,$$

and the autocorrelation function can be expressed as

$$r_{RR}[m,n] = r_{RR}[d] = \frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\cos\frac{2\pi kd}{N}.$$
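As an illustrative sketch (the filter below is an arbitrary choice of ours, not the thesis's), the closed-form autocorrelation can be checked by Monte Carlo against an ensemble of IDFT outputs:

```python
import numpy as np

# Monte Carlo check that the autocorrelation of the real part depends only on
# the lag d = m - n:  r_RR[m,n] = (sigma^2/N^2) sum_k (F[k])^2 cos(2 pi k d / N).
rng = np.random.default_rng(2)
N, trials, sigma = 32, 100000, 1.0
F = rng.uniform(0.5, 1.5, N)

A = sigma * rng.standard_normal((trials, N))
B = sigma * rng.standard_normal((trials, N))
x = np.fft.ifft(F * (A - 1j * B), axis=1)  # one realization per row

m, n = 5, 2                                # arbitrary pair of sample indices
k = np.arange(N)
r_emp = np.mean(x.real[:, m] * x.real[:, n])          # ensemble average
r_theory = sigma**2 / N**2 * np.sum(F**2 * np.cos(2 * np.pi * k * (m - n) / N))
mc_err = abs(r_emp - r_theory)
```

The empirical and theoretical values agree to within Monte Carlo noise.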
3.3.3 Autocorrelation of the Imaginary Part

We write the autocorrelation of the sequence $\{x_I[n]\}$ as

$$r_{II}[m,n] = E\left\{\left[\frac{1}{N}\sum_{k=0}^{N-1}F[k]A[k]\sin\frac{2\pi km}{N} - \frac{1}{N}\sum_{k=0}^{N-1}F[k]B[k]\cos\frac{2\pi km}{N}\right]\right.$$
$$\left.\times\left[\frac{1}{N}\sum_{l=0}^{N-1}F[l]A[l]\sin\frac{2\pi ln}{N} - \frac{1}{N}\sum_{l=0}^{N-1}F[l]B[l]\cos\frac{2\pi ln}{N}\right]\right\}.$$

Expanding this product, we again obtain four terms. Comparing this expression with that for $r_{RR}[m,n]$, we see the middle two terms are again zero, due to (3.20), while the first term is given by

$$\frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\sin\left(\frac{2\pi km}{N}\right)\sin\left(\frac{2\pi kn}{N}\right)$$

due to (3.18), and the last term is given by

$$\frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\cos\left(\frac{2\pi km}{N}\right)\cos\left(\frac{2\pi kn}{N}\right)$$

due to (3.19). Hence, the autocorrelation can be expressed as

$$r_{II}[m,n] = \frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\left(S_k + C_k\right),$$

and thus from (3.26)

$$r_{II}[d] = \frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\cos\frac{2\pi kd}{N},$$

which is identical to the expression for $r_{RR}[d]$.

Considering a general real-valued $\{F[k]\}$, we write

$$r_{RR}[d] = r_{II}[d] = \frac{\sigma^2}{N^2}\operatorname{Re}\left\{\sum_{k=0}^{N-1}\left(F[k]\right)^2 e^{j2\pi kd/N}\right\},$$

where $^*$ denotes complex conjugation. Let $G[k] = (F[k])^2$ and the sequence $\{G[k]\}$ have inverse DFT

$$g[d] = \frac{1}{N}\sum_{k=0}^{N-1}G[k]\,e^{j2\pi kd/N}. \qquad (3.29)$$

Then

$$r_{RR}[d] = r_{II}[d] = \frac{\sigma^2}{N}\operatorname{Re}\left\{g[d]\right\}. \qquad (3.30)$$

So, the autocorrelations of $\{x_R[n]\}$ and $\{x_I[n]\}$ are dependent only on the real part of $\{g[d]\}$, and $\{G[k]\}$ and $\{g[d]\}$ are related by the DFT. \hfill (3.31)
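The relation (3.30) is a deterministic identity between the cosine sum and the inverse DFT of $\{G[k]\}$; a minimal numerical sketch (arbitrary illustrative filter) confirms it:

```python
import numpy as np

# Deterministic check of (3.30): the cosine sum for r_RR[d] equals
# (sigma^2/N) Re{g[d]}, where g[d] is the inverse DFT of G[k] = (F[k])^2.
rng = np.random.default_rng(3)
N, sigma = 64, 2.0
F = rng.uniform(0.0, 3.0, N)
G = F**2

g = np.fft.ifft(G)                         # g[d] = (1/N) sum_k G[k] e^{j2pi kd/N}

d = np.arange(N)[:, None]
k = np.arange(N)[None, :]
r_sum = sigma**2 / N**2 * (np.cos(2 * np.pi * k * d / N) @ G)   # direct sum over k
r_g = sigma**2 / N * g.real
identity_err = np.max(np.abs(r_sum - r_g))
```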
3.3.4 Cross-Correlation Between Real and Imaginary Parts

We now express the cross-correlation sequence in terms of $\{g[d]\}$. We start by writing the cross-correlation expression as

$$r_{RI}[m,n] = E\left\{\left[\frac{1}{N}\sum_{k=0}^{N-1}F[k]A[k]\cos\frac{2\pi km}{N} + \frac{1}{N}\sum_{k=0}^{N-1}F[k]B[k]\sin\frac{2\pi km}{N}\right]\right.$$
$$\left.\times\left[\frac{1}{N}\sum_{l=0}^{N-1}F[l]A[l]\sin\frac{2\pi ln}{N} - \frac{1}{N}\sum_{l=0}^{N-1}F[l]B[l]\cos\frac{2\pi ln}{N}\right]\right\}.$$

Expanding this, we obtain four terms. Due to the independence of $\{A[k]\}$ and $\{B[k]\}$ indicated by (3.20), the middle terms of this expression are both zero. The independence of the components of the sequence $\{A[k]\}$, indicated by (3.18), reduces the first term to

$$\frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\cos\frac{2\pi km}{N}\sin\frac{2\pi kn}{N},$$

while the independence of the components of the sequence $\{B[k]\}$, stated in (3.19), reduces the last term to

$$-\frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\sin\frac{2\pi km}{N}\cos\frac{2\pi kn}{N}.$$
Making the definitions

$$D_k \equiv \cos\frac{2\pi km}{N}\sin\frac{2\pi kn}{N}$$

and

$$E_k \equiv \sin\frac{2\pi km}{N}\cos\frac{2\pi kn}{N},$$

we obtain the cross-correlation as the sum

$$r_{RI}[m,n] = \frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\left(D_k - E_k\right).$$

Using another well-known trigonometric identity [52, (5.67)], we can write

$$D_k = \frac{1}{2}\left[\sin\frac{2\pi k(m+n)}{N} - \sin\frac{2\pi k(m-n)}{N}\right]$$

and

$$E_k = \frac{1}{2}\left[\sin\frac{2\pi k(m+n)}{N} + \sin\frac{2\pi k(m-n)}{N}\right],$$

and so

$$D_k - E_k = \sin\frac{2\pi k(n-m)}{N} = -\sin\frac{2\pi kd}{N}.$$

Substituting this result into the expression for $r_{RI}[m,n]$, we obtain

$$r_{RI}[m,n] = r_{RI}[d] = -\frac{\sigma^2}{N^2}\sum_{k=0}^{N-1}\left(F[k]\right)^2\sin\frac{2\pi kd}{N}.$$
Writing this in the form of IDFT operations, let $g[d]$ be defined as in (3.29). Then we can write

$$r_{RI}[d] = -\frac{\sigma^2}{N}\operatorname{Im}\left\{g[d]\right\}.$$

So, the cross-correlation between $\{x_R[n]\}$ and $\{x_I[n]\}$ depends only on the imaginary part of $\{g[d]\}$, with $\{g[d]\}$ and $\{G[k]\}$ again related by the DFT, as in (3.31). We will make use of this result in modifying the algorithm to use a single call to the FFT routine. If we can ensure the sequence $\{g[d]\}$ is purely real, we will obtain two independent sequences in quadrature at the output of the FFT. In Smith's routine, $\{g[d]\}$ is a complex sequence, prohibiting the use of both $\{x_R[n]\}$ and $\{x_I[n]\}$ for the same fading output sequence.
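Both facts can be sketched numerically (illustrative filter of our own choosing): the cross-correlation tracks $-(\sigma^2/N)\operatorname{Im}\{g[d]\}$, and symmetrizing $G[k]$ makes $\{g[d]\}$ purely real:

```python
import numpy as np

# Monte Carlo check of r_RI, and of the fact that a symmetric G[k] = G[N-k]
# yields a purely real g[d] (hence zero cross-correlation).
rng = np.random.default_rng(4)
N, trials, sigma = 32, 100000, 1.0
F = rng.uniform(0.5, 1.5, N)               # a general (asymmetric) real filter

A = sigma * rng.standard_normal((trials, N))
B = sigma * rng.standard_normal((trials, N))
x = np.fft.ifft(F * (A - 1j * B), axis=1)

m, n = 6, 1
g = np.fft.ifft(F**2)
r_theory = -sigma**2 / N * g.imag[m - n]
r_emp = np.mean(x.real[:, m] * x.imag[:, n])
cross_err = abs(r_emp - r_theory)

G_sym = (F**2 + np.roll(F[::-1] ** 2, 1)) / 2        # (G[k] + G[(N-k) mod N]) / 2
max_imag = np.max(np.abs(np.fft.ifft(G_sym).imag))   # g becomes purely real
```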
3.3.5 Ergodicity of the Sequences

The property of equality between an ensemble average and an infinite-time average is referred to as an ergodic property [53]. We will now show that the complex Gaussian process used to form the Rayleigh process has this property for both mean and autocorrelation.

Ergodicity of the Mean

The time average of the random process $\{x[n]\}$ is

$$\langle x[n]\rangle = \frac{1}{N}\sum_{n=0}^{N-1}x[n]. \qquad (3.39)$$

The time average $\langle x[n]\rangle$ is a random variable with mean $E\{x[n]\}$; the quantity $\langle x[n]\rangle$ is thus an unbiased estimator of $E\{x[n]\}$. We wish to show that the random process $\{x[n]\}$ is mean ergodic (i.e. that the time-average mean equals the ensemble-average mean). In order to do this we must show that the variance of $\langle x[n]\rangle$ goes to zero as $N$ goes to infinity [53]. We note that

$$\frac{1}{N}\sum_{n=0}^{N-1}x[n] = \frac{1}{N}\sum_{n=0}^{N-1}\frac{1}{N}\sum_{k=0}^{N-1}X[k]\,e^{j2\pi kn/N} = \frac{X[0]}{N},$$

and so the time average of the real part is given by

$$\langle x_R[n]\rangle = \frac{1}{N}F[0]A[0] \qquad (3.40)$$

and the time average of the imaginary part by

$$\langle x_I[n]\rangle = -\frac{1}{N}F[0]B[0]. \qquad (3.41)$$

The variance of $\langle x_R[n]\rangle$ is given by

$$\operatorname{var}\left\{\langle x_R[n]\rangle\right\} = \frac{\sigma^2}{N^2}\left(F[0]\right)^2.$$

The variance of $\langle x_I[n]\rangle$ is given by an identical expression. Since $(F[0])^2$ is finite and constant, clearly

$$\lim_{N\to\infty}\frac{\sigma^2}{N^2}\left(F[0]\right)^2 = 0.$$

Thus, $\{x_R[n]\}$ and $\{x_I[n]\}$ are mean ergodic.
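The key step, that the time average of the IFFT output equals $X[0]/N$ exactly, can be illustrated with a short sketch (illustrative filter of our own choosing):

```python
import numpy as np

# The time average of the IFFT output equals X[0]/N exactly, so its variance
# over realizations is (sigma^2/N^2)(F[0])^2, and it is identically zero
# whenever F[0] = 0.
rng = np.random.default_rng(5)
N = 128
F = rng.uniform(0.5, 1.5, N)
A = rng.standard_normal(N)
B = rng.standard_normal(N)

X = F * (A - 1j * B)
time_avg_err = abs(np.mean(np.fft.ifft(X)) - X[0] / N)

F[0] = 0.0                                 # the Rayleigh-fading choice F[0] = 0
avg_zero = abs(np.mean(np.fft.ifft(F * (A - 1j * B))))
```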
Ergodicity of the Autocorrelation

From the discussion of [53, Sections 8.2 and 8.4], a zero-mean stationary Gaussian process $X$ is said to possess mean-square ergodicity of the autocorrelation if and only if

$$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}C_X^2(u)\,du = 0,$$

where $C_X(\tau)$ is the continuous autocovariance function for process $X$. The autocovariance is an even function, and therefore we may also write the condition as

$$\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}C_X^2(u)\,du = 0. \qquad (3.42)$$

We do not know the autocovariance function for all time; we have only samples of the function over a finite interval. However, we can approximate the integral by a Riemann sum [54, Section 3.3].
If we partition the interval $[0, T]$ into $N$ subintervals of equal width $\Delta = T/N$, the definite integral from $0$ to $T$ is given by the limit of the Riemann sum

$$\int_0^T C_X^2(u)\,du = \lim_{N\to\infty}\sum_{d=0}^{N-1}C_X^2(d\Delta)\,\Delta,$$

with the function evaluated only at the sample points of our discrete-time system.
The partition width $\Delta$, which corresponds to the sampling period of the discrete-time system, should be such that the approximation to the integral given by the sum is very good. This will be true since the sampling rate is usually much greater than twice the maximum Doppler frequency, and thus the baseband fading signal is densely sampled. Assuming that the integral is well approximated, and substituting the sum for the integral in (3.42), we obtain the approximate expression

$$\lim_{T\to\infty}\frac{1}{T}\int_0^T C_X^2(u)\,du \approx \lim_{T\to\infty}\frac{1}{T}\sum_{d=0}^{N-1}C_X^2(d\Delta)\,\Delta.$$

The function $C_X(d\Delta)$ represents samples of the autocorrelation function for the FFT output, where $T$ is the sequence length and $N$ is the number of samples. Substituting from equation (3.30), we obtain

$$\lim_{T\to\infty}\frac{1}{T}\sum_{d=0}^{N-1}\left(\frac{\sigma^2}{N}\operatorname{Re}\left\{g[d]\right\}\right)^2\Delta.$$

Writing $\{g[d]\}$ as the inverse DFT of $\{G[k]\}$, the expression becomes

$$\lim_{T\to\infty}\frac{1}{T}\sum_{d=0}^{N-1}\left(\frac{\sigma^2}{N}\operatorname{Re}\left\{\frac{1}{N}\sum_{k=0}^{N-1}G[k]\,e^{j2\pi kd/N}\right\}\right)^2\Delta.$$

Now, $N = Tf_s$, where $f_s$ is the sampling rate of the discrete-time system. For $f_s$ fixed, $N$ will approach infinity as $T$ does. Also,

$$\left|\operatorname{Re}\left\{\frac{1}{N}\sum_{k=0}^{N-1}G[k]\,e^{j2\pi kd/N}\right\}\right| \le \max_k G[k]$$

for any finite $G[k]$, and so, with $\Delta = T/N$, we can write

$$\lim_{T\to\infty}\frac{\sigma^4}{N^3}\sum_{d=0}^{N-1}\left(\operatorname{Re}\left\{\frac{1}{N}\sum_{k=0}^{N-1}G[k]\,e^{j2\pi kd/N}\right\}\right)^2 \le \lim_{N\to\infty}\frac{\sigma^4}{N^3}\sum_{d=0}^{N-1}\max_k\left\{G^2[k]\right\} = \lim_{N\to\infty}\frac{\sigma^4}{N^2}\max_k G^2[k].$$
The power spectrum $\{G[k]\}$ is certainly finite for all $k$ in the digital system, so clearly the limit is zero and we can say

$$\lim_{N\to\infty}\frac{\sigma^4}{N^2}\max_k G^2[k] = 0. \qquad (3.43)$$

Thus, we have shown the output of the random generator to be autocorrelation ergodic, under the assumption that the sampling rate of this system is such that the autocorrelation function is densely sampled (or equivalently, that $f_m$ is small). The limit condition in (3.43) is not tight, so we would expect this condition to hold for any practical case.

Since the Gaussian process is fully specified by the mean and autocorrelation function, it follows that the complex Gaussian output of the simulator is ergodic.
3.4 Distribution of the Amplitude

3.4.1 Obtaining a Rayleigh Distribution

We note that a given realization of the process will have mean given by (3.39), and for $N$ finite and $F[0]$ non-zero this time average takes on the exact value zero with probability zero. The desired output from the routine is a Rayleigh-distributed process; if the complex Gaussian process has nonzero mean the output will have a Ricean distribution. In this case a given sample will have probability density function [46]

$$p(x) = \frac{x}{\sigma_r^2}\exp\left(-\frac{x^2 + s^2}{2\sigma_r^2}\right)I_0\left(\frac{xs}{\sigma_r^2}\right), \qquad x \ge 0, \qquad (3.44)$$

where $I_0(x)$ is the zero-order modified Bessel function of the first kind. The parameter $s^2$, called the noncentrality parameter, is given by

$$s^2 = m_R^2 + m_I^2,$$

where $m_R$ and $m_I$ are the means of the component real and imaginary sequences, here time averages of a single realization of the process. When $s = 0$, (3.44) reduces to the Rayleigh density function. We therefore consider the distribution of $s$, and require that $s \to 0$.
From (3.40) and (3.41), $m_R$ and $m_I$ each are normally distributed with mean zero and variance $\frac{\sigma^2}{N^2}(F[0])^2$. The distribution of the sum of two squared normally-distributed random variables is known as the chi-square distribution with two degrees of freedom [46]. The distribution of the square root of this sum is Rayleigh, and thus the parameter $s$ has probability density function (see (3.3))

$$p(s) = \frac{s}{\sigma_s^2}\exp\left(-\frac{s^2}{2\sigma_s^2}\right), \qquad s \ge 0, \qquad \sigma_s^2 = \frac{\sigma^2}{N^2}\left(F[0]\right)^2.$$

The expected value of $s$ is [46]

$$E\{s\} = \sigma_s\sqrt{\frac{\pi}{2}},$$

and the variance of $s$ is

$$\operatorname{var}\{s\} = \left(2 - \frac{\pi}{2}\right)\sigma_s^2.$$

The distribution (3.44) will be Rayleigh if and only if $s = 0$; however, for finite $N$ and $F[0]$ nonzero, the expected value of $s$ is a small but nonzero positive number. The only way to ensure that the expected value of $s$ is zero is to require $F[0] = 0$. In this case the variance of $s$ is also zero: $s$ is identically zero in every realization, a desirable property. Thus, for Rayleigh fading the requirement is made that the zero-frequency coefficient be set to zero, as in the program code of [1].
3.4.2 Obtaining a Ricean Distribution

If, on the other hand, a Ricean process is desired, this can be achieved by setting the zero-frequency term $X[0]$ to a deterministic value according to the desired noncentrality parameter. Recalling that in a particular realization the means of the real and imaginary parts are given by (3.40) and (3.41), we replace $X[0] = F[0](A[0] - jB[0])$ by

$$X[0] = X_R + jX_I,$$

where $X_R$ and $X_I$ are deterministic. Thus,

$$\langle x_R[n]\rangle = \frac{1}{N}X_R$$

and

$$\langle x_I[n]\rangle = \frac{1}{N}X_I.$$

The Ricean noncentrality parameter is therefore given by

$$s = \sqrt{m_R^2 + m_I^2} = \frac{1}{N}\sqrt{X_R^2 + X_I^2} = \frac{1}{N}\left|X[0]\right|. \qquad (3.47)$$

Thus, to obtain a Rice distribution with noncentrality parameter $s$, $X[0]$ must satisfy the relation

$$\left|X[0]\right| = Ns.$$
To the best of the author's knowledge, this simple but useful modification to Smith's algorithm has not been proposed previously.
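A minimal sketch of the modification (the flat filter is an illustrative choice of ours) shows the relation $s = |X[0]|/N$ of (3.47) directly:

```python
import numpy as np

# Replace the random zero-frequency term X[0] by a deterministic value; the
# time-average mean of the realization then equals X[0]/N, i.e. the target
# noncentrality parameter s.
rng = np.random.default_rng(6)
N = 4096
s_target = 0.7
F = np.ones(N)
F[0] = 0.0                                 # remove the random zero-frequency term

X = F * (rng.standard_normal(N) - 1j * rng.standard_normal(N))
X[0] = N * s_target                        # deterministic X[0] with |X[0]| = N s

x = np.fft.ifft(X)
s_est = abs(np.mean(x))                    # time-average mean of the realization
```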
3.5 Summary

We have in this chapter formed a relation between the output samples of the FFT operation and the filter coefficients $\{F[k]\}$. If $\{G[k]\} = \{(F[k])^2\}$ is suitably chosen such that the real part of $\{g[d]\}$ approximates the desired autocorrelation, then the Smith algorithm will produce samples with good statistical properties. We note that the process $\{x[n]\}$ is stationary, since the autocorrelation function is a function of the sample separation $d$ alone and not of the sample indices $m$ and $n$ individually. Thus, the output from the Rayleigh fading generator will be stationary. We have also shown the process $\{x[n]\}$ to be ergodic.

It can also be seen that for a general $\{F[k]\}$ the cross-correlation sequence $r_{RI}[d]$ may have non-zero elements, indicating that in general if both parts of the sequence $\{x[n]\}$ are used in (3.4) to generate samples, an incorrect distribution will result. Thus, two independent realizations of the process $\{x[n]\}$ must be formed, earlier labelled $\{x^{(1)}[n]\}$ and $\{x^{(2)}[n]\}$. The real part of each is taken to form two real sequences $\{x_1[n]\}$ and $\{x_2[n]\}$, and the Rayleigh output sequence $\{|r[n]|\}$ is given by equation (3.4), repeated here as

$$|r[n]| = \sqrt{x_1^2[n] + x_2^2[n]}.$$

This sequence at the output of the Smith algorithm does have statistical properties that closely match theory, but it is possible to generate statistically identical samples with only a single FFT operation and consequently make better use of computer resources. We will see in the following chapter that this can be accomplished by redefining the filter $\{F[k]\}$.
Chapter 4

Modification of the Algorithm to Use a Single FFT Call
4.1 Introduction

It is necessary that the real and imaginary parts of the complex Gaussian sequence used to form the Rayleigh process be uncorrelated. In Smith's algorithm as presented in Chapter 3 the complex output sequence from a single Fast Fourier Transform (FFT) step does not have this property, so two such sequences must be formed independently. The real parts from each of these sequences, which have the desired autocorrelation properties, are used to form the baseband complex Gaussian sequence, while the imaginary part of each FFT output is discarded. It will now be shown that the output of a single FFT can be used directly, by properly modifying the filter coefficients. Figure 4-1 gives a block diagram of the improved algorithm. This algorithm is simpler (compare Figure 4-1 with Figure 3-2), and has two important benefits. First, we reduce the number of FFT operations required by one-half. Since the FFT operations
Figure 4-1. A block diagram of the improved algorithm using a single complex FFT to generate correlated Rayleigh samples.
require the major part of the computational effort in realization of the algorithm, this reduces the execution time substantially. Second, computer memory usage is reduced by roughly one-third, because in the modified routine the output of a single FFT step is directly the desired fading sequence, while in the original routine the output of the first FFT step must be stored during execution of the second FFT operation. Thus, we realize important savings in computer resources, without changing the statistical quality of the output samples.
4.2 The Modified Filter Coefficients

In Chapter 3 we developed expressions for the autocorrelation of the real or imaginary part at the output of the inverse discrete Fourier transform (IDFT) operation and the cross-correlation between the real and imaginary parts of this output. The autocorrelation sequences were found to satisfy the relation

$$r_{RR}[d] = r_{II}[d] = \frac{\sigma^2}{N}\operatorname{Re}\left\{g[d]\right\},$$

where $\{g[d]\}$ is the inverse DFT of $\{G[k]\} = \{(F[k])^2\}$ and $\{F[k]\}$ is the sequence of filter frequency coefficients used in the algorithm. The cross-correlation sequence was found to satisfy the relation

$$r_{RI}[d] = -\frac{\sigma^2}{N}\operatorname{Im}\left\{g[d]\right\}.$$

The cross-correlation sequence is not zero-valued for all $d$; thus it is not possible to use the real and imaginary parts of the same FFT output to form the Rayleigh sequence. However, it is possible to choose the autocorrelation sequences $\{r_{RR}[d]\}$ and $\{r_{II}[d]\}$ independently from the cross-correlation sequence $\{r_{RI}[d]\}$, due to the following well-known properties of the DFT [47].
A sequence $\{G_{CS}[k]\}$ for which

$$G_{CS}[k] = G_{CS}^*[N-k] \qquad (4.1)$$

is known as a conjugate-symmetric sequence¹. A sequence $\{G_{CAS}[k]\}$ for which

$$G_{CAS}[k] = -G_{CAS}^*[N-k] \qquad (4.2)$$

is known as a conjugate-antisymmetric sequence. The inverse DFT of a conjugate-symmetric sequence is given by²

$$\operatorname{IDFT}\left\{G_{CS}[k]\right\} = \frac{1}{N}\sum_{k=0}^{N-1}G_{CS}[k]\,e^{j2\pi kd/N}.$$

Substituting $k = N - k'$ in the last sum, the expression for the inverse DFT is obtained.

¹We note that any sequence used as input to a DFT operation can be considered to be periodic with period $N$, i.e. $G[k] = G[N + k]$. The DFT operation is taken over one period, for example $k = 0, 1, \ldots, N-1$. Thus $G[k]$ is defined for all $k$, and specifically $G_{CS}[N-k]$ is defined in (4.1) for $k = 0$.

²While these results hold for any $N$, we will assume $N$ an even number to simplify notation, since this will be true in practice.
Combining the two sums, we obtain a final expression for the inverse DFT,

$$\operatorname{IDFT}\left\{G_{CS}[k]\right\} = \frac{1}{N}\left[G_{CS}[0] + 2\operatorname{Re}\left\{\sum_{k=1}^{N/2-1}G_{CS}[k]\,e^{j2\pi kd/N}\right\} + (-1)^d\,G_{CS}\!\left[\tfrac{N}{2}\right]\right]. \qquad (4.3)$$

Now, examination of equation (4.1) reveals that since $G[0] = G^*[N-0] = G^*[0]$, $G[0]$ must be purely real. Similarly, since $G[\tfrac{N}{2}] = G^*[N-\tfrac{N}{2}] = G^*[\tfrac{N}{2}]$, $G[\tfrac{N}{2}]$ must be purely real. Therefore, each term in (4.3) is real, and we have shown that an inverse DFT of a conjugate-symmetric sequence is a purely real sequence.
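A short sketch (our own construction from arbitrary complex values) illustrates the property and the closed form (4.3):

```python
import numpy as np

# Build a conjugate-symmetric sequence; its inverse DFT is purely real and
# matches the closed form (4.3).
rng = np.random.default_rng(7)
N = 16
raw = rng.standard_normal(N) + 1j * rng.standard_normal(N)
k = np.arange(1, N // 2)
G_cs = np.zeros(N, dtype=complex)
G_cs[0] = raw[0].real                      # G[0] must be purely real
G_cs[N // 2] = raw[N // 2].real            # G[N/2] must be purely real
G_cs[k] = raw[k]
G_cs[N - k] = np.conj(raw[k])              # enforce G_cs[k] = G_cs*[N-k]

g = np.fft.ifft(G_cs)
max_imag = np.max(np.abs(g.imag))          # purely real output

d = np.arange(N)
closed = (G_cs[0].real
          + 2 * np.sum(G_cs[k] * np.exp(2j * np.pi * np.outer(d, k) / N), axis=1).real
          + (-1.0) ** d * G_cs[N // 2].real) / N
closed_err = np.max(np.abs(g.real - closed))
```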
We now consider the conjugate-antisymmetric case. The inverse DFT is written

$$\operatorname{IDFT}\left\{G_{CAS}[k]\right\} = \frac{1}{N}\sum_{k=0}^{N-1}G_{CAS}[k]\,e^{j2\pi kd/N}.$$

Once again making the substitution $k = N - k'$, noting that $e^{j2\pi(N-k')d/N} = e^{-j2\pi k'd/N}$, and making use of equation (4.2), we write the inverse DFT of $\{G_{CAS}[k]\}$ as a combined sum over $k = 1, \ldots, N/2 - 1$. Considering equation (4.2), the only solution to $G_{CAS}[0] = -G_{CAS}^*[N-0] = -G_{CAS}^*[0]$ is zero. Also, $G_{CAS}[\tfrac{N}{2}] = 0$, since it must satisfy $G[\tfrac{N}{2}] = -G^*[N-\tfrac{N}{2}] = -G^*[\tfrac{N}{2}]$. We thus obtain the expression for the inverse DFT of a conjugate-antisymmetric sequence as

$$\operatorname{IDFT}\left\{G_{CAS}[k]\right\} = \frac{2j}{N}\sum_{k=1}^{N/2-1}\operatorname{Im}\left\{G_{CAS}[k]\,e^{j2\pi kd/N}\right\},$$
which is a purely imaginary sequence. The DFT is a linear operation, meaning the inverse DFT of the sum of two sequences is the sum of the inverse DFTs of each sequence. We can therefore say that for a general sequence $\{g[d]\}$ with DFT $\{G[k]\}$,

$$g[d] = g_R[d] + jg_I[d] = \operatorname{IDFT}\left\{G[k]\right\} = \operatorname{IDFT}\left\{G_{CS}[k]\right\} + \operatorname{IDFT}\left\{G_{CAS}[k]\right\}.$$

We also note that

$$G_{CS}[k] = \frac{1}{2}\left(G[k] + G^*[N-k]\right)$$

and

$$G_{CAS}[k] = \frac{1}{2}\left(G[k] - G^*[N-k]\right)$$

for any general sequence $\{G[k]\}$, where $G[k] = G_{CS}[k] + G_{CAS}[k]$.
To summarize, we have demonstrated that

$$g_R[d] = \operatorname{IDFT}\left\{G_{CS}[k]\right\}, \qquad jg_I[d] = \operatorname{IDFT}\left\{G_{CAS}[k]\right\}, \qquad (4.5)$$

and that a general sequence $G[k]$ can always be expressed as the sum of a conjugate-symmetric and a conjugate-antisymmetric part. Hence, we can separately select the real and imaginary parts of the sequence $\{g[d]\}$, using, respectively, the two sequences $\{G_{CS}[k]\}$ and $\{G_{CAS}[k]\}$, and independently select the autocorrelation sequences and the cross-correlation sequence for the FFT output $\{x[n]\}$.
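The decomposition can be sketched numerically (arbitrary illustrative sequence of our own):

```python
import numpy as np

# Split an arbitrary sequence into conjugate-symmetric and conjugate-
# antisymmetric parts; their inverse DFTs give, respectively, the real and
# imaginary parts of g[d].
rng = np.random.default_rng(8)
N = 32
G = rng.standard_normal(N) + 1j * rng.standard_normal(N)

G_rev = np.conj(np.roll(G[::-1], 1))       # G*[(N-k) mod N]
G_cs = (G + G_rev) / 2                     # conjugate-symmetric part
G_cas = (G - G_rev) / 2                    # conjugate-antisymmetric part

g = np.fft.ifft(G)
g_cs = np.fft.ifft(G_cs)                   # purely real
g_cas = np.fft.ifft(G_cas)                 # purely imaginary

split_err = np.max(np.abs(g - (g_cs + g_cas)))
purity_err = np.max(np.abs(g_cs.imag)) + np.max(np.abs(g_cas.real))
parts_err = np.max(np.abs(g.real - g_cs.real)) + np.max(np.abs(g.imag - g_cas.imag))
```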
For example, the shape of the filter used in Smith's algorithm is shown in Figure 4-2a. (This filter will be discussed in detail in Chapter 5.) It is composed of the sum of a symmetric part $\{F_{CS}[k]\}$ (Figure 4-2b) added to an anti-symmetric part $\{F_{CAS}[k]\}$ (Figure 4-2c). The symmetric part is responsible for the autocorrelation properties of the real and imaginary parts of the FFT output. The autocorrelation of the real part, plotted in Figure 4-3, is identical (from (3.30)) to the autocorrelation of the imaginary part. However, in the original Smith routine the antisymmetric part of the filter coefficients gives an undesired non-zero cross-correlation in the FFT output (Figure 4-4), which necessitates the generation of a second independent set of samples.
To generate Rayleigh variates with a single FFT, therefore, we can require that $G_{CAS}[k] = 0$ (or equivalently $F_{CAS}[k] = 0$) for all $k$, which will ensure zero cross-correlation between real and imaginary parts of the FFT output. Using (4.4) we can thus define a modified filter $\{F_M[k]\}$, where

$$F_M[k] = \sqrt{\frac{F_S^2[k] + F_S^2[N-k]}{2}}$$

and $\{F_S[k]\}$ is the filter defined by Smith. Use of this modified filter in (3.14) will produce a complex Gaussian sequence with identical autocorrelation properties to Smith's original routine and the required independence between the real and imagi-
Figure 4-2. (a) The filter frequency response coefficients given in the Smith routine. (b) The conjugate-symmetric part of the filter coefficients. (c) The conjugate-antisymmetric part of the filter coefficients. (Normalized frequency on the horizontal axes.)
Figure 4-3. Normalized experimental autocorrelation of the real part of $\{x[n]\}$, plotted with $J_0(2\pi f_m d)$, for Doppler frequency $f_m = 0.05$ per sample. The number of samples used in the experimental case is $2^{16}$.
Figure 4-4. Normalized experimental cross-correlation between real and imaginary parts of $\{x[n]\}$, the output of a single FFT operation, for both the original Smith method and the modified method presented in this chapter. The maximum Doppler frequency is $f_m = 0.05$ per sample, and $2^{16}$ samples were used to obtain each curve.
nary parts. The experimental cross-correlation sequence of the FFT output for the modified routine is plotted in Figure 4-4, and can be seen to be near zero.

The autocorrelation properties of the random sequence generated by the IFFT approach are dependent on the filter coefficients $\{F_S[k]\}$. Given any set of filter coefficients $\{F_S[k]\}$ in Smith's method that results in IFFT output with good autocorrelation properties, application of equation (4.4) will produce a sequence with the same autocorrelation properties and require only a single IFFT operation in the algorithm.
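A sketch of the single-FFT idea follows; the filter here is an illustrative random symmetric choice, not Smith's exact coefficients. Making $(F[k])^2$ symmetric with $F[0] = F[N/2] = 0$ forces $\{g[d]\}$ purely real, so one complex IFFT yields uncorrelated real and imaginary parts and a Rayleigh envelope:

```python
import numpy as np

# Single complex IFFT generation: symmetric filter => purely real g[d]
# => zero cross-correlation => |x[n]| is Rayleigh distributed.
rng = np.random.default_rng(9)
N = 2**14
half = np.arange(1, N // 2)
F = np.zeros(N)
F[half] = rng.uniform(0.5, 1.5, N // 2 - 1)
F[N - half] = F[half]                      # enforce the symmetry F[k] = F[N-k]

A = rng.standard_normal(N)
B = rng.standard_normal(N)
x = np.fft.ifft(F * (A - 1j * B))          # a single complex IFFT
env = np.abs(x)                            # Rayleigh envelope

xr = x.real - x.real.mean()
xi = x.imag - x.imag.mean()
c0 = abs(np.mean(xr * xi)) / (xr.std() * xi.std())   # normalized cross-correlation
ratio = env.mean() / np.sqrt(np.mean(env**2))        # sqrt(pi)/2 for Rayleigh
```

The empirical cross-correlation is near zero and the envelope moment ratio matches the Rayleigh value $\sqrt{\pi}/2 \approx 0.886$.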
4.3 Advantages of the New Approach

The independence between the real and imaginary parts means that the complex output from a single FFT is directly the complex Gaussian process we need in order to form the Rayleigh output sequence. The necessity of the second FFT operation has been eliminated. This has two principal benefits.

First, the time to execute the procedure is reduced by almost one half. The FFT operation is the most computationally expensive part of the Smith procedure. Experimentation with Numerical Recipes in C routines [49] has shown that 85-90% of the time to execute the routine is used in performing the FFT operations. (For higher Doppler frequencies more independent Gaussian input samples must be generated, so this factor has a small dependence on the normalized maximum Doppler frequency.) Halving the number of FFT operations reduces the time to generate a given number of variates by 40-45%. Figure 4-5 shows this difference for routines coded in C and run on an UltraSPARC machine.
Second, the memory use for the new routine is one-half to two-thirds that of the original. To perform the complex FFT, $2N$ real storage locations are required. In the case in which the real and imaginary parts of the data are easily separated, the real part alone can be saved (using $N$ storage locations) while the second FFT is performed, requiring $2N$ locations, for a total of $3N$ locations used. In other cases, such as the popular Numerical Recipes in C FFT routines [49], in which real and imaginary parts of a sequence alternate in a single real vector of length $2N$, or the IMSL Fortran FFT routines [55], which use a single complex array for input and output, real storage for the output of the first FFT cannot be achieved without moving this data to new storage locations. The most straightforward approach in this situation is to reserve $4N$ memory locations to execute the Smith routine: one complex vector of length $N$ for each FFT operation is allocated, and then the real part of each vector is added in quadrature at the conclusion of the routine. Thus, in order to generate $N$ Rayleigh random variates, the original routine needs at least $3N$ storage locations for the data, and $4N$ locations are sometimes used.
The modified routine requires $2N$ memory locations in either case, since only a single complex FFT operation is required. Thus, we realize a savings of at least one-third, and possibly one-half, in computer memory usage. On a given machine, this means a larger sequence of correlated random variates can be generated without re-
Figure 4-5. Run times for the original and the modified FFT algorithms: (a) run times for $2^{16}$ samples; (b) run times for a longer sequence.
sorting to disk-based virtual memory (which lengthens execution time substantially).
4.4 The Algorithm Using Real-Sequence FFT's

We show here that the desired variates can also be generated in an algorithm that uses two "real-sequence" FFT's instead of the single complex FFT. If we consider the DFT of a real-valued sequence, from the properties given by (4.5) the transformed sequence is conjugate-symmetric, meaning $G[k] = G^*[N-k]$. Due to this symmetry property the complex sequence representing the transform has only $\tfrac{N}{2} + 1$ unique complex values: given $\{G[k]\}$, $k = 0, 1, \ldots, \tfrac{N}{2}$, we can determine $\{G[k]\}$, $k = \tfrac{N}{2}+1, \tfrac{N}{2}+2, \ldots, N-1$. As we have noted previously, $G[0]$ and $G[\tfrac{N}{2}]$ are both purely real, so a total of $N$ real storage locations are sufficient to hold both the original real input sequence and its discrete Fourier transform. The real FFT algorithm (as defined in, for example, [49, Section 12.3]) takes as input the length-$N$ real-valued data sequence, and gives as output the first $\tfrac{N}{2} + 1$ complex-valued coefficients of the DFT, packing the real-valued $G[\tfrac{N}{2}]$ coefficient into the imaginary part of $G[0]$. The inverse real FFT algorithm takes as input the first $\tfrac{N}{2} + 1$ complex-valued DFT coefficients, and outputs the length-$N$ time sequence. The heart of the real FFT routine is a complex FFT routine operating on a sequence of length $\tfrac{N}{2}$; in addition, there are necessary operations which shuffle data in memory after the FFT step, in the case of the forward transform, or prior to the FFT step in the case of the inverse transform. Further information on the real-sequence FFT algorithm can be found in [49].
Conceptually, a real inverse FFT routine takes the first $\tfrac{N}{2} + 1$ complex samples of the transformed data as input (i.e. $X[k]$, $k = 0, 1, \ldots, \tfrac{N}{2}$) and forms a corresponding conjugate-symmetric sequence

$$X_0[k] = X[k] + X^*[N-k]. \qquad (4.7)$$

For the inverse real-sequence FFT routine $X[k] = 0$ over the range $k = \tfrac{N}{2}+1, \tfrac{N}{2}+2, \ldots, N-1$. A standard inverse FFT is performed on the sequence (4.7). The resulting time sequence is purely real, from (4.8).
The output of the inverse FFT, $\{x[n]\}$, is given by

$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1}\left(X[k] + X^*[N-k]\right)e^{j2\pi kn/N}.$$

Substituting $k = N - k'$ in the second summation,

$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1}X[k]\,e^{j2\pi kn/N} + \frac{1}{N}\sum_{k'=1}^{N}X^*[k']\,e^{-j2\pi k'n/N}.$$

The DFT can be taken over any length-$N$ period of the input, so we can equivalently write

$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1}\left(X[k]\,e^{j2\pi kn/N} + X^*[k]\,e^{-j2\pi kn/N}\right) = \frac{2}{N}\sum_{k=0}^{N-1}\operatorname{Re}\left\{X[k]\,e^{j2\pi kn/N}\right\}. \qquad (4.10)$$

The last quantity is exactly twice $x_R[n]$ from (3.11), and we have already shown the statistical properties of $\{x_R[n]\}$ to be suitable. (The factor of two can be incorporated into the definition of $\{F[k]\}$; if the conjugate-symmetric sequence were formed according to (4.4) the factor of two would not occur.) Note that the relation (4.10) applies to any input sequence $\{X[k]\}$; in the case of the inverse real-sequence FFT the input frequency coefficients are only defined up to $k = \tfrac{N}{2}$, thus one of the terms in the addition (4.7) will always be zero. The input filter of the real-sequence FFT method, denoted $\{F_R[k]\}$, can be related in the same way to that of the modified method, $\{F_M[k]\}$. From equation (4.6), $F[0] = 0$, and typically $F[\tfrac{N}{2}] = 0$, so the zero-frequency input $X[0] = 0$ and it is not necessary in practice to pack $X[\tfrac{N}{2}]$ into the imaginary part of $X[0]$.
Thus, the output from one real-sequence FFT yields a real Gaussian sequence with the desired statistical properties. Two such sequences (from two independent runs using (4.8)) can be added in quadrature to form the desired Rayleigh variates. A block diagram of the simulator is shown in Figure 4-6.
Figure 4-6. Block diagram of routine to generate correlated Rayleigh samples using two real-sequence FFT operations.
be accomplished more easily with a single complex FFT. Hoivever. the real-sequence
FFT met hod requires only N data points in memory a t one time. Hence. if computer
memory restrictions are such that data must be swapped to hard disk. the routine
based on the real-sequence FFT's rnay be preferable to that based on the comples
FFT, which requires 2M data points in memory a t one time. Given the popularity
and availability of complex FFT routines. the sirnplicity of program code using the
complex FFT. and the large memory resources of the modern computer. we have
chosen to focus principally on the complex FFT met hod for generating correlated
Rayleigh varia tes.
One other application of the real FFT approach will be briefly mentioned here. It can be readily seen that any of the FFT methods can be used to generate independent real Gaussian sequences with specified correlation properties, which are also useful in simulation of communication systems. This has been examined further in reference [56]. The real-sequence FFT approach is the only one of these methods that can be used to form a single real Gaussian sequence; the complex FFT approaches require that two real Gaussian sequences with identical correlation properties be formed simultaneously. In the event that only a single real sequence of correlated Gaussian variates is required, the real-sequence FFT approach would be useful.
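The real-sequence variant can be sketched with NumPy's `irfft`, used here as an assumed stand-in for the packed real-FFT routines discussed in the text (flat illustrative filter):

```python
import numpy as np

# Real-sequence inverse FFT: only the first N/2 + 1 frequency coefficients are
# formed; the output is a purely real length-N Gaussian sequence. Two
# independent runs added in quadrature give the Rayleigh envelope.
rng = np.random.default_rng(10)
N = 2**12
Fr = np.ones(N // 2 + 1)                   # flat illustrative filter
Fr[0] = 0.0                                # zero-frequency coefficient set to zero
Fr[-1] = 0.0                               # and F[N/2] = 0, as noted in the text

def real_gaussian_sequence(rng):
    A = rng.standard_normal(N // 2 + 1)
    B = rng.standard_normal(N // 2 + 1)
    return np.fft.irfft(Fr * (A - 1j * B), n=N)   # purely real output

x1 = real_gaussian_sequence(rng)
x2 = real_gaussian_sequence(rng)
env = np.sqrt(x1**2 + x2**2)               # Rayleigh envelope
ratio = env.mean() / np.sqrt(np.mean(env**2))    # sqrt(pi)/2 for Rayleigh
```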
4.5 Summary

We have developed an improved IDFT algorithm for the generation of correlated Rayleigh variates that requires only a single complex FFT operation, instead of the two complex FFT operations required in the original routine of Smith. The routine gives statistically equivalent output samples with reduced computation and reduced memory requirements. A method has also been given for the computation of statistically equivalent Rayleigh variates with two real-sequence FFT routines.

The results to this point have been for any autocorrelation sequence $\{g[d]\}$ which is the inverse DFT of a sequence $\{G[k]\}$, where $\{G[k]\}$ is a positive real sequence. The following chapter will define a specific filter useful for simulation of many mobile communication channels. A quantitative evaluation of the DFT-based algorithms using the filter, and comparison with other approaches, is presented in Chapter 8, using a quantitative evaluation procedure which will be developed in Chapter 7.
Chapter 5

Determination of Filter Coefficients for the IDFT-Based Methods
5.1 Introduction

We have to this point avoided putting unnecessary restrictions on the filter coefficients $\{F[k]\}$. In Chapter 3 we have stated that the elements of $\{F[k]\}$ are real and finite. Chapter 4 specified a filter with the further restriction of conjugate-symmetry. However, the conjugate-symmetry condition does not add any restriction to the range of correlation functions which can be simulated, but rather ensures zero cross-correlation between real and imaginary parts at the FFT output. Hence, the results presented thus far should be applicable to any correlation sequence $\{g[d]\}$ which is well approximated in the relation

$$\{g[d]\} = \operatorname{IDFT}\left\{G[k]\right\},$$

where $\{G[k]\}$ is a positive real sequence.
In this chapter, we will provide a specific set of filter coefficients which will result in fading statistics modelling the signal received by a vertical monopole antenna under the assumption of isotropic scattering. This model is often used in simulation of multipath fading channels. The physical situation is one in which the signal at the mobile receiver is composed of a large number of horizontally travelling plane waves, resulting from reflection of the transmitted signal from buildings or diffraction of the transmitted signal around buildings or other obstacles. No line-of-sight path between transmitter and receiver exists in the model. The received plane waves have amplitudes, phases, and angles of arrival which are random. The phase of each wave is assumed to be independent and uniformly distributed on $[0, 2\pi)$. The received signal at a vertical monopole antenna, as the sum of all waves, is well approximated by a complex Gaussian random process for a large number of waves, due to the Central Limit Theorem [6]. Hence, the amplitude of the received signal is Rayleigh-distributed.
Assuming a continuum of received waves uniformly distributed over all arrival angles and an equal antenna gain of 1.5 for each wave, the theoretical power spectrum of either the real or imaginary (in-phase or quadrature) part of the continuous-time received signal is [8]-[10]

S(φ) = 1.5 / (π Ω_m √(1 − (φ/Ω_m)²)),  |φ| ≤ Ω_m,   (5.1)

where φ represents frequency in Hz. The parameter Ω_m is the maximum Doppler frequency in Hz, given by

Ω_m = v/λ,

where v is the vehicle velocity and λ is the carrier wavelength. The spectrum is bandlimited to this maximum Doppler frequency Ω_m. The normalized (unit-variance) continuous-time autocorrelation of the received signal under these conditions is [8]-[10]

r(τ) = J0(2π Ω_m τ)   (5.2)

where τ is the separation, in seconds, between observation times and J0(·) is the zero-order Bessel function of the first kind. The theoretical autocorrelation function and power spectrum are plotted in Figure 5-1 and Figure 5-2 respectively.
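The autocorrelation (5.2) is straightforward to evaluate numerically. The following is a minimal sketch, not code from the thesis; the power-series evaluation of J0 and the numerical values of v and λ are illustrative assumptions:

```python
import math

def bessel_j0(x, terms=40):
    """Zero-order Bessel function of the first kind, J0, via its power series
    J0(x) = sum_k (-1)^k (x^2/4)^k / (k!)^2 (adequate for moderate |x|)."""
    q = -x * x / 4.0
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= q / (k * k)
        total += term
    return total

def jakes_autocorrelation(omega_m, tau):
    """Normalized autocorrelation r(tau) = J0(2*pi*Omega_m*tau), eq. (5.2)."""
    return bessel_j0(2.0 * math.pi * omega_m * tau)

# Maximum Doppler frequency Omega_m = v / lambda; the velocity and carrier
# wavelength below are hypothetical example values, not taken from the thesis.
v, wavelength = 30.0, 1.0 / 3.0      # 30 m/s, roughly a 900 MHz carrier
omega_m = v / wavelength             # about 90 Hz
print(jakes_autocorrelation(omega_m, 0.0))   # r(0) = 1 (unit variance)
```

Because J0(0) = 1, the autocorrelation at zero lag is exactly the unit variance stated above.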
It has been noted in Section 3.1 that the filtering operations in the fading simulator in Figure 3-1 can be implemented as time-domain filters. In the case of the particular power spectrum (5.1) and the autocorrelation (5.2), this is not an easy task. The ideal filter frequency response is the square root of the power spectrum function (5.1). This frequency characteristic must be formed into a time-domain realization. An infinite impulse response (IIR) filter cannot be designed directly, since a required factorization cannot be performed due to the nonrational form of the filter frequency response. In order to design an IIR filter, the filter response can be approximated by, for example, a Butterworth filter response [2],[27], but this response cannot duplicate the power spectrum (5.1). A finite impulse response (FIR) filter design will be presented in Chapter 6 which does produce correlation that approximates well the
Figure 5-1. The theoretical autocorrelation function J0(2π f_m τ).

Figure 5-2. Theoretical power spectrum at carrier frequency f_c with maximum Doppler frequency f_m.
theoretical autocorrelation (5.2) as the length of the filter becomes very large. However, the computational requirements to generate samples with large FIR filters are prohibitive. Furthermore, methods that realize the power spectrum in frequency are simpler to design than methods in which a time-domain response must be specified. Both the sum-of-sinusoids method [8],[22],[N] (described in Chapter 8) and the IDFT method investigated in this thesis require only a frequency-domain definition of the statistical properties of the output. In Chapter 8 we will see that the IDFT method produces better correlation properties than the sum-of-sinusoids method or the FIR filter method for equivalent computational effort.
We therefore now discuss the problem of implementing the power spectrum (5.1) and autocorrelation function (5.2) in the IDFT algorithm for generating correlated Rayleigh samples. While this implementation is simpler than a time-domain implementation, it is not a trivial task. In the digital frequency-domain specification, we are restricted both to discrete time and discrete frequency, as well as finite-precision arithmetic. This makes a direct implementation of the square root of the spectrum (5.1) impossible, and we must consider this problem carefully.
5.2 The Discrete-Time Problem
Ideally, we wish our finite-length sampled sequence (sampling frequency Ω_s in Hertz) to have the same statistical properties as a sampled version of the theoretical continuous-time signal. That is, we wish the generated sequence to have normalized autocorrelation sequence

g[d] = J0(2π f_m d)   (5.3)

where f_m = Ω_m/Ω_s is the maximum Doppler frequency per sample, and d is the separation between samples, or the sample lag.
In the infinite-time case, the discrete-time Fourier transform of the autocorrelation (5.3), given by

S(e^{jω}) = Σ_{d=−∞}^{∞} J0(2π f_m d) e^{−jωd},

is periodic in frequency with period Ω_s. This function, over the single period for which |Ω| ≤ Ω_s/2, is the theoretical power spectrum in equation (5.1). The coefficients of the discrete Fourier transform (as defined in (3.6)) are given by this discrete-time Fourier transform sampled at frequencies kΩ_s/N. Thus, in the infinite-time case the best choice for the filter coefficients {F[k]} is the square root of the spectrum (5.1), sampled at kΩ_s/N. Since the spectrum (5.1) is positive for all frequencies, a real filter results.
In a digital simulation, however, the autocorrelation sequence is of finite length because a finite number of channel samples are generated. A signal cannot be both bandlimited and time-limited [48]. Therefore, the DFT of this truncated and sampled autocorrelation will not be bandlimited and thus will differ from (5.1).

The truncation of an infinite data sequence to finite length is known as windowing; the infinite data sequence is multiplied with the window sequence to form the truncated sequence. In the present case, in which the window sequence is constant over an interval and zero elsewhere, the window is known as a rectangular window. It is also possible to gradually attenuate data near the edge of the window instead of a hard truncation, leading to different window shapes (see, for example, [47],[48]).
The discrete-time Fourier transform of a windowed sequence is given by the convolution of the transform of the infinite-time data sequence with the transform of the window sequence, resulting in the "spreading" of features in the spectrum of the infinite-time data sequence. One of the consequences of time-domain windowing is a phenomenon known as Gibbs oscillations in the frequency-domain representation of the signal. The effect is that the power spectrum corresponding to the finite-time signal oscillates about that corresponding to the infinite-time signal. In the case of a bandlimited theoretical spectrum, the resulting finite-time response is no longer bandlimited, since the power spectrum oscillates about zero over the range of frequencies with theoretically zero power. This also means the resulting frequency response is negative for some frequencies. Recall that the filter sequence {F[k]} must be real, and therefore the power spectrum sequence {G[k]} = {(F[k])²} must be strictly non-negative. Thus, it is impossible to realize exactly the finite-time frequency response of the filter using the DFT method. The issue of reducing the magnitude of Gibbs oscillations in the case of the DFT methods is discussed in Section 5.4. Figure 5-6b, presented in that section, shows an example of this phenomenon.
On the other hand, if we do force the power spectrum to be bandlimited, we will observe aliasing of the time samples. The inverse DFT coefficients can be considered to be periodic in time, with period N (this will be shown in Section 5.4). The inverse Fourier transform of the bandlimited spectrum (5.1) is of infinite length, and thus the inverse discrete Fourier transform will be a time-aliased version of the infinite-length correlation. That is, the infinite-length inverse transform will be repeated at N-sample intervals, and any given period of the inverse DFT will be given by the sum of all the overlapping sequences. A severe example of time aliasing of the infinite-length autocorrelation function is illustrated in Figure 5-3, with N = 1024 samples. The plot shows two overlapping sequences, from two adjacent periods of the DFT. The solid line indicates the ideal infinite-time autocorrelation function for d = [0, 1023], while the dotted line indicates the ideal infinite-time autocorrelation function for d = [−1024, −1] shifted to the right by 1024 samples. The periodicity of the output is due to the inherent periodicity of the DFT operation (due to sampling), and the infinite-time span of the autocorrelation function is due to the bandlimited frequency response. The observed autocorrelation is given by the sum of the overlaps from each period. There will be an infinite number of overlapping periods, but typically the most severe contribution will be from the adjacent period, as illustrated in the figure.
In practice, if the magnitude of the overlapping terms is small, the aliasing error will be small, but nonetheless a direct implementation of the infinite-time power spectrum cannot yield the exact autocorrelation sequence in this discrete-time system due to time aliasing.
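The overlap just described can be quantified by summing the infinite-time autocorrelation with its images shifted by multiples of N, as in Figure 5-3. A rough sketch follows; the series evaluation of J0 and the particular N and f_m are illustrative assumptions, not values fixed by the thesis:

```python
import math

def bessel_j0(x, terms=60):
    """J0 via its power series (adequate for moderate |x|)."""
    q = -x * x / 4.0
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= q / (k * k)
        total += term
    return total

def aliased_autocorrelation(f_m, d, N, periods=1):
    """Sum the infinite-time autocorrelation J0(2*pi*f_m*d) with its images
    shifted by multiples of N samples; in practice only the nearest images
    contribute noticeably."""
    return sum(bessel_j0(2.0 * math.pi * f_m * (d - p * N))
               for p in range(-periods, periods + 1))

N, f_m = 1024, 0.001                       # illustrative values
ideal = bessel_j0(2.0 * math.pi * f_m * 100)
observed = aliased_autocorrelation(f_m, 100, N)
aliasing_error = abs(observed - ideal)     # contribution of the adjacent periods
```

For small f_m the images decay slowly, so the aliasing error at moderate lags can be appreciable, which is the point of Figure 5-3.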
Figure 5-3. An illustration of time aliasing. The infinite-time autocorrelation function is shown (solid line) along with a time-shifted and overlapping autocorrelation function (dotted line) from an adjacent period of the IDFT.
We must therefore find a digital approximation to the power spectrum of the truncation of the sampled correlation function (5.3). We first consider the filter provided by Smith in the algorithm code, then examine whether a superior filter is possible or desirable.
5.3 The Filter Given in the Algorithm of Smith
Smith has included filter coefficients in the program code in [1]. However, he did not provide any explanation or justification of this filter, and the above discussion indicates that the choice of filter is not obvious. The approach of Smith to this problem is to sample, in frequency, the continuous-time spectrum (5.1) (effectively ignoring finite-time effects), giving special treatment to the frequency coefficients at two points. The first is the point at zero frequency, which is made zero. We have explained the benefit of this in the discussion of Section 3.4. The other, here given the index k_m, is the point at, or just below, the maximum Doppler frequency (in other words, the closest frequency sample less than or equal to the maximum Doppler frequency). The coefficient at k_m is chosen such that the variance is close to the continuous-time case, which is accomplished by ensuring that the area under an interpolation of the spectrum coefficients is equal to the area under the continuous-time spectrum curve, as will be presently outlined.
In forming Smith's coefficients, the continuous power spectrum (5.1) is sampled in frequency to form the discrete power spectrum {G[k]}, k = 1, 2, ..., k_m − 1. The range of frequencies from 0 to Ω_s is divided into N intervals, each of width Ω_s/N in Hz. The maximum Doppler frequency, given by Ω_m in Hz, will occur at sample location Ω_m (Ω_s/N)^{−1} = f_m N. The sample index must of course be an integer, so this value is rounded down, and the index k_m is given as

k_m = ⌊f_m N⌋   (5.4)

where ⌊x⌋ indicates the largest integer less than or equal to x and f_m, as defined previously, is the maximum Doppler frequency divided by the sample rate. The realized maximum Doppler frequency in the digital system, in Hertz, is κ_m = k_m Ω_s / N.
Using this Doppler value in (5.1) yields, using Smith's normalization,

S(φ) = 1 / (2 √(1 − (φ/κ_m)²)),  |φ| ≤ κ_m,   (5.5)

as the analog power spectrum for positive frequencies with the modified Doppler rate. The area under this power spectrum curve from 0 to analog frequency φ is given by [57, 2.271.4]

A(φ) = (κ_m/2) arcsin(φ/κ_m)   (5.6)

for 0 ≤ φ ≤ κ_m. Evaluating this function at the realized maximum Doppler frequency φ = κ_m, we obtain

A(κ_m) = (κ_m/2) arcsin(1) = π κ_m / 4.   (5.7)
The area under the power spectrum curve is equal to the variance of the resulting samples, and thus for the analog system using the maximum Doppler rate realized in this digital system, this variance is given by (5.7). The area under the section of the power spectrum (5.5) from zero frequency to the point one sample prior to the maximum Doppler sample, G[k_m − 1], is given by the evaluation of the area formula (5.6) for φ = (k_m − 1)Ω_s/N. Thus, this area is

A((k_m − 1)Ω_s/N) = (κ_m/2) arcsin((k_m − 1)/k_m).

The area under the continuous power spectrum (5.5) between the analog-system frequencies corresponding to (k_m − 1) and k_m is therefore

A(κ_m) − A((k_m − 1)Ω_s/N) = (κ_m/2) [π/2 − arcsin((k_m − 1)/k_m)].

The interval between (k_m − 1) and k_m has width

(Ω_s/N)[k_m − (k_m − 1)] = Ω_s/N

in Hz. If we approximate the area under the spectrum curve (5.5) by the area of a rectangle of width Ω_s/N and height G[k_m], we obtain this point G[k_m] as

G[k_m] = (k_m/2) [π/2 − arcsin((k_m − 1)/k_m)].   (5.8)

This is the value for G[k_m] used by Smith.
We can now give the complete filter used by Smith, which corresponds to the square root of the spectrum samples {G[k]}. The filter is thus given as {F_M[k]}, where

F_M[k] = 0,  k = 0,
F_M[k] = √( 1 / (2 √(1 − (k/(N f_m))²)) ),  k = 1, 2, ..., k_m − 1,
F_M[k] = √( (k_m/2) [π/2 − arctan((k_m − 1)/√(2 k_m − 1))] ),  k = k_m,
F_M[k] = 0,  k_m < k < N − k_m,
F_M[k] = F_M[N − k],  N − k_m ≤ k ≤ N − 1,   (5.9)

and the value at k_m follows from (5.8), since arcsin((k_m − 1)/k_m) = arctan((k_m − 1)/√(2 k_m − 1)). This filter is used directly in the algorithm of Chapter 3. The filter producing identical autocorrelation in the modified algorithm of Chapter 4 is given by applying (4.6) to (5.8); the resulting filter is {F'_M[k]}. Note that it is trivial to compute {F'_M[k]} from {F_M[k]}.

Use of either filter in the corresponding DFT routine produced samples with autocorrelation closely matching theory over a wide range of the sample lag d. This is seen, for example, in Figure 4-3.
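The construction described in this section can be sketched in code. This is an illustrative reading of the area-matching filter and the IDFT generator, not the thesis's own routine: the output scaling is left unnormalized, and a direct O(N²) inverse DFT stands in for the FFT a real implementation would use.

```python
import cmath
import math
import random

def smith_filter(N, f_m):
    """Filter coefficients {F[k]}: square root of the sampled spectrum, with
    F[0] = 0 and the Doppler-edge coefficient at k_m chosen by area matching."""
    k_m = int(math.floor(f_m * N))       # closest sample at or below the Doppler edge
    assert 0 < k_m < N // 2              # sketch assumes a usable Doppler index
    F = [0.0] * N
    for k in range(1, k_m):
        F[k] = math.sqrt(1.0 / (2.0 * math.sqrt(1.0 - (k / (N * f_m)) ** 2)))
    # area-matched end point: G[k_m] = (k_m/2)(pi/2 - arctan((k_m-1)/sqrt(2 k_m - 1)))
    F[k_m] = math.sqrt((k_m / 2.0) * (math.pi / 2.0
                       - math.atan((k_m - 1.0) / math.sqrt(2.0 * k_m - 1.0))))
    for k in range(N - k_m, N):          # mirror into the upper (negative) frequencies
        F[k] = F[N - k]
    return F

def generate_rayleigh(N, f_m, rng=random):
    """Shape i.i.d. complex Gaussian frequency samples with F[k], inverse-DFT,
    and take the magnitude (unnormalized sketch of the IDFT method)."""
    F = smith_filter(N, f_m)
    X = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) * F[k] for k in range(N)]
    return [abs(sum(X[k] * cmath.exp(2j * math.pi * k * d / N)
                    for k in range(N)) / N)
            for d in range(N)]
```

For example, N = 1024 with f_m = 0.05 gives k_m = 51; an FFT would replace the quadratic-time inner loop in practice.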
5.4 Autocorrelation Derivative Continuity
The discrete power spectrum {G[k]} obtained as the square of the filter coefficients will be different from the discrete power spectrum obtained as the discrete Fourier transform of the truncated sampled autocorrelation function (5.3). We here consider how this agreement may be improved, by ensuring that the periodic autocorrelation function has a continuous first derivative.
Both the time-domain and frequency-domain sequences in the discrete Fourier transform pair can be considered to exist for all time and over all frequencies, being periodic with period N. Recalling the definition of the forward transform,

G[k] = Σ_{d=0}^{N−1} g[d] e^{−j2πkd/N},

we observe that the function e^{−j2πkd/N} is periodic with period N, as either a function of k with d held constant, or as a function of d with k held constant. The DFT can thus be taken over any length-N interval of the (N-periodic) infinite sequence {g[d]}, and the resulting infinite sequence {G[k]} will also be periodic with period N. Similarly, the inverse transform

g[d] = (1/N) Σ_{k=0}^{N−1} G[k] e^{j2πkd/N}

can also be taken over any length-N interval of the periodic sequence {G[k]}, and {g[d]} will be periodic with period N.
Furthermore, the continuous-time autocorrelation function is symmetric about τ = 0, and therefore the discrete-time sequence {g[d]} is also symmetric about d = 0. Due to the periodicity property of the DFT, the "negative-time" elements of {g[d]} (for d = −N/2 + 1, −N/2 + 2, ..., −1), shifted by N, are present as the elements of {g[d]} for d = N/2 + 1, N/2 + 2, ..., N − 1. In other words, the upper half of the sequence {g[d]}, d = 0, 1, ..., N − 1, represents the autocorrelation sequence for negative time.
This situation is illustrated in Figure 5-4, which illustrates the periodicity of an autocorrelation sequence of length one thousand obtained by inverse DFT. Due to the symmetry of the autocorrelation function, within each period we observe even symmetry (g[d] = g[N − d]). For example, the period d = −499, −498, ..., 500 is even about d = 0. Due to the properties of the DFT itself, the entire autocorrelation sequence can be considered to repeat in time with period 1000. For example, the autocorrelation samples for d = −499, −498, ..., 500 are identical to the samples for d = 501, 502, ..., 1500.
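The periodicity and symmetry just described are easy to verify numerically. A small sketch (a direct inverse DFT over an illustrative real, even spectrum; none of the numerical values are from the thesis):

```python
import cmath

def idft(G):
    """Direct inverse DFT: g[d] = (1/N) * sum_k G[k] * exp(+j*2*pi*k*d/N)."""
    N = len(G)
    return [sum(G[k] * cmath.exp(2j * cmath.pi * k * d / N) for k in range(N)) / N
            for d in range(N)]

# A real spectrum with even symmetry G[k] = G[N - k] ...
N = 16
G = [0.0] * N
for k in range(1, 4):
    G[k] = 1.0 / k
    G[N - k] = G[k]

g = idft(G)
# ... yields a real sequence with even symmetry g[d] = g[N - d]: the upper
# half of the sequence holds the "negative-time" autocorrelation values.
```

Since the complex exponential is N-periodic in d, the sequence g may equally be read as repeating with period N, which is the wrapping used in Figure 5-4.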
At the point d = N/2 the "negative-time" coefficients meet the "positive-time" coefficients. At d = N/2 the interpolated autocorrelation function may have a continuous first derivative, and the transition from the positive-time coefficients to the wrapped negative-time coefficients will be smooth, as seen in Figure 5-5a. Or, the function may have a discontinuous first derivative at this point, in which
Figure 5-4. An illustration of the periodicity of a length-1000 autocorrelation sequence obtained by inverse DFT.

Figure 5-5. (a) The theoretical autocorrelation function in the case of a continuous first derivative (f_m = 0.01 per sample). (b) The corresponding power spectra, obtained by (i) taking the DFT of the correlation function, and (ii) squaring the Smith filter.
case the transition will no longer be smooth, as seen in Figure 5-6a. Intuitively, we expect that discontinuities will increase high-frequency components and make the assumption of a bandlimited power spectrum less secure. Indeed, this is observed in the power spectra for the two cases, plotted in Figure 5-5b and Figure 5-6b. The magnitude of Gibbs oscillations in the DFT of the truncated autocorrelation is seen to be large in the discontinuous case, making the match with the filter spectrum poorer than in the continuous-derivative case, which shows good agreement. The power spectrum of the discontinuous-derivative autocorrelation also has large negative values. The spectrum of the continuous-derivative example has only very small negative values, which is important because negative values in the power spectrum cannot occur in the DFT method, and so a better filter match is possible. Also, it is observed that the spectral peak occurring near f_m is larger in the continuous-derivative case. This peak corresponds to an infinite vertical asymptote in the power spectrum of the continuous-time signal modelled by the discrete-time process. Hence, the larger peak in the continuous-derivative case gives improved power spectrum agreement with the continuous-time theoretical spectrum.
It is thus worthwhile investigating whether it is beneficial to force autocorrelation derivative continuity. This can be accomplished by an appropriate choice of the normalized Doppler frequency f_m, specifically a choice which makes the derivative of the autocorrelation zero at the point d = N/2. The derivative of the zero-order Bessel
Figure 5-6. (a) The theoretical autocorrelation function in the case of a discontinuous first derivative (f_m = 0.01 per sample). (b) The corresponding power spectra, obtained by (i) taking the DFT of the correlation function, and (ii) squaring the Smith filter.

function is [57, 8.473.4]

dJ0(x)/dx = −J1(x),

where J1(·) denotes the first-order Bessel function of the first kind. Thus, to ensure continuity of the derivative at the N/2 point we must pick f_m such that

J1(2π f_m (N/2)) = J1(π f_m N) = 0.   (5.10)
This can be readily solved for f_m by standard computer iterative routines.
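For instance, a bisection on J1 near the design value suffices. The following sketch is one such routine; the power-series J1 and the bracketing strategy are implementation assumptions, not taken from the thesis:

```python
import math

def bessel_j1(x, terms=60):
    """First-order Bessel function of the first kind, J1, via its power series
    J1(x) = (x/2) * sum_k (-1)^k (x^2/4)^k / (k! (k+1)!)."""
    q = -x * x / 4.0
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= q / (k * (k + 1))
        total += term
    return (x / 2.0) * total

def continuous_derivative_fm(f_design, N, tol=1e-12):
    """Find f_m near f_design satisfying (5.10), J1(2*pi*f_m*(N/2)) = 0, so
    the periodic autocorrelation has a continuous derivative at d = N/2."""
    g = lambda f: bessel_j1(math.pi * f * N)
    step = 0.5 / N                       # widen until a sign change is bracketed
    lo, hi = f_design - step, f_design + step
    while g(lo) * g(hi) > 0:
        lo -= step
        hi += step
    while hi - lo > tol:                 # plain bisection
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For example, with N = 100 and a design value of 0.05, the routine returns the nearby root f_m ≈ 0.0524, a small shift of the kind discussed next.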
Imposing condition (5.10) usually involves only a very small change in f_m from the design value. The quality of the algorithm output with and without the continuous-derivative condition was compared using the measures presented in Chapter 7. Two hundred samples of each theoretical autocorrelation sequence were used, with normalized maximum Doppler frequency f_m = 0.0811 in the continuous-derivative case, and f_m = 0.0515 in the discontinuous-derivative case. The improvement in quality with the continuous-derivative autocorrelation was observed to be roughly 1.5 × 10^-? dB for all the measures. With realistic sequence lengths, this is less than the variability observed in the time autocorrelation from one realization to another, and thus will not be significant in practice. The sharper bandlimiting of the continuous-derivative case may be significant in some simulation situations where systems have sharply restricted bandwidths.
5.5 Improvement in Calculation of Last Filter Point
We have stated that the realized autocorrelation of the DFT routine output samples satisfies the relation (3.31), and we have defined a set of filter coefficients {F_M[k]} in frequency. Since our goal is to have the best possible match of the output correlation {g[d]} to the truncated theoretical autocorrelation (5.3), we might consider an alternate method of obtaining filter coefficients {F[k]}. If we start by defining the desired autocorrelation {g[d]}, then we can use the DFT relation to find {G[k]} and hence {F[k]}. As has already been noted, this cannot be accomplished exactly, because such a procedure will give G[k] negative for some values of k, and we have shown that {F[k]} must be real. Furthermore, such a procedure would necessitate another DFT step in the routine, which would be costly computationally.
Figure 5-7 illustrates the difference between spectrum coefficients {G[k]} calculated according to (3.30) using the sampled and truncated theoretical autocorrelation function, and the power spectrum obtained by squaring the filter defined in (5.9). In this continuous-derivative case, the difference between the two functions is small except for a peak at sample k_m, the sample just under the maximum Doppler frequency. We could replace the point F[k_m] in (5.9) by the square root of the single
Figure 5-7. (i) The magnitude of the difference between the power spectrum obtained as the square of the Smith filter coefficients and the power spectrum obtained as the DFT of the truncated autocorrelation function, relative to the sum of the coefficients of either spectrum. The maximum Doppler frequency is f_m = 0.01, and the first derivative of the autocorrelation is continuous. (ii) The power spectrum obtained as the DFT of the truncated autocorrelation function, plotted for reference.
point in the DFT

F[k_m] = √( Σ_{d=0}^{N−1} r[d] e^{−j2π k_m d/N} ),

where {r[d]} is given by (5.3).
Despite the extra computation to obtain F[k_m] in this fashion, it was observed that use of this filter point in the routine resulted in a very small improvement in the quality of the output samples. The measures of Chapter 7 were applied, in the case of a continuous autocorrelation derivative, to the algorithm with the modified F[k_m]. The improvement was seen to be roughly 2.5 × 10^-? dB.
As observed in the discussion of Section 5.4, the filter {F_M[k]} (5.9) gives samples of sufficiently high quality that further improvement becomes unworthwhile. Statistical averages are observed in practice as time averages, which for a finite number of samples are random variables, as indicated in Section 3.3.5. Small improvements in autocorrelation accuracy which are less than the variability in the random time averages are likely to have negligible effect in any given communication system simulation result. This is the case here.
5.6 Summary
We have provided a set of filter coefficients suitable for modelling the effects of isotropic scattering using a vertical monopole antenna. The filter coefficients in the original algorithm of Smith were developed (such a development is not present in the original paper) and found to produce satisfactory output autocorrelation properties with relatively small effort to compute the filter coefficients. The observed correlation of the random output samples using this filter has been observed to be close to theoretical prediction; quantitative results will be given in Chapter 8. The square of this filter has been observed to be close to the DFT of the theoretical truncated autocorrelation. This is particularly true when the derivative of the autocorrelation is continuous. An improvement in calculation of the last filter point has been given, but experimental results have shown the improvement in output correlation to be small.
Chapter 6
A Finite Impulse Response Digital Filtering Approach to the
Generation of Correlated Rayleigh Random Variates
6.1 Introduction
It has been previously noted that Smith used as a basis for his computer algorithm a hardware simulator design found in reference [2], which has been presented as a block diagram in Figure 3-1. An alternative to the IDFT approach for digitally implementing the system of Figure 3-1 is time-domain finite impulse response (FIR) filtering of white Gaussian noise samples. We now present the design of a finite impulse response filter to be used in such a system, which will model the same fading statistics discussed in Chapter 5. The fading simulator based on this filter, which is pictured in Figure 6-1, will be compared in Chapter 8 to the IDFT approach.
Figure 6-1. Block diagram of the simulator to generate correlated Rayleigh samples by FIR filtering of white Gaussian noise samples.

The real or imaginary part of the output sequence of baseband fading signal
samples in the simulator is given by the linear discrete-time convolution of white Gaussian noise samples with the filter impulse response. The particular impulse response presented in this chapter, which will be denoted {h[n]}, approaches the ideal filter impulse response as the length of the filter approaches infinity. It is shown in [58] that in the limit, as the number of filter taps approaches infinity, the ideal J0(·) autocorrelation (5.3) is exactly realized.

It has already been noted in Section 3.1 that we may pass white Gaussian noise with unit power spectrum through a continuous-time filter, with impulse response h(t) and corresponding frequency response H(ω), to obtain colored Gaussian noise with power spectrum

S(ω) = |H(ω)|²   (6.1)
where ω represents analog radian frequency. This filtering operation may also be performed in discrete time. We will design an infinite-length discrete-time filter response {h[n]} that yields exactly the correct correlation properties for the output of the discrete-time system. This infinite response may be truncated and shifted to create a realizable FIR filter {h[n]}. A discrete-time linear convolution of {h[n]} and white Gaussian noise samples will yield Gaussian noise samples having statistical properties approximating the desired theoretical properties. The filter length, denoted L_f, determines the quality of this approximation. Two such correlated Gaussian sequences added in quadrature will form the desired complex Gaussian fading process, the magnitude of which is Rayleigh-distributed.
6.2 Forming the Theoretical Filter Frequency Response
Recall (5.2) that under the assumption of a vertical monopole antenna, the normalized autocorrelation function of the real or imaginary part of the baseband fading process is given by

r(τ) = J0(ω_m τ)   (6.2)

where ω_m = 2πΩ_m, Ω_m is the maximum Doppler frequency in Hertz, and τ is the time separation between samples. The Fourier transform of this autocorrelation is computed, using the tables of [57, 6.611.1], as

S(ω) = 2 / (ω_m √(1 − (ω/ω_m)²)),  |ω| < ω_m.

This spectrum is consistent with the power spectrum (5.1) of Chapter 5.

Now, solving equation (6.1) for H(ω), we obtain a magnitude response function

|H(ω)| = √(S(ω)).

The full specification of the frequency response H(ω) also requires a phase response function; we arbitrarily assign a phase of zero at all frequencies, yielding

H(ω) = √(S(ω)),  |ω| < ω_m;  H(ω) = 0,  |ω| ≥ ω_m,   (6.3)

as the theoretical continuous-time filter frequency response.
6.3 The Impulse Response of the Continuous-Time Filter

In order to find the filter impulse response h(t), we inverse Fourier transform the filter frequency response (6.3). This is expressed as

h(t) = (1/2π) ∫_{−ω_m}^{ω_m} H(ω) e^{jωt} dω.   (6.4)
We now express the continuous-time response in closed form. Making the substitution u = ω/ω_m, we obtain

h(t) = (ω_m/2π) ∫_{−1}^{+1} H(ω_m u) [cos(ω_m u t) + j sin(ω_m u t)] du.   (6.5)

We observe that the second integral in (6.5) is zero, since the integrand is an even function multiplied by an odd function and we are integrating from −1 to +1. The integrand of the first integral in (6.5) is an even function multiplied by another even function; therefore we may write the impulse response h(t) as

h(t) = (ω_m/π) ∫_0^1 H(ω_m u) cos(ω_m u t) du.   (6.6)

The solution to this definite integral is given in [57, 3.771.8] as

∫_0^u (u² − x²)^{ν−1/2} cos(ax) dx = (√π/2) (2u/a)^ν Γ(ν + 1/2) J_ν(au)   (6.7)

under the conditions a > 0, u > 0, and Re ν > −1/2. In the present case we have u = 1, ν = 1/4, and a = ω_m t. Therefore, for t > 0,

h(t) = Γ(3/4) √(ω_m/π) J_{1/4}(ω_m t) / (2 ω_m t)^{1/4}.   (6.8)
We observe that the expression (6.6) has the property h(t) = h(−t), since cos(ω_m u t) = cos(−ω_m u t). Thus, the impulse response for negative time is also given by equation (6.8), with t replaced by |t|. This gives the expression

h(t) = Γ(3/4) √(ω_m/π) J_{1/4}(ω_m |t|) / (2 ω_m |t|)^{1/4},  t ≠ 0.   (6.9)

We do not yet know h(0). Taking the limit of (6.8) as t → 0, we obtain h(0) as

h(0) = Γ(3/4) √(ω_m/π) lim_{t→0} J_{1/4}(ω_m t) / (2 ω_m t)^{1/4}.

Now, the function J_{1/4}(z) can be written as [57, 8.402]

J_{1/4}(z) = (z/2)^{1/4} Σ_{k=0}^{∞} (−1)^k (z²/4)^k / (k! Γ(k + 5/4)).

Substituting z = ω_m t into this expression, and evaluating the sum at t = 0, we observe that the t^{2k} factor is zero for k > 0, and unity for k = 0. Thus

h(0) = (Γ(3/4)/Γ(5/4)) √(ω_m/(2π)),

and we have the complete continuous-time filter specification

h(t) = Γ(3/4) √(ω_m/π) J_{1/4}(ω_m |t|) / (2 ω_m |t|)^{1/4},  t ≠ 0;  h(0) = (Γ(3/4)/Γ(5/4)) √(ω_m/(2π)).   (6.10)
6.4 Forming the Discrete-Time Filter Response

Now, we can specify a discrete-time infinite impulse response filter as the response (6.10) sampled at frequency Ω_s. Substituting t = n/Ω_s into the argument of the quarter-order Bessel function J_{1/4}(ω_m |t|), we obtain this argument as 2π f_m |n|, where f_m is again defined as f_m = Ω_m/Ω_s. Thus, we obtain the infinite discrete-time response as

h[n] = Γ(3/4) √(ω_m/π) J_{1/4}(2π f_m |n|) / (4π f_m |n|)^{1/4},  n ≠ 0;  h[0] = (Γ(3/4)/Γ(5/4)) √(ω_m/(2π)).   (6.11)

A realizable FIR filter can be obtained by truncating the infinite filter (6.11) symmetrically about n = 0, then shifting the truncated response to make the filter causal. This gives the discrete-time finite impulse response {h[n]}, for filter length L_f odd, given by the infinite response (6.11) evaluated at n − (L_f − 1)/2 for n = 0, 1, ..., L_f − 1.
When this filter is used in the system of Figure 6-1, the approximation to the autocorrelation (6.2) can be made arbitrarily close by increasing L_f, the length of the FIR filter. It will be observed in Chapter 8 that better performance is observed when L_f is chosen such that h[L_f − 1] and h[0] are near zeros of the function (6.10). This reduces the discontinuities at the edges of the filter due to the rectangular window shape.
Unfortunately, the decay in the magnitude of the filter coefficients (6.11) at the edge of the filter is slow; thus, a long filter is needed to achieve accurate correlation properties, resulting in long execution times. An exact result occurs for L_f = ∞; however, such a filter is unstable. A stable filter must have the property that the filter coefficients are absolutely summable, i.e. the condition

Σ_{n=−∞}^{∞} |h[n]| < ∞

must be true. Clearly, this condition is met with any finite L_f, since {h[n]} is finite. However, the sequence {h[n]} asymptotically decays approximately as n^{−1/2}/n^{1/4} = n^{−3/4}, so the sequence {h[n]} is not absolutely summable and hence the infinite filter is unstable.
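The truncated filter and the quadrature simulator of Figure 6-1 can be sketched as follows. The kernel J_{1/4}(2π f_m |n − c|)/|n − c|^{1/4} follows the form derived above, but the overall scale is fixed here by normalizing to unit energy rather than by reproducing the exact constants of (6.11); the power-series J_{1/4}, the filter length, and the Doppler value are illustrative assumptions:

```python
import math
import random

def bessel_j_quarter(x, terms=60):
    """J_{1/4}(x) via its power series (adequate for moderate x > 0)."""
    total, q = 0.0, -x * x / 4.0
    term = 1.0 / math.gamma(1.25)        # k = 0 term before the (x/2)^{1/4} factor
    for k in range(terms):
        total += term
        term *= q / ((k + 1) * (k + 1.25))
    return (x / 2.0) ** 0.25 * total

def fir_fading_filter(L_f, f_m):
    """Truncated, causally shifted impulse response proportional to
    J_{1/4}(2*pi*f_m*|n-c|)/|n-c|^{1/4}; the centre tap uses the
    small-argument limit, and the taps are normalized to unit energy."""
    c = (L_f - 1) // 2                   # L_f assumed odd
    h = []
    for n in range(L_f):
        t = abs(n - c)
        if t == 0:
            h.append((math.pi * f_m) ** 0.25 / math.gamma(1.25))  # t -> 0 limit
        else:
            h.append(bessel_j_quarter(2.0 * math.pi * f_m * t) / t ** 0.25)
    e = math.sqrt(sum(x * x for x in h))
    return [x / e for x in h]

def fir_rayleigh(num, L_f, f_m, rng=random):
    """Filter two independent white Gaussian streams with h[n] and combine
    them in quadrature, as in Figure 6-1; returns the Rayleigh envelope."""
    h = fir_fading_filter(L_f, f_m)
    wi = [rng.gauss(0, 1) for _ in range(num + L_f - 1)]
    wq = [rng.gauss(0, 1) for _ in range(num + L_f - 1)]
    return [math.hypot(sum(h[k] * wi[n + k] for k in range(L_f)),
                       sum(h[k] * wq[n + k] for k in range(L_f)))
            for n in range(num)]
```

Choosing L_f so the edge taps fall near zeros of the underlying response, as suggested above, reduces the truncation discontinuity.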
6.5 Some Comparisons with the IDFT Method

We have already indicated that the FIR filtering approach allows control over the accuracy of the approximation by the choice of filter length, and greater computational effort is required to obtain greater accuracy. It will be seen in Chapter 8 that the IDFT method produces samples with good correlation properties, while for similar computational effort the samples obtained by FIR filtering exhibit poorer correlation properties. In most circumstances the IDFT method remains preferable in terms of correlation accuracy, since execution time is usually a major constraint.
An advantage of time-domain filtering is that channel samples can be generated sequentially as they are needed, with storage required only for the filter contents and the filter coefficients. With the IDFT-based approach, all channel samples must be generated with one call to the routine at the start of the simulation run. In simulations on computer hardware, however, storage for the channel samples is usually readily available, and the substantial time savings with the IDFT methods are much more significant to simulation cost than additional memory usage.
The IDFT method also has the advantage that simulation of different correlation functions can be readily accomplished by a change of the filter coefficients {F[k]}. Since these are defined in frequency, a sequence {F[k]} can be obtained by the methods of Chapter 5 for a non-negative power spectrum, even if this spectrum is nonrational. In contrast, the preceding FIR filter design models only the particular correlation function (6.2). While in theory any correlation function can be simulated by an FIR filtering approach, a closed-form expression for the time-domain coefficients may not exist for every choice of correlation function, and when such an expression does exist, it may be difficult to find.
6.6 Summary
We have presented an impulse response specification of an FIR filter. Use of this filter in the system of Figure 6-1 will produce Rayleigh-distributed samples modelling isotropic scattering with a vertical monopole antenna. The statistical properties of these samples can be made arbitrarily close to the theoretical properties by increasing the length of the impulse response of the filter, which also increases the execution time of the routine. This approach will be compared to the IDFT approach in Chapter 8, and it will be seen that for equivalent computational effort the IDFT method remains favorable to this FIR filtering method.
Chapter 7
A Goodness-Of-Fit Test to Quantitatively Assess the Quality
of Random Variate Generation
7.1 Introduction
We develop here a test which we will use to evaluate the performance of the IDFT method described in Chapter 4, the filtering approach described in Chapter 6, and a sum-of-sinusoids approach to the generation of Rayleigh variates, which will be described in Chapter 8. Rayleigh-distributed output samples are given by the amplitude of samples of a complex zero-mean Gaussian process, so it is sufficient to test this complex Gaussian sequence since it is usually also available for observation in the given variate generation algorithm. Moreover, the complex, zero-mean, multivariate Gaussian distribution of these samples is a special case of a real multivariate Gaussian distribution, so the goodness-of-fit test need only operate on real Gaussian-distributed samples. Other distributions based on the multivariate Gaussian distribution, such as the Ricean and Nakagami-m distributions, can also be evaluated by this method.
7.2 Definition of the Goodness-Of-Fit Problem
We start with the real or imaginary part of the generator output sequence. This real sequence {x̂[n]}, n = 0, 1, ..., N − 1, is assumed to consist of samples of a stationary ergodic zero-mean random process.
Define the random vector x̂ as

x̂ = [x̂[0], x̂[1], ..., x̂[L−1]]^T,

where the elements of x̂ are statistically equivalent to any length-L subset of adjacent samples in the sequence {x̂[n]}. That is, the probability density function of x̂ is identical to the probability density function of the vector {x̂[M], x̂[M+1], ..., x̂[M+L−1]}, where M is any integer in the range [0, N − L]. The probability density function of all such subsets will be the same due to the stationarity of the process {x̂[n]}.
Define also the random vector

X = [X[0], X[1], ..., X[L−1]]^T,

where the elements of X are zero-mean jointly Gaussian random variables, distributed according to the probability density function [18]

f_X(x) = (1/((2π)^{L/2} |det C_X|^{1/2})) exp(−0.5 x^T C_X^{-1} x).

The vector X represents a random vector of samples distributed according to the desired probability density function. The L × L matrix C_X is the covariance matrix of the random vector X, with the element at the mth row and nth column denoted c_X^{(m,n)} and given by

c_X^{(m,n)} = E{X[m]X[n]}.
This can be expressed in matrix form as

C_X = E{X X^T}.

The matrix C_X represents the desired covariance matrix, and thus is known exactly. The output of a given generation algorithm, represented by the vector x̂, in general will be distributed differently than X. We wish to define a quality measure that gives a good indication of how well the probability density function of x̂, denoted f_x̂(x), approximates the desired probability density function, f_X(x).
We can make the assumption that the elements of the observed vector x̂ have the zero-mean joint Gaussian density

f_x̂(x) = (1/((2π)^{L/2} |det C_x̂|^{1/2})) exp(−0.5 x^T C_x̂^{-1} x).
In the case of the IDFT-based routines discussed in Chapters 3 and 4, it is assured that with a tested normal random number generator the assumption that our output samples are normally distributed is accurate, since a linear operation (such as the FFT) on jointly Gaussian random samples will yield jointly Gaussian random samples. Similarly, in the case of time-domain filtering of white Gaussian noise, the linear filtering operation on jointly Gaussian samples will produce samples with a joint Gaussian distribution. The third method often used to generate Rayleigh samples, that of adding a number of sinusoidal waves, has been shown [59],[60] to well approximate the Gaussian distribution for greater than six waves in the sum, a condition that is usually met in any practical simulator. If a situation exists in which this assumption of joint normality is in doubt, it can be tested with other statistical tests for multivariate normality found in the published literature, for example the tests of [38] or [41].
For some variate generation schemes, such as the IDFT and FIR methods, the covariance matrix can be determined exactly. When this is not possible, techniques such as those found in reference [49, Section 13.2] can be used to estimate this L × L matrix. Since both x̂ and X are assumed to represent stationary random processes, the autocovariance function of either vector depends only on the magnitude of the separation between samples. Hence, both C_x̂ and C_X are symmetric Toeplitz matrices; i.e., they have a form in which each element is given by (7.2). Both matrices are non-negative definite, meaning a^T C a ≥ 0 for every vector a, which can be seen in the relation

a^T C_X a = E{(a^T X)^2} ≥ 0,

since (a^T X)^2 is always positive or zero. If the condition a^T C_X a > 0 is satisfied for every a ≠ 0, the covariance matrix is positive definite.
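As a concrete illustration, a symmetric Toeplitz covariance matrix can be assembled from an autocovariance sequence and the quadratic form checked numerically. The exponential autocovariance used here is a hypothetical example, not the fading correlation function (6.2).

```python
# Build a symmetric Toeplitz covariance matrix C from a stationary
# autocovariance sequence c[|m - n|], and check a^T C a >= 0.
# The exponential autocovariance c[k] = rho**k is a hypothetical example.
import random

random.seed(2)

L = 8
rho = 0.9
c = [rho ** k for k in range(L)]                            # autocovariance
C = [[c[abs(m - n)] for n in range(L)] for m in range(L)]   # Toeplitz matrix

def quadratic_form(a, M):
    """Compute a^T M a."""
    n = len(a)
    return sum(a[i] * M[i][j] * a[j] for i in range(n) for j in range(n))

# Non-negative definiteness: a^T C a >= 0 for any vector a
for _ in range(100):
    a = [random.uniform(-1.0, 1.0) for _ in range(L)]
    assert quadratic_form(a, C) >= 0.0
```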
In defining a quality measure, we must establish a criterion (or multiple criteria) which will form the basis of our evaluation. Consider the output vectors from two different random variate generators, x̂_1 and x̂_2, the probability density functions of which each approximate the probability density function f_X(x) of an ideal random vector X. We can say the distribution of x̂_1 is superior to that of x̂_2 if the results of a communication system simulation using x̂_1 are closer than the results using x̂_2 to the results that would be obtained with the perfect samples X. However, we do not necessarily know the application in which the samples will be used and, even when we do have this information, we do not know what the ideal simulation results should be for comparison. Thus, we need to impose our own definition of the quality of the sample distribution, using a measure which is designed to be useful for a wide range of problems.
7.3 Development of the Test: The One-Dimensional Case
We first consider a one-dimensional case (L = 1). We have a communication system simulator which as input requires random samples with the desired probability density

f_X(u) = (1/(√(2π) σ)) exp(−u^2/(2σ^2)),

where σ^2 is the variance of the samples. Often the information the simulation of a communication system is supposed to provide is a probability or an average value, for example a bit error rate or an average outage rate. This result is in some way dependent on the probability that the sample x falls in a certain region (ζ, ξ) of the normal curve (it is of no consequence if the endpoints are also included in the region). In the case of the theoretical reference distribution, this probability is expressed as

P(ζ < X < ξ) = ∫_ζ^ξ f_X(u) du = Q(ζ/σ) − Q(ξ/σ),  (7.5)
where the Q-function [61] is defined as

Q(x) = (1/√(2π)) ∫_x^∞ exp(−u^2/2) du.

In practice, the generated input samples will have probability density

f_x̂(u) = (1/(√(2π) σ̂)) exp(−u^2/(2σ̂^2)),

where σ̂^2 is the variance of the output of the random variate generator. The probability of x falling in the same region (ζ, ξ) is given by

P(ζ < x̂ < ξ) = ∫_ζ^ξ f_x̂(u) du = Q(ζ/σ̂) − Q(ξ/σ̂).  (7.6)
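The probabilities (7.5) and (7.6) are easily computed from the Q-function, which can be written with the standard-library complementary error function; the specific σ values below are illustrative only.

```python
# Region probabilities (7.5) and (7.6) via the Gaussian Q-function,
# Q(x) = 0.5 * erfc(x / sqrt(2)).
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def prob_region(zeta, xi, sigma):
    """P(zeta < X < xi) for zero-mean Gaussian X with std dev sigma."""
    return Q(zeta / sigma) - Q(xi / sigma)

# Reference distribution versus a generator with a different variance:
p_ref = prob_region(-1.0, 1.0, 1.0)   # one-sigma interval, about 0.6827
p_gen = prob_region(-1.0, 1.0, 1.2)   # hypothetical generator, sigma_hat = 1.2
```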
A criterion for goodness that would seem to span many problems in the one-dimensional case is how well the probability (7.6) obtained with the generated samples compares to that obtained with the ideal distribution, given by (7.5), averaged over all possible ζ and ξ.
Each probability directly depends on the corresponding density function; therefore, a comparison of f_X(u) and f_x̂(u) will also indicate the closeness of the probabilities (7.5) and (7.6). Clearly, if the two density functions are equal the two probabilities will also be equal. In the one-dimensional case, the zero-mean normal density function is completely specified by the variance parameter, and it is clear that unless σ̂ = σ, the probabilities (7.5) and (7.6) will be different for the two distributions over any non-trivial region. Further, for a given region (ζ, ξ) the magnitude of the difference between the probabilities monotonically increases with the magnitude of the difference between σ and σ̂. Thus in the one-dimensional case a comparison between the parameters σ and σ̂ is sufficient to indicate the quality of the sample distribution, and could form a quality measure.
Comparison of the variance parameter is a popular and useful means of comparing communication system performance. Comparisons between communication systems are usually expressed in terms of the argument of the Q-function. This argument is generally a function of the square root of the signal-to-noise ratio (SNR), and a comparison between two systems is made by comparing the SNR requirement for the two systems to achieve the same level of performance, expressing the ratio of the error function arguments in decibels (dB). Such a measure is meaningful over a wide range of SNR values, whereas any measure based on the value of the probability (7.6) directly would depend on the operating condition of the system.
For example, it is well-known that the probability of error for binary phase-shift-keying (BPSK) over an additive white Gaussian noise (AWGN) channel is given by

P_b = Q(√(2 E_b/N_0)),

where the quantity E_b/N_0 is the signal-to-noise power ratio in the binary case (see, for example, reference [62]), while the probability of error for orthogonal frequency-shift-keying (FSK) is given by

P_b = Q(√(E_b/N_0)).

In comparing the two systems, it is often said that the FSK system is "3 dB worse" than the BPSK system, because a 3 dB (or a factor of 2) larger signal-to-noise ratio is required in the FSK case to achieve equivalent performance to BPSK. This figure is termed the power margin, and we denote it by the letter G. In this example, G = 0.5,
Figure 7-1. Theoretical bit error rates for BPSK and coherent FSK digital transmission systems.
and satisfies

10 log_10(G) ≈ −3 dB.

We will say "the power margin of FSK over BPSK is −3 dB".
A graph of the bit error rate for the two systems is presented in Figure 7-1. The difference in the probability of error between the two systems varies widely (by orders of magnitude for large E_b/N_0) over the range of E_b/N_0 values; however, the 3 dB power margin between the two curves is constant over the entire range of operating conditions. Thus, regardless of whether we are evaluating performance for events of high probability or events of low probability, the 3 dB power margin figure accurately represents the performance difference between the two communication systems.
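This constancy of the power margin can be checked numerically from the two error-rate expressions: doubling the FSK signal-to-noise ratio (a 3 dB increase) recovers the BPSK performance at every operating point.

```python
# The constant 3 dB power margin between coherent BPSK and orthogonal FSK:
# the error probabilities differ by orders of magnitude across Eb/N0, but
# FSK at twice the SNR always matches BPSK, i.e. G = 0.5 at every SNR.
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pb_bpsk(snr):
    """Pb = Q(sqrt(2 Eb/N0)) for coherent BPSK on an AWGN channel."""
    return Q(math.sqrt(2.0 * snr))

def pb_fsk(snr):
    """Pb = Q(sqrt(Eb/N0)) for coherent orthogonal FSK."""
    return Q(math.sqrt(snr))

margin_db = 10.0 * math.log10(0.5)    # about -3.01 dB
for snr in [1.0, 4.0, 10.0]:
    # doubling the FSK SNR recovers the BPSK error probability exactly
    assert abs(pb_fsk(2.0 * snr) - pb_bpsk(snr)) < 1e-12
```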
We wish to use a similar comparison to compare the sample distribution from a random variate generator to a specified Gaussian distribution. In the one-dimensional case, the probability of a generated sample falling in a region (ζ, ∞) is Q(ζ/σ̂), while the theoretical probability of this event is Q(ζ/σ). Since

Q(ζ/σ̂) = Q(√G · ζ/σ),  (7.8)

where

G = σ^2/σ̂^2,  (7.9)

we say the nonideal samples have a power margin of (10 log_10 G) dB over ideal samples. The farther the power margin is from 0 dB, the worse the observed generator performance.
Two major benefits of this measure are that it is intuitive to designers of digital communication systems, and that it is constant (in this single-dimensional case) over the entire range of ζ. We will present a multivariate extension of this measure, applicable in the present case of correlated normal samples.
7.4 Defining the Multivariate Power Margin
The situation is not simple in the multivariate case. The integration of the probability density function corresponding to equation (7.5) or (7.6) is over an L-dimensional region, and the relation of the total probability mass in this region to the elements of the covariance matrix is not obvious. Clearly, if the covariance matrices of two experimental distributions are the same, the probability of a given event will be the same, but if the covariance matrices are different in two distributions, we must assess which distribution will more closely represent the correct system event probability.
The left side of the one-dimensional comparison (7.8) can be written as

Q(ζ/σ̂) = ∫_ζ^∞ (1/(√(2π) σ̂)) exp(−x^2/(2σ̂^2)) dx.

Similarly, the right side can be written as

Q(√G · ζ/σ) = ∫_ζ^∞ (√G/(√(2π) σ)) exp(−G x^2/(2σ^2)) dx.  (7.11)

Equating these expressions, it can be seen that the quantity G is given by the ratio of the arguments of the exponential functions in the two densities. That is,

G = (argument of exponential for simulator density) / (argument of exponential for theoretical density).  (7.12)

Thus, the power margin G is directly a comparison of the variance parameters for the two densities. In this one-dimensional case, the dummy variable of integration (x) cancels in the expression for G, indicating that the power margin does not depend on the region of integration. We will see this is not true in the multivariate case.
The probability of the vector X falling in a region R in L-dimensional space is

P{R, X} = ∫_R (1/((2π)^{L/2} |det C_X|^{1/2})) exp(−0.5 x^T C_X^{-1} x) dx.  (7.14)

The probability of the generated vector of samples x̂ falling in the same region R is

P{R, x̂} = ∫_R (1/((2π)^{L/2} |det C_x̂|^{1/2})) exp(−0.5 x^T C_x̂^{-1} x) dx.  (7.15)
If we change the region R in either expression (7.14) or expression (7.15), we will be able to make the two probabilities equal. For example, we can change the event represented by R in the first expression, such that the probability of the new event in the density f_x̂(x) is equal to the probability of the original event in the density f_X(x), i.e. the correct probability. Suppose in the probability expression (7.15) we multiply every vector in the region R by the scalar constant G^{−1/2}. We denote the region obtained from the region R by this scaling by R/√G. If the region R extends from some surface to infinity in every direction (analogous to the region used in Section 7.3), the value of the probability expression will monotonically decrease with increasing G^{−1/2}, as in the one-dimensional case.
We can therefore write that G must satisfy

P{R/√G, x̂} = P{R, X}.  (7.16)

Expanding the left-hand side of (7.16) and changing the vector of integration to u, we can write

P{R/√G, x̂} = ∫_{R/√G} (1/((2π)^{L/2} |det C_x̂|^{1/2})) exp(−0.5 u^T C_x̂^{-1} u) du.

We can make the substitution x_i = √G · u_i for u_i, the ith element of u, so that u = G^{−1/2} x and du_i = G^{−1/2} dx_i. Simplifying, we write the probability as

P{R/√G, x̂} = ∫_R (1/((2π)^{L/2} |det(G C_x̂)|^{1/2})) exp(−0.5 x^T [G C_x̂]^{-1} x) dx.  (7.17)
Comparing this to the one-dimensional example (7.12), we see the constant G is again given by a ratio of the arguments of the exponential functions in each probability density, here the ratio of the arguments in (7.15) and (7.14). Unlike the one-dimensional case, however, the variables of integration do not cancel in this ratio, meaning that the power margin G depends on the point in L-space at which the probability density function is evaluated, and hence on the region of integration R. We define the multidimensional power margin between the probability density functions of the two vectors x̂ and X, evaluated at a particular vector a, as

G_{x̂,X}(a) = (a^T C_x̂^{-1} a) / (a^T C_X^{-1} a).  (7.18)
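A small numerical sketch of (7.18), with two hypothetical 2 × 2 covariance matrices, makes the ratio of quadratic forms concrete: the margin is unchanged when a is scaled, but varies with the direction of a.

```python
# Direct evaluation of the multivariate power margin (7.18),
# G(a) = (a^T C_xhat^{-1} a) / (a^T C_X^{-1} a), for a 2x2 example.
# Both covariance matrices here are hypothetical.

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def quad(a, M):
    """Quadratic form a^T M a."""
    return sum(a[i] * M[i][j] * a[j] for i in range(2) for j in range(2))

def power_margin(a, C_X, C_xhat):
    return quad(a, inv2(C_xhat)) / quad(a, inv2(C_X))

C_X = [[1.0, 0.5], [0.5, 1.0]]        # reference covariance (hypothetical)
C_xhat = [[1.1, 0.4], [0.4, 1.1]]     # generator covariance (hypothetical)

g1 = power_margin([1.0, 0.0], C_X, C_xhat)
g2 = power_margin([3.0, 0.0], C_X, C_xhat)   # same direction, scaled: equal
g3 = power_margin([1.0, 1.0], C_X, C_xhat)   # different direction: differs
```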
7.5 Discussion of the Multivariate Power Margin
Each vector in the region corresponding to a particular event maps to a particular value in the continuum of power margins. The overall power margin applicable to the computation of this particular event probability will come from a weighted averaging, via the integration operation, of the power margin (7.18) at each point in the region. A quality measure must reduce the continuum of power margins arising from the evaluation of (7.18) at the infinite number of points in a typical region to a finite number of useful indicators.
A simple multivariate case is the one in which the L variates in each vector are mutually independent. Given this independence, comparison of the two multivariate distributions can be accomplished by comparison of the univariate probability density functions for each of the L independent variates. From the discussion of Section 7.3, the power margin is appropriate for a univariate comparison, and it is clear that in this case of L independent univariate comparisons the L power margins are appropriate measures. In most multivariate distributions of interest, however, the elements of the random vector in question will be correlated, and we must ask if the power margin can also provide an accurate assessment of variate quality in the correlated case.
For a geometric perspective on the problem, it will be useful to introduce the concept of the level curves of a probability density function [63]. A level curve is defined as the locus of points in the domain of the probability density function for which the density is equal to a constant value. Recalling the normal density function, the locus of points at which the quadratic form in the argument of the exponential, x^T C^{-1} x, is equal to a constant, say K, will be termed a K-level curve. The power margin represents the ratio of the distance from the origin to the K-level curve for one density function to the distance between the origin and the same level curve for the other density function, in the direction of the vector a. The ratio will not depend on the particular value of K under consideration, a fact which was observed in the univariate case in that the univariate power margin was applicable to both events of high probability and events of low probability. This is also true in the multivariate case, which can be observed in the expression for G_{x̂,X} in (7.18). Multiplication of the vector a by any nonzero scalar value will not change the value of G_{x̂,X}, and hence the multivariate power margin is independent of the density level under consideration. It was noted in Section 7.3 that the applicability of the power margin to the full range of event probabilities is a key feature of the power margin measure.
The ratio does, however, depend on the direction from the origin at which the power margin is considered. In the multivariate case, this dependence is evident in the power margin expression, since we have noted that the expression is independent of the magnitude of a and must therefore depend only on the direction of the vector a. In the univariate case, with only a single dimension in the domain of the probability density function, the power margin ratio is constant. The level curve in the multivariate case has the shape of an L-dimensional ellipsoidal shell centered on the origin [64][65], a shape which is completely defined by the directions and lengths of its L axes.
We again consider the case in which each multivariate distribution consists of L mutually independent variates. A given level curve then consists of an ellipsoidal region with axes collinear with the L basis vectors. Since each ellipsoidal region is defined by L axes, and these L axes are collinear for the two density functions, the comparison between the densities can be made by evaluating the power margin in the direction of each of the axes. The power margin in other directions is directly related to the L axis power margins, because each ellipsoidal region is completely defined by these axes. This method of evaluation is identical to the procedure suggested previously for comparing the same two distributions, since evaluating the power margin for each of the L independent variates is equivalent to evaluating the ratio of distances to the K-level curves in the direction of each basis vector. In fact, the squared distance from the origin to the unity-level curve is given by 1/σ^2, where σ^2 is the variance of the corresponding variate, and thus the ratio of reciprocal variances observed in (7.9) appears directly in the geometric model. It is clear that comparison in this manner does not require the axes of the level curve ellipsoids to be aligned with the basis vectors, but only that corresponding axes are collinear. This fact will be used to define the final quality measures in subsequent sections of this chapter.
The more general case is that in which corresponding axes of the level curve ellipsoids are not collinear. Scalar comparison at L points is not sufficient in this case. A comparison between the densities must take into account not only scaling of the L axes, but also the rotation of these axes. The distance between the level curves in an arbitrary direction does not have a simple relationship to the distances between the curves in L specific directions. Expressed differently, in the case of independent variates knowledge of the reference distribution and the set of independent power margins is sufficient to completely reconstruct the sample distribution, and hence the set of power margins provides a complete measure of the difference between the two distributions. In the correlated variate case, however, information about the relative positions of each of the L axes of one level curve with respect to those of the other level curve is also needed to completely reconstruct the sample distribution.
It is possible, however, to apply one transformation to both vectors of random variates such that a power margin comparison can be made. The transformation expresses each of the random vectors X and x̂ as a linear transformation of a vector of independent random variates. We can then appeal to the known utility of the univariate power margin to measure the quality of the transformed samples. From the geometric perspective, this transformation aligns corresponding axes of the level curve ellipsoids of the two distributions, allowing scalar comparisons of axis lengths to be made.
Having L numbers which accurately represent the comparison of the independent variates, we can then apply the reverse transformation (which is the same for both distributions) to obtain useful comparisons of the original variates. The information on the relative positions of the L axes of each level curve ellipsoid is contained in this transformation, and hence included in the measures.
The proposed measures, in a form which can be easily implemented, are defined in the following section, and justification for the use of the measures is presented following these definitions.
7.6 Definition of the Measures
We now proceed to define the specific quality measures proposed. We assume that we are given C_x̂, the L × L covariance matrix of a length-L sequence from the random variate generator, and C_X, the L × L covariance matrix of the reference L-variate distribution. Both the generated and the reference samples are assumed to have a jointly Gaussian distribution and zero mean.
Accurate and useful measures to assess the quality of the samples represented by the vector x̂ are given by three quantities. Under the assumption of a stationary reference distribution with variance σ_X^2, the measures can be expressed as follows. The mean basis power margin is defined by

G_mean = (1/(σ_X^2 L)) trace{C_X C_x̂^{-1} C_X}.  (7.19)

The maximum basis power margin is defined by

G_max = (1/σ_X^2) max{diag{C_X C_x̂^{-1} C_X}}.  (7.20)

Thirdly, the minimum basis power margin is defined by

G_min = (1/σ_X^2) min{diag{C_X C_x̂^{-1} C_X}}.  (7.21)
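The three measures can be sketched numerically for a small case; the 2 × 2 covariance matrices below are hypothetical, and the unit reference variance is an assumption of the example.

```python
# The summary measures (7.19)-(7.21) computed for a hypothetical 2x2 case:
# the L basis power margins are diag{C_X C_xhat^{-1} C_X} / sigma_X^2,
# and the measures are their mean, maximum, and minimum.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def basis_margins(C_X, C_xhat, var_X):
    """The L basis power margins diag{C_X C_xhat^{-1} C_X} / sigma_X^2."""
    M = matmul(matmul(C_X, inv2(C_xhat)), C_X)
    return [M[i][i] / var_X for i in range(2)]

C_X = [[1.0, 0.5], [0.5, 1.0]]        # reference covariance (hypothetical)
C_xhat = [[1.2, 0.5], [0.5, 1.2]]     # generator covariance (hypothetical)

margins = basis_margins(C_X, C_xhat, 1.0)
g_mean = sum(margins) / len(margins)
g_max = max(margins)
g_min = min(margins)

# A perfect generator (C_xhat == C_X) gives margins of exactly 1 (0 dB):
assert all(abs(m - 1.0) < 1e-12 for m in basis_margins(C_X, C_X, 1.0))
```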
We will now show that these measures do in fact provide a good indicator of the quality of the generated samples. The argument will proceed in three steps.
In Section 7.6.1 it will be shown that the vectors X and x̂ of correlated random variates can each be written as the same linear combination of the (different) random vectors Y and Ŷ, respectively, where the components of Y and Ŷ are independent. This transformation represents a change of basis for the domain of the probability density functions of X and x̂, and a congruence transformation of the respective covariance matrices.
Secondly, the power margins between the corresponding components of Ŷ and Y will be found in Section 7.6.2. Due to the independence of the components of each vector, the problem will reduce to L univariate comparisons. In the exact way that the univariate power margin represents the difference between two univariate density functions, each power margin will represent the difference between the probability density function of a particular component of Ŷ and the probability density function of the same component of Y. The power margin along each basis vector in the domain of the transformed density functions of Ŷ and Y will thus be found.
Finally, in Section 7.6.3 the congruence transformation will be applied to the matrix of power margins, to obtain power margin measures for each of the basis vectors of the original density function domain. It will be shown that in the case of a stationary sequence these L power margin measures are given by diag{C_X C_x̂^{-1} C_X}/σ_X^2, and three useful summary measures are the mean, maximum, and minimum of these L margins.
7.6.1 A Transformation to Represent the Components of X and x̂ as Linear Combinations of Independent Variates
The random vectors X and x̂ can be formed by a linear operation on independent Gaussian variates. Finding an expression for this transformation, as well as the distribution of each independent variate, will be accomplished in this section.
A linear transformation can be performed on the random vector X to form another random vector, say Y, by multiplying by a matrix, say T. That is, we form the vector

Y = TX,

where T is an L × L matrix and Y is an L × 1 vector. The components of the new random vector Y will have a joint Gaussian distribution, since they are the result of a linear operation on jointly Gaussian variates, and this joint distribution is completely described by the mean vector and covariance matrix of the transformed variates. This random vector Y will have mean vector

E{Y} = E{TX} = T E{X} = 0,

due to the linearity of the expectation operator and the fact that the vector X has zero mean. The covariance matrix of the vector Y is thus equal to the correlation matrix, given by

C_Y = E{Y Y^T} = E{T X X^T T^T} = T E{X X^T} T^T,

again due to the linearity of the expectation operator. The expectation E{X X^T} is the covariance matrix C_X, so the covariance matrix of the transformed vector Y is given simply by the relation

C_Y = T C_X T^T.  (7.22)

A transformation of the form T C_X T^T is known as a congruence transformation [64].
Now, the probability of the vector X falling in a certain L-dimensional region R is found by the integration of the probability density function (7.14) over this region. This probability is identical to the probability that the vector Y falls in an L-dimensional region R', where R' is the region given by the transformation of every vector in the original region R by the matrix T. Thus,

P{R, X} = P{R', Y}.

Since the probability of any event can be equivalently represented in the two forms, provided the region of integration is transformed, it is acceptable to make a comparison of the transformed probability density functions rather than the original expressions.
Let b represent the transformation of the vector a in equation (7.18) by the transformation T; that is,

b = Ta.  (7.24)

Substituting a = T^{-1}b in the quadratic form a^T C_X^{-1} a, we find the relation

a^T C_X^{-1} a = b^T C_Y^{-1} b,

where C_Y is given by (7.22). Denote by G'_{x̂,X}(b) the power margin of x̂ with respect to X expressed in the transformed system. For a vector b in the transformed region R', this quantity is given by

G'_{x̂,X}(b) = (b^T (T C_x̂ T^T)^{-1} b) / (b^T (T C_X T^T)^{-1} b).

Note that, provided the vector b is defined as in equation (7.24), the quantities G'_{x̂,X}(b) and G_{x̂,X}(a) are equal.
Let E_X be a matrix with an orthonormal set of eigenvectors of the matrix C_X as columns, and Λ_X be a diagonal matrix of the corresponding eigenvalues. Hence, we have

C_X = E_X Λ_X E_X^T.  (7.26)

Define a transformation matrix

T_1 = Λ_X^{-1/2} E_X^T.

This transformation is applied to the vectors X and x̂ to obtain two new random vectors

X' = T_1 X

and

x̂' = T_1 x̂.

The covariance matrix of X' can be obtained using (7.22) and (7.26) as

C_{X'} = T_1 C_X T_1^T = Λ_X^{-1/2} E_X^T (E_X Λ_X E_X^T) E_X Λ_X^{-1/2} = I,

since E_X^T E_X = E_X E_X^T = I, the L-dimensional identity matrix. The vector X' is thus a vector of zero-mean unit-variance uncorrelated Gaussian variates.
The components of the vector x̂' are in general correlated. The covariance matrix of x̂' can also be found using (7.22). We denote by E_{x̂'} the matrix containing the eigenvectors of C_{x̂'} as columns, and the diagonal matrix of the corresponding eigenvalues as Λ_{x̂'}. Defining a second transformation,

T_2 = E_{x̂'}^T,

we apply this to the vectors X' and x̂' to form

Y = T_2 X' = (T_2 T_1) X

and

Ŷ = T_2 x̂' = (T_2 T_1) x̂.

The covariance matrix of the random vector Y is therefore given by

C_Y = T_2 C_{X'} T_2^T = E_{x̂'}^T I E_{x̂'} = I,  (7.29)

and the covariance matrix of the vector Ŷ can be written as

C_Ŷ = T_2 C_{x̂'} T_2^T = E_{x̂'}^T (E_{x̂'} Λ_{x̂'} E_{x̂'}^T) E_{x̂'} = Λ_{x̂'}.  (7.30)

The covariance matrices of both Ŷ and Y are thus diagonal matrices, and since the components of each vector are uncorrelated and normally distributed, they are also independent.
The cascaded transformation will be denoted

T = T_2 T_1 = E_{x̂'}^T Λ_X^{-1/2} E_X^T.

The inverse of this transformation is given by

T^{-1} = E_X Λ_X^{1/2} E_{x̂'},

where we have used the fact that the inverses of the orthogonal matrices E_X and E_{x̂'} are given by the corresponding matrix transposes. Thus, we can write the vectors X and x̂ as

X = T^{-1} Y

and

x̂ = T^{-1} Ŷ,

where the components of Y are independent zero-mean unit-variance Gaussian random variables, and the components of Ŷ are independent zero-mean Gaussian random variables with variances given by the diagonal elements of Λ_{x̂'}. Each component of the original random vectors has been written as a linear combination of independent random variates, and the first step in finding the measures has been completed. The formation of X and x̂ from the vectors Y and Ŷ is illustrated in Figure 7-2a, and the resulting transformation of the covariance matrices is shown in Figure 7-2b.
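The two-stage transformation can be checked numerically. The sketch below uses NumPy eigendecompositions, and the two covariance matrices are hypothetical: applying T = T_2 T_1 should whiten the reference covariance to the identity and diagonalize the generator covariance.

```python
# Numerical check of the two-stage transformation of Section 7.6.1:
# T = T2 T1 maps X to Y with C_Y = I (7.29), and xhat to yhat with a
# diagonal C_yhat (7.30). Covariance matrices here are hypothetical.
import numpy as np

C_X = np.array([[1.0, 0.5], [0.5, 1.0]])      # reference covariance
C_xhat = np.array([[1.2, 0.4], [0.4, 1.1]])   # generator covariance

lam_X, E_X = np.linalg.eigh(C_X)              # C_X = E_X diag(lam_X) E_X^T
T1 = np.diag(lam_X ** -0.5) @ E_X.T           # whitens the reference

C_xhat_p = T1 @ C_xhat @ T1.T                 # covariance of xhat' = T1 xhat
lam_hat, E_hat = np.linalg.eigh(C_xhat_p)
T2 = E_hat.T
T = T2 @ T1                                   # cascaded transformation

C_Y = T @ C_X @ T.T                           # should be the identity (7.29)
C_Yhat = T @ C_xhat @ T.T                     # should be diag(lam_hat) (7.30)
```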
7.6.2 The Power Margins of the Independent Variates
The comparison of two univariate densities using the power margin measure was described in Section 7.3. The variance of the ith component of Ŷ is given by the ith diagonal element of C_Ŷ which, from (7.30), is the ith eigenvalue of the matrix C_{x̂'}. This variance will be denoted σ_{Ŷ,i}^2. The variance of the ith component of Y is σ_{Y,i}^2 = 1, from (7.29). Therefore, the power margin of Ŷ_i, the ith element of Ŷ, with respect to Y_i, the ith element of Y, is given, from equation (7.9), by

G_{Ŷ_i,Y_i} = σ_{Y,i}^2 / σ_{Ŷ,i}^2 = 1/λ̂_i,

where λ̂_i is the ith eigenvalue of C_{x̂'}. In other words, the power margin between the ith component of Ŷ and Y_i is given by the ith diagonal element of Λ_{x̂'}^{-1}.
The diagonal matrix containing the power margins of Ŷ with respect to Y will be denoted G_{Ŷ,Y}, where

G_{Ŷ,Y} = Λ_{x̂'}^{-1}.

The power margin along the ith basis vector in the domain of the probability density functions f_Ŷ(y) and f_Y(y) is thus given by the ith diagonal element of G_{Ŷ,Y}.

Figure 7-2. The use of the transformations of Section 7.6.1 to form power margin measures. (a) The observed and reference variates expressed as a linear operation on independent variates. (b) The effect of these linear operations on the covariance matrix of each distribution. (c) The application of the congruence transformation to the power margin matrix.
7.6.3 The Power Margins in the Original Basis
We have found a set of power margins which accurately represent the difference between the two sets of independent random variates Y and Ŷ. These random variates can be linearly combined to form the vectors of correlated variates X and x̂. This linear combination, given by the matrix T^{-1}, represents a change of basis for the domain of the probability density functions of Y and Ŷ. We have observed that the covariance matrices of the vectors X and x̂ are related to the covariance matrices C_Y and C_Ŷ by a congruence transformation, and we now apply this same transformation to the power margin matrix G_{Ŷ,Y} to obtain power margin measures with respect to the original basis, as illustrated in Figure 7-2c. The resulting matrix is given by

G_{x̂,X} = T^{-1} G_{Ŷ,Y} T^{-T} = E_X Λ_X^{1/2} C_{x̂'}^{-1} Λ_X^{1/2} E_X^T,  (7.33)

since C_{x̂'}^{-1} = E_{x̂'} Λ_{x̂'}^{-1} E_{x̂'}^T.
We wish to find a set of power margins summarizing the difference between the two probability density functions over each basis vector. In the same way that the principal diagonal of the covariance matrix directly gives the variance of each variate in a random vector, the principal diagonal of the matrix G_{x̂,X} gives an important summary power margin measure for each component in the random vectors under test. Hence, we will use the principal diagonal of G_{x̂,X} to form our measures.
The effect of the change of basis is that the elements of the matrix G_X̂,X are linear combinations of the power margins found in the matrix G_Ŷ,Y. It is necessary that the linear combinations of power margins represented by the diagonal elements of G_X̂,X be normalized to unity. The magnitude of the linear combination on each element of the principal diagonal is given by the corresponding element on the principal diagonal of C_X. Thus, the linear combination of the ith diagonal element can be normalized by dividing by the ith diagonal element of C_X. In the common case where X represents a stationary process, the diagonal elements of C_X are all equal to the variance σ_X². Hence, the power margin representing the ith random variable is given by the ith element of

(1/σ_X²) diag{G_X̂,X}.   (7.36)

Often the covariance matrices C_X and C_X̂ will be normalized to unit variance, representing multiplication of the random vector by a normalizing constant, and the power margins will then be found directly on the principal diagonal of G_X̂,X.
We can write the measures in the form of equations (7.19)-(7.21) using appropriate substitutions in the expression for G_X̂,X given in (7.33). We write

T^{-1} = E_X Λ_X^{1/2},

and thus

C_Ŷ^{-1} = Λ_X^{1/2} E_X^T C_X̂^{-1} E_X Λ_X^{1/2}.

Substituting in the expression for G_X̂,X,

G_X̂,X = E_X Λ_X E_X^T C_X̂^{-1} E_X Λ_X E_X^T = C_X C_X̂^{-1} C_X.
The diagonal elements of G_X̂,X represent the power margins along the basis vectors of the domain of the probability density functions f_X̂(x) and f_X(x), and thus the power margins for the components of X̂ and X. Hence the maximum power margin for any individual component of X̂ with respect to the corresponding component of X is given by

(1/σ_X²) max{diag{C_X C_X̂^{-1} C_X}},

earlier defined as G_max. The minimum power margin for any individual component is

(1/σ_X²) min{diag{C_X C_X̂^{-1} C_X}},

defined as G_min. Finally, the average of the individual component power margins is

(1/(σ_X² L)) trace{C_X C_X̂^{-1} C_X},

which has been defined as G_mean. These three measures, taken together, provide useful and accurate measures of the quality of the distribution of the generated variates X̂.

In the case where the test and reference distributions are equal, C_X̂ = C_X and diag{C_X C_X̂^{-1} C_X} = diag{C_X}. Since for a stationary process the elements on the principal diagonal of C_X are σ_X², the vector (7.36) is equal to a length-L ones vector, and the measures G_min, G_mean, and G_max all have value unity in the case of a perfect generator distribution.
Note that G_max and G_min do not strictly define the range of possible power margins. Rather, from well-known properties of the elements of G_X̂,X (which are in a form known as the Rayleigh quotient [64]), this range is given by the minimum and maximum values in the matrix G_Ŷ,Y. However, the range of the independent power margins given in G_Ŷ,Y does not represent a realistic assessment of the quality of the sample distribution, because we can only observe the variates X̂ and X, which are always linear combinations of the variates Ŷ and Y. These linear combinations will in general weight some elements of Ŷ and Y much more heavily than others. The particular linear combination yielding a particular power margin in G_Ŷ,Y is not necessarily representative of any power margin likely to be encountered in practice. It is the congruence transformation of the matrix G_Ŷ,Y by T^{-1} that forms measures which do represent realistic assessments of the quality of the original variates in X̂, since this transformation applies to the power margin matrix the same linear operation applied to the covariance matrices C_Ŷ and C_Y.
Expressed differently, the matrix G_Ŷ,Y is invariant to transformations of the form (2.2) and, from the discussion of Section 2.4.2, is not directly suitable as a quality measure. The matrix G_X̂,X, however, accounts for the fact that the observed variates result from possibly unequal weightings of the underlying independent variates, and thus the measures (7.19)-(7.21) are useful as quality measures.
The measures G_min, G_mean, and G_max are designed to provide an indication of the typical performance of a given set of random variates relative to a reference set of random variates. They do not provide an exact comparison for a particular application, and thus we would not expect the three measures to necessarily correspond to specific observable quantities in a system under test. In a particular application the probability of certain events may be equal for the reference and test sample vectors even when the generated variates are very poor, and, conversely, certain events might reveal large differences between even a good generated distribution and the reference distribution. In consideration of a set of typical events we may observe events for which the observed performance is better than indicated by the measures, and events for which the observed performance is worse than indicated by the measures. We expect, however, that the measures will in general accurately reflect the degree to which two vectors of variates differ.
In the case where the number of variates L is small, the probability of certain events can be computed directly for the multivariate Gaussian distributions by numerical integration, using, for example, the method of [66]. In order to demonstrate that the measures (7.19)-(7.21) yield results consistent with observed power margins in such a case, the probability of some events was computed for two different five-variate joint normal distributions. The distributions tested had the following properties. Both distributions had mean vector given by the zero vector. The covariance sequence for the first distribution was defined by samples of the zero-order Bessel function J_0(·). This function is repeated here as

r(τ) = J_0(2π φ_m τ),   (7.37)

where φ_m is the maximum Doppler frequency in Hertz.
The covariance sequence for the second distribution was given by samples of the autocorrelation function for a third-order Butterworth spectrum,

r(τ) = Σ_{l=1}^{3} sin((2l−1)π/6) exp{−2π φ_m τ sin((2l−1)π/6)},   (7.38)
over the same range of τ. The events considered were of the form

E = ε_1 ∩ ε_2 ∩ · · · ∩ ε_5,   (7.39)

where ε_l represents one of the two disjoint events |x_l| > a or |x_l| ≤ a, and a is a specified threshold level. For a given a, there are 2^5 = 32 events of the form (7.39). Events of this form are encountered in, for example, block coded systems using hard decision decoding. A five-bit repetition code is one example in which events of the form (7.39) are relevant. In such a coding scheme each source bit is transmitted identically five times and the majority of the five received bits determines the receiver output.
The probability of each event was computed for several threshold points and the power margin for each event was found by comparing the computed probabilities for the case of autocorrelation (7.37) and the case of autocorrelation (7.38) at a probability of 10^{-4}. The quality measures G_min, G_mean, and G_max were also computed. Three cases were considered, each sampling the autocorrelation functions ((7.37) and (7.38)) at different intervals. The results of these tests are presented in Figure 7-3.
The plots show the number of states, out of the total 32, having an observed power margin within intervals of width 0.05 dB. Figure 7-3a shows these results when the spacing between adjacent samples of the correlation function satisfies φ_m τ = 0.3. The computed quality measures for this case were G_min = 0.74 dB, G_mean = 0.76 dB, and G_max = 0.83 dB. The estimated power margins for sample spacing satisfying φ_m τ = 0.35 are shown in Figure 7-3b, with corresponding computed quality measures G_min = 0.58 dB, G_mean = 0.81 dB, and G_max = 0.89 dB. Figure 7-3c gives the observed power margins for φ_m τ = 0.4 spacing, and the computed measures in this case were G_min = 0.46 dB, G_mean = 0.68 dB, and G_max = 0.79 dB.
In all three cases it is observed that the computed measures G_mean and G_max are close in value; the maximum difference between the measures was 0.11 dB. It is also observed that the measures provide good representation of the observed differences in the calculated probabilities. In Figure 7-3a, the largest power margins are clustered within 0.3 dB of G_max, with the exception of one outlier value. Again in Figure 7-3b, the largest power margins are clustered around the values of the measures G_mean and G_max, and in Figure 7-3c all but one of the largest power margins is located within
Figure 7-3. Histograms showing the number of states having observed power margin within the indicated 0.05 dB intervals. The spacing between correlation function samples satisfies (a) φ_m τ = 0.3, (b) φ_m τ = 0.35, and (c) φ_m τ = 0.4.
0.1 dB of G_max. The measures are thus observed to indicate well the largest, and therefore most significant, power margins observed for this set of typical events.
7.8 A Note on the Computation of C_X̂^{-1}
It should be noted that in some cases the computation of C_X̂^{-1} cannot be performed directly. This is due to the fact that in some cases C_X is nearly singular, or equivalently one or more eigenvalues have value near zero. The inverse of a nearly singular (or poorly conditioned) matrix, while existing in theory, cannot be reliably computed in finite-precision arithmetic. The poor conditioning of the matrix C_X occurs in some cases where the system samples are closely spaced, and therefore highly dependent. Poor conditioning is observed when the theoretical power spectrum is sharply bandlimited to less than half the sampling frequency, such as the spectrum corresponding to (7.37) for f_m ≪ 0.5. For less sharply bandlimited spectra, such as the Butterworth spectrum of (7.38), the matrix C_X is such that the inverse can be computed directly.
The measures (7.19)-(7.21) do not require the inverse C_X^{-1} to be taken, but rather the inverse of the test matrix C_X̂. In sharply bandlimited cases, the matrix C_X̂ typically differs somewhat from the theoretical matrix C_X, due to difficulties in producing variates with such spectra, and C_X̂^{-1} is computable. Hence, unless C_X is poorly conditioned and the matrix C_X̂ is closely equal to the matrix C_X, the matrix C_X̂^{-1} can usually be found directly. Thus, the computation of C_X̂^{-1} could be performed for the Butterworth autocorrelation (7.38), and all empirically-determined covariance matrices considered in Chapter 8. The inverse matrix could also be computed for FIR filters defined as in Chapter 6, for any practical filter length (verified for lengths up to L_f = 131071). It was only for the theoretically-determined covariance matrix of the IDFT method that it was necessary to approximate the inverse of C_X̂. The procedure for this will now be explained, and it will be observed that the error due to this approximation is small.
An L × L matrix C can be decomposed using the singular value decomposition [49][64] as

C = U D V^T.

The L × L matrices U and V each have mutually orthonormal columns, and the matrix D is an L × L diagonal matrix of non-negative elements, D = diag{d^(1), d^(2), …, d^(L)}, where

d^(1) ≥ d^(2) ≥ · · · ≥ d^(L) ≥ 0   (7.40)

are known as the singular values of C. The elements of C are thus given by [49, equation 2.6.13]

c_{ij} = Σ_{k=1}^{r} u_{ik} d^(k) v_{jk}   (7.41)
Figure 7-4. The singular values of the theoretical covariance matrix C_X in the case of J_0(·) autocorrelation (5.3) with normalized maximum Doppler f_m = 0.05.
for r = L. The matrix C can be approximated by taking fewer terms in the sum. That is, the terms weighted by the L − r smallest singular values are discarded. If the singular value d^(r+1) is much smaller than d^(r), the approximation will be very good. Plotted in Figure 7-4 are the singular values of the covariance matrix when {r[k]} is given by samples of the J_0(·) function (5.3). A correlation sequence length of 200 was used for the figure, with f_m = 0.05. Clearly the matrix C_X is well approximated using the first, say, 35 singular values. The location of the "shelf" in Figure 7-4 is dependent on the normalized maximum Doppler frequency f_m, given by the product of the analog maximum Doppler frequency and the system sampling period. The value of f_m = 0.05 represents a situation where the samples are closely spaced, and therefore highly dependent (correlated). In such a case several of the singular values are near zero. When the samples are widely spaced, they are less dependent (correlated) and there are fewer singular values near zero. For example, near-zero singular values do not occur for f_m ≈ 0.5.
Consider the singular value decompositions of C_X and C_X̂,

C_X = U_X D_X V_X^T,  C_X̂ = U_X̂ D_X̂ V_X̂^T.

Expressing the matrix G_X̂,X in terms of these decompositions, and assuming without loss of generality that σ_X² = 1, we obtain, from (7.33),

G_X̂,X = U_X D_X V_X^T V_X̂ D_X̂^{-1} U_X̂^T U_X D_X V_X^T.   (7.42)

Now the matrix C_X̂ is observed to have poor conditioning when it is approximately equal to a poorly conditioned C_X. In this case, U_X ≈ U_X̂ and V_X ≈ V_X̂. Due to the orthogonality of each of the matrices U_X, U_X̂, V_X, and V_X̂, we can write V_X^T V_X̂ ≈ I and U_X̂^T U_X ≈ I. Substituting these identity matrices in the expression (7.42), we obtain

G_X̂,X ≈ U_X D_X D_X̂^{-1} D_X V_X^T,   (7.43)

and the ith diagonal element of this matrix is

[G_X̂,X]_ii ≈ Σ_{k=1}^{r} u_{ik} ( (d_X^(k))² / d_X̂^(k) ) v_{ik}   (7.44)

with r = L. Examining the weight of the kth term,

(d_X^(k))² / d_X̂^(k),   (7.45)

we observe that if d_X^(k) and d_X̂^(k) are of similar order of magnitude and very close to zero, the ratio (7.45) will also be very close to zero, and well approximated by exactly zero. Hence, if V_X̂^T V_X = I and U_X̂^T U_X = I, a proper choice of r < L in (7.44) will not result in significant error in the computed measures.
not result in significant error in the cornputed measures.
In practice neither V_X̂^T V_X nor U_X̂^T U_X will be exactly equal to the identity matrix. However, for C_X̂ approximately equal to C_X, significant non-zero components in the matrices V_X̂^T V_X and U_X̂^T U_X will typically be on the principal diagonal or subdiagonals very close to the principal diagonal. In addition to weighting factors of the form (7.45), factors of the form

d_X^(k) d_X^(k±κ) / d_X̂^(k)   (7.46)

will arise, where κ is a small integer. Due to the ordering (7.40), over the range of near-zero singular values we expect that d_X^(k±κ) is of the same order of magnitude as d_X^(k). The quantity (7.46), like (7.45), is well approximated by zero for k greater than an appropriate choice of r.
Thus, in cases where C_X̂^{-1} cannot be computed directly and C_X̂ ≈ C_X, the measures can be computed using the pseudoinverse of C_X̂. That is, we may use the approximation

C_X̂^{-1} ≈ V'_X̂ (D'_X̂)^{-1} (U'_X̂)^T,   (7.47)

where V'_X̂ is given by the first r columns of V_X̂, U'_X̂ is given by the first r columns of U_X̂, and D'_X̂ is given as the r × r submatrix of D_X̂ containing the r largest singular values. This approximation does not significantly affect the value of the computed measures, which has been verified with the covariance matrices considered in Section 7.7 and Chapter 8. Computing the measures for various r, and where possible r = L, confirms that C_X̂^{-1} can be safely approximated by the pseudoinverse. A good choice for r was found to satisfy a threshold on the ratio of each retained singular value to the largest singular value.
It should be stressed that use of the pseudoinverse is seldom necessary. The J_0(·) autocorrelation function represents a severely bandlimited case, and only rarely was the theoretically-computed covariance matrix close enough to the ideal matrix to necessitate using the pseudoinverse. In most cases, the inverse C_X̂^{-1} can be computed directly.
7.9 Summary
Quantitative quality measures for the output of random variate generation algorithms have been presented, which provide a meaningful and intuitive assessment of variate quality in communication system simulation applications. Chapter 8 will present the application of these measures to the output of Rayleigh variate generation algorithms.
Chapter 8
A Quantitative Evaluation of Output Sample Sequences from Rayleigh Fading Simulator Routines
8.1 Introduction
In Chapter 7, measures for quantitatively evaluating samples with a joint Gaussian distribution, such as the complex Gaussian samples obtained from a Rayleigh fading simulator, were presented. We will now apply this test to three methods of generating correlated Rayleigh variates. The quality of output samples will be evaluated for the IDFT approach presented in Chapter 4, the FIR filtering approach presented in Chapter 6, and an approach based on superposition of complex sinusoids, outlined in Section 8.2.3. The computational effort required to generate samples using each routine will also be compared.
8.2 The Routines Under Consideration
We now describe the implementation of each variate generation method used in the
tests.
8.2.1 The IDFT Method
The IDFT method presented in Chapter 4 was implemented in both the C programming language [68] and as a MATLAB [69] function. The C program code is given in Appendix A, and the MATLAB code in Appendix B. Single-precision floating point storage was used in the C version of the algorithm, resulting in half the memory use of the MATLAB version, which is restricted to double-precision storage. There was no measurable improvement in the quality of the samples when double-precision data storage was used.

The C language code uses Numerical Recipes in C [49] routines for the IFFT operation (four1.c) and generation of independent Gaussian variates (gasdev.c and ran1.c). This code, compiled using the GNU gcc compiler, was used on an UltraSPARC machine to obtain random samples for empirical testing and to provide time comparisons.

In the MATLAB version of the code, the standard library routines fft and randn are used for the FFT operation and the generation of independent normal variates, respectively.
8.2.2 The FIR Filtering Method
FIR filtering can be performed either directly in the time domain, or by using DFT and IDFT operations. Both methods were implemented for comparison purposes.
FIR Filtering by Convolution in Time
FIR filtering in time is accomplished by direct realization of the convolution sum

y[n] = Σ_{k=0}^{L_f−1} h[k] x[n−k].

The direct approach requires storage of the L_f filter coefficients and the L_f − 1 previous inputs {x[l]}, l = n − L_f + 1, …, n − 1. Computation of each element y[n] requires L_f multiplications and (L_f − 1) additions. The structure can be made more efficient by exploitation of the symmetry of the impulse response (6.12) [48]. For L_f odd the convolution can be expanded about the centre coefficient. Since h[k] = h[L_f − 1 − k], we can write

y[n] = h[(L_f−1)/2] x[n − (L_f−1)/2] + Σ_{k=0}^{(L_f−3)/2} h[k] ( x[n−k] + x[n − (L_f−1−k)] ).   (8.1)

This final expression requires storage of L_f − 1 previous inputs, as before, but now only (L_f + 1)/2 filter coefficients must be stored. The number of multiplications is reduced from L_f to (L_f + 1)/2. The direct form based on equation (8.1) was implemented in C for the time comparisons.
FIR Filtering Using the FFT
The linear convolution of two sequences can be performed using the DFT by padding each time sequence with a sufficient number of zeros before the DFT is taken [48]. The required length of each sequence after the zero padding is [48]

L_DFT ≥ L + L_f − 1,

where L is the number of data samples. The DFT is normally performed at the next highest power of two, for maximum computational efficiency using the FFT. After multiplication of the DFT of each sequence, and an inverse DFT operation, the samples corrupted by aliasing are discarded and the remaining L_DFT − (L_f − 1) samples are taken.
To obtain the output sequence, three calls to the FFT routine are required: one to transform the zero-padded filter sequence, a second to transform the zero-padded data sequence of white Gaussian variates, and a third to inverse transform the product of the two DFTs. Computing this product requires an additional L_DFT complex multiply operations. In the test code, the DFTs were performed using a real-sequence FFT algorithm [49].
In order to use the most efficient FFT algorithms, the number of samples produced using the IDFT method of Chapter 4 is always a power of two. In comparison, the number of useful output samples formed using FIR filtering via the DFT is less than a power of two due to the removal of aliased samples from the output sequence. When performing time comparisons between the two methods the same size FFT was used in both cases, and the computation time scaled by L_DFT/[L_DFT − (L_f − 1)] such that the comparison represented computation time for an equal number of samples.
In the case of long data sequences, the FFT operations can be performed on smaller blocks of data using one of two methods known as the overlap-save method and the overlap-add method [47],[48]. The overlap-add algorithm is provided as the function fftfilt.m in the MATLAB Signal Processing Toolbox, which computes the most efficient block length. This function was used in the comparisons of Section 8.4.
8.2.3 A Method Based on Superposition of Sinusoids
Reference [22] uses a random sum-of-sinusoids approach to model the flat fading Rayleigh channel¹. A total of N_e sinusoids, called echos, are used. The instantaneous channel impulse response can be written as [22, equation 4]

c(τ, t) = lim_{N_e→∞} (1/√N_e) Σ_{n=1}^{N_e} e^{jθ_n} e^{j2π f_D,n t} δ(τ − τ_n),
where θ_n is the null-phase for each echo, τ_n is the delay for each echo, and f_D,n is the Doppler frequency for each echo. Each of {θ_n}, {τ_n}, and {f_D,n} are sequences of random numbers. In the present case of flat Rayleigh fading, τ_n = 0 for all n, and the effect of this channel impulse response is to multiply the transmitted signal by the sum of all echos.

¹The contribution of [22] is a discrete-time frequency-selective fading channel model that combines the effects of transmitter filtering, the physical channel, receiver filtering, and sampling. The random variates needed in the model are provided by the method to be described, the origin of which is cited as reference [23].
When implementing the method, N_e must be a finite number of sinusoids. The multipath fading channel response is thus approximated in the flat fading case by

ĉ(t) = (1/√N_e) Σ_{n=1}^{N_e} e^{j(θ_n + 2π f_D,n t)}.   (8.2)

By the Central Limit Theorem, as the number of terms becomes large the sum (8.2) approaches a complex Gaussian random process. It is shown in [22] that in the limit as N_e → ∞, if θ_n is uniformly distributed over [0, 2π) and

f_D,n = φ_m cos(2π u_n),   (8.3)

where u_n is uniformly distributed over [0, 1), the fading process approaches that represented by the power spectrum (5.1) and J_0(·) autocorrelation (5.2). (The maximum analog Doppler frequency is denoted φ_m here, to be consistent with earlier notation.)
To implement the process (8.2) on a digital computer, we substitute t = n/φ_s, where φ_s is the sampling frequency. Equivalently, we can substitute the normalized Doppler f_m for φ_m in (8.3) and n for t in (8.2). The random generator is designed to be initialized once at the beginning of the simulation run, then run without further random inputs. Hoeher suggests in [22] that for small N_e the generator be reinitialized "from time to time, because this improves the statistic". Doing this concatenates many realizations to form a much longer signal record. The statistic is improved, however, because time averages are now taken over many realizations of the fading process. The time autocorrelation function of any given realization of the random process can be poor, but when this average is taken over many realizations of the process, the observed time autocorrelation is much improved. Hence, in practice the ergodicity assumption for a given realization does not hold, since once initialization is performed, the time average is not necessarily close to the corresponding ensemble average even for large sequence lengths. Furthermore, each time a new realization is computed, a discontinuity in the received signal waveform exists. Closely spaced samples spanning the discontinuity are completely uncorrelated. In many simulation situations, ergodicity is an important requirement, since the input to a simulation routine must necessarily be a sequence of time samples, and discontinuities in the signal record are very undesirable, resulting in high-frequency spectral components, for example. Hence, we compute the quality measure over a single realization of the fading process, which will indicate the quality of the approximating probability density function for a single realization. The average quality of several realizations was taken for quality comparison purposes.
8.3 Comparison of the Routines Based on Execution Time
In order to assess the relative computational effort to generate samples using different methods, sequences of length 2^16 = 65536 samples and of length 2^21 = 2097152 samples were generated on an UltraSPARC machine. (Slightly fewer samples were generated in the case of FIR filtering using the DFT, as explained in Section 8.2.2.) The normalized maximum Doppler frequency was f_m = 0.05 in each case.

Results for the time comparisons are presented in Figures 8-1 and 8-2. The modified IDFT method of Chapter 4 is clearly superior in this regard. The direct FIR filtering method can be performed quickly only for very short filter lengths, and is observed to be very inefficient for long filter lengths. Performing FIR filtering using the DFT is seen to be much more efficient than the direct FIR method at long filter lengths, but roughly three times more time is needed to generate the samples in comparison to the IDFT approach. The sum of sinusoids approach was also seen to require much more effort than the IDFT method, for even a small number of echos.
8.4 Comparison Based on Floating-Point Operations

The number of floating-point operations per sample to obtain output sequences in MATLAB was also determined for each routine, as a function of sequence length. In all tests using the IDFT, the number of points was restricted to a power of two.
Figure 8-1. The time to generate 2^16 complex samples using different generation methods (modified IDFT method, direct FIR filtering, FIR filtering using the FFT, and sum of sinusoids).
Even with this restriction, the number of floating-point operations per sample is not constant with the IDFT method, due to the O(N log N) time complexity of the FFT operation. However, it will be observed that the inherent efficiency of the FFT outweighs the effect of the log N factor for practical sequence lengths.
Direct FIR filtering was performed using the MATLAB function filter. Figure 8-3 shows the floating-point operations per sample using this direct FIR filtering routine together with the floating-point operations per sample using the IDFT routine. It is observed that the IDFT uses much fewer floating-point operations to generate the samples than even relatively short FIR filters producing inferior correlation properties. The number of operations per sample to perform the direct FIR filtering method is constant, with additional operations required to generate the filter coefficients which increase the overall operations per sample for short output sequences.
Figure 8-4 shows the number of floating-point operations per sample for the FIR method using the overlap-add approach implemented in the MATLAB function fftfilt.m. These results are plotted with those for the IDFT method, and once again the IDFT method is observed to use fewer floating-point operations per sample. The number of operations per sample is nearly constant for large N, due to the fixed FFT length in this method, but once again the increase in operations per sample with N for the IDFT method is seen to have minimal effect on the comparison for practical sequence lengths.
Finally, Figure 8-5 plots the comparison for the sum-of-sinusoids method. The
Figure 8-3. The number of floating-point operations to generate samples using the direct FIR filtering method, plotted with the number of floating-point operations to generate samples using the IFFT method, as a function of the number of samples generated. The number of points in the IFFT method case is always a power of two.

Figure 8-4. The number of floating-point operations to generate samples using the overlap-add FIR filtering method via fftfilt.m, plotted with the number of floating-point operations to generate samples using the IFFT method, as a function of the number of samples generated. The number of points in the IFFT method case is always a power of two.

Figure 8-5. The number of floating-point operations to generate samples using the SOS method, plotted with the number of floating-point operations to generate samples using the IFFT method, as a function of the number of samples generated. The number of points in the IFFT method case is always a power of two.
number of floating-point operations per sample for the sum-of-sinusoids method is essentially constant, since initialization of this routine is not computationally significant. The IFFT method is again observed to require fewer operations, even for small N.
8.5 Comparison Based on Quality Measures

The three methods of generating correlated random variates were compared using the quality measures of Chapter 7, and these results are presented in Table 8.1. Perfect performance corresponds to zero dB for all three measures. The reference autocorrelation function for each case is given by equation (5.3). An autocorrelation sequence length of 200 was considered, at a normalized maximum Doppler of f_m = 0.05. In the case of the IDFT and FIR methods, both a theoretical autocorrelation function and an empirically-determined autocorrelation function were tested. The theoretical functions were obtained using (3.31) and (6.12) respectively. For the sum of sinusoids method, only an empirical autocorrelation function was used, since the theoretical function for the method used in [22] is not given for a particular realization with a finite number of sinusoids. Empirical correlations were found using the method of [49] on 2^20 generated samples. Quality measure results using empirical correlations did not differ greatly from the corresponding theoretical correlation results, as can be observed in Table 8.1.
In this comparison, the IDFT method stands out as being clearly superior to
Table 8.1. A comparison between the IFFT method, a sum-of-sinusoids method, and an FIR filtering method using the developed quality measures G_min, G_mean, and G_max, for covariance sequence length 200. The configurations tested were the IFFT method (theoretical and empirical correlations), FIR filtering with lengths 31, 127, 1023, and 4095 (theoretical and empirical correlations), and the sum of sinusoids method with 16, 64, and 256 sinusoids (empirical correlations).
the competing methods, closely matching the reference probability density function over the 200 sample interval. Good quality can be achieved with the other two methods, but long FIR filter lengths and large numbers of sinusoids were observed to be necessary for these correlation properties to be obtained.
The quality measures were also computed for the FIR filtering method as a function of filter length. The results for G_mean are presented in Figure 8-6. For this example the measure G_min was observed to fall roughly five to ten percent below G_mean in dB, and the measure G_max was observed to fall five to ten percent above G_mean in dB. The oscillations observed in Figure 8-6 are due to better performance being observed when the filter coefficients at either end of the filter sequence were near a zero of the infinite-length response function (6.11) than when these coefficients were located near a local minimum or local maximum of this function.
Given the computed quality of a given generator, the FIR filter length resulting in equivalent quality can be found from Figure 8-6. Since the FIR filter length controls both the quality of the approximation and the computational effort, a comparison of a competing generator using the filter length parameter can also be used to assess the merits of novel generation methods.
8.6 Conclusions from the Comparisons
Considering both computational effort and the quality of the generated samples, the
IDFT method of Chapter 4 clearly stands out as being superior to the other tested
Figure 8-6. The computed quality measure G_mean in dB as a function of filter length L_f for the FIR filtering method. A theoretically-determined autocorrelation sequence length of L = 200 was used.
methods. Xccurate correlation between the generated sarnples is produced over a
wide range of sample lags, and the time requirement to generate the sarnples is less
than the ot her rnethods considered.
One advantage of both the direct FIR filtering method and the sum-of-sinusoids
method is that samples can be generated as they are needed. In contrast, the IDFT
method requires that all samples be generated using a single FFT operation. Clearly,
however, the reduced storage requirements of the former two methods come at the
expense of overall computational effort and/or variate quality. The memory available
on modern workstations and personal computers allows quick generation of a large
number of high-quality samples using the IDFT method. Also, interpolation can
be used in many practical cases (see [26],[27],[28]) to reduce the required number of
generated channel samples and hence the IFFT size.
FIR filtering using the FFT requires a similar amount of memory to the IDFT
method in the case that full-length FFTs are used, but with an overlap-add or
overlap-save method the filtering can be performed using multiple FFTs of smaller
size, and thus samples are generated in smaller batches. However, Figure 8-1 shows
that the overall computational effort is greater with this FIR method than the IDFT
method. Implementation of overlap-add or overlap-save methods also requires more
programming effort than the other routines considered here.
In summary, the comparisons show the IDFT method to be the most efficient and
highest quality method among the tested approaches to correlated Rayleigh variate
generation.
Chapter 9
Design of wireless communications systems and the components of wireless commu-
nication systems often involves simulation of a multipath fading channel on a digital
computer. A common assumption is that the received signal from such a channel has a
Rayleigh amplitude distribution and a uniform phase distribution. This received sig-
nal is also often assumed to result from isotropic scattering with a vertical monopole
antenna. This implies that the simulated signal must exhibit certain correlation prop-
erties. Thus, there is a need for efficient and statistically accurate generation of these
correlated Rayleigh variates on a digital computer.
One algorithm for generation of these variates which has received widespread use is
that of Smith [1], which uses two calls to an inverse Fast Fourier Transform routine in
the sample generation procedure. The output from the Smith routine has been shown
analytically to be statistically sound in Chapter 3 of this thesis, a result not given
in the original paper. The joint probability density function of the output samples,
including autocorrelation and ergodicity properties, was presented. The specific filter
used by Smith, and possible improvements to it, were discussed in Chapter 5.

A key contribution of this thesis, presented in Chapter 4, is an improvement to the
Smith algorithm which allows the generation of statistically identical channel samples
with substantially less computer execution time and memory.
An alternate method for the generation of correlated Rayleigh variates, that of FIR
filtering a white Gaussian noise sequence, was presented in Chapter 6. This method
can produce autocorrelation properties arbitrarily close to theoretical values; unfortu-
nately, the computational requirements for accurate correlation are impractical, and
the IDFT-based method was determined to be a better choice.
The comparison and design of techniques for generating correlated variates based
on a multivariate Gaussian distribution (such as Rayleigh variates) is complicated by
the lack of a quantitative measure of distribution accuracy. Quantitative measures
for evaluation of computer-generated random variates were developed in Chapter 7
for this purpose. These measures were designed to be particularly useful in com-
munication system simulation applications. The measures were applied in Chapter
8 to the IDFT method, the FIR filtering method, and a superposition-of-sinusoids
method. The IDFT method was seen to compare very favorably with other methods
of generating correlated Rayleigh variates.
The topics presented in this thesis suggest areas which could be the subject of
future investigations. The IDFT method is applicable to any correlation function
that is well approximated by a function with a strictly positive DFT; designs and
analyses of filters to produce other common autocorrelation functions would be useful
results. It would also be useful to quantitatively investigate the use of interpolation
between channel samples in terms of reduction in computation and effect on statistical
accuracy.
Appendix A
Program Code in C for Generation of Correlated Rayleigh Random Variates by Inverse Discrete Fourier Transform
#include <time.h>
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#define pi 3.14159265358979

/* Declarations for Numerical Recipes functions */
void four1(float data[], unsigned long nn, int isign);
float gasdev(long *idum);

void rayfad(unsigned int nopoints, float relfdmax, float *data)
/* nopoints is the desired number of channel samples */
/* relfdmax is the max Doppler frequency divided by the sample rate */
/* *data is a pointer to an array data[0..2*nopoints-1] */
{
    float fdn;    /* Exact max Doppler frequency on [0,nopoints-1] scale */
    int im;       /* Index of maximum nonzero positive-frequency coefficient */
    long idum;    /* Used by uniform random number generator */
    float *f;     /* Pointer to filter array */
    float sum;    /* Sum of filter coefficients, used to normalize filter */
    int i;        /* Counter */

    /* Seed random number generator */
    idum = -(long)time(NULL);

    /* Find number of nonzero frequency coefficients */
    fdn = nopoints * relfdmax;
    im = floor(fdn);

    /* Allocate filter array */
    if ((f = (float *) malloc((unsigned int)(sizeof(float)*(im+1)))) == NULL) {
        printf("Malloc could not allocate filter array in rayfad");
        exit(1);
    }

    /* Form filter */
    sum = 0.0;
    *f = 0.0;
    for (i=1; i<im; i++) {
        sum += pow( *(f+i) = 1.0 / sqrt( sqrt( 1.0 - pow((i/fdn),2.0) ) ), 2.0 );
    }

    /* Final coefficient, from integrating the residual spectrum
       (cf. the MATLAB version in Appendix B) */
    sum += pow( *(f+im) = sqrt( im*(pi/2.0 - atan((im-1)/sqrt(2.0*im-1.0))) ), 2.0 );

    /* Normalize filter so variance of real and imaginary output is 0.5 */
    sum = sqrt(0.5/(2.0*sum));
    for (i=1; i<=im; i++) {
        *(f+i) *= sum;
    }

    /* Multiply filter times data */
    for (i=0; i<=im; i++) {
        *(data+2*i)   =  ( *(f+i) ) * gasdev(&idum);
        *(data+2*i+1) = -( *(f+i) ) * gasdev(&idum);
    }
    for (i=im+1; i<nopoints-im; i++) {
        *(data+2*i)   = 0.0;
        *(data+2*i+1) = 0.0;
    }
    for (i=nopoints-im; i<nopoints; i++) {
        *(data+2*i)   = ( *(f+nopoints-i) ) * gasdev(&idum);
        *(data+2*i+1) = ( *(f+nopoints-i) ) * gasdev(&idum);
    }

    /* Numerical Recipes in C inverse FFT */
    /* Call using (data-1) because NR uses first array index of unity */
    four1(data-1, nopoints, -1);

    /* Free filter array */
    free(f);

    /* Complex Gaussian process is returned in data[0..2*nopoints-1].
       Even indices contain real part; odd indices contain imaginary part */

    /* For Rayleigh envelope output, uncomment the following.
       Output will be in data[0..nopoints-1]. */
    /*
    for (i=0; i<nopoints; i++) {
        *(data+i) = sqrt( pow(*(data+2*i),2.0) + pow(*(data+2*i+1),2.0) );
    }
    */
} /* end of rayfad */

/* Example program to call rayfad */
void main()
{
    float *data;           /* Pointer to data array */
    unsigned int i;        /* Counter */
    unsigned int nopoints; /* Number of generated samples */
    float fd;              /* Doppler frequency per sample */

    /* Number of samples must be a power of two */
    nopoints = pow(2.0,16.0);

    /* Allocate data array. This must have 2*nopoints elements. */
    if ((data = (float *) malloc((unsigned int)(2*nopoints*sizeof(float)))) == NULL) {
        printf("Malloc could not allocate data array");
        exit(1);
    }

    /* fd is Doppler frequency in Hz divided by sampling frequency in Hz */
    fd = 0.05;

    /* Generate fading samples */
    rayfad(nopoints, fd, data);

    /* Redirect the output to a file; the file can then be loaded into
       MATLAB as an nopoints x 2 array, with real part in the first column
       and imaginary part in the second column */
    for (i=0; i<nopoints; i++) {
        printf("%e %e\n", *(data+2*i), *(data+2*i+1));
    }

    /* Free data array */
    free(data);
}
Appendix B
Program Code in MATLAB for Generation of Correlated Rayleigh Random Variates by Inverse Discrete Fourier Transform
function x=rayfad(nopoints,relfdmax)
% x=rayfad(nopoints,relfdmax)
% nopoints is number of channel samples to generate
% relfdmax is maximum Doppler frequency divided by sample rate

% fdn is exact max Doppler frequency on [0:nopoints-1] scale
fdn=nopoints*relfdmax;

% im is index of maximum nonzero positive-frequency coefficient
im=floor(fdn);

% Form filter
f=1./sqrt(sqrt(1-((0:im-1)./fdn).^2));
f(im+1)=sqrt(im*(pi/2-atan((im-1)/sqrt(2*im-1))));
f=[0 f(2:length(f)) zeros(1,(nopoints-length(f)))];
f((nopoints/2+2):nopoints)=f((nopoints/2):-1:2);

% Normalize filter so variance of real or imaginary output (Gaussian)
% is 0.5
f=f*sqrt(0.5/sum(f));

% Multiply filter and data
x=[f' -f'].*randn(nopoints,2);
x=x(:,1)+j*x(:,2);

% Take inverse FFT
x=ifft(x,nopoints);

% Output vector x is complex Gaussian process. For envelope output,
% uncomment the following line
% x=abs(x);

% end of rayfad
References
[1] J. I. Smith, "A computer generated multipath fading simulation for mobile radio," IEEE Trans. Veh. Technol., vol. VT-24, no. 3, pp. 39-40, Aug. 1975.

[2] G. A. Arredondo, W. H. Chriss, and E. H. Walker, "A multipath fading simulator for mobile radio," IEEE Trans. Commun., vol. COM-21, no. 11, pp. 1323-1328, Nov. 1973.

[3] A. D. Kot and C. Leung, "Optimal partial decision combining in diversity systems," IEEE Trans. Commun., vol. COM-38, no. 7, pp. 981-991, July 1990.

[4] C. S. K. Leung and J. Ng, "Estimation of block error rates for NCFSK modulation on a VHF/UHF mobile radio channel," IEEE Trans. Veh. Technol., vol. 38, no. 2, pp. 46-49, May 1989.

[5] C. S. K. Leung and A. Lam, "Forward error correction for an ARQ scheme," IEEE Trans. Commun., vol. COM-29, no. 10, pp. 1514-1519, Oct. 1981.

[6] G. Benelli, "A Go-Back-N protocol for mobile communications," IEEE Trans. Veh. Technol., vol. VT-40, no. 4, pp. 714-720, Nov. 1991.

[7] H. B. Li, Y. Iwanami, and T. Ikeda, "Symbol error rate analysis for MPSK under Rician fading channels with fading compensation based on time correlation," IEEE Trans. Veh. Technol., vol. VT-44, no. 3, pp. 535-541, Aug. 1995.

[8] W. Jakes, Ed., Microwave Mobile Communications, New York: Wiley, 1974.

[9] R. H. Clarke, "A statistical theory of mobile-radio reception," Bell Syst. Tech. J., vol. 47, pp. 957-1000, July-Aug. 1968.

[10] M. J. Gans, "A power-spectral theory of propagation in the mobile-radio environment," IEEE Trans. Veh. Technol., vol. VT-21, no. 1, pp. 27-38, Feb. 1972.

[11] J. D. Parsons, The Mobile Radio Propagation Channel, New York: Halsted, 1992.

[12] B. Sklar, "Rayleigh fading channels in mobile digital communication systems, part I," IEEE Commun. Mag., vol. 35, no. 7, pp. 90-100, July 1997.

[13] S. W. Halpern, "The effect of having unequal branch gains in practical predetection diversity systems for mobile radio," IEEE Trans. Veh. Technol., vol. VT-26, no. 1, pp. 94-103, Feb. 1977.

[14] E. L. Caples, K. E. Massad, and T. R. Minor, "A UHF channel simulator for digital mobile radio," IEEE Trans. Veh. Technol., vol. VT-29, no. 2, pp. 281-289, May 1980.

[15] F. Davarian, "Channel simulation to facilitate mobile-satellite communications research," IEEE Trans. Commun., vol. COM-35, no. 1, pp. 47-56, Jan. 1987.

[16] E. Casas and C. Leung, "A simple digital fading simulator for mobile radio," IEEE Trans. Veh. Technol., vol. VT-39, no. 3, pp. 205-212, Aug. 1990.

[17] M. R. Karim, "Transmission of digital data over a Rayleigh fading channel," IEEE Trans. Veh. Technol., vol. VT-31, no. 1, pp. 1-6, Feb. 1982.

[18] T. Hattori and K. Hirade, "Generation method of mutually correlated multipath fading waves," Electronics and Communications in Japan, vol. 59-B, no. 9, pp. 69-76, 1976.

[19] K.-S. Chung, "Generalized tamed frequency modulation and its application for mobile radio communications," IEEE Trans. Veh. Technol., vol. VT-33, no. 3, pp. 103-113, Aug. 1984.

[20] M. R. Karim, "Packet communications on a mobile radio channel," AT&T Tech. J., vol. 65, no. 3, pp. 12-20, May-June 1986.

[21] P. Dent, G. E. Bottomley, and T. Croft, "Jakes fading model revisited," Electronics Letters, vol. 29, no. 13, pp. 1162-1163, 24th June 1993.

[22] P. Hoeher, "A statistical discrete-time model for the WSSUS multipath channel," IEEE Trans. Veh. Technol., vol. VT-41, no. 4, pp. 461-468, Nov. 1992.

[23] H. Schulze, "Stochastic models and digital simulation of mobile channels," in Proc. Kleinheubacher Berichte, Kleinheubach, 1989, German PTT, Darmstadt, vol. 32, in German.

[24] M. Pätzold, U. Killat, and F. Laue, "A deterministic digital simulation model for Suzuki processes with application to a shadowed Rayleigh land mobile radio channel," IEEE Trans. Veh. Technol., vol. VT-45, no. 2, pp. 318-331, May 1996.

[25] P. M. Crespo and J. Jiménez, "Computer simulation of radio channels using a harmonic decomposition technique," IEEE Trans. Veh. Technol., vol. VT-44, no. 3, pp. 414-419, Aug. 1995.

[26] S. A. Fechtel, "A novel approach to modeling and efficient simulation of frequency-selective fading radio channels," IEEE J. Select. Areas Commun., vol. SAC-11, no. 3, pp. 422-431, Apr. 1993.

[27] M. A. Wickert and J. M. Jaycobsmeyer, "Efficient Rayleigh mobile channel simulation using IIR digital filters," in ICSPAT '95, Boston, MA, Oct. 1995.

[28] G. W. K. Colman, S. D. Blostein, and N. C. Beaulieu, "An ARMA multipath fading simulator," in The 7th Annual Virginia Tech Symposium on Wireless Personal Communications, Blacksburg, VA, June 1997, pp. 5-1-5-12.

[29] H. S. Wang and N. Moayeri, "Finite-state Markov channel - a useful model for radio communication channels," IEEE Trans. Veh. Technol., vol. VT-44, no. 1, pp. 163-171, Feb. 1995.

[30] H. S. Wang and P.-C. Chang, "On verifying the first-order Markovian assumption for a Rayleigh fading channel model," IEEE Trans. Veh. Technol., vol. VT-45, no. 2, pp. 353-357, May 1996.

[31] H.-Y. Wu and A. Duel-Hallen, "On the performance of coherent and noncoherent detectors for mobile radio CDMA channels," in 1996 5th IEEE Int. Conf. on Universal Personal Communications Record, Cambridge, MA, Sept. 1996, vol. 1, pp. 76-80.

[32] C. C. Tan, "Amplitude based Markov modelling of the mobile radio channel," M.S. thesis, Queen's University, Kingston, Canada.

[33] T. S. Rappaport, S. Y. Seidel, and K. Takamizawa, "Statistical channel impulse response models for factory and open plan building radio communication system design," IEEE Trans. Commun., vol. 39, no. 5, pp. 794-807, May 1991.

[34] T. S. Rappaport and V. Fung, "Simulation of bit error performance of FSK, BPSK, and pi/4 DQPSK in flat fading indoor radio channels using a measurement-based channel model," IEEE Trans. Veh. Technol., vol. VT-40, no. 4, pp. 731-740, Nov. 1991.

[35] K. J. Gladstone and J. P. McGeehan, "Computer simulation of multipath fading in the land mobile radio environment," IEE Proc., vol. 127-G, no. 6, pp. 323-330, Dec. 1980.

[36] H. Hashemi, "Simulation of the urban radio propagation channel," IEEE Trans. Veh. Technol., vol. VT-28, no. 3, pp. 213-223, Aug. 1979.

[37] C. Loo and N. Secord, "Computer models for fading channels with applications to digital transmission," IEEE Trans. Veh. Technol., vol. VT-40, no. 4, pp. 700-707, Nov. 1991.

[38] J. F. Malkovich and A. A. Afifi, "On tests for multivariate normality," J. American Statistical Association, vol. 68, no. 341, pp. 176-179, March 1973.

[39] L. Sachs, Applied Statistics, New York: Springer-Verlag, 1978.

[40] M. A. Stephens, "EDF statistics for goodness of fit and some comparisons," J. American Stat. Assoc., vol. 69, no. 347, pp. 730-737, Sept. 1974.

[41] S. S. Shapiro and M. B. Wilk, "An analysis-of-variance test for normality (complete samples)," Biometrika, vol. 52, pp. 591-611, 1965.

[42] T. W. Anderson, "Some nonparametric multivariate procedures based on statistically equivalent blocks," in Multivariate Analysis, P. R. Krishnaiah, Ed., Academic Press, 1966.

[43] T. W. Anderson, An Introduction to Multivariate Statistical Analysis, 2nd ed., New York: Wiley, 1984.

[44] D. F. Morrison, Multivariate Statistical Methods, 3rd ed., New York: McGraw-Hill, 1990.

[45] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed., New York: McGraw-Hill, 1991.

[46] J. G. Proakis, Digital Communications, 2nd ed., New York: McGraw-Hill, 1989.

[47] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Englewood Cliffs: Prentice-Hall, 1989.

[48] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Upper Saddle River, NJ: Prentice-Hall, 1996.

[49] W. Press et al., Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, 1992.

[50] L. Devroye, Non-Uniform Random Variate Generation, New York: Springer-Verlag, 1986.

[51] IEEE ASSP Society Digital Signal Processing Committee, Programs for Digital Signal Processing, New York: IEEE Press, 1979.

[52] M. R. Spiegel, Mathematical Handbook of Formulas and Tables, New York: McGraw-Hill, 1968.

[53] W. A. Gardner, Introduction to Random Processes, New York: Macmillan, 1986.

[54] J. Stewart, Single Variable Calculus, Monterey: Brooks/Cole, 1987.

[55] IMSL, Inc., Houston, Texas, FORTRAN Subroutines for Mathematical Applications, 1991.

[56] N. C. Beaulieu and C. C. Tan, "FFT based generation of bandlimited Gaussian noise variates," submitted, 1996.

[57] I. S. Gradshteyn and I. M. Ryzhik, A. Jeffrey, Ed., Table of Integrals, Series, and Products, 5th ed., San Diego: Academic Press, 1994.

[58] N. C. Beaulieu and D. J. Young, "Exact implementation of the J0(.) fading model," submitted, 1997.

[59] W. R. Bennett, "Distribution of the sum of randomly phased components," Quart. Appl. Math., vol. 5, pp. 383-393, Jan. 1948.

[60] M. Slack, "The probability of sinusoidal oscillations combined in random phase," J. IEE, part III, vol. 93, pp. 76-86, 1946.

[61] M. Abramowitz and I. A. Stegun, Eds., Handbook of Mathematical Functions, New York: Dover, 1968, 9th printing.

[62] R. E. Ziemer and W. H. Tranter, Principles of Communications: Systems, Modulation, and Noise, Boston: Houghton Mifflin, 1990.

[63] L. L. Scharf, Statistical Signal Processing, Reading, Mass.: Addison-Wesley, 1991.

[64] G. Strang, Linear Algebra and its Applications, Fort Worth: Harcourt Brace Jovanovich, 1988.

[65] M. Berger, Geometry I and II, Berlin: Springer-Verlag, 1987.

[66] J. S. Russell, D. R. Farrier, and J. Howell, "Evaluation of multinomial probabilities using Fourier series expansions," Appl. Statist., vol. 34, no. 1, pp. 49-53, 1985.

[67] N. C. Beaulieu, "On the performance of digital detectors with dependent samples," IEEE Trans. Commun., vol. COM-36, no. 11, Nov. 1988.

[68] A. Kelly and I. Pohl, A Book on C, 3rd ed., Menlo Park, CA: Addison-Wesley, 1995.

[69] The MathWorks, Inc., MATLAB Reference Guide, 1992.