CHAPTER 1
COMMUNICATION TYPES AND SIGNALLING SCHEMES
1.1 INTRODUCTION
Communication involves the process of establishing a connection, or link, between two points for the exchange of information. The electronic system used for communication purposes is called communication equipment. The function of an information source in a communication system is to produce the information which has to be transmitted.
The message signal generated by the information source is called the base-band signal or message signal. The base-band signal may be a combination of two or more message signals. If this signal is transmitted directly, the scheme is known as base-band transmission. The base-band signal may be either analog or digital. For transmission of information, the process of modulation is used, in which some feature of a signal called the carrier is varied in accordance with the message signal.
When the carrier wave is continuous in nature, the scheme is called Continuous Wave (CW) or analog modulation. On the other hand, if the carrier wave is a pulse-type waveform, it is called pulse modulation. Depending upon the message signal, communication is classified as analog or digital communication. Digital communication offers several advantages, such as ruggedness to transmission noise, efficient regeneration of the coded signal, greater security, etc.
Digital communication is done mainly via two methods:
Digital base band transmission and Digital band pass transmission.
In digital base band transmission, the data, which is in the form of bits (digitized values), is transmitted without any modulation. There are several ways of assigning waveforms to the digital data, such as on-off coding, polar RZ, polar NRZ, etc.
In Digital band pass transmission, the incoming signal is modulated on to a carrier
with fixed frequency limits imposed by a band pass channel of interest. The
communication channel which is used for pass band data transmission may be a
microwave link or a satellite channel etc.
In any method of digital pass band data transmission, the modulation process involves switching (keying) the amplitude, frequency or phase of the sinusoidal carrier in some fashion in accordance with the modulating data. Thus there are three basic signaling schemes, known as AMPLITUDE SHIFT KEYING, FREQUENCY SHIFT KEYING and PHASE SHIFT KEYING, depending upon the parameter modified in the carrier.
1.2 M-ARY SIGNALLING SCHEMES
In an M-ary signaling system, the information source emits a sequence of symbols from an alphabet that consists of M symbols. In an M-ary scheme, the number of possible signals is M = 2^n, where n is an integer. The symbol duration is T = nTb, where Tb is the bit duration. In pass band data transmission, these signals are generated by changing the amplitude, phase or frequency of the carrier signal in M discrete steps. Thus we have M-ary ASK, PSK and FSK.
M-ary signaling schemes are preferred over binary signaling schemes for transmitting digital information over band pass channels when the requirement is to conserve bandwidth at the expense of increased power. Thus, when the bandwidth of the channel is less than the required value, we may consider an M-ary signaling scheme for maximum efficiency.
1.2.1 QPSK Signal
QPSK is one of the most popular digital modulation techniques, used for satellite communication and for sending data over cable networks. Its popularity comes from both its easy implementation and its resilience to noise.
The implementation of QPSK involves changing the phase of the transmitted waveform, with each distinct phase change representing unique digital data. A phase-modulated waveform can be generated by using the digital data to change the phase of a signal while its frequency and amplitude stay constant. A QPSK-modulated carrier undergoes four distinct changes in phase that are represented as symbols and can take on the values π/4, 3π/4, 5π/4 and 7π/4. Each symbol represents two binary bits of data. The constellation diagram of a QPSK-modulated carrier is shown in Figure 1.1.
Figure 1.1 Constellation plot of QPSK modulated signal
The QPSK signal is given by the equation:

s(t) = √(2E/T) cos(2πfct + (2i − 1)π/4),  0 ≤ t ≤ T …………….. (1.1)
     = 0,  elsewhere

where i = 1, 2, 3, 4, E is the transmitted signal energy per symbol, T is the symbol duration and fc is the carrier frequency. Each possible value of the phase corresponds to a unique pair of bits called a dibit.
On expanding the above equation and separating it, we get two orthonormal basis functions Φ1 and Φ2, given by

Φ1(t) = √(2/T) cos(2πfct) …………………………………………………. (1.2)

Φ2(t) = √(2/T) sin(2πfct) ………………………………………………….. (1.3)

There are four message points, and the associated signal vectors are defined by

si = [ √E cos((2i − 1)π/4),  −√E sin((2i − 1)π/4) ],  where i = 1, 2, 3, 4 … (1.4)
Table 1: Dibits of the QPSK signal

Input dibit | Phase of QPSK signal | Coordinates of message points (Φ1, Φ2)
10          | π/4                  | (+√(E/2), −√(E/2))
00          | 3π/4                 | (−√(E/2), −√(E/2))
01          | 5π/4                 | (−√(E/2), +√(E/2))
11          | 7π/4                 | (+√(E/2), +√(E/2))
The input binary sequence is represented in polar form, with symbols 1 and 0 represented by +√E and −√E. The orthonormal basis functions are modulated by the two binary waves, and the two products are added to give the resulting QPSK signal.
The symbol duration T is twice the bit duration Tb of the binary wave. For a given transmission bandwidth, a QPSK carrier wave therefore carries twice as many bits of information as the corresponding binary PSK wave.
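The dibit-to-phase mapping of Table 1 can be checked numerically. The sketch below (plain Python; the symbol energy E is normalized to 1 purely for illustration) computes the message-point coordinates √E cos((2i − 1)π/4) and −√E sin((2i − 1)π/4) for each dibit:

```python
import math

E = 1.0  # symbol energy, normalized for illustration
# Gray-coded dibits and their phase index i, as listed in Table 1
dibit_to_i = {"10": 1, "00": 2, "01": 3, "11": 4}

def message_point(dibit):
    """Return the (phi1, phi2) coordinates of the QPSK message point."""
    i = dibit_to_i[dibit]
    theta = (2 * i - 1) * math.pi / 4
    return (math.sqrt(E) * math.cos(theta), -math.sqrt(E) * math.sin(theta))

for d in ("10", "00", "01", "11"):
    x, y = message_point(d)
    print(d, round(x, 4), round(y, 4))
```

With E = 1, each coordinate comes out as ±√(E/2) ≈ ±0.7071, matching the table row by row.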
1.3 QPSK MODULATION
Figure 1.2 shows the block diagram of a typical QPSK transmitter. The unipolar binary message (data) is first converted into a bipolar NRZ (non-return-to-zero) sequence using a unipolar-to-bipolar converter. The bit stream is then split into two bit streams, I (in-phase) and Q (quadrature). The in-phase (I) bit stream is called the “even” stream and the quadrature (Q) bit stream the “odd” stream.
The input data go to the serial-to-parallel converter, where the stream is split into two. The two bit streams are fed to low pass filters, and the filtered streams are then fed to the modulator. The filter at the output of the modulator confines the power spectrum of the QPSK signal within the allocated band. The two modulated bit streams are summed and fed to the band pass filter to produce the QPSK output.
Figure 1.2 Block diagram of QPSK modulator.
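As a rough simulation of the block diagram (the pulse-shaping and band pass filters are omitted, and the carrier frequency fc = 4/T and 64 samples per symbol are assumed values for demonstration only), a QPSK waveform can be generated by splitting a bipolar NRZ stream into I and Q rails and modulating the two quadrature carriers:

```python
import math

def qpsk_modulate(bits, fc=4.0, T=1.0, samples_per_symbol=64):
    """Generate an (unfiltered) QPSK waveform: even-indexed bits drive the
    I rail, odd-indexed bits the Q rail; filters are omitted for simplicity."""
    nrz = [1.0 if b else -1.0 for b in bits]   # unipolar -> bipolar NRZ
    i_rail, q_rail = nrz[0::2], nrz[1::2]      # serial-to-parallel split
    wave = []
    for i_sym, q_sym in zip(i_rail, q_rail):
        for n in range(samples_per_symbol):
            t = n * T / samples_per_symbol
            wave.append(i_sym * math.cos(2 * math.pi * fc * t)
                        - q_sym * math.sin(2 * math.pi * fc * t))
    return wave

wave = qpsk_modulate([1, 0, 0, 0, 0, 1, 1, 1])  # 8 bits -> 4 QPSK symbols
```

Each pair of bits produces one symbol of the summed I/Q carrier, whose envelope is bounded by √2 before filtering.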
1.4 QPSK DEMODULATION
For the QPSK demodulator, a coherent demodulator is taken as an example. In the coherent detection technique, the carrier frequency and phase must be known to the receiver. This can be achieved by using a PLL (phase locked loop) at the receiver.
A PLL essentially locks to the incoming carrier frequency and tracks the variations in frequency and phase. For the following simulation, a PLL is not used; instead, for demonstration purposes, we assume that carrier phase recovery has been done and simply use the reference carriers cos(2πfct) and sin(2πfct) generated at the receiver.
In the demodulator the received signal is multiplied by the reference carriers cos(2πfct) and sin(2πfct) on separate arms (the in-phase and quadrature arms). The multiplied output on each arm is integrated over one symbol period using an integrator.
A threshold detector makes a decision on each integrated output based on a threshold. Finally the bits on the in-phase arm (even bits) and on the quadrature arm (odd bits) are remapped to form the detected information stream. The detector for the in-phase arm is shown below. For the quadrature arm the architecture remains the same, but the sin(2πfct) basis function must be used instead.
Figure 1.3 Block diagram of QPSK demodulator
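The correlator arm described above can be sketched in a few lines (assuming perfect carrier recovery; the carrier frequency and sample counts are hypothetical demonstration values): multiply by the reference carrier, integrate over each symbol, and threshold at zero.

```python
import math

def detect_arm(rx, ref, samples_per_symbol):
    """Correlate the received samples with the reference carrier,
    integrate (sum) over each symbol period, and threshold at zero."""
    bits = []
    for k in range(0, len(rx), samples_per_symbol):
        acc = sum(r * c for r, c in zip(rx[k:k + samples_per_symbol],
                                        ref[k:k + samples_per_symbol]))
        bits.append(1 if acc > 0 else 0)  # threshold detector
    return bits

# Build a noiseless test signal: one symbol of +cos, one of -cos (fc assumed 4)
sps, fc, T = 64, 4.0, 1.0
ref = [math.cos(2 * math.pi * fc * (n % sps) * T / sps) for n in range(2 * sps)]
rx = ref[:sps] + [-c for c in ref[sps:]]
print(detect_arm(rx, ref, sps))  # -> [1, 0]
```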
CHAPTER 2
DIGITAL RECEIVERS
2.1 INTRODUCTION
An error signal can be derived in a DSP-based phase locked loop (PLL) from the
ATAN (arctangent) of a detected angle as opposed to approximating the angle by the
SINE of the angle as is the common practice in an analog based PLL. Another example
we can cite is early-late gates used to extract timing information in analog timing
recovery loops. The difference between the outputs of the early gate and the late gate
drives the loop in the direction to set the difference to zero. We recognize that the early-
late gate difference forms an estimate of the derivative of the matched filter output. Thus
rather than form three filters, the matched filter and bracketing early and late matched
filters, we can build two filters, the matched filter and the derivative-matched filter.
These systems may offer significant performance and implementation benefits in
modern hardware and software defined radio systems. We first review conventional
receiver structures and their digital counterparts. We then examine carrier and timing
recovery schemes in terms of modern DSP based implementations. Multirate signal
processing and polyphase filter structures form the core of timing recovery schemes,
while the CORDIC structure forms the core of down-conversion and phase detector
functions required for carrier synchronization.
2.2 GENERATIONS OF DIGITAL RECEIVERS
2.2.1 First Generation
The first-generation receiver includes a section to perform analog preconditioning, an interface segment to convert the preconditioned data to a sampled data representation, a digital post-conditioning segment to minimize the contributions of channel distortion and noise to the output signal, and a detector to estimate the parameters of the modulated signal.
The detector also supplies feedback information to a carrier recovery loop, a timing recovery loop, and the equalizer controller. The carrier recovery loop aligns the frequency and phase of the controlled oscillator (in the final down converter) to the carrier frequency and phase of the received signal. Similarly, the timing recovery loop aligns the frequency and phase of the sampling clock so that the positions of the data samples at the output of the digital block coincide with the time location of the maximum eye opening.
Our first concern is the analog components in the signal-conditioning path. Our
second concern is the analog components in the feedback paths of the carrier and timing
loops. Analog signals are required to control the VCOs of the carrier and timing recovery
loops. The levels of the control signals are determined by the digital loop filters implemented in the DSP segment of the receiver. The digital control signals must be brought to the analog domain via Digital-to-Analog Converters (DACs).
Figure-2.1: Block Diagram of Signal Processing performance in first generation
receivers.
2.2.2 Second Generation
In this version of the receiver the ADC is positioned to be the interface to the
analog signal at the I-F stage. The sampling and quantization occurs prior to the
quadrature down conversion. We have two options here. In the first option, we select a
sample rate satisfying the Nyquist criterion for the maximum frequency of the I-F stage.
In the second option we select a sample rate satisfying the Nyquist criterion for the two-
sided bandwidth of the I-F stage, while intentionally violating the Nyquist criterion for
the I-F center frequency.
Figure-2.2: Block Diagram of Signal Processing performance in second generation
receivers.
2.2.3 Third Generation
A unique attribute of the down-sampling operation is that we can change the
phase of the sample locations in the down sampled time series relative to the epochs in
the series. If we are performing M-to-1 down sampling, we have access to the output time
series with M-different sample phase offsets.
We can use the existence of these M possible output series to effect a timing recovery process. Rather than have the timing recovery loop modify the locations of sample points during the ADC process, as done in first and second generation receivers, we can have the loop control the phase of the resampling process in the resampling filter.
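The M possible output series of an M-to-1 down-sampler can be illustrated directly; in this small sketch (M = 4 and an integer ramp input are assumed purely for demonstration), each phase offset selects a different decimated series from the same input:

```python
M = 4
x = list(range(12))  # input time series

# M-to-1 down-sampling with each of the M possible sample-phase offsets
phases = [x[offset::M] for offset in range(M)]
for offset, series in enumerate(phases):
    print(offset, series)
```

Offset 0 yields [0, 4, 8], offset 1 yields [1, 5, 9], and so on; the timing loop's job is to pick (and track) the offset aligned with the optimum sampling instant.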
Since the phase of the re-sampling filter is defined by the selection of the phase
weights in the single path structure, the timing recovery process defaults to controlling an
index pointer in the filter's coefficient space. We note that when the polyphase filter is used as part of the timing loop, the number of stages in the partition must be increased to satisfy the timing granularity requirements of the timing loop rather than the down-sampling requirement.
Figure-2.3: Block Diagram of Signal Processing performance in third generation
receivers.
2.3 OPTIMUM ML RECEIVERS
The ultimate goal of a receiver is to detect the symbol sequence in a received signal disturbed by noise with minimal probability of detection errors. It is known that this is accomplished when the detector maximizes the a posteriori probability over all sequences:

â_MAP = argmax_a p(a | rf) ……………………… (2.1)

Using Bayes' rule, the a posteriori probability can be written as

p(a | rf) = p(rf | a) P(a) / p(rf) ……………………………(2.2)
The MAP (maximum a posteriori) detector therefore needs to know the a priori distribution P(a) and the distribution of rf conditioned on the knowledge of the data sequence a. Since p(rf) does not depend on the data sequence, maximizing p(a | rf) is the same as maximizing p(rf | a) P(a). For equally probable data sequences, maximizing the a posteriori probability is the same as maximizing the likelihood function p(rf | a), and MAP reduces to ML (maximum likelihood).
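The reduction from MAP to ML under equal priors can be written out in one line; with |A| denoting the number of equally probable sequences, so that P(a) = 1/|A| for every a:

```latex
\hat{a}_{\mathrm{MAP}}
  = \arg\max_a \, p(r_f \mid a)\, P(a)
  = \arg\max_a \, p(r_f \mid a)\, \tfrac{1}{|A|}
  = \arg\max_a \, p(r_f \mid a)
  = \hat{a}_{\mathrm{ML}}
```

The constant factor 1/|A| does not change which sequence attains the maximum, which is why it can be dropped.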
2.3.1 MAP Receiver
The optimal MAP receiver has no separate synchronization unit. The synchronization parameters are considered unwanted parameters which are removed from the pdf by averaging:

p(a | rf) ∝ P(a) ∫ p(rf | a, θ, ε) p(θ) p(ε) dθ dε ………… (2.3)
2.3.2 ML Receiver
The ML receiver jointly estimates the synchronization parameters and the data sequence. The receiver comprises a digital matched filter, a time-variant interpolator and decimator, a phase rotation unit and a data sequence estimator. The matched filter operates on the samples rf(kTs). The sample value at the correct sampling instant is obtained by digital interpolation of the matched filter output. The digital interpolator performs the function of a timing recovery circuit. The data sequence estimator and phase recovery circuit operate with one sample per symbol. The samples z(nT + εT) (1/T is the symbol rate) are obtained from the matched filter output by a time-variant decimator:

z(nT + εT) = Σ_{k=−∞..∞} rf(kTs) gMF(−kTs + nT + εT) …….. (2.4)

z(nT + εT) = (1/Ts) ∫_{−∞}^{∞} rf(t) gMF(−t + nT + εT) dt ……… (2.5)
CHAPTER 3
TIMING RECOVERY
3.1 OVERVIEW
Timing recovery is also called clock recovery. The purpose of the timing recovery loop is to obtain symbol synchronization. Two quantities must be determined by the receiver to achieve symbol synchronization. The first is the sampling frequency. Locking the sampling frequency requires estimating the symbol period so that samples can be taken at the correct rate.
Although this quantity should be known (e.g., the system’s symbol rate is
specified to be 20 MHz), oscillator drift will introduce deviations from the stated symbol
rate. The other quantity to determine is sampling phase. Locking the sampling phase
involves determining the correct time within a symbol period to take a sample. Real-
world symbol pulse shapes have a peak in the center of the symbol period.
Sampling the symbol at this peak results in the best signal-to-noise-ratio and will
ideally eliminate interference from other symbols. This type of interference is known as
inter-symbol interference.
3.2 TIMING RECOVERY
Assume that the samples {rf(kTs)} contain all information. Due to a shift between transmitter and receiver clocks, samples at t = kT + ε̂T are required. In the analog receiver the solution is to control the sampling instant of the received signal: the sampling process is synchronized to the symbol timing of the received signal.
A modification of this analog solution is to derive the timing information from the samples at the receiver and to control the sampling instant. Such a solution is called hybrid timing recovery.
Figure 3.1: Timing Recovery Methods
3.3 METHODS OF TIMING RECOVERY
3.3.1 Early-late gate algorithm
This timing recovery algorithm generates its error by using samples that are early
and late compared to the ideal sampling point. The generation of the error requires at
least three samples per symbol. The method of generating the error is illustrated in Figure
3.2. The left plot is for the case where sampling is occurring late. Note that the early and
late samples are at different amplitudes. This difference in amplitude is used to derive an
error for the timing recovery loop.
Once the timing recovery loop converges, the early and late samples will be at
equal amplitudes. The sample to be used for later processing is the sample that lies in the
middle of the early and late samples. One drawback of the early-late gate algorithm is
that it requires at least three samples per symbol. Thus, it is impractical for systems with
high data rates.
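The early-late error can be illustrated with a toy sketch (the triangular pulse shape and the quarter-symbol sample spacing below are assumed purely for demonstration): when sampling is centered on the pulse peak, the early and late samples have equal amplitude and the error is zero; otherwise the amplitude difference gives the sign of the timing offset.

```python
def pulse(t):
    """Hypothetical triangular symbol pulse peaking at t = 0."""
    return max(0.0, 1.0 - abs(t))

def early_late_error(tau, delta=0.25):
    """Amplitude difference between the early and late samples taken
    delta either side of the current sampling instant tau (tau = 0
    corresponds to the pulse peak)."""
    return pulse(tau - delta) - pulse(tau + delta)

print(early_late_error(0.0))   # on time: early and late amplitudes equal
print(early_late_error(0.2))   # sampling late: positive error
print(early_late_error(-0.2))  # sampling early: negative error
```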
Figure 3.2 Method of generating error for early-late gate algorithm
3.3.2 Mueller and Muller Algorithm
The Mueller and Muller algorithm requires only one sample per symbol. The error term is computed using the following equation:

en = (yn · ŷn−1) − (ŷn · yn−1) ………….... (3.1)

where yn is the sample from the current symbol and yn−1 is the sample from the previous symbol. The slicer (decision device) outputs for the current and previous symbols are represented by ŷn and ŷn−1, respectively. Examples of the Mueller and Muller error for different timing offsets are shown in Figure 3.3, Figure 3.4 and Figure 3.5.
One drawback of this algorithm is that it is sensitive to carrier offsets, and thus
carrier recovery must be performed prior to the Mueller and Muller timing recovery.
Figure 3.3: Timing is fast: en = (–0.8 • 1) – (–1 • 0.5) = –0.3.
Figure 3.4: Correct timing: en = (–1 • 1) – (–1 • 1) = 0.
Figure 3.5: Timing is slow: en = (–0.5 • 1) – (–1 • 0.8) = 0.3.
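The three figure values above can be reproduced with a direct implementation of Equation 3.1:

```python
def mm_error(y_n, y_prev, d_n, d_prev):
    """Mueller and Muller timing error.
    y_n, y_prev: current and previous samples;
    d_n, d_prev: corresponding slicer (decision) outputs."""
    return y_n * d_prev - d_n * y_prev

print(mm_error(-0.8, 0.5, -1, 1))  # timing fast: negative error
print(mm_error(-1.0, 1.0, -1, 1))  # correct timing: zero error
print(mm_error(-0.5, 0.8, -1, 1))  # timing slow: positive error
```

The three calls reproduce the values –0.3, 0 and 0.3 from Figures 3.3 to 3.5.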
3.3.3 Gardner algorithm
The Gardner algorithm has seen widespread use in many practical timing
recovery loop implementations. The algorithm uses two samples per symbol and has the
advantage of being insensitive to carrier offsets. The timing recovery loop can thus lock first, simplifying the task of carrier recovery. The error for the Gardner algorithm is
computed using the following equation:
en = (yn − yn−2) · yn−1 …………………. (3.2)
where the spacing between yn and yn-2 is T seconds, and the spacing between yn
and yn-1 is T/2 seconds. The following figures illustrate how the sign of the Gardner error
can be used to determine whether the sampling is correct (Figure 3.6), late (Figure 3.7) or
early (Figure 3.8). Note that the Gardner error is most useful on symbol transitions (when
the symbol goes from positive to negative or vice-versa). The Gardner error is relatively
small when the current and previous symbols have the same polarity.
Figure 3.6: Correct timing: en = (–1 –1) • 0 = 0.
Figure 3.7: Timing is late: en = (–0.8 – 0.8) • (–0.2) = 0.32.
Figure 3.8: Timing is early: en = (–0.8 –0.8) • (0.2) = –0.32.
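Equation 3.2 and the three figures can likewise be checked directly:

```python
def gardner_error(y_n, y_half, y_prev):
    """Gardner timing error. y_n and y_prev are spaced T apart;
    y_half is the midpoint sample T/2 before y_n."""
    return (y_n - y_prev) * y_half

print(gardner_error(-1.0, 0.0, 1.0))   # correct timing: zero error
print(gardner_error(-0.8, -0.2, 0.8))  # timing late: positive error
print(gardner_error(-0.8, 0.2, 0.8))   # timing early: negative error
```

Note that the error vanishes whenever the symbols do not transition (y_n and y_prev equal), matching the remark above about transitions.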
3.4 TIMING RECOVERY DESCRIPTION
Figure 3.9 shows a typical baseband PAM communication system where
information bits bk are applied to a line encoder which converts them into a sequence of
symbols ak. This sequence enters the transmit filter GT(ω) and then is sent through the
channel C(ω), which distorts the transmitted signal and adds noise. At the receiver, the signal is filtered by GR(ω) in order to reject the noise components outside the signal bandwidth and reduce the effect of ISI. The signal at the output of the receiver filter is

y(t) = Σk ak g(t − kT − εT) + n(t) ……………. (3.3)

where g(t) is the baseband pulse given by the overall transfer function G(ω) (Equation 3.4), n(t) is the additive noise, T is the symbol period (transmitter) and εT is the unknown fractional time delay between the transmitter and the receiver, |ε| < ½. The symbols âk are estimated based upon samples of this signal. They are finally decoded to give the sequence of bits bk.

G(ω) = GT(ω) C(ω) GR(ω) ..................... (3.4)
Fig 3.9 Basic communication system for baseband PAM
The receiver does not know a priori the optimum sampling instants {kT+ εT}.
Therefore, the receiver must incorporate a timing recovery circuit, or clock/symbol synchronizer, which estimates the fractional delay ε from the received signal.
Two main categories of clock synchronizers are then distinguished depending on their operating principle: error-tracking (feedback) synchronizers and feedforward synchronizers.
3.4.1 Feedforward Synchronizer
Figure 3.10 shows the basic architecture of the feedforward synchronizer. Its main
component is the timing detector which computes directly the instantaneous value of the
fractional delay ε from the incoming data. The noisy measurements are averaged to yield the estimate, which is sent as a control signal to a reference signal generator. The generated clock is finally used by the data sampler.
Fig 3.10 Feedforward (open-loop) Synchronizer
3.4.2 Feedback Synchronizer
The main component of the feedback synchronizer is the timing error detector, which compares the incoming PAM data with the reference signal, as shown in Figure 3.11. Its output gives the sign and magnitude of the timing error e = ε − ε̂. The filtered timing error is used to control the data sampler. Hence, feedback synchronizers use the same principle as a classical PLL.
Fig 3.11 Feedback (closed loop) Synchronizer
The main difference between these two synchronizer implementations is now evident. The feedback synchronizer minimizes the timing error signal: thanks to the closed loop, the reference signal is used to correct itself. The feedforward synchronizer estimates the timing directly from the incoming data and directly generates the reference signal; no feedback is needed.
3.5 CONCEPTS USED IN TIMING RECOVERY
Figure 3.12: Basic blocks of timing recovery
3.5.1 Interpolation
Interpolation is the process of defining a function that takes on specified values at
specified points. Interpolation involves discovering a pattern in a set of data points to
estimate a value between two points. Linear interpolation is one of the simplest ways to
interpolate—a line connecting two points is used to estimate intermediate values. Higher-
order polynomials can replace linear functions for more accurate, but more complicated,
results. Interpolation can be contrasted with extrapolation, which is used to estimate
values outside of a set of points instead of between them.
A discrete set of data points has points with two or more coordinates. In a typical
XY scatter plot, the horizontal variable is x and the vertical variable is y. Data points with
both an x and y coordinate can be plotted on this graph for easy visualization. In practical
applications, both x and y represent finite real-world quantities. X generally represents an
independent variable, such as time or space, while y represents a dependent variable
The task of the interpolator is to compute the optimum samples y(nT + ε̂T) from a set of received samples x(mTs).
Figure 3.13: Digital Interpolator Filter
The interpolating filter has an ideal impulse response of the form of the sampled sin(x)/x (sinc) function. It can be thought of as an FIR filter with infinitely many taps whose values depend on the fractional delay μ:

hI(kTs) = sinc(k − μ) = sin(π(k − μ)) / (π(k − μ)) ………… (3.5)
3.5.1.1 Interpolation Types
There are many different interpolation methods, some of which are described
below. Some of the concerns to take into account when choosing an appropriate
algorithm are: How accurate is the method? How expensive is it? How smooth is the
interpolant? How many data points are needed?
A) Piecewise Constant Interpolation
The simplest interpolation method is to locate the nearest data value, and assign
the same value. In simple problems, this method is unlikely to be used, as linear
interpolation is almost as easy, but in higher dimensional multivariate interpolation, this
could be a favorable choice for its speed and simplicity.
Figure 3.14: Piecewise constant interpolation or nearest-neighbor interpolation
B) Linear Interpolation
Linear interpolation is the simplest method of getting values at positions in
between the data points. The points are simply joined by straight line segments. Each segment (bounded by two data points) can be interpolated independently. The parameter mu defines where to estimate the value on the interpolated line; it is 0 at the first point and 1 at the second point. For interpolated values between the two points, mu ranges between 0 and 1.
Figure 3.15: Plot of the data with linear interpolation
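A minimal linear interpolator with the parameter mu as described above:

```python
def lerp(y1, y2, mu):
    """Linearly interpolate between y1 (at mu = 0) and y2 (at mu = 1)."""
    return y1 + (y2 - y1) * mu

print(lerp(2.0, 4.0, 0.0))  # -> 2.0 (first point)
print(lerp(2.0, 4.0, 1.0))  # -> 4.0 (second point)
print(lerp(2.0, 4.0, 0.5))  # -> 3.0 (halfway between)
```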
C) Polynomial Interpolation
Polynomial interpolation is a generalization of linear interpolation. Note that the
linear interpolant is a linear function. If we have n data points, there is exactly one
polynomial of degree at most n−1 going through all the data points. The interpolation
error is proportional to the distance between the data points to the power n. Furthermore,
the interpolant is a polynomial and thus infinitely differentiable. So, we see that
polynomial interpolation overcomes most of the problems of linear interpolation.
Polynomial interpolation also has some disadvantages. Calculating the interpolating
polynomial is computationally expensive compared to linear interpolation.
Figure 3.16 Plot of the data with polynomial interpolation
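The idea can be sketched via the Lagrange form of the interpolating polynomial (plain Python; the three sample points below are hypothetical, chosen to lie on y = x² so the result can be checked exactly):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial of degree at most n-1 passing
    through the n points (xs[i], ys[i]) at position x (Lagrange form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Three points on y = x^2 are reproduced exactly by the quadratic interpolant
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(lagrange_eval(xs, ys, 1.5))  # -> 2.25
```

Since the interpolant through these three points is exactly x², evaluating at 1.5 returns 2.25; with more points the same routine yields higher-degree (and more expensive) interpolants.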
D) Cubic Spline Interpolation
Spline interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline.
For instance, the natural cubic spline is piecewise cubic and twice continuously
differentiable. Furthermore, its second derivative is zero at the end points. Like
polynomial interpolation, spline interpolation incurs a smaller error than linear
interpolation and the interpolant is smoother. However, the interpolant is easier to
evaluate than the high-degree polynomials used in polynomial interpolation.
Figure 3.17: Plot of the data with spline interpolation
3.5.2 The Farrow Structure
The Farrow structure is an efficient structure to implement polynomial-based interpolation filters. Farrow proposed a structure of FIR filters in cascade where the output of each filter is obtained after a delay of a single unit from the previous filter output. The Farrow structure is shown in Fig. 3.18, where C0(z), C1(z), C2(z), …, CM(z) are M+1 FIR filters and each filter has a polyphase structure. Fig. 3.19 shows the polyphase structure of a single FIR filter, where

CM(−N/2), CM(−N/2 + 1), ……, CM(N/2 − 2), CM(N/2 − 1)

are the coefficients of the polyphase structure of the (M+1)th FIR filter. V0(n), V1(n), V2(n), …, VM(n) are the outputs of the FIR filters, respectively. The continuous-
valued parameter µ is called the fractional interval and takes values between 0 and 1. It controls the fractional factor by which the input sampled signal is interpolated and determines the time interval between the interpolated output sample and the previous input sample.
Figure 3.18 Farrow structure
Figure 3.19 Poly phase structure for (M+1)th FIR filter
The output signal y(m) of the Farrow structure depends upon the input sampled signal x(n) and the response of the FIR filters hc(t). Mathematically, the output signal y(m) is given by

y(m) = Σk x(n − k) hc((k + μ)Tx) ……… (3.6)

where it is assumed that n is the central sample of the interval −NTx/2 ≤ t ≤ (NTx/2) − Tx and N is an even integer. The impulse response of the Farrow filter is expressed in terms of the coefficients of the individual FIR filters and the μ factor:

hc((k + μ)Tx) = Σ_{m=0..M} cm(k) μ^m …(3.7)

where c0(k), c1(k), c2(k), …, cM(k) are the coefficients and M ≤ N − 1 is the degree of the polynomial. When the polynomial coefficients are known, it is very easy to compute the impulse response for each interval.
vm(n) represents the input/output relationship of each FIR filter shown in Fig. 3.19, whose impulse response coefficients are cm(k). Hence the transfer function of each FIR filter is given by

Cm(z) = Σk cm(k) z^(−k) ………. (3.8)

Cm(z) is independent of the value of µ and is fixed for a given design. The only variable parameter in the design is µ. The output signal y(m) is obtained by multiplying the outputs of the FIR filters by the respective powers of µ and summing the results.
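The final combining step, summing the FIR branch outputs weighted by powers of µ, is typically evaluated with Horner's rule. The sketch below shows only this combining step (the branch outputs v0, v1, v2 are hypothetical values; the FIR filtering itself is omitted):

```python
def farrow_combine(v, mu):
    """Combine Farrow branch outputs v = [v0, v1, ..., vM] as
    y = v0 + v1*mu + v2*mu^2 + ... using Horner's rule."""
    y = 0.0
    for vm in reversed(v):
        y = y * mu + vm
    return y

v = [1.0, 0.5, -0.25]          # assumed branch outputs v0, v1, v2
print(farrow_combine(v, 0.5))  # -> 1.0 + 0.25 - 0.0625 = 1.1875
```

Horner's rule needs only M multiplications by µ per output sample, which is what makes the structure attractive when µ changes at every output.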
3.5.3 Loop Filter
Each loop has four essential components: the polyphase matched filter, the timing
error detector, the loop filter, and the controller. In each case, the polyphase matched
filter operates on samples arriving every Ts seconds. The timing error detector operates on the matched filter outputs, which emerge from the polyphase filterbank every T/N seconds, and outputs a timing error every T seconds, i.e. once per symbol, corresponding to the current estimate of the optimum sampling instant.
The output of the timing error detector is possibly upsampled and used to drive
the loop filter and loop controller. There are three natural possibilities for the loop control
when using an M–stage polyphase matched filter: The loop filter and controller can be
run at MN samples/symbol, N samples/symbol, or 1 sample/symbol. Each choice results
in a slightly different architecture.
Interpolation control is performed by a pair of counters whose increments are synchronized to the input sample clock of period Ts. The counter is a decrementing modulo-1 counter with underflow period approximately Ti = T/N.
The period of the underflow is altered by the output of the loop filter to align every Nth underflow. The underflow condition indicates that an interpolant should be computed and passed on to the timing error detector. When the loop is in lock, the underflow condition will be detected when the polyphase index is pointing to the filterbank filter with the sample corresponding to the optimum sampling time.
3.5.4 Controller (Number Controlled Oscillator)
A Number Controlled Oscillator (NCO) is a digital signal generator which creates
a synchronous (i.e. clocked), discrete-time, discrete-valued representation of a waveform,
usually sinusoidal. NCOs are often used in conjunction with a digital-to-analog converter
(DAC) at the output to create a direct digital synthesizer (DDS).
An NCO generally consists of two parts:
A phase accumulator (PA), which adds a frequency control value to the value held at its output at each clock sample.
A phase-to-amplitude converter (PAC), which uses the phase accumulator output word (phase word), usually as an index into a waveform look-up table (LUT), to provide a corresponding amplitude sample. Sometimes interpolation is used with the look-up table to provide better accuracy and reduce phase error noise.
When clocked, the phase accumulator (PA) creates a modulo-2^N saw-tooth waveform which is then converted by the phase-to-amplitude converter (PAC) to a sampled sinusoid, where N is the number of bits carried in the phase accumulator. N sets the NCO frequency resolution and is normally much larger than the number of bits defining the memory space of the PAC look-up table. If the PAC capacity is 2^M entries, the PA output word must be truncated to M bits.
However, the truncated bits can be used for interpolation. The truncation of the
phase output word does not affect the frequency accuracy but produces a time-varying
periodic phase error.
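The accumulator-and-LUT operation described above can be sketched as follows (the widths N = 8 and M = 4 and the frequency control word are assumed demonstration values, far smaller than a practical design):

```python
import math

N, M = 8, 4  # accumulator and LUT index widths (assumed, for illustration)
LUT = [math.sin(2 * math.pi * k / 2**M) for k in range(2**M)]

def nco(fcw, n_samples):
    """Number controlled oscillator: a modulo-2^N phase accumulator whose
    output word is truncated to its top M bits to index the sine LUT."""
    phase, out = 0, []
    for _ in range(n_samples):
        out.append(LUT[phase >> (N - M)])  # phase-to-amplitude conversion
        phase = (phase + fcw) % 2**N       # phase accumulator (wraps)
    return out

samples = nco(fcw=16, n_samples=16)  # fcw = 16 -> one full cycle in 16 samples
```

With fcw = 16 and 2^N = 256, the accumulator wraps every 256/16 = 16 clocks, so the output traces exactly one sine cycle; the output frequency is fcw/2^N times the clock rate, illustrating why N sets the frequency resolution.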
CHAPTER 4
TECHNICAL APPROACH
This chapter deals with various concepts used in developing the project. The
descriptions of these concepts are discussed individually.
4.1 DIGITAL RECEIVER FOR A PAM SIGNAL
The signal in a digital receiver is A/D converted immediately after down conversion to baseband. In a first stage the signal is down converted to (approximately) baseband by multiplying it with the complex output of an oscillator, whose frequency is possibly controlled by a frequency control loop. Due to residual frequency errors the baseband signal slowly rotates at an angular frequency equal to the frequency difference between the transmitter and receiver oscillators. The signal then enters an analog prefilter F(ω) before it is sampled and quantized. All subsequent signal processing operations are performed digitally at the fixed processing rate of 1/Ts.
The signal rf(t) at the output of the prefilter F(ω) is the sum of the useful signal sf(t)
plus noise n(t). Since rf(t) is band limited, the noise n(t) is also band limited. The noise
samples {n(kTs)} are therefore, in general, correlated, i.e., statistically dependent. This
implies that they carry information which must be taken into account in the further
processing by the matched filter.
The condition on the prefilter F(ω) and its corresponding sampling rate 1/Ts
indeed does not require T/Ts to be rational. In other words, sampling is asynchronous with
respect to the transmitter clock.
The digital receiver need not have a clock frequency equal to the
symbol rate 1/T. The only clock rate existing at the receiver is 1/Ts, which is unrelated to
the symbol rate 1/T. In other words, the ratio T/Ts is in general irrational; any assumption
that T is an exact multiple of Ts oversimplifies the timing recovery problem of a fully
digital receiver.
Figure 4.1 Block diagram of a typical digital PAM receiver
4.2 DIGITAL TIMING RECOVERY
The feedback digital timing recovery scheme is shown in the figure. A time-continuous
PAM signal x(t) is received. Symbol pulses in x(t) are uniformly spaced at intervals T.
For simplicity, x(t) is assumed to be a real, baseband signal.
Figure 4.2: Elements of digital timing recovery
Assume x(t) to be band limited so that it can be sampled at the rate 1/Ts without
aliasing. If x(t) is not adequately band limited, aliasing will introduce distortion.
Interpolation is not an appropriate technique to be applied to wide-band signals.
Samples x(mTs) = x(m) are taken at uniform intervals Ts. The ratio T/Ts is assumed
to be irrational. These signal samples are applied to the interpolator, which computes
interpolants, designated y(kTi) = y(k), at intervals Ti. We assume that Ti = T/K, where K is
a small integer. The data filter employs the interpolants to compute the strobes that are
used for data and timing recovery.
In the sequel, the interval Ti between interpolants is treated as constant. A
practical modem must be able to adjust the interval so that the strobes can be brought into
synchronism with the data symbols of the signal.
All the elements within the feedback loop contribute to the synchronization
process. The timing error is measured by the Timing Error Detector (TED) and filtered in
the loop filter, whose output drives the controller. The interpolator obtains instructions for
its computations from the controller.
4.2.1 Interpolation
In truly digital timing recovery there exist only clock ticks at t = kTs,
incommensurate with the symbol rate 1/T. The shifted samples must be obtained from
those asynchronous samples {r(kTs)} solely by an algorithm operating on these samples
rather than by shifting a physical clock. The shifting is only one of two parts of the timing
recovery operation. The other part is concerned with the problem of obtaining samples of
the matched filter output z(kT + ε̂T) at symbol rate 1/T from the signal samples taken at
rate 1/Ts, as ultimately required for detection and decoding.
The entire operation of digital timing recovery is best understood by emphasizing
that the only time scale available at the receiver is defined by units of Ts and, therefore,
the transmitter time scale defined by units of T must be expressed in terms of units of Ts.
The argument of these samples is

nT + εT = Ts [nT/Ts + εT/Ts] = (mn + µn) Ts ………….. (4.1)

where mn = Lint(nT/Ts + εT/Ts), Lint(.) denotes the largest integer less than or equal to the
real number in the argument, and µn is the fractional difference.
Figure 4.3: Timing Scale
The situation is illustrated in the figure, where the transmitter time scale, defined
at the receiver in units of T, is shifted by a constant amount (ε0T), and the time scale at the
receiver is defined by multiples of Ts. Two important observations can be made due
to the fact that T and Ts are incommensurate. First, we observe that the relative time shift
µnTs is time variable despite the fact that (ε0T) is constant. Second, we observe that the
time instances mnTs form a completely irregular pattern on the time axis. This irregular
pattern is required to obtain an average of exactly T between the output samples of the
matched filter, given a time quantization of Ts.
Digital timing recovery comprises two basic functions:
4.2.1.1 Estimation
The fractional time delay ε0 has to be estimated. The estimate ε̂ is used as if it were the
true value ε0. The parameters (mn, µn) follow immediately from ε̂.
4.2.1.2 Interpolation and Decimation
From the samples {rf(kTs)} a set of samples {rf(kTs + µnTs)} must be computed. This
operation is called interpolation and can be performed by a digital, time-variant filter
HI(e^(jωTs), µnTs). The time shift µn is time-variant; the index n corresponds to the nth
data symbol.
Only the subset {y(mnTs)} = {z(kTs + µnTs)} of values is required for further
processing. This operation is called decimation.
4.2.1.3 Farrow structure for cubic interpolation
Another approach, better suited for signal interpolation by machine, has been
devised by Farrow. Let the impulse response be piecewise polynomial in each Ts-segment,
i = I1, …, I2:

hI(t) = hI[(i + µk) Ts] = Σ_{l=0}^{N} bl(i) µk^l ………………… (4.2)

The interpolant equation is given as

y(kTi) = y[(mk + µk) Ts] = Σ_{i=I1}^{I2} x[(mk − i) Ts] hI[(i + µk) Ts] ………. (4.3)
where I = I2 − I1 + 1.
Substituting equation (4.2) into equation (4.3) and rearranging terms shows that the
interpolants can be computed from

y(k) = Σ_{i=I1}^{I2} x(mk − i) Σ_{l=0}^{N} bl(i) µk^l
     = Σ_{l=0}^{N} µk^l Σ_{i=I1}^{I2} bl(i) x(mk − i) ………………… (4.4)
     = Σ_{l=0}^{N} v(l) µk^l,   with v(l) = Σ_{i=I1}^{I2} bl(i) x(mk − i)
The coefficients bl(i) are fixed numbers, independent of µk, determined solely
by the filter's impulse response hI(t). There are NI such coefficients if all impulse-
response segments are described by polynomials of the same degree N.
Equation (4.4) is itself a polynomial in µk. Nested evaluation of (4.4) is the most
efficient approach. For a cubic interpolator:

y(k) = [{v(3) µk + v(2)} µk + v(1)] µk + v(0) …………. (4.5)
A block diagram for hardware evaluation of (4.4) is shown in Fig. 4.4. This Farrow
Structure consists of N + 1 columns of FIR transversal filters, each column with fixed tap
coefficients. Each FIR column has I taps. Since tap weights are fixed, they might readily
be implemented by table look-up, rather than as actual multipliers.
Figure 4.4 Farrow structure for cubic interpolator
In general, I and N are independent; I is set by the duration of the impulse response and
N by the degree of the chosen piecewise filter polynomials. In the special case that the
approximating polynomial (not the filter polynomials) interpolates the base points, then
I = N + 1 and the structure is square.
A Farrow structure needs to transfer only the one variable µk itself for each
interpolation, instead of I filter coefficients as required by a stored-filter implementation.
The transfer problem is greatly reduced, but at a cost of increased complexity in the
interpolator structure.
A Farrow decomposition can be found for any polynomial-based filter. Farrow
coefficients {bl(i)} are shown in Table 4.1 for the particular case of a cubic interpolating
polynomial.
Table 4.1 Farrow coefficients bl(i) for cubic interpolator

i      l=0     l=1     l=2     l=3
-2     0       -1/6    0       1/6
-1     0       1       1/2     -1/2
0      1       -1/2    -1      1/2
1      0       -1/3    1/2     -1/6
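Equations (4.4)-(4.5) together with Table 4.1 are enough to compute an interpolant. The following Python sketch (illustrative only; the project implements this structure in Simulink) forms the column sums v(l) and then evaluates the polynomial in µk by Horner's rule:

```python
# Farrow coefficients b_l(i) from Table 4.1 (cubic interpolating polynomial),
# rows indexed by i = -2 .. 1, columns by l = 0 .. 3.
B = {
    -2: [0.0, -1 / 6, 0.0, 1 / 6],
    -1: [0.0, 1.0, 1 / 2, -1 / 2],
     0: [1.0, -1 / 2, -1.0, 1 / 2],
     1: [0.0, -1 / 3, 1 / 2, -1 / 6],
}

def farrow_cubic(x, m_k, mu_k):
    """One interpolant per equation (4.4): fixed-tap FIR column sums v(l),
    then nested (Horner) evaluation in mu_k as in equation (4.5). A minimal
    sketch; no guard against indexing past the ends of x."""
    v = [sum(B[i][l] * x[m_k - i] for i in (-2, -1, 0, 1)) for l in range(4)]
    return ((v[3] * mu_k + v[2]) * mu_k + v[1]) * mu_k + v[0]

# Cubic interpolation reproduces straight-line data exactly:
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = farrow_cubic(x, m_k=2, mu_k=0.5)    # halfway between x[2] and x[3]
```

Because the interpolating polynomial passes through the base points, the sketch reproduces polynomial data up to degree three exactly; only µk has to be supplied per interpolant, the taps in B stay fixed.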
4.2.2 Timing Error Estimation
4.2.2.1 ML Synchronization Algorithms
Conceptually, the systematic derivation of ML synchronizers is straightforward.
The likelihood function must be averaged over the unwanted parameters. For example,

Joint estimation of (θ, ε):

p(rf | θ, ε) = Σ_{all sequences a} P(a) p(rf | a, θ, ε) ………. (4.6)

Phase estimation:

p(rf | θ) = ∫ [Σ_{all sequences a} P(a) p(rf | a, θ, ε)] p(ε) dε ……. (4.7)

Timing estimation:

p(rf | ε) = ∫ [Σ_{all sequences a} P(a) p(rf | a, θ, ε)] p(θ) dθ …… (4.8)
In principle, a more realistic channel model and time-variable parameters could be
taken into account, but it turns out that this approach is mathematically too
complicated. In view of the often crude approximations made to arrive at a synchronizer
algorithm, it makes no sense to consider accurate channel models. Instead, one derives
synchronization algorithms under idealized conditions and later on analyzes the
performance of these algorithms when used in conjunction with realistic channels.
We assume Nyquist pulses and a prefilter |F(ω)|² symmetric about 1/2Ts. In
this case the likelihood function assumes the simplified form

p(rf | a, ε, θ) ∝ exp{ −(1/σn²) [ Σ_{n=0}^{N−1} |h00|² |an|² − 2 Re( Σ_{n=0}^{N−1} an* zn(ε) e^(−jθ) ) ] } …. (4.9)

with the shorthand notation zn(ε) = z(nT + εT).
A first approximation of the likelihood function (4.9) is obtained for
large values of N. We can therefore discard the term Σn |an|² → Σn E[|an|²] = constant
from the maximization to obtain the objective function

L(a, θ, ε) = exp{ (2/σn²) Re[ Σ_{n=0}^{N−1} an* zn(ε) e^(−jθ) ] } ……………………. (4.10)
In most digital receivers timing recovery is done prior to phase recovery.
Provided timing is known, one sample per symbol of the matched filter output is
sufficient for carrier phase estimation and symbol detection. To minimize the
computational load in the receiver, carrier phase estimation and correction must be made
at the lowest sampling rate possible, which is the symbol rate 1/T. They will be either
DD/DA (Decision Directed/Data Aided) or NDA (Non Data Aided).
Systematically deriving synchronization algorithms may therefore be understood
as a task of finding suitable approximations. The synchronizers can be classified into two
main categories:
1. Class DD/DA: Decision Directed (DD) or Data Aided (DA)
2. Class NDA: Non Data Aided (NDA)
All DD algorithms require an initial parameter estimate before starting the detection
process. To obtain a reliable estimate, one may send a preamble of known symbols. Class
NDA algorithms are obtained if one actually performs (exactly or approximately) the
averaging operation.
4.2.2.2 NDA Timing Parameter Estimation
The objective function for the synchronization parameters (θ, ε) is given by

L(a, θ, ε) = exp{ (2/σn²) Re[ Σ_{n=0}^{N−1} an* zn(ε) e^(−jθ) ] } ………………………………. (4.11)
           = Π_{n=0}^{N−1} exp{ (2/σn²) Re[ an* zn(ε) e^(−jθ) ] }
In the first step we derive data- and phase-independent timing estimators. The estimate
of ε is obtained by removing the unwanted parameters a and θ. To remove the data
dependency, the above equation is multiplied by P(ai), where ai is the ith of M symbols,
and summed over the M possibilities. The likelihood function reads

L(θ, ε) = Π_{n=0}^{N−1} Σ_{i=1}^{M} exp{ (2/σn²) Re[ ai* zn(ε) e^(−jθ) ] } P(ai) ………………………. (4.12)
There are various avenues to obtain approximations to the above equation. Assuming
M-PSK modulation with M > 2, the probabilities are

P(ai) = 1/M   for ai = e^(j2πi/M),  i = 1, 2, …, M …………………… (4.13)
Averaging over the phase θ, uniformly distributed over the entire range of 2π, yields
a likelihood independent of θ and arg(zn(ε)):

L(ε) = Π_{n=0}^{N−1} ∫_{−π+θ−arg(zn(ε))}^{π+θ−arg(zn(ε))} exp{ (2/σn²) |zn(ε)| cos(x) } dx …………. (4.14)
     = Π_{n=0}^{N−1} I0( 2|zn(ε) an*| / σn² )

The result is the same for all phase modulations (M-PSK) but not for M-QAM. In
order to obtain an NDA synchronization algorithm for M-QAM, we would have to
average over the symbols, which is not possible in closed form.
The equation is further simplified by a series expansion of the modified Bessel
function. Taking the logarithm, expanding into the series I0(x) ≈ 1 + x²/2 for |x| ≪ 1,
and discarding any constant irrelevant for the estimation yields

NDA: ε̂ = arg max_ε L1(ε) ………………………………………………. (4.15)
       ≈ arg max_ε Σ_{n=0}^{N−1} |zn(ε)|²

Averaging over a uniformly distributed phase thus yields the non-coherent (NC)
timing estimator

L1(ε) = Σ_{n=0}^{N−1} |zn(ε)|² ……………………… (4.16)
Expanding the NDA objective function as a Fourier series in ε gives

ε̂ = arg max_ε ( c0 + 2 Re[ c1 e^(j2πε) ] ) ……………………. (4.17)

Since c0 and the absolute value |c1| are independent of ε, the maximum of the above
equation is assumed for

ε̂ = −(1/2π) arg c1 …………………………….. (4.18)

Here ε̂ is the timing estimate, which determines which signal sample is to be
interpolated. The coefficient c1 is the Fourier coefficient discussed below.
It is quite remarkable that no maximum search is needed to find ε̂, since it is
explicitly given by the argument of the Fourier coefficient c1. The coefficient c1 is
defined by a summation of (2L+1) integrals. The coefficients c0 and c1 can be computed
by a Discrete Fourier Transform (DFT).
The integer Ms is the (nominal) ratio between sampling and symbol rate, Ms = T/Ts.
For the samples taken at kTs we obtain

c1 = Σ_{l=−L}^{L} ∫_0^1 |z(lT + εT)|² e^(−j2πε) dε ………………….. (4.19)
   ≈ Σ_{l=−L}^{L} [ (1/Ms) Σ_{k=0}^{Ms−1} |z([lMs + k] Ts)|² e^(−j(2π/Ms)k) ]
A particularly simple implementation is found for Ms = 4. For this value we obtain a
multiplication-free realization of the estimator, since e^(−j(2π/4)k) = (−j)^k:

c1 = Σ_{l=−L}^{L} [ (1/4) Σ_{k=0}^{3} |z([4l + k] Ts)|² (−j)^k ] ……. (4.20)
Here c0 and c1 are the Fourier coefficients discussed above. The derived equation is
used for the estimation of the timing error through the Fourier transform method.
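The resulting estimator is easy to exercise numerically. The sketch below (Python, for illustration only; the squared-envelope samples are synthetic, with a known fractional offset built in) computes c1 as a DFT bin of |z(kTs)|² and recovers ε̂ = −arg(c1)/2π:

```python
import cmath
import math

def nda_timing_estimate(z_sq, Ms=4):
    """NDA timing estimate in the style of equations (4.18)-(4.19): c1 is
    the DFT bin of the squared-envelope samples |z(k*Ts)|^2 at the symbol
    rate (Ms samples per symbol), and eps_hat = -arg(c1)/(2*pi). A sketch;
    assumes len(z_sq) is a multiple of Ms."""
    c1 = sum(s * cmath.exp(-2j * math.pi * k / Ms)
             for k, s in enumerate(z_sq)) / Ms
    return (-cmath.phase(c1) / (2 * math.pi)) % 1.0

# Synthetic squared envelope with a known fractional offset eps0 = 0.2
# (ten symbols at Ms = 4 samples per symbol):
eps0 = 0.2
z_sq = [1 + math.cos(2 * math.pi * (k / 4 - eps0)) for k in range(40)]
eps_hat = nda_timing_estimate(z_sq)
```

For Ms = 4 the twiddle factors reduce to (−j)^k, so the sum needs no real multiplications, which is the multiplication-free realization of equation (4.20).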
4.2.3 Controller (Number Controlled Oscillator)
A controller provides the interpolator with the information needed to perform the
computations. An interpolant is computed from equation (4.3) using I adjacent samples
x(m) of the signal and I samples of the impulse response hI(t) of the interpolating filter.
The correct set of signal samples is identified by the base point index mk and the correct
set of filter samples is identified by the fractional interval µk. Thus the controller is
responsible for determining mk and µk, and making that information available to the
interpolator.
Once mk and µk have been identified by the controller, the other elements
load the selected signal and impulse response samples into the interpolation filter
structure for computation.
Figure 4.5: Timing processor
The necessary control can be provided by a number controlled oscillator (NCO).
The signal samples are uniformly clocked through a shift register at the rate 1/Ts and the
NCO is clocked at a rate synchronized to 1/Ts. If the interpolator is never called upon to
perform upsampling, then the NCO clock period can be Ts. If upsampling is ever required,
then a higher NCO clock rate is needed. The NCO clock rates in downsampling at the
rate 1/Ts are modified in such a way as to accommodate upsampling.
The NCO is operated so that its average period is Ti. Recycling of the NCO register
indicates that a new interpolant is to be computed, using the signal samples currently
residing in the interpolator's shift register. Thus, the base point index is identified by
flagging the correct set of signal samples rather than by explicitly computing mk.
4.2.3.1 Extraction of µk
The fractional interval µk can be calculated from the contents of the NCO register.
Denote the NCO register contents at the mth clock tick as η(m), and the NCO control
word as W(m). Then the NCO difference equation is

η(m) = [η(m−1) − W(m−1)] mod 1 ……………. (4.21)

The control word W(m) (a positive fraction) is adjusted by the timing recovery loop
so that the output of the data filter is strobed at near-optimal timing. Under loop
equilibrium conditions, W(m) will be nearly constant. The contents of the NCO will be
decremented by an amount W(m) each Ts seconds and the register will underflow every
1/W(m) clock ticks, on average. Thus the NCO period is Ti = Ts / W(m), and so

W(m) ≈ Ts / Ti
Figure 4.6: NCO relations
That is to say, W(m) is the synchronizer's estimate of the average frequency of
interpolation 1/Ti, expressed relative to the sampling frequency 1/Ts. The control word is
an estimate because it is produced from filtering of multiple, noisy measurements of
timing error. Figure 4.6 illustrates the extraction of µk from the NCO; it is a plot of the
time-continuous register contents η(t) versus time. In figure 4.6, mkTs is the time of the
sample clock pulse immediately preceding the kth interpolation time kTi = (mk + µk)Ts.
The NCO register contents decrease through zero at t = kTi, and the zero crossing
(underflow) becomes known at the next clock tick at time (mk + 1)Ts. The register
contents η(mk) and η(mk + 1) are available at the clock ticks.
From similar triangles in figure 4.6 it can be seen that

µk Ts / η(mk) = (1 − µk) Ts / [1 − η(mk + 1)] ………………………… (4.22)

which can be solved for µk as

µk = η(mk) / [1 − η(mk + 1) + η(mk)] = η(mk) / W(mk) ………………….. (4.23)
An estimate for µk can be obtained by performing the indicated division of the two
numbers η(mk) and W(mk), both of which are available from the NCO.
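The decrementing-register behaviour of equations (4.21) and (4.23) can be simulated directly. A small Python sketch, with an arbitrary constant control word W and initial register value chosen purely for illustration:

```python
def nco_interpolation_times(W, eta0=0.9, n_ticks=100):
    """Simulate the NCO difference equation eta(m) = (eta(m-1) - W) mod 1,
    per equation (4.21), with a constant control word W. On each underflow,
    recover the basepoint index m_k and the fractional interval
    mu_k = eta(m_k)/W per equation (4.23), and return the interpolation
    instants (m_k + mu_k) in units of Ts."""
    eta, instants = eta0, []
    for m in range(1, n_ticks):
        nxt = eta - W
        if nxt < 0:                         # register underflow at tick m
            m_k, mu_k = m - 1, eta / W      # eq. (4.23): mu_k = eta(m_k)/W
            instants.append(m_k + mu_k)
            nxt += 1.0                      # mod-1 wrap
        eta = nxt
    return instants

t = nco_interpolation_times(W=0.23)
gaps = [b - a for a, b in zip(t, t[1:])]    # spacing between interpolants
```

With W held constant, the recovered instants come out spaced exactly 1/W ticks apart, confirming that the NCO period is Ti = Ts/W as stated above.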
To avoid division, note that 1/W(m) ≈ Ti/Ts; the nominal value of this ratio is designated
ξ0. Although the exact Ti/Ts is unknown and irrational, the nominal value ξ0, expressed
to finite precision, can often be an excellent approximation to the true value.
Therefore the fractional interval can be approximated by µk ≈ ξ0 η(mk). If the
deviation from ξ0 is too large, then a first order correction

µk ≈ ξ0 η(mk) [2 − ξ0 W(mk − 1)] ………………… (4.24)

reduces the standard deviation in µk to Δξ²/(ξ0² √12), again without requiring a division.
CHAPTER 5
INTRODUCTION TO SIMULINK
Simulink is integrated with MATLAB, providing immediate access to an
extensive range of tools that let you develop algorithms, analyze and visualize
simulations, create batch processing scripts, customize the modeling environment, and
define signal, parameter, and test data.
Simulink is a software package for modeling, simulating, and analyzing
dynamical systems. It supports linear and nonlinear systems, modeled in continuous time,
sampled time, or a hybrid of the two. Systems can also be multirate, i.e., have different
parts that are sampled or updated at different rates.
For modeling, Simulink provides a graphical user interface (GUI) for building
models as block diagrams, using click-and-drag mouse operations. With this interface,
you can draw the models just as you would with pencil and paper (or as most textbooks
depict them). Simulink includes a comprehensive block library of sinks, sources, linear
and nonlinear components, and connectors. You can also customize and create your own
blocks.
5.1 WORKING WITH MODELS
Simulink lets you create, model, and maintain a detailed block diagram of the
system using a comprehensive set of predefined blocks. Simulink software includes an
extensive library of functions commonly used in modeling a system. These include:
• Continuous and discrete dynamics blocks, such as Integration and Unit Delay
• Algorithmic blocks, such as Sum, Product, and Lookup Table
• Structural blocks, such as Mux, Switch, and Bus Selector
It is possible to customize these built-in blocks or create new ones directly in
Simulink and place them into new libraries. Additional blocksets (available separately)
extend Simulink with specific functionality for aerospace, communications, radio
frequency, signal processing, video and image processing, and other applications.
Figure 5.1 Simulink library
To create a new model, select Model from the New submenu of the Simulink library
window's File menu. On Windows, you can also create a new model by selecting the
New Model button on the Library Browser's toolbar.
Figure 5.2 Opening Simulink New file
Simulink opens a new model window.
You will need to copy blocks into the model from the following Simulink block libraries:
• Sources library (the Sine Wave block)
• Sinks library (the Scope block)
• Continuous library (the Integrator block)
• Signals & Systems library (the Mux block)
To copy the Sine Wave block from the Library Browser, first expand the Library
Browser tree to display the blocks in the Sources library. Do this by clicking on the
Sources node to display the Sources library blocks. Finally, click on the Sine Wave node
to select the Sine Wave block.
Here is how the Library Browser should look after you have done this.
Figure 5.3 Simulink library browser
5.2 HOW SIMULINK WORKS
Simulink updates the state of each block of the model at each time step, starting
from the first value and ending at the final value of the time span. In discrete-time
simulations, Simulink can work with two kinds of data processing:
• Sample-based processing: each block processes one sample in one time step.
• Frame-based processing: signals are propagated through a model in batches of
samples. The samples are processed at the same time in blocks that are suited for
working with frames. An M-by-1 frame-based vector represents M consecutive
samples.
5.2.1 Sample-Based Vs Frame-Based Processing
• Sample-based processing has the advantages of working with very simple models
and of being available with all blocks, and it does not cause problems with
feedback loops; its disadvantage is that it makes the simulation slow.
• Frame-based processing has the advantage of maximizing the efficiency of the
system by distributing the fixed process overhead across many samples, but it is
not supported by all blocks and it introduces a certain amount of latency in
signals, with potential problems in systems with feedback.
Figure 5.4 sample and frame based operation
5.3 BLOCKS USED
5.3.1 Random Integer Generator
The Random Integer Generator block generates uniformly distributed random
integers in the range [0, M-1], where M is the M-ary number defined in the dialog box.
The M-ary number can be either a scalar or a vector. If it is a scalar, then all output
random variables are independent and identically distributed (i.i.d.). If the M-ary number
is a vector, then its length must equal the length of the Initial seed; in this case each
output has its own output range.
5.3.2 QPSK Modulator Baseband
The QPSK Modulator Baseband block modulates using the quaternary phase shift
keying method. The output is a baseband representation of the modulated signal. If the
Input type parameter is set to Integer, then valid input values are 0, 1, 2, and 3. If
Constellation ordering is set to Binary, for input m the output symbol is
exp(jθ + jπm/2)
where θ is the Phase offset parameter. In this case, the input can be either a scalar or a
frame-based column vector. For integer inputs, the block can accept the data types int8,
uint8, int16, uint16, int32, uint32, single, and double. For bit inputs, the block can accept
int8, uint8, int16, uint16, int32, uint32, Boolean, single, and double.
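The integer-input mapping stated above can be written out directly; a small Python sketch (illustrative only, not the Simulink block itself):

```python
import cmath
import math

def qpsk_symbol(m, theta=0.0):
    """Baseband QPSK mapping described above: for integer input m in
    {0, 1, 2, 3} with binary constellation ordering, the output symbol is
    exp(j*theta + j*pi*m/2), where theta is the phase offset parameter."""
    return cmath.exp(1j * (theta + math.pi * m / 2))

# With theta = 0 the four integers map onto the unit-circle points
# 1, j, -1, -j.
symbols = [qpsk_symbol(m) for m in range(4)]
```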
5.3.3 Digital Filter
The Digital Filter block independently filters each channel of the input signal with
a specified digital IIR or FIR filter. The block can implement static filters with fixed
coefficients, as well as time-varying filters with coefficients that change over time. This
block filters each channel of the input signal independently over time. The output frame
status and dimensions are always the same as those of the input signal that is filtered.
5.3.4 Delay
The Delay block delays a discrete-time input by the number of samples or frames
specified in the Delay units and Delay parameters. The Delay value must be an integer
value greater than or equal to zero. When you enter a value of zero for the Delay
parameter, any initial conditions you might have entered have no effect on the output.
The Delay block allows you to set the initial conditions of the signal that is being
delayed. The initial conditions must be numeric.
5.3.5 Slider Gain
The Slider Gain block allows you to vary a scalar gain during a simulation using a
slider. The block accepts one input and generates one output.
Parameters and Dialog Box
Low: The lower limit of the slider range. The default is 0.
High: The upper limit of the slider range. The default is 2.
5.3.6 Gain
The Gain block multiplies the input by a constant value (gain). The input and the
gain can each be a scalar, vector, or matrix. You specify the value of the gain in the Gain
parameter. The Multiplication parameter lets you specify element-wise or matrix
multiplication. For matrix multiplication, this parameter also lets you indicate the order of
the multiplicands.
5.3.7 Display
The Display block shows the value of its input on its icon. The amount of data
displayed and the time steps at which the data is displayed are determined by the
Decimation block parameter and the Sample Time property
The Decimation parameter enables you to display data at every nth sample and
the Sample Time property, settable with set_param, enables you to specify a sampling
interval at which to display points.
5.3.8 Scope
The Scope block displays its input with respect to simulation time. The Scope
block can have multiple axes (one per port) and all axes have a common time range with
independent y-axes. The Scope block allows you to adjust the amount of time and the
range of input values displayed. You can move and resize the Scope window and you can
modify the Scope's parameter values during the simulation.
5.3.9 Constant
The Constant block generates a real or complex constant value. The block
generates scalar (one-element array), vector (1-D array), or matrix (2-D array) output,
depending on the dimensionality of the Constant value parameter and the setting of the
Interpret vector parameters as 1-D parameter.
5.3.10 Discrete Filter
The Discrete Filter block models Infinite Impulse Response (IIR) and Finite
Impulse Response (FIR) filters using a direct form II structure (also known as "control
canonical form"). You specify the filter as a ratio of polynomials in z⁻¹. You can specify
that the block have a scalar output or vector output where the elements correspond to a
set of filters that have the same denominator polynomial but different numerator
polynomials.
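A direct form II filter shares a single delay line between the numerator and denominator of H(z) = B(z)/A(z). The sketch below is an illustrative scalar-channel Python version of that structure (assuming a monic denominator, a[0] = 1), not the Simulink block's actual implementation:

```python
def direct_form_ii(b, a, x):
    """Direct form II ("control canonical") realization of H(z) = B(z)/A(z)
    in powers of z^-1, filtering one scalar channel. A minimal sketch;
    assumes a[0] == 1."""
    n = max(len(b), len(a))
    b = list(b) + [0.0] * (n - len(b))
    a = list(a) + [0.0] * (n - len(a))
    w = [0.0] * (n - 1)            # single shared delay line
    y = []
    for xn in x:
        # Feedback path first, then feedforward taps off the same delay line.
        w0 = xn - sum(a[i] * w[i - 1] for i in range(1, n))
        y.append(b[0] * w0 + sum(b[i] * w[i - 1] for i in range(1, n)))
        if w:
            w = [w0] + w[:-1]
    return y

# One-pole IIR example: H(z) = 1/(1 - 0.5 z^-1) has impulse response 0.5^n.
h = direct_form_ii([1.0], [1.0, -0.5], [1.0, 0.0, 0.0, 0.0])
```

The shared delay line is what distinguishes direct form II from direct form I, which keeps separate delay lines for input and output and therefore needs roughly twice the state.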
CHAPTER 6
DESIGNING THE MODEL
6.1 DESIGN STEPS
6.1.1 Discrete Time Transmitter
Figure 6.1 Discrete Time Transmitter
The above block diagram shows the functioning of a transmitter which generates
the input signal, just as a transmitter implemented in a real-time system would. The
first block of the discrete time transmitter is the timing control unit.
The block diagram below shows the design of timing control unit.
Figure 6.2 Timing control unit
The first block used in the timing control unit is 1/D, and the parameters are
entered in the blocks used. The DSP constant block generates a discrete-time or
continuous-time constant output.
Figure 6.3 Discrete time constant.
The three inputs are fed to the adder (−, −, +), whose result is passed to the
delay. The delay parameters are set as shown in figure 6.4 below. This delay unit is
used for delaying the discrete-time input by a specified number of samples or frames.
Figure 6.4 Delay block
The delay, along with the constant block (set to 1 by default), is applied to
the math function block, which includes logarithmic, exponential, power and modulus
functions.
Figure 6.5 Math function
The discrete samples at the output are given to the relational operator and
compared to check whether the signal has any error in it, after which further
computations are done. This block applies the selected relational operator to the inputs
and outputs the result.
Figure 6.6 Relational operator
The output after the relational operation is the symbol clock, which acts as the
enable for the symbol generator, which in turn generates the samples. A delay control
output generated from the timing control unit is given to the farrow variable delay,
which is used in the computation of the minimum fractional delay.
The second block designed in the transmitter section is the symbol generator. The
main blocks in this are the random integer generator, the QPSK modulator baseband, and
the gain. The designed block is shown in figure 6.7 below.
Figure 6.7 Symbol Generator
The random integer generator generates uniformly distributed random integers in
the range [0, M−1], where M is the M-ary number. The M-ary number we specify here is 4.
Figure 6.8 Random integer generator.
Thus for M = 4 we generate a QPSK signal. Hence this signal is fed to the QPSK
modulator baseband block for further operation. The block's functional parameters are
shown in figure 6.9 below.
Figure 6.9 QPSK modulator baseband.
The basic functioning of the QPSK modulator baseband block is that it modulates
the input signal using the quaternary phase shift keying method. The input can be either
bits or integers; here we apply integers. For sample-based integer input, the input must be
a scalar; hence the output from the random integer generator should be a scalar. The
output of the QPSK modulator baseband is fed to the gain block, which is shown in
figure 6.10.
Figure 6.10 Gain for interpolation.
The gain value applied here is D*sqrt(2), with the multiplication parameter
(K.*u), and the result is fed to the input of the farrow variable delay.
The farrow structure output is fed to the Root raised cosine (RRC) filter.
Figure 6.11 Root raised cosine filter
The root raised cosine filter independently filters each channel of the input over
time using a specified filter implementation.
6.1.2 Downsampler
The transmitter output is given to the downsampler. The downsampler coefficient
is 2, i.e., here we downsample by a factor of 2. First we apply an FIR filter to the input
signal, then downsample by an integer factor. The filter is implemented using an
efficient polyphase FIR decimation structure.
Figure 6.12 RRC with D=2
After the downsampling operation the output of the downsampler is fed to the
farrow variable delay. The structure of that is shown in the below figure 6.13.
6.1.3 Farrow Variable Delay
Figure 6.13 Farrow Structure
A delay is introduced in the structure shown in figure 6.13. The
downsampler output is given as one input to the farrow structure. The fractional
delay is the other input applied to the farrow structure, where it is evaluated internally
using Horner's rule for the removal of the fractional delay. Horner's rule is shown in
figure 6.14 below.
Figure 6.14 Horner’s Rule
The filter coefficients b3, b2, b1, b0 are derived internally through the RRC filter.
After the computation of the fractional delay we give the output to the Timing
Error Detector. The blocks used in the derivation of the equation are multipliers and
adders. Thus fractional delay coefficients are produced for each filter
correspondingly. The filter coefficients are given as input in the block as shown in
figure 6.15 below.
Figure 6.15 Filter coefficients
6.1.4 Timing Error Detector and Loop Filter
Figure 6.16 Detection section
The above circuit is the timing error estimation circuit designed. The output of the
farrow variable delay is divided into the in-phase and quadrature components of the
delay. This processed output is given to the timing error detector.
Figure 6.17 Timing Error Detector
The circuit design of the timing error detector is given in figure 6.17 below.
The output of the magnitude-square delays is applied to the digital filter and then
to the trigonometric block, which selects the minimum error delay; the block supports
hyperbolic and trigonometric functions with more than one argument. Here we go with
atan (arc tangent).
Figure 6.18 trigonometric function block
After the timing error is produced, it is applied to the loop filter block, which is
designed to filter the error. The smoothing filter is designed in such a way that the
transfer function generated is used in minimizing the error. Figure 6.19 shows the
parameters which generate the transfer function internally.
Figure 6.19 Smoothing filter
The loop filter block diagram, designed for minimal error production, is shown in
figure 6.20 below.
Figure 6.20 Loop Filter
The output of the loop filter is fed to the timing control input of the timing
control unit block. The timing error produced is given to the plot.
6.1.5 Timing Control Unit
Figure 6.21 Number Controlled Oscillator
The design of the timing control unit is the same as the block designed in the
transmitter section. Here a discrete filter is used, with coefficients designed for the
particular transfer function, and the minimal error is displayed using a display or a
scope.
6.1.6 Symbol Sampling
Figure 6.22 Symbol Sampler
The symbol clock is used here as the enable, and the data produced at the
farrow variable delay is transferred directly to the plot. The outputs are the in-phase
and quadrature components.
6.1.7 Plots
Figure 6.23 Plots
The plots from different sections are as follows.
One set of outputs displayed here are the inputs applied to the circuit (symbol
clock and I/Q plot).
Another set are the actual outputs (timing error, timing control input
and NCO output) that should be observed for the given inputs.
The plots are adjusted by changing the corresponding parameter values as shown in
figure 6.24 below.
Figure 6.24 Plot parameters and coefficients.
The constellation plot is one major block, used for observing the output of the
QPSK modulated signal to be demodulated under variable noise. The block
properties are therefore very important for making the output visible; the number of
points, the axis representation, etc. can all be chosen. Figure 6.25 below shows the
block parameters that must be set to implement the constellation (scatter) plot.
Figure 6.25 Discrete-time scatter plot
The default values obtained while running the program are shown in the
MATLAB workspace window: the values of the coefficients b0, b1, b2, b3, D,
Fsymbol, etc., as shown in figure 6.26 below.
Figure 6.26 Workspace
CHAPTER 7
WORKING OF TIMING RECOVERY CIRCUIT
Figure 7.1 Simulink model for timing recovery
7.1 Discrete Time Transmitter
The symbol error generated is applied to the discrete time transmitter. The symbol
frequency is Fsymbol = 1 MHz and the oversampling factor is D = 8.
In the timing control unit, the sample time is 1/(D·Fsymbol); hence the register
overflows every 1 µs. If the symbol error is zero, overflow occurs after every 8 clock
cycles, as set by the symbol rate; when an error is generated, overflow occurs after
fewer or more than 8 cycles.
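This overflow behaviour can be sketched as a modulo-1 decrementing register; the nominal control word 1/D = 0.125 makes it underflow every 8 samples, and the loop-filter correction shortens or lengthens that period. A minimal Python sketch (names and the underflow convention are illustrative, not the model's exact arithmetic):

```python
def nco_strobes(corrections, w0=0.125):
    """Modulo-1 NCO register: eta decrements by the control word each sample;
    an underflow marks the symbol strobe. With zero correction and w0 = 1/8,
    a strobe is produced every 8 samples (i.e. every 1 us at 8 MHz)."""
    eta = 1.0
    strobes = []
    for v in corrections:
        eta -= w0 + v          # loop-filter output adjusts the period
        if eta <= 0.0:         # underflow -> symbol clock out goes high
            strobes.append(True)
            eta += 1.0
        else:
            strobes.append(False)
    return strobes
```

A positive correction makes the register underflow earlier, a negative one later, which is exactly how the loop advances or retards the symbol clock.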
When overflow occurs, the symbol-out signal goes high. When the enable is high, the
block captures the value and produces the output to the scaling block.
The second output from the timing control unit is the delay control, which is
produced after the scaling operation. This delay is given as an input to the product
operation of Horner’s rule in the Farrow structure.
The symbol-out signal acts as the enable for the symbol generator, which produces a
signal with randomly generated values under QPSK modulation; this is given as input
to the filter state, whose output forms another product input of the Farrow structure.
The output of the Farrow structure is given as input to the digital filter (Root
Raised Cosine filter); the output from the discrete time transmitter then serves as
input to the raised cosine filter.
The root raised cosine (RRC) filter stage also introduces a phase and a frequency
offset. For the phase offset, the complex number generated by the symbol generator
is shifted in phase by π/100, while the frequency offset varies with time (2πfct).
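The effect of these two impairments on the symbol stream can be sketched as a complex rotation; the carrier frequency fc and sample rate fs below are illustrative values, not the model's actual settings:

```python
import cmath
import math

def apply_offsets(symbols, fs, fc, phase=math.pi / 100):
    """Rotate each symbol by the fixed phase offset pi/100 and by a
    frequency offset whose angle grows with time as 2*pi*fc*t."""
    out = []
    for n, s in enumerate(symbols):
        t = n / fs
        out.append(s * cmath.exp(1j * (2 * math.pi * fc * t + phase)))
    return out
```

At t = 0 only the fixed phase offset acts; as t grows, the frequency offset rotates the constellation continuously, which is what the receiver's synchronization loops must undo.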
The phase and frequency offsets are estimated using ML (maximum likelihood)
parameter estimation, in which the parameter to be calculated is taken as the value
nearest to the observation.
The output of the transmitter is given to a down-sampler and then to the Farrow
structure, which performs the same operation using the delay fed back from the
timing control unit.
7.2 Timing Error Detector and Loop Filter
The outputs from the Farrow structure are passed on to the timing error detector,
where the errors are processed through the filters after arithmetic operations, and the
continuous errors are operated on with the atan operation, where atan = tan⁻¹(imag/real).
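In code this phase extraction is usually done with the four-quadrant arc tangent, which handles real = 0 safely and resolves the correct quadrant (a sketch; the function name is illustrative):

```python
import math

def detector_phase(z):
    """Phase of the complex detector output, atan = tan^-1(imag/real),
    computed with the four-quadrant atan2 to avoid quadrant ambiguity."""
    return math.atan2(z.imag, z.real)
```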
The output is given to an unwrapper section, where the 2π factor in the error is
removed using arithmetic, delay and relational operations, and the result is then given
to the loop filter.
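The unwrapper's job can be sketched as removing 2π jumps between successive error samples (a minimal sketch, not the Simulink block's exact arithmetic):

```python
import math

def unwrap(phases):
    """Remove 2*pi discontinuities: each step between successive samples is
    wrapped into a band of width 2*pi before being accumulated."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))
        out.append(out[-1] + d)
    return out
```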
The error obtained is the timing error, which is plotted directly on a scope.
In the loop filter, the error is filtered using a transfer function obtained from the
delay. The filtered value is given as input to the timing control unit and is also
plotted on a scope.
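Timing loops commonly use a proportional-plus-integral smoothing filter; the sketch below assumes that form, and the gains kp and ki are illustrative, not taken from the model:

```python
def make_loop_filter(kp, ki):
    """Return a stateful PI loop filter: out = kp*err + running_sum(ki*err).
    The integral term drives the steady-state timing error to zero; kp sets
    how fast the loop reacts to new error samples."""
    integ = [0.0]
    def step(err):
        integ[0] += ki * err
        return kp * err + integ[0]
    return step
```

Usage: f = make_loop_filter(0.1, 0.01), then call f(e) once per timing-error sample and feed the result to the timing control input.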
7.3 Timing Control Unit
The core of the timing control unit is the Number Controlled Oscillator (NCO).
The symbol with the least error is detected using the NCO: its register overflows at
the least-error instant, and the clock out is fed back to the timing error detector to
calculate the next least error possible.
A display is used to show the error at a particular instant, i.e. for the variable error
gain introduced at the transmitter. The symbol clock and the delay are plotted on the scope.
7.4 Symbol Sampling
The inputs to the symbol sampling come from the symbol-out clock of the timing
control unit and from the Farrow structure. In this section the data samples are produced.
Whenever the symbol-out clock is high it acts as the enable to the symbol sampler;
after a delay it is plotted as the delay clock on the scope, and while the enable is high
the in-phase and quadrature (I/Q) components are obtained and plotted.
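This enable-gated sampling amounts to keeping one interpolator output per symbol strobe (a minimal sketch):

```python
def sample_symbols(interp_out, strobes):
    """Keep only the interpolated I/Q values where the symbol-clock strobe
    (enable) is high -- one complex sample per recovered symbol."""
    return [z for z, en in zip(interp_out, strobes) if en]
```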
7.5 Plots
The scope blocks are used to plot the signals generated. Here we use two scopes,
one with two inputs and the other with three inputs, and a scatter plot.
The inputs to the two-input scope are the symbol clock from the Timing
Control Unit and the in-phase and quadrature components from the Symbol
Sampling, after conversion from complex to real.
The inputs to the three-input scope are the timing error and timing control input
from the timing error detector and the NCO output from the timing control unit. The
scatter plot is obtained by plotting the real and imaginary components of the symbol
sampler output on an X-Y graph.
7.6 Advantages
At some point in a digital communications receiver, the received analog signal
must be sampled before performing equalization and decoding. Sampling at the
wrong times can have a devastating impact on overall performance. Hence the
problem of synchronization in baseband transmission systems is reduced by the
timing recovery concept.
At high SNR, the decisions provided by a symbol detector used in a PLL are
reliable enough for the timing recovery unit to perform well. Thus, the
conventional receiver is sufficient for a system operating at high SNR
because of its simplicity.
Timing recovery that functions at lower SNRs is more advantageous
because it helps reduce the cost of operation in magnetic recording systems.
The oversampling techniques for symbol timing recovery have other advantages
over the interpolation method:
1. When the symbol data is recovered, we also recover the timing clock. The
recovered timing clock is used in the network termination to synchronize
the transmitting data to the master clock in the line termination. Extra circuits
would be needed for this job in the interpolation method.
2. It takes advantage of oversampling techniques (ΣΔ modulation and
broadband sampling) which are becoming increasingly important in meeting
the stringent requirements for digital receivers. By incorporating symbol
timing recovery in the required decimation filter, a simple implementation is
obtained.
7.7 DISADVANTAGES
Theoretically, joint maximum-likelihood (ML) estimation of the timing offset and
the data sequence is a preferred method of synchronization but its complexity is
huge.
The performance of a timing recovery scheme decreases at low signal-to-noise
ratio (SNR), i.e. timing recovery at low SNR is more difficult. Thus, efficient
timing recovery schemes become increasingly important for maintaining
receiver performance at low SNR.
At low SNR and under moderate to severe timing offsets, timing recovery is very
difficult because of a phenomenon called a cycle slip. When a cycle slip occurs,
the receiver adds or drops symbols, and this causes a burst of errors in the data
detection process.
The received data signal practically undergoes many kinds of impairments, such
as timing offset, frequency offset, additive noise, etc.; consequently, the amount of
timing information embedded in the received data signal tends to decrease.
7.8 APPLICATIONS
7.8.1 SOFTWARE DEFINED RADIOS
Software defined radios (SDR) are highly configurable hardware platforms that
provide the technology for realizing the rapidly expanding third (and future) generation
digital wireless communication infrastructure. Many sophisticated signal processing tasks
are performed in a SDR, including advanced compression algorithms, power control,
channel estimation, equalization, forward error control and protocol management.
While there is a plethora of silicon alternatives available for implementing the
various functions in a SDR, field programmable gate arrays (FPGAs) are an attractive
option for many of these tasks for reasons of performance, power consumption and
configurability.
The software in a SDR defines the system personality, but currently, the
implementation is often a mix of analog hardware, ASICs, FPGAs and DSP software.
The rapid uptake of state-of-the-art semiconductor process technology by FPGA
manufacturers is opening up new opportunities for the effective insertion of FPGAs in
the SDR signal conditioning chain. Functions frequently performed by ASICs and DSP
processors can now be done by configurable logic.
Timing recovery and carrier recovery are the main algorithms used in the design
of SDR receivers, and they can be flexibly implemented on FPGAs and ASICs.
7.8.2 MAGNETIC RECORDING SYSTEMS
The performance of the proposed timing recovery schemes is observed in a
magnetic recording system. This system is considered because magnetic recording is a
primary method of storage for a variety of applications, including desktop, mobile, and
server systems.
Timing recovery in magnetic recording systems is an increasingly critical problem
because of the growing data rate to be supported. Improving the performance of timing
recovery will give rise to improved reliability of an entire recording system, which in turn
results in increased storage capacity.
This experiment will help us decide whether or not the proposed schemes are
worth being employed in real-life applications, if compared to the conventional schemes
used in today’s magnetic recording read-channel chip architectures.
7.8.3 TIMING RECOVERY FOR FAST CONVERGENCE
It is desirable for timing recovery to achieve synchronization as fast as possible.
This means that all the initial phase and frequency offsets in a system during acquisition,
and any phase and frequency changes during tracking, should be recovered very quickly
(i.e., within a small number of samples).
To improve the performance of conventional timing recovery, we exploit the idea
of oversampling the received analog signal at twice the symbol rate to get more timing
information. Because the oversampled system requires a fractionally-spaced equalizer
instead of a T-spaced equalizer, it also gains all the benefits of a fractionally spaced
equalizer.
For example, it is insensitive to a constant timing offset in the system, as opposed
to a T-spaced equalizer. With this idea, the oversampled PSP-based timing
recovery scheme is proposed to achieve a fast convergence rate in this application.
CHAPTER 8
RESULTS
The outputs observed here are
1. The first set of outputs, consisting of the symbol clock and the in-phase and
quadrature components.
2. The second set of outputs, consisting of the timing error, timing control input and
NCO output.
In this section, we observe the plots at different error gains, namely the least gain
(-1), -0.5, zero, 0.2, 0.5 and the maximum gain (1). We also observe the constellation
plot (scatter plot) for different values of error gain.
Below are the different plots of symbol clock, in-phase and quadrature
components, timing error, timing control input and NCO output at different error gains.
8.1 PLOTS AT -1 ERROR GAIN (LEAST GAIN)
Figure 8.1 plots obtained at least error gain of -1
The error gain provided here is the least error gain, i.e. ε = −1, for which the
in-phase and quadrature components are out of phase. The least symbol error rate in
percentage is approximately -1.001.
Hence the frequency offset, which shifts the phase by π/100, gives a complex
number whose real components are extracted and displayed in the I/Q plot; hence the
phase shift occurs in the components at time 2πfct.
The NCO plot starts from 0 and extends to 2πfct. The equation from which the
NCO output is determined for the in-phase and quadrature components is given as
y = ∫ sin(2πfct + θ) dθ
Hence for a higher order of θ the equation becomes y = ∫ sin(4πfct) dθ, and there
is no further value of θ to be integrated. Hence 4πfct is the maximum value,
whereupon the wave drops to zero and the process repeats. The operation of the
NCO is common to all cases and varies only with the error gain.
8.2 PLOTS AT -0.5 ERROR GAIN
Figure 8.2 plots obtained at error gain of -0.5
The value of error gain provided here is ε = −0.5. The least symbol error rate in
percentage is approximately -0.4956. Here the timing offset value changes accordingly,
which in turn shifts the NCO output above zero.
The I/Q plot components are out of phase by the same factor π/100, and the
symbol clock is constant in all aspects.
8.3 PLOTS AT ZERO ERROR GAIN
Figure 8.3 plots obtained at zero error gain
The value of error gain provided here is ε = 0. The symbol clock runs
constantly, and the in-phase and quadrature components in the I/Q plot run at the
same instants; as the error is zero, the original signal is transmitted here. The least
symbol error rate in percentage is approximately 0.001774.
The NCO output here is at the interval 4πfct, which is the maximum level when the
error is (approximately) zero. The overflow occurs after 1 µs, i.e. after every 8 clock
cycles, as the error gain is zero.
8.4 PLOTS AT 0.2 ERROR GAIN
Figure 8.4 plots obtained at error gain of 0.2
The value of error gain provided here is ε = 0.2. The least symbol error rate in
percentage is approximately 0.1991. Here the timing and frequency offsets change and
increase such that the in-phase and quadrature components move in one phase and shift
at a particular instant.
The NCO output reaches zero very slowly from the maximum component, i.e.
when the components are in the same phase the output falls slowly to zero. The NCO
output is shifted here by 180° in accordance with the I/Q data.
The overflow here occurs according to the result of summing the loop filter
output with the factor 1/(D·Fsymbol), which is subtracted and given to the NCO.
8.5 PLOTS AT 0.5 ERROR GAIN
Figure 8.5 plots obtained at error gain of 0.5
The value of error gain provided here is ε = 0.5. The least symbol error rate in
percentage is approximately 0.5013. When the error gain exceeds the desired output,
the NCO controls the error rate accordingly for the increase in error.
8.6 PLOTS AT 1 ERROR GAIN (MAXIMUM GAIN)
Figure 8.6 plots obtained at maximum error gain of 1
The error gain provided here is the maximum error gain, i.e. ε = 1. The least
symbol error rate in percentage is approximately 0.9952. Hence the timing offset and
frequency offset are very small, which can be observed in the plot.
8.7 CONSTELLATION PLOT
This plot is also called a scatter plot. The I/Q output is given to the scatter
plot to observe where the components lie in the four quadrants.
The basic description of this plot relates to QPSK modulation and
demodulation, i.e. the components are present in the four quadrants at the angles
π/4, 3π/4, 5π/4 and 7π/4. When the sine and cosine waves are in the respective
quadrants, the in-phase and quadrature components are given as (1/√2, 1/√2) in
Quadrant-I, (−1/√2, 1/√2) in Quadrant-II, (−1/√2, −1/√2) in Quadrant-III and
(1/√2, −1/√2) in Quadrant-IV.
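These four constellation points can be generated with a small mapping function (a sketch; the bit-to-quadrant assignment used here is an assumption, not necessarily the model's):

```python
import math

def qpsk_point(b0, b1):
    """Map a dibit to a unit-energy QPSK point at an odd multiple of pi/4,
    with components +/- 1/sqrt(2) as listed above."""
    a = 1.0 / math.sqrt(2.0)
    return complex(a if b0 == 0 else -a, a if b1 == 0 else -a)
```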
Thus at these particular values the output is shown in the plots below for different
values of symbol error rate, i.e. at the least error gain (-1), zero error gain and the
maximum error gain.
8.7.1 Scatter Plot at Least Error Gain (-1)
Figure 8.7: Scatter plot at least error gain
All the components in the four quadrants are scattered, as shown in figure 8.7. The
error gain here is the least value, i.e. -1. The X-axis is the in-phase component and the
Y-axis is the quadrature component.
We cannot distinguish the components of a particular quadrant at the beginning;
the components settle into their respective places after a particular timing offset.
8.7.2 Scatter Plot at Zero Error Gain
Figure 8.8: Scatter plot at zero error gain
The components in the four quadrants are operated on without any error, i.e. at
(assumed) zero error gain. Here the components are placed exactly at their respective
positions, and there are no disturbances in the processed signal.
The quadrant components for the respective angles are given as
1. (1, 1) in Quadrant-I (angle = 45°)
2. (-1, 1) in Quadrant-II (angle = 135°)
3. (-1, -1) in Quadrant-III (angle = 225°)
4. (1, -1) in Quadrant-IV (angle = 315°)
8.7.3 Scatter Plot at Maximum Error Gain (1)
Figure 8.9: Scatter plot at maximum error gain
As shown in figure 8.9, the components in the plot are scattered because the
error gain is maximum here. We cannot distinguish how the components of each
quadrant are grouped individually; hence the error estimate should always be kept small.
In the later stages of processing, the components settle down at their respective positions.
Hence the timing offset and the frequency offset are the main concepts used for
determining the result.
CONCLUSION
Timing recovery is the process of synchronizing the sampler with the received
analog signal. Sampling at the wrong times can have a devastating impact on overall
performance. Improving the performance of timing recovery will give rise to improved
reliability of an entire system. In this work, we developed and investigated the method of
extracting the data using timing recovery.
The summary of the whole project is discussed below.
Chapter 2 is an overview of the basic types of signaling schemes used in
communication. We discuss phase shift keying, one of the three basic signaling
schemes (ASK, FSK and PSK), and then the M-ary signaling scheme, which is the
basic concept for developing the QPSK signal. We also discuss the QPSK modulation
and demodulation schemes, which underlie the timing recovery concept.
Chapter 3 is a brief overview of the generations of digital receivers. In this project
we develop the Simulink model for the latest (third) generation receiver, and we also
put forth derivations and discussions of ML (Maximum-Likelihood) receivers and
MAP (Maximum a posteriori) receivers.
Chapter 4 deals with the basics of timing recovery, the scheme proposed in this
project for the transmission of data. Here we discuss the concepts used for extracting
the data transmitted from transmitter to receiver. We also cover the timing recovery
methods, namely the early-late gate, Mueller and Muller, and Gardner algorithms,
together with the drawbacks of each. The basic operations used in timing recovery,
such as interpolation, the TED, the loop filter and the number controlled oscillator,
are discussed thoroughly.
Chapter 5 is an approach to developing digital timing recovery; it deals with the
problem of when and where to interpolate the signal and with implementing the Farrow
structure for cubic interpolation. We also derive the equations for timing error
estimation (ε) using the ML synchronization algorithm. The extraction of the
base-point index (mk) and fractional interval (µk) for the number controlled oscillator
using the NDA (non-data-aided) method is also discussed, which gives the value
where the least error is possible.
Chapter 6 deals with the software used, i.e. Simulink, which is integrated with
MATLAB and helps in modeling, simulating and analyzing dynamical systems. It also
describes the basic blocks used in the creation of the Simulink model for timing
recovery, such as the random integer generator, the QPSK modulator baseband block,
the discrete filter, etc.
Chapter 7 is about the Simulink model created for achieving timing recovery in
digital receivers, with the working of each sub-block of the model explained in detail.
Hence this project helps in (approximately) eliminating the error that may be
incurred by the signal transmitted to the receiver.
Choosing an appropriate timing recovery method determines the quality of the
data that is received correctly. Simulink is well suited to implementing the timing
recovery process for a Maximum Likelihood (ML) receiver.
Thus timing recovery is the basic concept that establishes better synchronization
between data and the clock pulses recovered so that they may be received and processed
correctly.
FUTURE SCOPE
Sampling at the wrong times can have a devastating impact on the overall
performance of a receiver. The timing recovery process discussed in this project is the
main concept used in all digital receivers. Hence implementing timing recovery
together with carrier recovery is a natural next step.
The algorithm developed in this project, along with carrier recovery, is very
flexible and can be improved and modified to handle any kind of incoming data.
Implementation of this algorithm on an FPGA (Field Programmable Gate Array)
module is also expected, created by incorporating the code into the chip, which is the
advanced technology used in present-day implementations.
Discussed below are advanced methods of timing recovery that have been developed
and are in the process of implementation.
PER-SURVIVOR PROCESSING (PSP)
A new timing recovery scheme based on per-survivor processing (PSP) jointly
performs timing recovery and equalization by embedding a separate decision-directed
PLL into each survivor of a Viterbi algorithm (used for decoding convolutional codes,
in baseband detection for wireless systems, and also for detection of recorded data
in magnetic disk drives).
The proposed scheme is shown to perform better than conventional timing
recovery, especially when the SNR is low and the timing error is large. An important
advantage of this technique is its amenability to real-time implementation.
ITERATIVE TIMING RECOVERY
A new iterative timing recovery scheme exploits the presence of the error-control
code and, in doing so, can perform even better than the PSP scheme. The proposed
iterative timing recovery scheme is realized by embedding the timing recovery process
into a trellis-based soft-output equalizer using PSP. Then, this module iteratively
exchanges soft information with the error control decoder, as in conventional turbo
equalization. The resulting system jointly performs the functions of timing recovery,
equalization, and decoding. The proposed iterative timing recovery scheme is better than
previously reported iterative timing recovery schemes, especially when the timing error is
severe.
NBM SCHEME
Joint ML estimation of the timing offset and the data sequence, which jointly
performs timing recovery, equalization, and error-correction decoding, is a preferred
method of synchronization, but its complexity is problematic. A solution based on the
EM algorithm is also complex. Fortunately, a solution to this problem with complexity
comparable to the conventional receiver has been proposed which is referred to as the
NBM scheme. The NBM scheme is realized by embedding the timing recovery step inside
the turbo equalizer so as to perform timing recovery, equalization, and error-correction
decoding jointly.
The key idea of the NBM scheme is as follows. For every turbo iteration, the
turbo equalizer will produce the decisions that are more reliable than the decisions from
the PLL. These better decisions are fed back to the timing recovery unit to improve the
timing estimates. Then, the new timing estimates are used to refine the samples. These
better samples will be employed to improve the performance of the turbo equalizer in the
next iteration. This process repeats for as many as iterations needed.
REFERENCES
1. Digital Communications by Heinrich Meyr, Marc Moeneclaey, Stefan A. Fechtel.
2. Synchronization Techniques for Digital Receivers by Umberto Mengali.
3. Floyd M. Gardner, “Interpolation in digital modems. I. Fundamentals” IEEE
Transactions on Communications, Vol 41, No. 3, March 1993.
4. Lars Erup, Floyd M. Gardner, and Robert A. Harris, “Interpolation in Digital
Modems-Part II: Implementation and Performance”, IEEE Transactions on
Communications, Vol 41, No. 6, June 1993.
5. All Digital Timing Recovery and FPGA Implementation, Daniel Cárdenas, German
Arevalo.
6. Signal Processing in Next Generation Digital Receivers and Transmitters by Fred
Harris
7. Fredric J. Harris, and Michael Rice “Multirate Digital Filters for Symbol Timing
Synchronization in Software Defined Radios” IEEE Journal on Selected Areas in
Communications, VOL. 19, NO. 12, December 2001
8. R. W. Schafer and L. R. Rabiner, “A digital signal processing approach to
interpolation,” Proc. IEEE, vol. 61, pp. 692-702, June 1973.
9. O. E. Agazzi et al., “A digital signal processor for an ANSI standard ISDN
transceiver,” IEEE J. Solid-State Circuits, vol. 24, pp. 1605-1613, Dec. 1989.
10. F. M. Gardner, “A BPSK/QPSK timing-error detector for sampled receivers,” IEEE
Trans. Commun., vol. COM-34, pp. 423-429, May 1986.
11. J. Tiernan, F. Harris, and D. Becker, “Digital receiver for variable rate symbol
synchronization,” U.S. Patent 5,504,785, Apr. 1996.
12. F. J. Harris, “Multirate digital filters used for timing recovery in digital receivers,” in
Proc. Asilomar Conf. Signals, Systems, and Computers, 2000, pp. 246–251.