
Homing Guidance

Introduction

Placing the main sensor on the missile introduces a degree of autonomy which makes ‘fire and forget’

operation possible. Also, the number of targets which may be engaged simultaneously need not be

restricted to the limited number of sensors on the launch platform, nor is there such a severe constraint

on platform manoeuvre as would be expected from a line of sight system.

The degree to which these advantages are realised depends on how the system is implemented. In

general, we don’t get something for nothing and we can expect a fully autonomous fire and forget

missile to raise its own issues, such as aborting the engagement when targets are incorrectly identified

(e.g. friendly fire). Apart from fairly short range systems, where positive target identification is possible

before launch, we should expect the missile to be supported by additional sensors on the launch

platform. The level of actual autonomy appropriate depends on the nature of the threat, the launch

platform constraints and the operating environment.

Generic Sensor Types

The design and selection of sensors is a major topic in its own right, and we would be deviating from our

system level presentation if we presented more than a rather sketchy introduction.

As mentioned elsewhere the classification of sensors reflects their influence on the overall system (the

degree of potential autonomy). The three principal categories are:

Passive

Semi-active (or bi-static)

Active

Passive

A passive sensor detects the energy (of whatever form) emitted by the target. This may be radar or

communications emissions, infra red, visible light or ultra violet. In a sense visible light band emissions

are dominated by reflections from the Sun or other light source rather than actual passive radiation.

Anti-radiation seekers are a subject of on-going classified research, and will not be discussed further.

Early infra-red detectors were fairly insensitive and could only detect the hot jet pipe of the target,

against a clear sky background, so could only be used for air-to-air or surface-to-air engagements. The

atmosphere is fairly opaque to infra-red except in the 3 to 5 micron and 8 to 13 micron wavelength

windows. The 8 to 13 band is known as the ‘thermal band’, because it corresponds to the black body

radiation peak at ambient temperature, so that the energy available in this band is relatively high. This

makes possible the use of an uncooled sensor.


Early seekers used a spinning reticle in front of a single sensing element to detect the target bore sight

error, hence modulating the heat source provided a simple means of jamming. Various scanning

arrangements have also been used such as line scan and rosette scan to improve the resistance to

jamming whilst still using a single element or line of elements. Since the development of charge coupled

devices in the 1990s, modern sensors have universally become imaging arrays of elements.

An imaging sensor is not restricted to contrast against a sky background, but can employ pattern

recognition techniques to distinguish the target from the background, which is radiating a similar

amount of energy. Indeed, it is the contrast of the target with its background which is the principal

problem, particularly in land clutter. Sea clutter is not such a problem, except at shallow dive angles, as

there is usually ample temperature difference between the sea and a ship target.

Since contrast is what is usually required in an imaging sensor, the thermal waveband loses some of its

appeal. Also low cost IR sensors typically employ a micro-bolometer array, which has a significant

integration time. This may not matter for reconnaissance missions but imposes severe performance

constraints on a guidance loop. On the other hand higher performance arrays, having short integration

times usually require the sensor to be cooled, which adds complexity. Bearing in mind the optics

associated with IR involve such exotic materials as sapphire and germanium and must be much larger for

the longer wavelength than for the shorter, the 3-5 micron band using a cooled sensor becomes more

attractive.

The choice is highly application specific, so will not be discussed further, except to say the current

practice of trying to down-select seekers on the basis of power point, hand-waving and UML will get us

nowhere.

With the Sun to illuminate the scene, there is usually plenty of signal power in the visible band. Modern

CCD cameras will actually still work even in starlight, so except for the most overcast dark nights, we

should expect a visible band sensor to work. They are inexpensive, since there is an extensive mass

market, and the optics use ordinary glass, which is opaque to infra-red. On the down side operating in

the 0.4 to 0.8 micron waveband renders the sensor vulnerable to obscurants (smoke), mist, fog and rain.

The short wavelength implies the optics can be quite compact compared with infra-red sensors, and

could be used as part of a more sophisticated multi-band seeker. Since identification markings are

usually only visible in this waveband, a visible band sensor provides a means of positive target

identification, which could be relayed back to the launch platform to abort the attack.

Up until the development of focal plane arrays, the only sensor which could be used to discriminate the

target from the background in a land scenario was the human eyeball, so anti-tank systems tended to be

line of sight or semi-active. Television guided weapons, like the old Martel missile, relayed the image

back to the human operator, a form of guidance which would still be effective, but as far as the degree

of autonomy is concerned, is probably better classified as a semi-active system.

Ultra violet is usually associated with detecting targets in arctic conditions, where IR signal level and

contrast is poor, and white-painted targets are difficult to see in the visible band.


Semi-Active

The optical type of sensor is restricted by the low level of target emissions, atmospheric turbulence,

attenuation and precipitation to fairly short ranges, so would be restricted to the terminal stage of

flight. This opens up the can of worms called target acquisition, which we shall not discuss.

Early missiles were universally ‘lock before launch’, where the correct target acquisition was the

responsibility of the launch platform. Longer range passive and active systems are expected to have an

autonomous mid course phase before turning the seeker on. This, not surprisingly, is often referred to

as ‘lock after launch’.

In a semi-active system, the target is illuminated from a separate source, which may or may not be the

launch platform. In the air-to-surface case it almost certainly will not be the launch platform which

illuminates (paints) the target. A special forces operative on the ground near the target may use a laser

designator which fires a coded pulse, or a separate aircraft may illuminate the target at a safe stand-off.

Unless one has a death-wish, it is not advisable to emit radiation which can be detected when operating

deep within enemy airspace.

Semi-active radar air-to-air systems have been deployed, using a steerable antenna, which yields some

launch platform freedom to manoeuvre, but the trend is to active homing, ‘fire and forget’, allowing the

launch platform to remain relatively covert.

Semi-active guidance has a number of advantages. Firstly, when used against surface targets, it can use human

pattern recognition to identify the target; and invariably the launch platform is capable of generating

considerably more power than is available in passive or active systems. The seeker is significantly

cheaper than is required for an active system, and the launch platform may provide additional kinematic

information on the current engagement conditions.

The need to illuminate the target throughout the engagement restricts the number of missiles which can

be kept in the air simultaneously, and exposes the launch platform to the risk of attack by anti-radiation

missiles.

These disadvantages may be overcome by only illuminating the target in the end game, with mid-course

command guidance. The defence would be scheduled so that the illuminator is switched from target to

target whilst each missile approaches its own end game. In this way it is possible to have something like

four missiles in the air for each illuminator.

The platform vulnerability issues have tended to dominate current thinking, so that semi-active systems

are probably on their way out.

Active

Active homing guidance requires the illumination energy source to be carried on board, which severely

restricts the power available, typically requiring a separate mid course phase to bring the missile close

enough to the target for successful acquisition. The seeker is generally very expensive. The limitation to

short range keeps most of the flight covert, giving the target minimal warning of the missile’s


presence. The launch platform need not emit radiation, and hence can remain relatively covert,

although presumably there must be some means of initial target detection and tracking before the

missile can be launched.

Kinematic Limitations

Proportional navigation does not impose such severe kinematic requirements on the missile as line of

sight guided missiles, and it is fair to say that most of the system limitations arise from the sensor

limitations. Range is limited by sensitivity and target signature in the waveband and aspect of interest;

although long integration times can improve detection, the extra delay influences guidance loop

bandwidth. Increased sensitivity may involve a narrow field of view, which may put excessive bandwidth

requirements on the sensor pointing loop, or reduce the maximum sight line spin which can be

tolerated. The width of the field of view usually also determines the sensor resolution. In addition to

the instantaneous field of view, the sensor is typically mounted in a gimbal to decouple it from the body

motion. The mounting has limits of travel which limit the maximum look angle (or ‘angle of regard’

according to the terminally pretentious) which imposes a kinematic constraint on the engagement

geometry.

Radar sensors, if using continuous wave (CW) modulation, would use Doppler as the target discriminant,

and would lose the target in clutter when engaging side-on. Also semi-active systems require a rear

phase reference to determine the missile’s own Doppler with respect to surface clutter. This implies a

body to beam limitation, which may limit the angular separation between the illuminator and the launch

point.

Pursuit could impose limitations, if it were used. However, if used in the end game, the missile would

first be command guided into the enemy tail aspect; otherwise it might be used, under

inertial/command guidance, to reduce the look angle to within the ambit of the gimbal limits before

switching to proportional navigation in the end game.

When considering kinematic limitations, the sight line spin limit and look angle limit are probably the

most important.

Homing SAM systems can be characterised by forward range/crossing range in the same way as line of

sight systems, but more generally, the launch platform motion has a significant effect on the

engagement, so it is more informative to consider the air to air case.

The engagement is depicted in its simplest form as a constant velocity target at the same altitude as the

attacker (see Figure 1). Early missiles tended to be lock before launch, so the missile was launched along

the sight line from launch platform to target. This constraint need not necessarily apply to modern

missiles.

The angle between this sight line and the target velocity vector is called the aspect angle; conventionally

it is measured with respect to head-on, in the plane of the engagement (known as the fly

plane).


Figure 1 : Basic Co-Altitude Engagement

The elevation angle of the sight line for the more general case of the attacker and target at different

altitudes is known as the ‘snap’ angle. Missile performance is a function of altitude and engagement

geometry, so the aircraft is fitted with a sophisticated fire control system which provides cues to the pilot as to

when an engagement of the target is feasible.

Figure 2 : Ideal Collision Course

Engaging at an aspect angle α, the missile will settle down near a collision course as shown in Figure 2, in

which R is the initial range, U_m the missile speed, U_T the target speed and t_go the time to intercept. The

look angle, λ, is given by:


$$\sin\lambda = \frac{U_T}{U_m}\,\sin\alpha$$

This is a maximum with side-on engagements. Note that if the missile is expected to catch the target in

a tail chase it must obviously fly faster. A speed advantage of 2:1 implies a look angle of 30˚ should be

adequate.

With a steerable dish antenna, the larger the required look angle, the smaller the antenna aperture that can

be accommodated, so increased look angle implies inferior angular resolution; there is clearly a trade-off. A staring phased array fixed

with respect to the body suffers from broadening of the beam and grating lobes when steered to large

angles. Optical sensors in all wavebands tend to require much smaller apertures and can be steered to

large look angles.

If the target has the speed advantage over the missile, it can only be engaged close to head-on.

As far as sight line spin is concerned, we note that proportional navigation tends to reduce it, so that the

maximum value is expected at launch; if this exceeds what the seeker can tolerate, it would fail to acquire the target.

The initial sight line spin rate for launch along the line of sight is given by:

$$\frac{U_T\,\sin\alpha}{R}$$

This is likely to limit minimum engagement range.
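As a quick numerical illustration of the two expressions above, the look angle and launch sight line spin may be evaluated as follows. The speeds and range used are merely assumed for the purpose and have no particular significance.

```python
import numpy as np

def look_angle(aspect_deg, Um, Ut):
    """Collision course look angle: sin(lambda) = (Ut/Um) * sin(aspect)."""
    s = (Ut / Um) * np.sin(np.radians(aspect_deg))
    return np.degrees(np.arcsin(s))          # only meaningful while |s| <= 1

def initial_sightline_spin(aspect_deg, Ut, R):
    """Initial sight line spin for launch along the line of sight: Ut*sin(aspect)/R."""
    return Ut * np.sin(np.radians(aspect_deg)) / R    # rad/s

# a 2:1 speed advantage, side-on (90 degree aspect): look angle of 30 degrees
print(look_angle(90.0, Um=600.0, Ut=300.0))              # ~30.0
# a short range, side-on launch produces a high initial sight line spin
print(initial_sightline_spin(90.0, Ut=300.0, R=1000.0))  # 0.3 rad/s
```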

Analysis

Objectives

With line of sight guidance we found that the rms miss distance can be calculated from the noise

sources using the well-known noise integral. However a homing system is characterised by having time

varying coefficients. When deriving guidance laws, it was convenient to express the problem in terms of

time to go. However, this formulation is useless for performance analysis, because it yields either infinite or zero

miss distance. It is fine for deriving ideal navigation algorithms, and indicating suitable values for the

parameters, but completely useless for finding the miss distance of any specific system.

We want a means of finding out how system noise feeds through to miss distance which applies to the

time-varying case.

The actual guidance loop is made up of both constant coefficient and time varying elements. Indeed,

the missile lags largely account for the kinematic miss distance. This is the reason why discussing values

of navigation constant without reference to the missile response is so much nonsense. It is only the

inherent robustness of proportional navigation which renders it insensitive to small changes in

navigation constant, so that such discussion is, in any case, irrelevant.


In order to illustrate the effect of missile lag, we consider the extreme case of a pure delay. This is

mathematically the simplest case to deal with. As our interest is in homing, and not in exotic

mathematical methods for their own sake, this seems a wholly appropriate line of enquiry.

We recall the equation for Proportional navigation homing was derived from the zero effort miss

distance:

$$\frac{dx_m}{dt} = -\frac{N}{T - t}\,x_m$$

Where x_m is the zero effort miss, T the initial time to go, N the navigation constant and t the elapsed

time.

The solution is:

$$x_m = \left(\frac{T - t}{T}\right)^{N} x_m(0)$$

A pure delay, τ, implies that the actual miss distance corresponds to the value at τ seconds to go:

$$x_m = \left(\frac{\tau}{T}\right)^{N} x_m(0)$$

This is now a function of the initial time to go. Differentiating this with respect to T:

$$\frac{dx_m}{dT} = -\frac{N\,\tau^{N}}{T^{N+1}}\,x_m(0)$$

We now have an expression for miss distance as a function of the duration of the fly out:

$$\frac{dx_m}{dT} = -\frac{N}{T}\,x_m$$

This has the same form as our forward time miss distance equation. Curious.
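The pure delay result is easily checked numerically: integrate the forward equation, freeze the guidance for the last τ seconds, and compare with the closed form. The values of T, τ, N and the initial zero effort miss below are purely illustrative.

```python
def miss_with_pure_delay(T, tau, N, xm0, dt=1e-4):
    """Integrate dxm/dt = -N*xm/(T - t) up to t = T - tau (the guidance is
    frozen for the last tau seconds) and return the resulting miss distance."""
    xm, t = xm0, 0.0
    while t < T - tau:
        xm += -N * xm / (T - t) * dt
        t += dt
    return xm

T, tau, N, xm0 = 10.0, 0.2, 3.0, 100.0
print(miss_with_pure_delay(T, tau, N, xm0))   # numerical integration
print((tau / T) ** N * xm0)                   # closed form (tau/T)^N * xm(0)
```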

The Adjoint Equation

A moment’s reflection reveals that we aren’t interested in the single result for a single fly out, as might

be expected from simulating the system in forward time. What we actually want is the miss distance as

a function of flight duration, which yields far more insight into system behaviour.

The system is assumed to be time varying, governed by a linear, homogeneous equation:

$$\dot{x} = A(t)\,x$$


Where x is the vector of n system states, which in general, will contain many more elements than the

single state of our example.

Supposing we have been through the exercise of simulating the system forwards in time covering the

ambit of flight times of interest, we will have generated an array of solutions:

$$x(t_i) = \Phi(t_i, 0)\,x(0)$$

Or, more generally:

$$x(t_2) = \Phi(t_2, t_1)\,x(t_1)$$

Where Φ is the array of solutions, each element of which is an n×n matrix. This could conceptually be

generated by applying an impulse at each start value and summing the effect on each end value, and

repeating the process for every start time and every state.

As the interval between successive solutions becomes infinitesimal, and the number of elements

becomes infinite, we talk of a transition matrix. We can think of this as the ‘brute force and ignorance’

approach, which is all too common.

$$x(t_2) = \Phi(t_2, t_1)\,\Phi(t_1, t_2)\,x(t_2)$$

From which:

$$\Phi(t_2, t_1)\,\Phi(t_1, t_2) = I$$

Figure 3 : Illustration of Transition Matrix

The time samples at which the solutions are stored are effectively the indices of the array Φ, so the

reverse order of the indices is exactly the same as the transpose of the transition matrix. So we may

write:


$$\Phi^{-1} = \Phi^{T}$$

In other words, the inverse of a transition matrix is its transpose.

We want to specify the end state (i.e. apply an impulse at the final miss distance) and find out its

sensitivity to each of the inputs, as functions of time to go.

The final state is related to the state at a specified time to go by:

$$x(T) = \Phi(T, T - t)\,x(T - t)$$

The state at the time to go which would result in the end state is:

$$x(T - t) = \Phi^{T}(T, T - t)\,x(T)$$

Writing in terms of time to go, t_go = (T - t):

$$x(t_{go}) = \Phi^{T}(t_{go}, 0)\,x(0)$$

The problem reduces to finding the equation governing x(T-t).

The forward time equation is known, and yields the solution:

$$x(t) = \Phi(t, 0)\,x(0)$$

Where x(0) is the state at the start of the interval of times to go of interest, i.e. the start of the single fly

out of duration T.

Differentiating:

$$\frac{dx}{dt} = \frac{d\Phi(t, 0)}{dt}\,x(0) = A(t)\,x$$

It follows that:

$$\frac{d\Phi(t, 0)}{dt} = A(t)\,\Phi(t, 0)$$

The end state is given by:

$$x(T) = \Phi(T, t)\,\Phi(t, 0)\,x(0)$$

Or:

$$x(0) = \Phi^{T}(t, 0)\,\Phi^{T}(T, t)\,x(T)$$

Differentiating:


$$\frac{d\Phi^{T}(t, 0)}{dt}\,\Phi^{T}(T, t) + \Phi^{T}(t, 0)\,\frac{d\Phi^{T}(T, t)}{dt} = 0$$

$$\Phi^{T}(t, 0)\,A^{T}(t)\,\Phi^{T}(T, t) + \Phi^{T}(t, 0)\,\frac{d\Phi^{T}(T, t)}{dt} = 0$$

We should not expect the transition matrix to be singular, so:

$$\frac{d\Phi^{T}(T, t)}{dt} = -A^{T}(t)\,\Phi^{T}(T, t)$$

Now for the sample at time t, an increment in forward time is the same as an equal decrement in time

to go, so:

$$\frac{d\Phi^{T}(T, t)}{dt_{go}} = A^{T}(t)\,\Phi^{T}(T, t)$$

The time sample t is precisely the same as the sample T-t in the final column of the transition matrix

shown in Figure 3, so we are justified in writing:

$$\frac{d\Phi^{T}(t_{go}, 0)}{dt_{go}} = A^{T}(t_{go})\,\Phi^{T}(t_{go}, 0)$$

By implication:

$$\frac{dx(t_{go})}{dt_{go}} = A^{T}(t_{go})\,x(t_{go})$$

The state expressed in terms of time to go is usually called the ‘adjoint’ state to distinguish it from the

forward time state. It may be considered the sensitivity of the end state to the forward time state

expressed as a function of time to go. The equation above is known as the adjoint equation, and forms

the basis of a powerful method of analysis of homing guidance systems.

We now see why the single state example with the missile lag represented as a pure delay yields an

adjoint equation which is identical to the original system equation: the system ‘matrix’ is a scalar which is

obviously equal to its own transpose.
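The equivalence is easily demonstrated numerically. The sketch below invents a small two-state time-varying system, builds the transition matrix by forward integration, and then recovers the same end-state sensitivities from a single integration of the adjoint equation.

```python
import numpy as np

def A(t):
    # a two-state, time-varying system matrix invented purely for the demonstration
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.5 * t, -0.2]])

T, dt, n = 5.0, 1e-4, 2

# forward run: build the transition matrix Phi(T,0) by integrating dPhi/dt = A(t) Phi
Phi = np.eye(n)
t = 0.0
while t < T:
    Phi = Phi + A(t) @ Phi * dt
    t += dt

# adjoint run: dy/dtgo = A(T - tgo)^T y, started from a unit impulse on end state k
k = 0                           # the end state of interest (the 'miss distance')
y = np.eye(n)[k].copy()
tgo = 0.0
while tgo < T:
    y = y + A(T - tgo).T @ y * dt
    tgo += dt

print(Phi[k, :])   # sensitivity of end state k to each initial state, from the forward run
print(y)           # the same numbers, to integration accuracy, from the single adjoint run
```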

Adjoint Method

Whilst the modern trend is to brute force and ignorance, generally imposed by those who seem

suspicious of powerful mathematical techniques, we shall unrepentantly continue to follow the path of

wisdom.

Analytical solutions of terminal controller problems are rare, and the important ones may be found in

the textbooks. When we start mixing time-varying coefficients and constant coefficients, we invariably


must resort to the computer. The adjoint method indicates that the best approach is not the over-

priced computer game, but techniques which yield the maximum insight for the minimum investment of

effort.

As a simple illustration, consider representing the missile response as a first order lag:

$$\ddot{\sigma} = \frac{2\dot{\sigma}}{T - t} - \frac{f_y}{U_c\,(T - t)}$$

$$f_c = N\,U_c\,\dot{\sigma}$$

$$\dot{f}_y = \frac{1}{\tau}\left(f_c - f_y\right)$$

Where τ is the missile lag, $\dot{\sigma}$ the sight line spin, f_y the achieved and f_c the commanded lateral acceleration, and U_c the closing speed.

Or in matrix form:

$$\begin{bmatrix} \ddot{\sigma} \\ \dot{f}_y \end{bmatrix} = \begin{bmatrix} \dfrac{2}{T - t} & -\dfrac{1}{U_c\,(T - t)} \\ \dfrac{N U_c}{\tau} & -\dfrac{1}{\tau} \end{bmatrix} \begin{bmatrix} \dot{\sigma} \\ f_y \end{bmatrix}$$

Figure 4 : Comparison of Forward Loop and Adjoint Loop

The adjoint equation is:

$$\begin{bmatrix} \dot{y}_{\dot{\sigma}} \\ \dot{y}_{f} \end{bmatrix} = \begin{bmatrix} \dfrac{2}{t_{go}} & \dfrac{N U_c}{\tau} \\ -\dfrac{1}{U_c\,t_{go}} & -\dfrac{1}{\tau} \end{bmatrix} \begin{bmatrix} y_{\dot{\sigma}} \\ y_{f} \end{bmatrix}$$


The notation y with the subscript of the corresponding forward state is used to indicate that it is a

sensitivity of the end state to the forward time state.

We usually draw out the control loop so that the cause/effect relationships and information flows are

easier to identify than is possible with the modern practice of lumping everything into a single matrix.

The construction of the adjoint loop from the forward loop is particularly easy. The main rules are:

Make sure all comparators are converted into summing junctions by including gains of value -1, as appropriate

Replace all summing junctions with simple connectors and simple connectors with summing junctions

Reverse all signal flow directions so that inputs become outputs and vice versa

Inputs are usually step functions rather than impulses, so an extra integrator may be needed in the adjoint loop

These rules have been applied to the missile lag + sight line kinematics system, resulting in Figure 4.

Crude Seeker Model

Much of the complexity of the guidance is to be found in the seeker processing, so we shall restrict

ourselves to a simple representation.

The seeker sensor is often mounted in a gimbal to isolate it from the missile body motion, although

there is a trend towards body fixed staring arrays. The sensor measures the direction of the target relative to

the bore sight, and steers the gimbal to cancel the boresight error.

Consider a pointing loop in which the torque applied to the gimbal is proportional to the boresight error

and the boresight angular velocity. The bore sight error is:

$$\varepsilon = \phi_T - \phi$$

Where φT is the target sight line direction and φ is the bore sight direction with respect to inertial axes.

The equation of motion is:

$$I\,\ddot{\phi} = T_\varepsilon + T_v$$

Where I is the moment of inertia and the T_x are the torques proportional to state x.

The pointing loop is presented in transfer function form in Figure 5.


Figure 5 : Simple Pointing Loop

The transfer function relating the input target sight line direction to the boresight error may be found:

$$\varepsilon = \phi_T - \frac{K}{s\,(s + K_v)}\,\varepsilon$$

where K and K_v are the error and rate gains (torque per unit moment of inertia), i.e.:

$$\varepsilon = \frac{s\,(s + K_v)}{s^{2} + K_v\,s + K}\,\phi_T = G(s)\,\phi_T$$

The final value theorem yields the result that the long term bore sight error is the value of:

$$s\,G(s)\,\phi_T(s)$$

as s → 0.

If the sightline is fixed:

$$\phi_T(s) = \frac{\phi_T}{s}$$

For which the steady state bore sight error is zero. However, a constant sight line spin, φT = ωt, has the

Laplace transform:

$$\phi_T(s) = \frac{\omega}{s^{2}}$$

This yields the steady state boresight error:

$$\varepsilon_{ss} = \frac{K_v}{K}\,\omega$$


i.e. the bore sight error is an estimator of the sight line spin rate.
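The steady state result is easily confirmed by simulating the pointing loop directly. The gains and spin rate below are merely illustrative, with the moment of inertia taken as unity.

```python
K, Kv = 400.0, 40.0        # error and rate gains (illustrative values only)
omega = 0.1                # constant sight line spin, rad/s

dt, t_end = 1e-4, 5.0
phi, phidot = 0.0, 0.0     # bore sight direction and its rate
t = 0.0
while t < t_end:
    phi_T = omega * t                    # target sight line direction (a ramp)
    eps = phi_T - phi                    # bore sight error
    phiddot = K * eps - Kv * phidot      # torque proportional to error and to rate
    phidot += phiddot * dt
    phi += phidot * dt
    t += dt

print(eps)                 # settles to roughly...
print(Kv * omega / K)      # ...the predicted Kv*omega/K, an estimate of the spin rate
```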

If the gimbal rate is measured with, say, a gyro, there will be an inevitable bias on the rate estimate.

This is denoted δ in Figure 5. The transfer function from gyro bias to bore sight error is:

$$\frac{\varepsilon}{\delta} = \frac{K_v}{s^{2} + K_v\,s + K}$$

Applying the final value theorem to this shows that the steady state bias on bore sight error becomes:

$$\varepsilon_{bias} = \frac{K_v}{K}\,\delta$$

In order to minimise the effect of gyro bias, the bandwidth of the pointing loop must be as high as

possible. There will be restrictions on the available servo motor power, and noise in the loop from the

sensor, making high bandwidth only achievable at considerable cost. Modern low cost solid state gyros

may look attractive, but the extra bandwidth required to suppress the bias may involve considerably

greater expense than was saved with the cheaper components.

Early infra-red homing missiles mounted the sensor in a spinning gyroscope, and used the currents

needed to drive the torque motors as estimates of the sensor angular velocity. This approach was

synergetic with the spinning reticle needed to estimate target direction.

High loop gain is needed for high seeker bandwidth and suppression of biases, whilst low gain is needed

to suppress noise. The compromise adopted depends on the specific target set and engagement

conditions, for the missile in question. We should expect the gains to be functions of time to go, as the

measurement noise is expected to reduce as the range reduces, and higher seeker bandwidth is needed

to accommodate the higher effective bandwidth of the sight line kinematics.

It is evident that designing a seeker for proportional navigation, which will fit in the limited space

available is no mean feat, and seekers are usually very expensive items as a consequence. Pursuit just

requires the measurement of the boresight error, with perhaps some means of estimating the body

orientation with respect to the velocity vector, and is far less demanding.


Representative Homing Loops

Figure 6 : Homing Loop Kinematics

The analysis of the loop begins with the description of the sight line kinematics which is presented in

Figure 6. We shall consider pursuit, PN using the seeker kinematics derived in the previous section and

PN using a more sophisticated seeker.

Sight Line Kinematics

Figure 7 : Sight line Kinematics Forward Model

The zero effort miss distance is given by:

$$x_m = (y_T - y_m) + (T - t)\,(\dot{y}_T - \dot{y}_m)$$


Also:

$$\ddot{y}_T - \ddot{y}_m = f_T - f_y$$

These equations are represented in transfer function form in Figure 7; we have taken miss distance as

the output.

The adjoint model of the sightline kinematics follows the rules already presented (see Figure 8).

However, an extra integrator is introduced into the lateral velocity node. This is because the actual

input to the forward model is a step function and not an impulse. This integrator should really have been

present in the forward loop, but in practice it is more convenient to deal with a step function than an

impulse when simulating the system numerically.

This output from the adjoint model is the sensitivity of the end state to initial aiming error, i.e. the

deviation of the initial velocity from the aim direction. It has been converted to an angle in radians by

dividing by the missile speed.

Note that there is no equivalent sensitivity output corresponding to the position offset. This is because

the adjoint system generates the relationship between start state and end state, and avoids all the usual

intermediate time steps. All runs begin on the axis, because our aim direction axis is defined as the line

from launch point to the target at the start of the simulation. It is therefore nonsense to imagine any

initial lateral displacement at the beginning of flight. This is simply an intermediate adjoint state which

has no useful interpretation in the forward simulation.

Figure 8 : Adjoint of Sight line Kinematics


All the mysticism of adjoint simulation disappears when we have a clear understanding of precisely what

the results mean. We are not ‘running time backwards’. Adjoint time is the time of flight. We are

producing the sensitivities of end state to start state for an ambit of times of flight, and doing so in a

single run, rather than running the forward model thousands of times.

Since the miss distance input to the adjoint model is an impulse, the feed forward gain (T-t) to the

lateral velocity state is redundant, as it is zero when the impulse is present, so this may be safely

omitted.
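Before closing any guidance loop it is worth exercising the adjoint wiring on the bare kinematics, where the answers are known in advance: with no guidance at all, the miss per unit of lateral velocity error at time to go t_go is simply t_go, and the miss per unit step of target lateral acceleration is t_go^2/2. The sketch below follows the rules given earlier, with the impulse replaced by a unit initial condition and the extra integrator included for the step input; the flight time is an arbitrary choice.

```python
dt, T = 1e-3, 10.0

# adjoint states: p_y starts at 1 (the unit 'impulse' applied to the miss distance),
# p_v is the miss per unit of relative lateral velocity at time to go tgo, and
# p_step accumulates the extra integrator needed because target latax is a step, not an impulse
p_y, p_v, p_step = 1.0, 0.0, 0.0
tgo = 0.0
tgos, sens_v, sens_step = [], [], []
while tgo < T:
    # adjoint of  ydot = v, vdot = f_T  is simply  dp_y/dtgo = 0,  dp_v/dtgo = p_y
    p_v += p_y * dt
    p_step += p_v * dt           # extra integrator for the step input
    tgo += dt
    tgos.append(tgo); sens_v.append(p_v); sens_step.append(p_step)

# sanity checks against the ballistic results
print(sens_v[-1], tgos[-1])                 # miss per unit velocity error: ~T
print(sens_step[-1], tgos[-1] ** 2 / 2.0)   # miss per unit latax step: ~T^2/2
```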

Pursuit

If used at all, pursuit would engage the target from the rear, having manoeuvred into the tail region

under inertial/command guidance. Range at acquisition is expected to be short, so the seeker signal

processing is not expected to introduce significant delays, and the sensor is expected to be body

mounted so there are no gimbal dynamics to concern us.

We assume the sight line can be resolved into velocity axes. There are many ways of achieving this, for

example mounting the sensor in an aerodynamically stable forebody, or using angle of attack vanes.

The missile will take a finite time to respond, so it appears that only the autopilot lag need be included,

to begin with, at least.

Figure 9 : Pursuit - Forward Model

The pursuit loop is presented in Figure 9. It is in principle very simple, but pays the penalty of limited

coverage. However, it is possible that by putting some intelligence into the remainder of the system (i.e.

command guiding on to a tail chase), such low cost weapons could be exploited effectively.


Figure 10 : Pursuit Adjoint Loop

Constant coefficient transfer functions are self-adjoint, so there is no need to re-arrange the signal flows

within the autopilot lag.

Note that we have also taken the lateral acceleration as an output. This is because we expect pursuit to

fail as a consequence of control saturation rather than miss distance. The adjoint system will therefore

be run both for miss distance and terminal latax. In order to proceed we need some representative

values of system parameters.

The target is expected to have a finite response time so we have included a first order lag to represent

this.

We shall arbitrarily select the missile speed as 600ms-1, as we should expect it to have some speed

advantage over potential targets. An autopilot bandwidth of 10Hz doesn’t seem too ambitious and a

Butterworth pole pattern appears appropriate. We know that the pursuit gain must be high, so say

10000.0 ms-2/radian (1g per milliradian), to ensure the velocity vector points at the target at all times.

The target is presumably much larger than the missile, so we shall assume a time constant of 0.1

seconds.

The closing speed is one of the parameters we wish to investigate explicitly. At this stage we wish to

investigate the validity of the control saturation criterion:

$$U_m > 2\,U_c$$

We shall consider a 350ms-1 target in head-on and tail chase, corresponding to closing speeds of 950ms-1

and 250ms-1 respectively.
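For reference, the chosen numbers may be collected and tested against the criterion. The short script below does nothing more than that; the second order Butterworth denominator is included only to show how the assumed 10 Hz autopilot lag would be encoded.

```python
import numpy as np

Um = 600.0             # missile speed, m/s
Ut = 350.0             # target speed, m/s
pursuit_gain = 1.0e4   # m/s^2 per radian of pointing error (roughly 1 g per milliradian)
tau_target = 0.1       # target response time constant, s

# 10 Hz autopilot with a Butterworth pole pattern (2nd order assumed here)
wn = 2.0 * np.pi * 10.0
autopilot_den = [1.0, np.sqrt(2.0) * wn, wn**2]      # s^2 + sqrt(2)*wn*s + wn^2

for name, Uc in (("head-on", Um + Ut), ("tail chase", Um - Ut)):
    ok = Um > 2.0 * Uc                               # the closing speed criterion
    print(f"{name}: Uc = {Uc:.0f} m/s, Um/Uc = {Um/Uc:.2f}, criterion {'met' if ok else 'not met'}")
```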


About Impulses

A further point is the representation of the impulse needed as the input to the adjoint model. Impulses

are, like weightless blocks, inextensible strings and frictionless pulleys, mathematical fictions which

make the sums less cumbersome. They have no actual existence, so that when we come to model them,

we need to know how close an approximation to the ideal is ‘good enough’ for our purposes. The best

we can do is a rectangular pulse having duration equal to a single time step.

Evidently, what constitutes ‘instantaneous’ depends on the time interval which is of current interest. In

the absence of better information, time intervals of 1/100th of this value are instantaneous, and anything

which is 100 times this value is constant. Quite often we can get a fair idea of what is going on when the

multiplication factor is reduced to 10.

As the sage put it; it is better to use an approximation and know the truth within a few percent, than to

insist on an exact answer and know nothing about the truth. Most numerical work is concerned, not

with getting the answer, but deciding on how accurately the answer is calculated.

Provided the rectangular pulse has the same effect as a true impulse, there should be no problem. As it is

the spectral content of the signal which concerns us, our best bet is to examine it in the frequency

domain. The Laplace transform of a true impulse is unity. For a rectangular pulse it is:

$$I(s) = \frac{1}{T}\int_0^{T} e^{-st}\,dt = \frac{1 - e^{-sT}}{sT}$$

where T is here the pulse duration (one time step) and the amplitude 1/T gives the pulse unit area.

The model dynamics is not concerned with frequencies above, say ωm, which we take as the upper

bound on s:

$$I(j\omega_m) \approx 1 - \frac{j\omega_m T}{2} + \frac{(j\omega_m T)^{2}}{3!} - \cdots$$

This is near enough a true impulse if:

$$\omega_m T < 2\delta$$

Where δ is the allowable error in the impulse. However, the simulation time step is typically selected on

the basis of one tenth the shortest characteristic time constant of the system, which is a more stringent

requirement than maintaining the fidelity of the impulse. So for the level of detail and accuracy

expected from a linear analysis, we are justified in using a rectangular pulse equal in duration to the

time step and having amplitude equal to the reciprocal of the time step.

In order to avoid modelling impulses altogether, it is easier to set to unity the state variable of the

integrator which has the adjoint miss distance as its input.
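A one-line check of the rectangular pulse spectrum confirms the point; the time step and model bandwidth below are examples only.

```python
import numpy as np

def pulse_spectrum_error(omega_m, T):
    """|1 - I(jw)| for a unit-area rectangular pulse of duration T, evaluated at the
    highest model frequency of interest; small values mean the pulse passes as an impulse."""
    s = 1j * omega_m
    I = (1.0 - np.exp(-s * T)) / (s * T)
    return abs(1.0 - I)

# a 1 ms pulse against dynamics of no interest above 10 Hz
print(pulse_spectrum_error(2.0 * np.pi * 10.0, 1e-3))   # ~ omega_m*T/2, about 0.03
```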


Pursuit Results

Considering first the head-on case, the miss distance results are misleading. The sensitivity to

aiming error is practically zero.

Figure 11 : Head-On Case - Miss due to Aiming Error

At first sight, pursuit appears to achieve a direct hit regardless of the aiming error.

Figure 12 : Head-On Case - Miss Distance due to Target Manoeuvre

The target manoeuvre sensitivity is not so encouraging, as it appears to indicate that a mere 1g will

cause about 45m miss distance.


Figure 13 : Final Latax Sensitivity to Aiming Error

The aiming error also has very little effect on the terminal lateral acceleration.

Figure 14: Head On Engagement Final Latax Sensitivity to Target Acceleration

As can be seen from Figure 14, target acceleration has a catastrophic effect on the terminal lateral

acceleration, requiring of the order of 100 times the target manoeuvre capability.

Quite apart from the glaringly obvious effect of target manoeuvre, all these plots have the undesirable

feature that the end state worsens the longer the fly-out. This implies that the guidance error builds up


during the flight and is only corrected near the end. The longer the period before the guidance becomes

effective, the greater the built-up guidance error. This is not a satisfactory behaviour.

Figure 15 : Tail Chase Miss Distance Sensitivity to Aiming Error

For the tail chase, the absolute values of miss distance sensitivity are much smaller than for the head-on

case. More importantly, it reduces with increased time to go, implying the majority of the course

correction occurs early in the flight. This is a much more satisfactory behaviour.

Figure 16 : Tail Chase Miss Distance Sensitivity to Target Acceleration

The target acceleration sensitivity in the tail chase is much reduced; the target would need to pull 100g

to increase the miss distance to 4m or greater.


Figure 17 : Tail Chase Final Latax Sensitivity to Aiming Error

In the tail chase the lateral acceleration sensitivity plots are similar to the miss distance sensitivity plots,

because, as we have seen, control saturation does not occur under these circumstances, contrary to what is

commonly, and erroneously, believed.

Figure 18 : Tail Chase - Sensitivity of Final Lateral Acceleration to Target Manoeuvre

We note that the peak terminal lateral acceleration is 0.6 times that of the target in the tail chase. The target

actually needs greater manoeuvre capability than the missile in order to escape.

Mark Twain once wrote that it wasn’t what people don’t know that makes them stupid, it’s what they

do know that ain’t so. These results illustrate the closing speed condition for pursuit guidance to work,

which was discovered during the initial analysis. Indeed, when operated within its feasible region it


exhibits considerable robustness to target manoeuvre. This might in part explain why in predator/prey

encounters in Nature, millions of years of evolution has settled on pursuit as the navigation law of

choice.

Proportional Navigation

Proportional navigation systems have been analysed competently elsewhere. We shall restrict ourselves

to a basic loop, with target acceleration, aiming error and the sensor angular measurement noise. In

fact, there are additional noise sources such as the gyro bias on the seeker angular velocity

measurement.

Guidance loop bandwidth is often limited by the parasitic loop due to radome aberration. This

introduces an error on the bore sight angle measurement, which depends on the orientation of the body

with respect to the sight line. These effects would need to be considered in the specific system under

consideration.

More modern systems would include more sophisticated sight line filtering than is implicit in the seeker

model used here, but to avoid the risk of divulging proprietary information, this will not be discussed.

The seeker model uses bore sight error as the estimate of sight line spin, effectively differentiating a

noisy sensor measurement.

Figure 19 : Basic Proportional Navigation Loop

In order to prevent the noise from reaching the autopilot servos it is common practice to include a low

pass filter between the seeker measurements and autopilot input. This is known as the inter-loop

coupling filter. More sophisticated seeker processing ought to render this filter redundant.


Figure 20 : PN Adjoint Loop
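To show how such a loop is exercised, the sketch below runs the adjoint of a deliberately stripped-down version of it, with the seeker, inter-loop filter and autopilot lumped into a single first order lag of time constant τ; the numbers are assumed for illustration and represent no particular system. The closing speed cancels, because the commanded acceleration is N U_c times the sight line spin while the spin itself carries a 1/(U_c t_go^2) factor. A single run produces the miss distance sensitivity to an initial lateral velocity error and to a step of target lateral acceleration, for every flight time up to the longest of interest.

```python
import numpy as np

N     = 3.0      # navigation constant (assumed)
tau   = 0.3      # lumped guidance lag, s (assumed)
dt    = 1e-3
T_max = 10.0     # longest flight time of interest, s

# forward states: y (relative lateral position), v (relative lateral velocity),
# f (achieved missile latax); ydot = v, vdot = f_T - f,
# fdot = (N*(y + tgo*v)/tgo**2 - f)/tau, the closing speed having cancelled
p = np.array([1.0, 0.0, 0.0])    # adjoint state: unit impulse on the miss distance
step_sens = 0.0                  # extra integrator: miss per unit *step* of target latax
tgo = dt                         # start just off zero to avoid the 1/tgo^2 singularity

flight_time, miss_per_vel, miss_per_step = [], [], []
while tgo < T_max:
    dp0 = N / (tau * tgo**2) * p[2]
    dp1 = p[0] + N / (tau * tgo) * p[2]
    dp2 = -p[1] - p[2] / tau
    p = p + np.array([dp0, dp1, dp2]) * dt
    step_sens += p[1] * dt
    tgo += dt
    flight_time.append(tgo)
    miss_per_vel.append(p[1])        # miss (m) per unit initial lateral velocity error (m/s)
    miss_per_step.append(step_sens)  # miss (m) per unit step of target latax (m/s^2)

i = int(5.0 / dt)                    # e.g. read off the 5 second fly-out
print(flight_time[i], miss_per_vel[i], miss_per_step[i])
```

Plotting the two sensitivity arrays against flight time gives miss distance sensitivity curves of the same kind as those shown for pursuit in Figures 11 to 18, but for the proportional navigation loop.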

Using The Adjoint Results

Proportional navigation has been thoroughly investigated elsewhere; it is by far the commonest

navigation law in use. However, our objective is to introduce ideas which are not widely known, so we

shall concentrate on how the adjoint simulation results may be used in noise studies.

The adjoint solution is an impulse response (h(t)) of the final state to the input at the time to go. We

can therefore find the effect on the miss distance due to an input y(t):

$$x(T) = \int_0^{T} h(T - t)\,y(t)\,dt$$

Assuming the noise source is uncorrelated we have, for the variance of the end state:

$$\sigma_x^{2} = \int_0^{T} h^{2}(T - t)\,\sigma_y^{2}(t)\,dt$$
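In discrete form the integral is just a weighted sum over the stored adjoint response. The shapes used below are invented solely to show the mechanics; in practice h would come from the adjoint run and the noise level from the seeker model.

```python
import numpy as np

def miss_variance(h, sigma_y, dt):
    """sigma_x^2 = integral of h^2 * sigma_y^2 dt for an uncorrelated disturbance
    whose strength sigma_y may itself vary with time to go."""
    h, sigma_y = np.asarray(h), np.asarray(sigma_y)
    return np.sum(h**2 * sigma_y**2) * dt

dt = 1e-3
tgo = np.arange(dt, 10.0, dt)
h = tgo * np.exp(-tgo)                 # purely illustrative impulse response shape
sigma_y = 1e-3 * np.ones_like(tgo)     # purely illustrative noise level
print(np.sqrt(miss_variance(h, sigma_y, dt)))   # rms miss contribution of this source
```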

The adjoint solution furnishes the means of identifying the effect of each of the noise sources on the

end state, enabling the most critical disturbances, and the time at which they have greatest effect, to be

found.

Estimation of Sight Line Angular Velocity

However the sight line spin is estimated, it must be with respect to inertial axes. There are many ways

of achieving this. Most methods require the sensor to be mounted in a gimbal, so that it is space


stabilised. This not only ensures that measurement is with respect to inertial axes, but also isolates the

sensor from the missile body motion (except for radome aberration effects). This places restrictions on

the sensor aperture, and is also very expensive. We have seen how a high bandwidth pointing loop can

serve to reduce the effect of gyro bias on sight line spin estimation. However, the cost and limitations of

the practical seeker has motivated considerable research into so-called ‘strap-down’ homing guidance.

By ‘strap-down’ we mean the sensor is rigidly mounted on the body of the missile, as in pursuit

guidance. This requires an accurate measurement of the body angular velocity, implying a much higher

quality sensor than would otherwise be required. Much effort has been applied to estimating the gyro

bias, but fundamentally the problem is akin to trying to lift oneself into the air by pulling on one’s own

boot laces.

One approach to reducing bias is to use an observer based on the sight line kinematics to improve the

quality of sight line spin estimate.

We know that the sight line spin is expected to vary with time according to:

$$\ddot{\sigma} = \frac{2\dot{\sigma}}{t_{go}} - \frac{f_y}{U_c\,t_{go}}$$

This takes feedback from the lateral acceleration (usually required for the autopilot and/or inertial

measurement unit). An observer based on these kinematics merely ensures the sight line spin estimate

does not fluctuate more rapidly than expected, so effectively filters most of the noise, but has no means

of observing biases.
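A minimal sketch of such an observer is given below. The structure, predict with the kinematic model and the measured latax then correct towards the noisy seeker measurement, is the point; in practice the gain would be scheduled with range and noise level rather than fixed as assumed here.

```python
def observer_step(sig_dot_hat, sig_dot_meas, f_y, Uc, tgo, L, dt):
    """One step of a simple sight line spin observer.  sig_dot_hat is the current
    estimate, sig_dot_meas the noisy seeker measurement, f_y the measured missile
    latax, Uc the closing speed, tgo the time to go and L the observer gain."""
    # propagate with the kinematic model: sig_ddot = 2*sig_dot/tgo - f_y/(Uc*tgo)
    sig_dot_hat += (2.0 * sig_dot_hat / tgo - f_y / (Uc * tgo)) * dt
    # correct towards the measurement (filters noise, but cannot observe a bias)
    sig_dot_hat += L * (sig_dot_meas - sig_dot_hat) * dt
    return sig_dot_hat
```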

With extensive mid-course guidance under inertial navigation, modern missiles are likely to employ

reasonable quality inertial navigation units, whose biases can potentially be estimated using GPS. If

there is an accurate estimate of the missile body orientation, perhaps it may be possible to exploit it in

the terminal guidance.

The sensor measures the orientation of the sight line with respect to the body, whilst the IMU measures

the orientation with respect to fixed (inertial) axes. The orientation is defined, for our purposes, as a

3×3 direction cosine matrix, the rows of which are the unit vectors along the sight line x, y and z axes,

referred to inertial axes. The methods used to calculate the direction cosine matrices may be found in

any reference on inertial navigation.

The rate of change of the direction cosine matrix T is given by:

$$\dot{T} = \Omega\,T$$

Where Ω is the anti-symmetric matrix:


$$\Omega = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$$

Where: ωx, ωy and ωz are the components of the angular velocity vector, in sight line axes.

We notice:

$$\Omega^{2} = \begin{bmatrix} -(\omega_y^{2} + \omega_z^{2}) & \omega_x\omega_y & \omega_x\omega_z \\ \omega_x\omega_y & -(\omega_x^{2} + \omega_z^{2}) & \omega_y\omega_z \\ \omega_x\omega_z & \omega_y\omega_z & -(\omega_x^{2} + \omega_y^{2}) \end{bmatrix} = Q$$

And:

$$\Omega^{3} = -\omega^{2}\,\Omega, \qquad \Omega^{4} = -\omega^{2}\,Q$$

where ω^2 = ω_x^2 + ω_y^2 + ω_z^2 is the squared magnitude of the angular velocity.

If the angular velocity is constant over the observation interval, the direction cosine matrix may be

calculated from its value at the start using a matrix exponential:

$$T(t) = \exp(\Omega t)\,T(0)$$

Expanding the matrix exponential:

$$\exp(\Omega t) = I + \Omega t + \frac{\Omega^{2} t^{2}}{2!} + \frac{\Omega^{3} t^{3}}{3!} + \frac{\Omega^{4} t^{4}}{4!} + \cdots$$

Collecting up terms:

$$\exp(\Omega t) = I + \frac{\sin\omega t}{\omega}\,\Omega + \frac{(1 - \cos\omega t)}{\omega^{2}}\,Q$$

Post multiplying the matrix update by T^T(0), and recalling that, for a direction cosine matrix, the

transpose is equal to the inverse:

$$\Delta = T(t)\,T^{T}(0) = \exp(\Omega t)$$

Where Δ characterises the attitude change over interval t.

We find:

$$\mathrm{Trace}(\Delta) = 3 - \frac{2\,(1 - \cos\omega t)}{\omega^{2}}\left(\omega_x^{2} + \omega_y^{2} + \omega_z^{2}\right) = 1 + 2\cos\omega t$$

From which the angular velocity magnitude is:


$$\omega = \frac{1}{t}\cos^{-1}\left(\frac{\mathrm{Trace}(\Delta) - 1}{2}\right)$$

Also:

$$\omega_x = \frac{\omega\,(\Delta_{32} - \Delta_{23})}{2\sin\omega t}, \qquad \omega_y = \frac{\omega\,(\Delta_{13} - \Delta_{31})}{2\sin\omega t}, \qquad \omega_z = \frac{\omega\,(\Delta_{21} - \Delta_{12})}{2\sin\omega t}$$
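The extraction of the angular velocity from a pair of direction cosine matrices is compact enough to state directly. The function below follows the expressions above, and the check at the end rebuilds Δ from the series expansion to confirm the round trip; the spin rate and interval are arbitrary, and the rotation over the interval is assumed to be neither zero nor a half turn.

```python
import numpy as np

def angular_velocity_from_dcms(T0, Tt, interval):
    """Recover an (assumed constant) angular velocity over 'interval' seconds from the
    direction cosine matrices at its start and end, via Delta = T(t) T(0)^T."""
    Delta = Tt @ T0.T
    w_mag = np.arccos((np.trace(Delta) - 1.0) / 2.0) / interval
    s = 2.0 * np.sin(w_mag * interval)
    wx = w_mag * (Delta[2, 1] - Delta[1, 2]) / s
    wy = w_mag * (Delta[0, 2] - Delta[2, 0]) / s
    wz = w_mag * (Delta[1, 0] - Delta[0, 1]) / s
    return np.array([wx, wy, wz])

# quick check against a known spin about the z axis
w, t = np.array([0.0, 0.0, 0.2]), 0.5
W = np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])
wm = np.linalg.norm(w)
Delta = np.eye(3) + np.sin(wm * t) / wm * W + (1.0 - np.cos(wm * t)) / wm**2 * (W @ W)
print(angular_velocity_from_dcms(np.eye(3), Delta, t))   # ~ [0, 0, 0.2]
```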

The sight line direction relative to the body ought to be unbiased, and the IMU could conceivably use

GPS, landmarks, Sun and/or star shots to obtain reference directions from which biases in the IMU

could be determined, so it might be possible to achieve a reasonably unbiased sight line spin using this

type of approach. Whether the cost of an accurate IMU is, in practice, any less than a conventional

gimbal mount remains a moot point.

Experience has shown that ‘obvious’ cost savings are only realisable at the expense of performance.

Everything is indeed possible when you don’t know what you are talking about.

Concluding Comments

Although this note was introduced with a superficial review of sensor types, nothing in the methods

presented pre-supposes anything about the actual sensor employed. That is an issue specific to the

system considered, which would be tackled once the really fundamental system constraints have been

addressed.

Modern systems engineering, rather than flowing down subsystem requirements from the primary

function of hitting the target, tends to decree from the outset which components shall be used, in the

forlorn hope the resulting system will actually work. Starting from existing hardware elements, rather

than determining what those elements should be, has become the ‘best practice’ as decreed by

management consultants who have never designed a weapon system in their lives, but presume to

dictate to the organisations who have been building successful systems for years, how to go about their

business.

In particular, we see that most of the useful insights, and top level parameter estimations, are derived

using problem-specific one-off codes, and not the reference standard model, which emerges once this

understanding has been gained. The idea that a reference standard model resembling a vastly over-

priced computer game is the essential basis for valid analysis indicates just how ignorant of

concept/feasibility work its protagonists must be.


The technically naive have encroached, and now dictate to wiser minds how systems ought to be

developed. They may indeed produce the finest databases and human/machine interfaces, but their

presumption has done untold damage to a once thriving industry.