PID Controller back up



Page 1: PID Controller back up

Laplace Transform

The Laplace transform is an integral transform perhaps second only to the Fourier transform in its utility in solving physical problems. The Laplace transform is particularly useful in solving linear ordinary differential equations such as those arising in the analysis of electronic circuits.

The (unilateral) Laplace transform L (not to be confused with the Lie derivative, also commonly denoted L) is defined by

L[f(t)](s) = ∫_0^∞ f(t) e^(−st) dt,   (1)

where f(t) is defined for t ≥ 0 (Abramowitz and Stegun 1972).

The unilateral Laplace transform is almost always what is meant by "the" Laplace transform, although a bilateral Laplace transform is sometimes also defined as

L_B[f(t)](s) = ∫_(−∞)^∞ f(t) e^(−st) dt   (2)

(Oppenheim et al. 1997). The unilateral Laplace transform is implemented in Mathematica as LaplaceTransform[f[t], t, s].
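For readers who want to check definition (1) in MATLAB (the environment used in the modeling tutorial later in this document), the following minimal sketch assumes the Symbolic Math Toolbox is available; laplace plays the role of Mathematica's LaplaceTransform, and the function f(t) = e^(−at) is just an illustrative choice:

% Minimal sketch (assumes the Symbolic Math Toolbox): Laplace transform of
% f(t) = exp(-a*t), by laplace() and directly from definition (1).
syms t s a positive
f  = exp(-a*t);
F1 = laplace(f, t, s)              % built-in transform: 1/(a + s)
F2 = int(f*exp(-s*t), t, 0, Inf)   % direct evaluation of the defining integral
simplify(F1 - F2)                  % returns 0, confirming the two agree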

The inverse Laplace transform is known as the Bromwich integral, sometimes known as the Fourier-Mellin integral (see also the related Duhamel's convolution principle).

A table of several important one-sided Laplace transforms is given below:

f(t)        L[f(t)](s)          conditions
1           1/s                 s > 0
t           1/s^2               s > 0
t^n         n!/s^(n+1)          n = 1, 2, ...; s > 0
e^(a t)     1/(s − a)           s > a
cos(ω t)    s/(s^2 + ω^2)       s > 0
sin(ω t)    ω/(s^2 + ω^2)       s > 0
J_0(t)      1/√(s^2 + 1)        s > 0
δ(t)        1
H(t)        1/s                 s > 0

Page 2: PID Controller back up


In the above table, J_0(t) is the zeroth-order Bessel function of the first kind, δ(t) is the delta function, and H(t) is the Heaviside step function.

The Laplace transform has many important properties. The Laplace transform existence theorem states that, if f(t) is piecewise continuous on every finite interval in [0, ∞) satisfying

|f(t)| ≤ M e^(a t)   (3)


Page 3: PID Controller back up

for all t ∈ [0, ∞), then L[f(t)](s) exists for all s > a. The Laplace transform is also unique, in the sense that, given two functions F(t) and G(t) with the same transform, so that

L[F(t)](s) = L[G(t)](s),   (4)

then Lerch's theorem guarantees that the integral

∫_0^a N(t) dt   (5)

vanishes for all a > 0 for a null function N(t) defined by

N(t) ≡ F(t) − G(t).   (6)

The Laplace transform is linear since

L[a f(t) + b g(t)](s) = ∫_0^∞ [a f(t) + b g(t)] e^(−st) dt   (7)
                      = a ∫_0^∞ f(t) e^(−st) dt + b ∫_0^∞ g(t) e^(−st) dt   (8)
                      = a L[f(t)](s) + b L[g(t)](s).   (9)

The Laplace transform of a convolution is given by

L[f(t) * g(t)](s) = L[f(t)](s) L[g(t)](s)   (10)
L^(−1)[L[f](s) L[g](s)](t) = f(t) * g(t).   (11)

Now consider differentiation. Let f(t) be continuously differentiable n times in [0, ∞). If |f(t)| ≤ M e^(a t), then

L[f^(n)(t)](s) = s^n L[f(t)](s) − s^(n−1) f(0) − s^(n−2) f′(0) − ··· − f^(n−1)(0).   (12)

This can be proved by integration by parts, first for L[f′(t)](s),

L[f′(t)](s) = lim_(a→∞) ∫_0^a e^(−st) f′(t) dt   (13)
            = lim_(a→∞) ( [e^(−st) f(t)]_0^a + s ∫_0^a e^(−st) f(t) dt )   (14)
            = lim_(a→∞) ( e^(−sa) f(a) − f(0) + s ∫_0^a e^(−st) f(t) dt )   (15)
            = s L[f(t)](s) − f(0),   (16)

and then, applying the same step again, for the second derivative,

L[f″(t)](s) = s L[f′(t)](s) − f′(0)   (17)
            = s² L[f(t)](s) − s f(0) − f′(0).   (18)

Continuing for higher-order derivatives then gives

L[f^(n)(t)](s) = s^n L[f(t)](s) − s^(n−1) f(0) − ··· − f^(n−1)(0).   (19)


Page 4: PID Controller back up

This property can be used to transform differential equations into algebraic equations, a procedure known as the Heaviside calculus, which can then be inverse transformed to obtain the solution. For example, applying the Laplace transform to the equation

y″(t) + a₁ y′(t) + a₀ y(t) = f(t)   (20)

gives

[s² Y(s) − s y(0) − y′(0)] + a₁ [s Y(s) − y(0)] + a₀ Y(s) = F(s)   (21)
(s² + a₁ s + a₀) Y(s) − (s + a₁) y(0) − y′(0) = F(s),   (22)

which can be rearranged to

Y(s) = [F(s) + (s + a₁) y(0) + y′(0)] / (s² + a₁ s + a₀).   (23)

If this equation can be inverse Laplace transformed, then the original differential equation is solved.
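As a concrete illustration of this Heaviside calculus, consider the hypothetical initial value problem y′(t) + 2y(t) = 1 with y(0) = 0 (not an example taken from the text). Transforming gives sY(s) + 2Y(s) = 1/s, so Y(s) = 1/(s(s + 2)), and the inverse transform recovers y(t) = (1 − e^(−2t))/2. A minimal MATLAB sketch of the same steps, again assuming the Symbolic Math Toolbox:

% Sketch of the Heaviside calculus for y' + 2y = 1, y(0) = 0 (hypothetical example).
syms t s
Y = 1/(s*(s + 2));          % algebraic step: sY - y(0) + 2Y = 1/s  =>  Y = 1/(s(s+2))
y = ilaplace(Y, s, t)       % inverse transform: returns 1/2 - exp(-2*t)/2
simplify(diff(y, t) + 2*y)  % substitute back into the ODE: returns 1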

The Laplace transform satisfies a number of useful properties. Consider exponentiation. If L[f(t)](s) = F(s) for s > b (i.e., F(s) is the Laplace transform of f(t)), then L[e^(at) f(t)](s) = F(s − a)

for s − a > b. This follows from

F(s − a) = ∫_0^∞ f(t) e^(−(s−a)t) dt   (24)
         = ∫_0^∞ [e^(at) f(t)] e^(−st) dt   (25)
         = L[e^(at) f(t)](s).   (26)

The Laplace transform also has nice properties when applied to integrals of functions. If f(t) is piecewise continuous and |f(t)| ≤ M e^(at), then

L[∫_0^t f(t′) dt′](s) = (1/s) L[f(t)](s).   (27)

Fourier Transform

The Fourier transform is a generalization of the complex Fourier series in the limit as L → ∞. Replace the discrete A_n with the continuous F(k) dk while letting n/L → k. Then change the sum to an integral, and the equations become

f(x) = ∫_(−∞)^∞ F(k) e^(2πikx) dk   (1)


Page 5: PID Controller back up

F(k) = ∫_(−∞)^∞ f(x) e^(−2πikx) dx.   (2)

Here,

F(k) = F_x[f(x)](k)   (3)
     = ∫_(−∞)^∞ f(x) e^(−2πikx) dx   (4)

is called the forward (−i) Fourier transform, and

f(x) = F_k^(−1)[F(k)](x)   (5)
     = ∫_(−∞)^∞ F(k) e^(2πikx) dk   (6)

is called the inverse (+i) Fourier transform. The notation F_x[f(x)](k) is introduced in Trott (2004, p. xxxiv), and the hat and check notations f̂(k) and f̌(x) are sometimes also used to denote the Fourier transform and inverse Fourier transform, respectively (Krantz 1999, p. 202).

Note that some authors (especially physicists) prefer to write the transform in terms of angular frequency ω ≡ 2πν instead of the oscillation frequency ν. However, this destroys the symmetry, resulting in the transform pair

H(ω) = F[h(t)](ω)   (7)
     = ∫_(−∞)^∞ h(t) e^(−iωt) dt   (8)

h(t) = F^(−1)[H(ω)](t)   (9)
     = (1/(2π)) ∫_(−∞)^∞ H(ω) e^(iωt) dω.   (10)

To restore the symmetry of the transforms, the convention

g(y) = F[f(t)](y)   (11)
     = (1/√(2π)) ∫_(−∞)^∞ f(t) e^(−iyt) dt   (12)

f(t) = F^(−1)[g(y)](t)   (13)
     = (1/√(2π)) ∫_(−∞)^∞ g(y) e^(iyt) dy   (14)

is sometimes used (Mathews and Walker 1970, p. 102).


Page 6: PID Controller back up

In general, the Fourier transform pair may be defined using two arbitrary constants a and b as

F(k) = √(|b|/(2π)^(1−a)) ∫_(−∞)^∞ f(x) e^(ibkx) dx   (15)

f(x) = √(|b|/(2π)^(1+a)) ∫_(−∞)^∞ F(k) e^(−ibkx) dk.   (16)

The Fourier transform F(k) of a function f(x) is implemented as FourierTransform[f, x, k], and different choices of a and b can be used by passing the optional FourierParameters -> {a, b} option. By default, Mathematica takes FourierParameters as {0, 1}. Unfortunately, a number of other conventions are in widespread use. For example, (0, 1) is used in modern physics, (1, −1) is used in pure mathematics and systems engineering, (1, 1) is used in probability theory for the computation of the characteristic function, (−1, 1) is used in classical physics, and (0, −2π) is used in signal processing. In this work, following Bracewell (1999, pp. 6-7), it is always assumed that a = 0 and b = −2π unless otherwise stated. This choice often results in greatly simplified transforms of common functions such as 1, cos(2πk₀x), etc.

Since any function can be split up into even and odd portions E(x) and O(x),

f(x) = (1/2)[f(x) + f(−x)] + (1/2)[f(x) − f(−x)]   (17)
     = E(x) + O(x),   (18)

a Fourier transform can always be expressed in terms of the Fourier cosine transform and Fourier sine transform as

F_x[f(x)](k) = ∫_(−∞)^∞ E(x) cos(2πkx) dx − i ∫_(−∞)^∞ O(x) sin(2πkx) dx.   (19)

A function f(x) has a forward and inverse Fourier transform such that

f(x) = ∫_(−∞)^∞ e^(2πikx) [ ∫_(−∞)^∞ f(x′) e^(−2πikx′) dx′ ] dk,   (20)

provided that


Page 7: PID Controller back up

1. ∫_(−∞)^∞ |f(x)| dx exists.

2. There are a finite number of discontinuities.

3. The function has bounded variation. A sufficient weaker condition is fulfillment of the Lipschitz condition

(Ramirez 1985, p. 29). The smoother a function (i.e., the larger the number of continuous derivatives), the more compact its Fourier transform.

The Fourier transform is linear, since if f(x) and g(x) have Fourier transforms F(k) and G(k), then

∫_(−∞)^∞ [a f(x) + b g(x)] e^(−2πikx) dx = a ∫_(−∞)^∞ f(x) e^(−2πikx) dx + b ∫_(−∞)^∞ g(x) e^(−2πikx) dx   (21)
                                         = a F(k) + b G(k).   (22)

Therefore,

F_x[a f(x) + b g(x)](k) = a F_x[f(x)](k) + b F_x[g(x)](k)   (23)
                        = a F(k) + b G(k).   (24)

The Fourier transform is also symmetric, since F(k) = F_x[f(x)](k) implies F(−k) = F_x[f(−x)](k).

Let f * g denote the convolution; then the transforms of convolutions of functions have particularly nice transforms,

F[f * g] = F[f] F[g]   (25)
F[f g] = F[f] * F[g]   (26)
F^(−1)[F(f) F(g)] = f * g   (27)
F^(−1)[F(f) * F(g)] = f g.   (28)

The first of these is derived as follows:

F[f * g](k) = ∫_(−∞)^∞ e^(−2πikx) ∫_(−∞)^∞ f(x′) g(x − x′) dx′ dx   (29)
            = ∫_(−∞)^∞ ∫_(−∞)^∞ [f(x′) e^(−2πikx′) dx′] [g(x − x′) e^(−2πik(x−x′)) dx]   (30)


Page 8: PID Controller back up

            = [∫_(−∞)^∞ f(x′) e^(−2πikx′) dx′] [∫_(−∞)^∞ g(x″) e^(−2πikx″) dx″]   (31)
            = F[f] F[g],   (32)

where x″ ≡ x − x′.
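The convolution theorem has an exact discrete counterpart that is easy to verify numerically: for zero-padded sequences, the DFT of a linear convolution equals the product of the DFTs. The sketch below is only an illustration of that discrete analogue (not of the continuous transform itself) and uses base MATLAB functions:

% Discrete check of the convolution theorem: transform of a convolution
% equals the product of the (zero-padded) transforms.
f = randn(1, 32);
g = randn(1, 32);
N  = length(f) + length(g) - 1;        % length of the linear convolution
c1 = conv(f, g);                       % direct convolution
c2 = ifft(fft(f, N) .* fft(g, N));     % multiply zero-padded spectra, transform back
max(abs(c1 - c2))                      % on the order of round-off (~1e-14)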

There is also a somewhat surprising and extremely important relationship between the autocorrelation and the Fourier transform known as the Wiener-Khinchin theorem. Let F(k) = F_x[f(x)](k), and let f̄ denote the complex conjugate of f; then the Fourier transform of the absolute square of F(k) is given by

F_k[|F(k)|²](x) = ∫_(−∞)^∞ f̄(τ) f(τ + x) dτ.   (33)

The Fourier transform of a derivative f′(x) of a function f(x) is simply related to the transform of the function itself. Consider

F_x[f′(x)](k) = ∫_(−∞)^∞ f′(x) e^(−2πikx) dx.   (34)

Now use integration by parts

∫ v du = [u v] − ∫ u dv   (35)

with

du = f′(x) dx   (36)
v = e^(−2πikx)   (37)

and

u = f(x)   (38)
dv = −2πik e^(−2πikx) dx   (39)

then

F_x[f′(x)](k) = [f(x) e^(−2πikx)]_(−∞)^∞ − ∫_(−∞)^∞ f(x) (−2πik e^(−2πikx)) dx.   (40)

The first term consists of an oscillating function times f(x). But if the function is bounded so that

lim_(x→±∞) f(x) = 0   (41)

(as any physically significant signal must be), then the term vanishes, leaving


Page 9: PID Controller back up

F_x[f′(x)](k) = 2πik ∫_(−∞)^∞ f(x) e^(−2πikx) dx   (42)
              = 2πik F_x[f(x)](k).   (43)

This process can be iterated for the nth derivative to yield

F_x[f^(n)(x)](k) = (2πik)^n F_x[f(x)](k).   (44)

The important modulation theorem of Fourier transforms allows F_x[cos(2πk₀x) f(x)](k) to be expressed in terms of F(k) = F_x[f(x)](k) as follows,

F_x[cos(2πk₀x) f(x)](k) = ∫_(−∞)^∞ f(x) cos(2πk₀x) e^(−2πikx) dx   (45)
  = (1/2) ∫_(−∞)^∞ f(x) e^(2πik₀x) e^(−2πikx) dx + (1/2) ∫_(−∞)^∞ f(x) e^(−2πik₀x) e^(−2πikx) dx   (46)
  = (1/2) [∫_(−∞)^∞ f(x) e^(−2πi(k−k₀)x) dx + ∫_(−∞)^∞ f(x) e^(−2πi(k+k₀)x) dx]   (47)
  = (1/2) [F(k − k₀) + F(k + k₀)].   (48)

Since the derivative of the Fourier transform is given by

F′(k) = (d/dk) F_x[f(x)](k) = ∫_(−∞)^∞ (−2πix) f(x) e^(−2πikx) dx,   (49)

it follows that

F′(0) = −2πi ∫_(−∞)^∞ x f(x) dx.   (50)

Iterating gives the general formula

μ_n ≡ ∫_(−∞)^∞ x^n f(x) dx   (51)
    = (i/(2π))^n F^(n)(0).   (52)

The variance of a Fourier transform is

(53)

and it is true that

(54)

If f(x) has the Fourier transform F_x[f(x)](k) = F(k), then the Fourier transform has the shift property

∫_(−∞)^∞ f(x − x₀) e^(−2πikx) dx = ∫_(−∞)^∞ f(x − x₀) e^(−2πik(x−x₀)) e^(−2πikx₀) d(x − x₀)   (55)


Page 10: PID Controller back up

                                 = e^(−2πikx₀) F(k),   (56)

so f(x − x₀) has the Fourier transform

F_x[f(x − x₀)](k) = e^(−2πikx₀) F(k).   (57)

If f(x) has a Fourier transform F_x[f(x)](k) = F(k), then the Fourier transform obeys a similarity theorem:

∫_(−∞)^∞ f(ax) e^(−2πikx) dx = (1/|a|) ∫_(−∞)^∞ f(ax) e^(−2πi(ax)(k/a)) d(ax) = (1/|a|) F(k/a),   (58)

so f(ax) has the Fourier transform

F_x[f(ax)](k) = (1/|a|) F(k/a).   (59)

The "equivalent width" of a Fourier transform is

w_f = ∫_(−∞)^∞ f(x) dx / f(0)   (60)
    = F(0) / ∫_(−∞)^∞ F(k) dk.   (61)

The "autocorrelation width" is

w_(f⋆f̄) = ∫_(−∞)^∞ (f ⋆ f̄) dx / (f ⋆ f̄)_0   (62)
        = [∫_(−∞)^∞ f dx ∫_(−∞)^∞ f̄ dx] / ∫_(−∞)^∞ f f̄ dx,   (63)

where f ⋆ g denotes the cross-correlation of f and g and f̄ is the complex conjugate of f.

Any operation on f(x) which leaves its area unchanged leaves F(0) unchanged, since

∫_(−∞)^∞ f(x) dx = F_x[f(x)](0) = F(0).   (64)

The following table summarizes some common Fourier transform pairs (under the a = 0, b = −2π convention used above):

f(x)                                              F(k) = F_x[f(x)](k)
1                                                 δ(k)
cos(2πk₀x)                                        (1/2)[δ(k − k₀) + δ(k + k₀)]
δ(x − x₀)  (delta function)                       e^(−2πikx₀)
e^(−2πk₀|x|)  (exponential function)              (1/π) k₀/(k² + k₀²)
e^(−ax²)  (Gaussian)                              √(π/a) e^(−π²k²/a)
H(x)  (Heaviside step function)                   (1/2)[δ(k) − i/(πk)]
1/x  (inverse function, principal value)          −iπ sgn(k)
(1/π)(Γ/2)/[(x − x₀)² + (Γ/2)²]  (Lorentzian)     e^(−2πikx₀ − πΓ|k|)
x H(x)  (ramp function)                           (i/(4π)) δ′(k) − 1/(4π²k²)
sin(2πk₀x)                                        (i/2)[δ(k + k₀) − δ(k − k₀)]

Page 11: PID Controller back up

In two dimensions, the Fourier transform becomes

F(k_x, k_y) = F_(x,y)[f(x, y)](k_x, k_y)   (65)
            = ∫_(−∞)^∞ ∫_(−∞)^∞ f(x, y) e^(−2πi(k_x x + k_y y)) dx dy   (66)
f(x, y)     = ∫_(−∞)^∞ ∫_(−∞)^∞ F(k_x, k_y) e^(2πi(k_x x + k_y y)) dk_x dk_y.   (67)

Similarly, the n-dimensional Fourier transform can be defined for k, x ∈ ℝⁿ by

F(k) = ∫_(−∞)^∞ ··· ∫_(−∞)^∞ f(x) e^(−2πi k·x) dⁿx.   (68)

The (unilateral) Z-transform of a sequence {a_n}, n = 0, 1, 2, ..., is defined as

Z[{a_n}](z) = Σ_(n=0)^∞ a_n z^(−n).   (1)

This definition is implemented in Mathematica as ZTransform[a, n, z]. Similarly, the inverse Z-transform is implemented as InverseZTransform[A, z, n].

"The" Z-transform generally refers to the unilateral Z-transform. Unfortunately, there are a number of other conventions. Bracewell (1999) uses the term "z-transform" (with a lower case z) to refer to the unilateral Z-transform. Girling (1987, p. 425) defines the transform in terms of samples of a continuous function. Worse yet, some authors define the Z-transform as the bilateral Z-transform.

In general, the inverse Z-transform of a sequence is not unique unless its region of convergence is specified (Zwillinger 1996, p. 545). If the Z-transform of a function is


Page 12: PID Controller back up

known analytically, the inverse Z-transform can be computed using the contour integral

a_n = (1/(2πi)) ∮_γ A(z) z^(n−1) dz,   (2)

where γ is a closed contour surrounding the origin of the complex plane in the domain of analyticity of A(z) (Zwillinger 1996, p. 545).

The unilateral transform is important in many applications because the generating function G(t) of a sequence of numbers {a_n} is given precisely by Z[{a_n}](1/t), the Z-transform of {a_n} in the variable 1/t (Germundsson 2000). In other words, the inverse Z-transform of a function gives precisely the sequence of terms in the series expansion of G(t). So, for example, the terms of the series expansion of G(t) are given by

(3)
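Although the specific example transform in (3) is not reproduced here, the correspondence between a Z-transform and a generating function is easy to check symbolically. The sketch below assumes MATLAB's Symbolic Math Toolbox (ztrans and iztrans are its unilateral Z-transform pair) and uses the illustrative sequence a_n = a^n:

% Sketch: Z-transform of a_n = a^n and its relation to the generating function.
syms n z t a
A = ztrans(a^n, n, z)             % z/(z - a) (possibly displayed in an equivalent form)
G = simplify(subs(A, z, 1/t))     % generating function: 1/(1 - a*t)
taylor(G, t, 'Order', 4)          % 1 + a*t + a^2*t^2 + a^3*t^3: the sequence reappears
iztrans(A, z, n)                  % recovers a^n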

Girling (1987) defines a variant of the unilateral Z-transform that operates on a continuous function F(t) sampled at regular intervals T,

(4)

where L denotes the Laplace transform,

(5) (6)

the one-sided shah function with period T is given by

(7)

and is the Kronecker delta, giving

(8)

An alternative equivalent definition is

(9)


Page 13: PID Controller back up

where

(10)

This definition is essentially equivalent to the usual one by taking z = e^(sT).

The following table summarizes the Z-transforms for some common functions (Girling 1987, pp. 426-427; Bracewell 1999). Here, δ_n is the Kronecker delta, H(n) is the Heaviside step function, and Li_k(z) is the polylogarithm.

a_n        Z[{a_n}](z)
δ_n        1
1          z/(z − 1)
n          z/(z − 1)²
a^n        z/(z − a)
n a^n      a z/(z − a)²
1/n!       e^(1/z)

The Z-transform of the general power function n^k can be computed analytically for k ≥ 1 as

Z[{n^k}](z) = Σ_(n=0)^∞ n^k z^(−n)   (11)
            = Li_(−k)(1/z)   (12)
            = (1/(z − 1)^(k+1)) Σ_(j=0)^(k−1) ⟨k j⟩ z^(j+1),   (13)

where the ⟨k j⟩ are Eulerian numbers and Li_n(z) is a polylogarithm. Amazingly, the Z-transforms of n^k are therefore generators for Euler's number triangle.

The Z-transform satisfies a number of important properties, including linearity

Z[a f(n) + b g(n)](z) = a Z[f(n)](z) + b Z[g(n)](z),   (14)

translation

Page 14: PID Controller back up

(15) (16) (17) (18)

scaling

Z[a^n f(n)](z) = F(z/a),   (19)

and multiplication by powers of n

Z[n f(n)](z) = −z (d/dz) F(z)   (20)
Z[n^k f(n)](z) = (−z d/dz)^k F(z)   (21)

(Girling 1987, p. 425; Zwillinger 1996, p. 544).

The discrete Fourier transform is a special case of the Z-transform with

z = e^(2πik/N),  k = 0, 1, ..., N − 1,   (22)

and a Z-transform with

z = e^(2πiαk/N)   (23)

for α ≠ ±1 is called a fractional Fourier transform.


1. What is a PID Controller?


Page 18: PID Controller back up

A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems.

A PID is the most commonly used feedback controller. A PID controller calculates an "error" value as the difference between a measured process variable and a desired setpoint. The controller attempts to minimize the error by adjusting the process control inputs.

2. PID Controller Algorithm

The PID controller calculation (algorithm) involves three separate parameters, and is accordingly sometimes called three-term control:

Proportional, P which depends on the present error;

Integral, I which depends on the accumulation of past errors;

Derivative, D which is a prediction of future errors, based on the current rate of change.

The weighted sum of these three actions is used to adjust the process via a control element such as the position of a control valve or the power supply of a heating element.

In the absence of knowledge of the underlying process, a PID controller has historically been considered to be the best controller. By tuning the three constants in the PID controller algorithm, the controller can provide control action designed for specific process requirements. The response of the controller can be described in terms of the responsiveness of the controller to an error, the degree to which the controller overshoots the setpoint, and the degree of system oscillation. Note that the use of the PID algorithm for control does not guarantee optimal control of the system or system stability.

Some applications may require using only one or two modes to provide the appropriate system control. This is achieved by setting the gain of undesired control outputs to zero. A PID controller will be called a PI, PD, P or I controller in the absence of the respective control actions. PI controllers are fairly common, since derivative action is sensitive to measurement noise, whereas the absence of an integral value may prevent the system from reaching its target value due to the control action.

3. PID Control loop basics

A familiar example of a control loop is the action taken when adjusting hot and cold faucet valves to maintain the faucet water at the desired temperature. This typically involves the mixing of two process streams, the hot and cold water. The person touches the water to sense or measure its temperature. Based on this feedback they


Page 19: PID Controller back up

perform a control action to adjust the hot and cold water valves until the process temperature stabilizes at the desired value.

Sensing water temperature is analogous to taking a measurement of the process value or process variable (PV). The desired temperature is called the setpoint (SP). The input to the process (the water valve position) is called the manipulated variable (MV). The difference between the temperature measurement and the setpoint is the error (e) and quantifies whether the water is too hot or too cold and by how much.

After measuring the temperature (PV), and then calculating the error, the controller decides when to change the tap position (MV) and by how much. When the controller first turns the valve on, it may turn the hot valve only slightly if warm water is desired, or it may open the valve all the way if very hot water is desired. This is an example of a simple proportional control. In the event that hot water does not arrive quickly, the controller may try to speed-up the process by opening up the hot water valve more-and-more as time goes by. This is an example of an integral control.

Making a change that is too large when the error is small is equivalent to a high gain controller and will lead to overshoot. If the controller were to repeatedly make changes that were too large and repeatedly overshoot the target, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the oscillations increase with time then the system is unstable, whereas if they decrease the system is stable. If the oscillations remain at a constant magnitude the system is marginally stable.

In the interest of achieving a gradual convergence at the desired temperature (SP), the controller may wish to damp the anticipated future oscillations. So in order to compensate for this effect, the controller may elect to temper their adjustments. This can be thought of as a derivative control method.

If a controller starts from a stable state at zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that impact on the process, and hence on the PV. Variables that impact on the process other than the MV are known as disturbances. Generally controllers are used to reject disturbances and/or implement setpoint changes. Changes in feedwater temperature constitute a disturbance to the faucet temperature control process.

In theory, a controller can be used to control any process which has a measurable output (PV), a known ideal value for that output (SP) and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical composition, speed and practically every other variable for which a measurement exists.


Page 20: PID Controller back up

4. PID controller theory

The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). Hence:

MV(t) = P_out + I_out + D_out,

where

Pout, Iout, and Dout are the contributions to the output from the PID controller from each of the three terms, as defined below.

4a. The Proportional term

Plot of PV vs time, for three values of Kp (Ki and Kd held constant)

The proportional term (sometimes called gain) makes a change to the output that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain.

The proportional term is given by:

P_out = Kp · e(t)

where

P_out: Proportional term of output
Kp: Proportional gain, a tuning parameter
SP: Setpoint, the desired value
PV: Process value (or process variable), the measured value
e: Error = SP − PV
t: Time or instantaneous time (the present)


Page 21: PID Controller back up

A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive (or sensitive) controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances.

Droop

A pure proportional controller will not always settle at its target value, but may retain a steady-state error. Specifically, drift of the process in the absence of control, such as cooling of a furnace towards room temperature, biases a pure proportional controller. If the drift is downward, as in cooling, then the controller will settle below the set point, hence the term "droop".

Droop is proportional to process gain and inversely proportional to proportional gain. Specifically the steady-state error is given by:

e = G / Kp

Droop is an inherent defect of purely proportional control. Droop may be mitigated by adding a compensating bias term (setting the setpoint above the true desired value), or corrected by adding an integration term (in a PI or PID controller), which effectively computes a bias adaptively.

Despite droop, both tuning theory and industrial practice indicate that it is the proportional term that should contribute the bulk of the output change.
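To make droop concrete, the sketch below simulates a purely proportional controller on a hypothetical first-order process (the plant, its gain Kplant, and time constant tau are illustrative choices, not taken from the text): with P action only, the process settles short of the setpoint, and the offset shrinks as Kp is increased.

% Sketch: steady-state offset (droop) of a P-only controller on a hypothetical
% first-order plant  dPV/dt = (-PV + Kplant*MV)/tau  with setpoint SP.
Kplant = 2; tau = 5; SP = 10; dt = 0.01; T = 0:dt:60;
for Kp = [1 5 20]
    PV = 0;
    for k = 1:numel(T)-1
        e  = SP - PV;                        % error
        MV = Kp*e;                           % proportional action only
        PV = PV + dt*(-PV + Kplant*MV)/tau;  % Euler step of the plant
    end
    fprintf('Kp = %4.1f   final PV = %6.3f   offset = %6.3f\n', Kp, PV, SP - PV);
end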

4b. The Integral term

Plot of PV vs time, for three values of Ki (Kp and Kd held constant)


Page 22: PID Controller back up

The contribution from the integral term (sometimes called reset) is proportional to both the magnitude of the error and the duration of the error. Summing the instantaneous error over time (integrating the error) gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain and added to the controller output. The magnitude of the contribution of the integral term to the overall control action is determined by the integral gain, Ki.

The integral term is given by:

I_out = Ki ∫_0^t e(τ) dτ

where

I_out: Integral term of output
Ki: Integral gain, a tuning parameter
SP: Setpoint, the desired value
PV: Process value (or process variable), the measured value
e: Error = SP − PV
t: Time or instantaneous time (the present)
τ: a dummy integration variable

The integral term (when added to the proportional term) accelerates the movement of the process towards the setpoint and eliminates the residual steady-state error that occurs with a proportional-only controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint (cross over the setpoint and then create a deviation in the other direction). For further notes regarding integral gain tuning and controller stability, see the section on loop tuning.

4c. The Derivative term

Plot of PV vs time, for three values of Kd (Kp and Ki held constant)


Page 23: PID Controller back up

The rate of change of the process error is calculated by determining the slope of the error over time (i.e., its first derivative with respect to time) and multiplying this rate of change by the derivative gain Kd. The magnitude of the contribution of the derivative term (sometimes called rate) to the overall control action is determined by the derivative gain, Kd. The derivative term is given by:

D_out = Kd · de(t)/dt

where

D_out: Derivative term of output
Kd: Derivative gain, a tuning parameter
SP: Setpoint, the desired value
PV: Process value (or process variable), the measured value
e: Error = SP − PV
t: Time or instantaneous time (the present)

The derivative term slows the rate of change of the controller output and this effect is most noticeable close to the controller setpoint. Hence, derivative control is used to reduce the magnitude of the overshoot produced by the integral component and improve the combined controller-process stability. However, differentiation of a signal amplifies noise and thus this term in the controller is highly sensitive to noise in the error term, and can cause a process to become unstable if the noise and the derivative gain are sufficiently large. Hence an approximation to a differentiator with a limited bandwidth is more commonly used. Such a circuit is known as a Phase-Lead compensator.
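One common way to realize such a band-limited differentiator is simply to low-pass filter the raw derivative term. The sketch below is my own illustration (the signal, the filter time constant Tf, and the gain values are hypothetical): a noisy error signal is differentiated both directly and through a first-order filter, and the filtered version has a much smaller peak response to the noise.

% Sketch: derivative action with first-order filtering (band-limited differentiator).
Kd = 0.5; dt = 0.01; Tf = 0.05;          % Tf: filter time constant (hypothetical choice)
alpha = Tf/(Tf + dt);                    % discrete first-order low-pass coefficient
t = 0:dt:5;
e = exp(-t) + 0.01*randn(size(t));       % decaying error plus measurement noise
d_raw  = Kd*[0, diff(e)]/dt;             % raw finite-difference derivative term
d_filt = zeros(size(e));
for k = 2:numel(e)
    d_filt(k) = alpha*d_filt(k-1) + (1 - alpha)*d_raw(k);   % filtered derivative
end
fprintf('peak |D| raw: %.2f   filtered: %.2f\n', max(abs(d_raw)), max(abs(d_filt)));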

4d. Summary

The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final form of the PID algorithm is:

u(t) = Kp e(t) + Ki ∫_0^t e(τ) dτ + Kd de(t)/dt

where the tuning parameters are:

Proportional gain, Kp

Larger values typically mean faster response since the larger the error, the larger the proportional term compensation. An excessively large proportional gain will lead to process instability and oscillation.

Integral gain, Ki

Larger values imply steady state errors are eliminated more quickly. The trade-off is larger overshoot: any negative error integrated during transient response must be integrated away by positive error before reaching steady state.


Page 24: PID Controller back up

Derivative gain, Kd

Larger values decrease overshoot, but slow down transient response and may lead to instability due to signal noise amplification in the differentiation of the error.

5. Loop tuning

Tuning a control loop is the adjustment of its control parameters (gain/proportional band, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another.

Some processes have a degree of non-linearity and so parameters that work well at full-load conditions don't work when the process is starting up from no-load; this can be corrected by gain scheduling (using different parameters in different operating regions). PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning.

PID tuning is a difficult problem: even though there are only three parameters and the scheme is in principle simple to describe, the tuning must satisfy complex criteria within the limitations of PID control. There are accordingly various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional manual methods for loop tuning.

5a. Stability

If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable, i.e. its output diverges, with or without oscillation, and is limited only by saturation or mechanical breakage. Instability is caused by excess gain, particularly in the presence of significant lag.

Generally, stability of response (the reverse of instability) is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimes marginal stability (bounded oscillation) is acceptable or desired.

5b. Optimum behavior

The optimum behavior on a process change or setpoint change varies depending on the application.

Two basic requirements are regulation (disturbance rejection – staying at a given setpoint) and command tracking (implementing setpoint changes) – these refer to how well the controlled variable tracks the desired value. Specific criteria for command


Page 25: PID Controller back up

tracking include rise time and settling time. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint.

5c. Overview of methods

There are several methods for tuning a PID loop. The most effective methods generally involve the development of some form of process model, then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively inefficient, particularly if the loops have response times on the order of minutes or longer.

The choice of method will depend largely on whether or not the loop can be taken "offline" for tuning, and the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.

Choosing a Tuning Method

Manual Tuning
  Advantages: No math required. Online method.
  Disadvantages: Requires experienced personnel.

Ziegler–Nichols
  Advantages: Proven method. Online method.
  Disadvantages: Process upset, some trial-and-error, very aggressive tuning.

Software Tools
  Advantages: Consistent tuning. Online or offline method. May include valve and sensor analysis. Allow simulation before downloading. Can support Non-Steady State (NSS) tuning.
  Disadvantages: Some cost and training involved.

Cohen–Coon
  Advantages: Good process models.
  Disadvantages: Some math. Offline method. Only good for first-order processes.

5d. Manual tuning

If the system must remain online, one tuning method is to first set the Ki and Kd values to zero. Increase Kp until the output of the loop oscillates, then set Kp to approximately half of that value for a "quarter amplitude decay" type response. Then increase Ki until any offset is corrected in sufficient time for the process. However, too much Ki will cause instability. Finally, increase Kd, if required, until the loop is acceptably quick to reach its reference after a load disturbance. However, too much Kd will cause excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot


Page 26: PID Controller back up

accept overshoot, in which case an over-damped closed-loop system is required, which will require a Kp setting significantly less than half that of the Kp setting causing oscillation.

Effects of increasing a parameter independently

Parameter | Rise time      | Overshoot      | Settling time  | Steady-state error     | Stability
Kp        | Decrease       | Increase       | Small change   | Decrease               | Degrade
Ki        | Decrease       | Increase       | Increase       | Decrease significantly | Degrade
Kd        | Minor decrease | Minor decrease | Minor decrease | No effect in theory    | Improve if Kd small

5e. Ziegler–Nichols method

Another heuristic tuning method is formally known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the Ki and Kd gains are first set to zero. The P gain is increased until it reaches the ultimate gain, Ku, at which the output of the loop starts to oscillate. Ku and the oscillation period Pu are used to set the gains as shown:

Ziegler–Nichols method

Control Type | Kp      | Ki        | Kd
P            | 0.50 Ku | –         | –
PI           | 0.45 Ku | 1.2 Kp/Pu | –
PID          | 0.60 Ku | 2 Kp/Pu   | Kp Pu/8

These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, the integral and derivative time parameters Ti and Td are only dependent on the oscillation period Pu.
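A short sketch of the PID row of the table, for the ideal parallel form: given an ultimate gain Ku and oscillation period Pu measured as described above (the numeric values below are made up purely for illustration), the Ziegler–Nichols gains follow directly.

% Sketch: Ziegler-Nichols PID gains from the ultimate gain Ku and period Pu.
Ku = 8.0;            % hypothetical ultimate gain found by raising Kp until oscillation
Pu = 2.5;            % hypothetical oscillation period, in seconds
Kp = 0.60*Ku;        % PID row of the table
Ki = 2*Kp/Pu;
Kd = Kp*Pu/8;
fprintf('Kp = %.3f, Ki = %.3f, Kd = %.3f\n', Kp, Ki, Kd);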


Page 27: PID Controller back up

5f. PID tuning software

Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages will gather the data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.

Mathematical PID loop tuning induces an impulse in the system, and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can literally take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values.

Other formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules.

Advances in automated PID Loop Tuning software also deliver algorithms for tuning PID Loops in a dynamic or Non-Steady State (NSS) scenario. The software will model the dynamics of a process, through a disturbance, and calculate PID control parameters in response.

6. Modifications to the PID algorithm

The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form.

6a. Integral windup

One common problem resulting from the ideal PID implementations is integral windup: after a large change in setpoint (say, a positive change), the integral term accumulates a significant error during the rise (windup), causing overshoot that continues to grow as this accumulated error is unwound. This problem can be addressed by the measures below (a clamping sketch follows the list):

Initializing the controller integral to a desired value

Increasing the setpoint in a suitable ramp

Disabling the integral function until the PV has entered the controllable region

Limiting the time period over which the integral error is calculated


Page 28: PID Controller back up

Preventing the integral term from accumulating above or below pre-determined bounds
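The last remedy, clamping the accumulated integral, is the simplest to add to the pseudocode given later in this document. The sketch below is a hypothetical, self-contained example: the clamp limits (i_min, i_max), gains, and the simple first-order plant are illustrative choices only.

% Sketch: PID loop with integral clamping (anti-windup); bounds are hypothetical
% and would normally be derived from the actuator's output range.
Kp = 2; Ki = 1; Kd = 0.1; dt = 0.01;
i_min = -10; i_max = 10;                          % hypothetical integral limits
integral = 0; prev_error = 0;
SP = 1; PV = 0;
for k = 1:500
    error = SP - PV;
    integral = integral + error*dt;
    integral = min(max(integral, i_min), i_max);  % clamp: prevents windup
    derivative = (error - prev_error)/dt;
    output = Kp*error + Ki*integral + Kd*derivative;
    prev_error = error;
    PV = PV + dt*(output - PV);                   % hypothetical first-order plant
end
fprintf('final PV = %.3f (setpoint %.1f)\n', PV, SP);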

6b. Freezing the integral function in case of disturbances

If a PID loop is used to control the temperature of an electric resistance furnace and the system has stabilized, then when the door is opened and something cold is put into the furnace the temperature drops below the setpoint. The integral function of the controller tends to compensate for this error by introducing another error in the positive direction. This can be avoided by freezing the integral function after the opening of the door for the time the control loop typically needs to reheat the furnace.

6c. Replacing the integral function by a model-based part

Often the time-response of the system is approximately known. Then it is an advantage to simulate this time-response with a model and to calculate some unknown parameter from the actual response of the system. If for instance the system is an electrical furnace, the response of the difference between furnace temperature and ambient temperature to changes of the electrical power will be similar to that of a simple RC low-pass filter multiplied by an unknown proportional coefficient. The actual electrical power supplied to the furnace is delayed by a low-pass filter to simulate the response of the temperature of the furnace, and then the actual temperature minus the ambient temperature is divided by this low-pass filtered electrical power. Then, the result is stabilized by another low-pass filter, leading to an estimation of the proportional coefficient. With this estimation, it is possible to calculate the required electrical power by dividing the setpoint of the temperature minus the ambient temperature by this coefficient. The result can then be used instead of the integral function. This also achieves a control error of zero in the steady state, but avoids integral windup and can give a significantly improved control action compared to an optimized PID controller. This type of controller does work properly in an open-loop situation, which would cause integral windup with an integral function. This is an advantage if, for example, the heating of a furnace has to be reduced for some time because of the failure of a heating element, or if the controller is used as an advisory system to a human operator who may not switch it to closed-loop operation. It may also be useful if the controller is inside a branch of a complex control system that may be temporarily inactive.

Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of either stiction or a deadband in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is


Page 29: PID Controller back up

accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.

The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. As a result, some PID algorithms incorporate the following modifications:

6d. Derivative of output

In this case the PID controller measures the derivative of the output quantity, rather than the derivative of the error. The output is always continuous (i.e., never has a step change). For this to be effective, the derivative of the output must have the same sign as the derivative of the error.

6e. Setpoint ramping

In this modification, the setpoint is gradually moved from its old value to a newly specified value using a linear or first order differential ramp function. This avoids the discontinuity present in a simple step change.

6f. Setpoint weighting

Setpoint weighting uses different multipliers for the error depending on which element of the controller it is used in. The error in the integral term must be the true control error to avoid steady-state control errors. This affects the controller's setpoint response. These parameters do not affect the response to load disturbances and measurement noise.

7. History

PID controllers date to 1890s governor design. PID controllers were subsequently developed in automatic ship steering. One of the earliest examples of a PID-type controller was developed by Elmer Sperry in 1911, while the first published theoretical analysis of a PID controller was by Russian American engineer Nicolas Minorsky, in (Minorsky 1922). Minorsky was designing automatic steering systems for the US Navy, and based his analysis on observations of a helmsman, observing that the helmsman controlled the ship not only based on the current error, but also on past error and current rate of change; this was then made mathematical by Minorsky. His goal was stability, not general control, which significantly simplified the problem. While proportional control provides stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to droop), which required adding the integral term. Finally, the derivative term was added to improve control.

Trials were carried out on the USS New Mexico, with the controller controlling the angular velocity (not angle) of the rudder. PI control yielded sustained yaw (angular


Page 30: PID Controller back up

error) of ±2°, while adding D yielded yaw of ±1/6°, better than most helmsmen could achieve.

The Navy ultimately did not adopt the system, due to resistance by personnel. Similar work was carried out and published by several others in the 1930s.

8. Limitations of PID control

While PID controllers are applicable to many control problems, and often perform satisfactorily without any improvements or even tuning, they can perform poorly in some applications, and do not in general provide optimal control. The fundamental difficulty with PID control is that it is a feedback system, with constant parameters, and no direct knowledge of the process, and thus overall performance is reactive and a compromise – while PID control is the best controller with no model of the process, better performance can be obtained by incorporating a model of the process.

The most significant improvement is to incorporate feed-forward control with knowledge about the system, and using the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers.

PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or hunt about the control setpoint value. They also have difficulties in the presence of non-linearities, may trade off regulation versus response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances.

8a. Linearity

Another problem faced with PID controllers is that they are linear, and in particular symmetric. Thus, performance of PID controllers in non-linear systems (such as HVAC systems) is variable. For example, in temperature control, a common use case is active heating (via a heating element) but passive cooling (heating off, but no cooling), so overshoot can only be corrected slowly – it cannot be forced downward. In this case the PID should be tuned to be overdamped, to prevent or reduce overshoot, though this reduces performance (it increases settling time).

8b. Noise in derivative

A problem with the derivative term is that small amounts of measurement or process noise can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise


Page 31: PID Controller back up

components. However, low-pass filtering and derivative control can cancel each other out, so reducing noise by instrumentation means is a much better choice. Alternatively, a nonlinear median filter may be used, which improves the filtering efficiency and practical performance. In some cases the differential band can be turned off in many systems with little loss of control. This is equivalent to using the PID controller as a PI controller.

9. Improvements

9a. Feed-forward

The control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller can be used primarily to respond to whatever difference or error remains between the setpoint (SP) and the actual value of the process variable (PV). Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response and stability.

For example, in most motion control systems, in order to accelerate a mechanical load under control, more force or torque is required from the prime mover, motor, or actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force or torque being applied by the prime mover, then it is beneficial to take the instantaneous acceleration desired for the load, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the prime mover regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive, stable and reliable control system.

9b. Other improvements

In addition to feed-forward, PID controllers are often enhanced through methods such as PID gain scheduling (changing parameters in different operating conditions), fuzzy logic or computational verb logic. Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance.


Page 32: PID Controller back up

10. Cascade control

One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. In cascade control there are two PIDs arranged with one PID controlling the set point of another. A PID controller acts as the outer loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner loop controller, which reads the output of the outer loop controller as its set point, usually controlling a more rapidly changing parameter such as flow rate or acceleration. It can be mathematically proven that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers.

11. Physical implementation of PID control

In the early history of automatic process control the PID controller was implemented as a mechanical device. These mechanical controllers used a lever, spring and a mass and were often energized by compressed air. These pneumatic controllers were once the industry standard.

Electronic analog controllers can be made from a solid-state or tube amplifier, a capacitor and a resistance. Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Nowadays, electronic controllers have largely been replaced by digital controllers implemented with microcontrollers.

Most modern PID controllers in industry are implemented in programmable logic controllers (PLCs) or as a panel-mounted digital controller. Software implementations have the advantages that they are relatively cheap and are flexible with respect to the implementation of the PID algorithm.

Variable voltages may be applied by the time proportioning form of Pulse-width modulation (PWM) – a cycle time is fixed, and variation is achieved by varying the proportion of the time during this cycle that the controller outputs +1 (or −1) instead of 0. On a digital system the possible proportions are discrete – e.g., increments of .1 second within a 2 second cycle time yields 20 possible steps: percentage increments of 5% – so there is a discretization error, but for high enough time resolution this yields satisfactory performance.
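The quantisation described above is easy to see in a small sketch; the numbers mirror the 0.1 s increment and 2 s cycle time of the example in the text, and the controller output value is made up for illustration.

% Sketch: time-proportioning PWM of a controller output in [0, 1].
cycle = 2.0;                      % fixed cycle time, s
increment = 0.1;                  % smallest realisable switching step, s (20 steps/cycle)
u = 0.473;                        % example controller output (fraction of full power)
on_time = round(u*cycle/increment)*increment;    % quantise to a multiple of the increment
duty = on_time/cycle;                            % realised duty cycle (here 0.45)
fprintf('requested %.3f -> on for %.1f s of %.1f s (duty %.2f)\n', u, on_time, cycle, duty);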


Page 33: PID Controller back up

12. Alternative nomenclature and PID forms:

12a. Ideal versus standard PID form

The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the standard form. In this form the Kp gain is applied to the I_out and D_out terms as well, yielding:

u(t) = Kp ( e(t) + (1/Ti) ∫_0^t e(τ) dτ + Td de(t)/dt )

where

Ti is the integral time
Td is the derivative time

In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which is compensated for future and past errors. The addition of the proportional and derivative components effectively predicts the error value at Td seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in Ti seconds (or samples). The resulting compensated single error value is scaled by the single gain Kp.

In the ideal parallel form, shown in the controller theory section,

u(t) = Kp e(t) + Ki ∫_0^t e(τ) dτ + Kd de(t)/dt,

the gain parameters are related to the parameters of the standard form through

Ki = Kp/Ti   and   Kd = Kp·Td.

This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the least physical interpretation, and it is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry.

12b. Laplace form of the PID controller

Sometimes it is useful to write the PID regulator in Laplace transform form:

G(s) = Kp + Ki/s + Kd s = (Kd s² + Kp s + Ki)/s

Having the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system.


Page 34: PID Controller back up

12c. PID Pole Zero Cancellation

The PID equation can be written in this form:

G(s) = Kd (s² + (Kp/Kd) s + Ki/Kd) / s

When this form is used it is easy to determine the closed loop transfer function.

If

s² + (Kp/Kd) s + Ki/Kd = s² + 2ζω₀ s + ω₀²,

then the controller zeros coincide with a plant pole pair of natural frequency ω₀ and damping ratio ζ, and those poles are cancelled from the closed-loop transfer function.

This can be very useful for removing undesired poles (in practice the cancellation is only approximate, so it should not be relied on for truly unstable poles).

12d. Series/interacting form

Another representation of the PID controller is the series, or interacting form

where the parameters are related to the parameters of the standard form through

Kp = Kc·α,   Ti = τi·α,   and   Td = τd/α,

with

α = 1 + τd/τi,

where Kc, τi, and τd are the series-form gain, integral time, and derivative time.

This form essentially consists of a PD and PI controller in series, and it made early (analog) controllers easier to build. When the controllers later became digital, many kept using the interacting form.


Page 35: PID Controller back up

12e. Discrete implementation

The analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be discretised. Approximations for first-order derivatives are made by backward finite differences. The integral term is discretised, with a sampling time Δt, as follows:

∫_0^(t_k) e(τ) dτ ≈ Σ_(i=1)^(k) e(t_i) Δt

The derivative term is approximated as:

de(t_k)/dt ≈ (e(t_k) − e(t_(k−1))) / Δt

Thus, a velocity algorithm for implementation of the discretised PID controller in an MCU is obtained by differentiating u(t), using the numerical definitions of the first and second derivative, solving for u(t_k), and finally obtaining:

u(t_k) = u(t_(k−1)) + Kp [ (1 + Δt/Ti + Td/Δt) e(t_k) − (1 + 2Td/Δt) e(t_(k−1)) + (Td/Δt) e(t_(k−2)) ]

Pseudocode

Here is a simple software loop that implements the PID algorithm in its 'ideal, parallel' form:

previous_error = 0
integral = 0
start:
  error = setpoint - actual_position
  integral = integral + (error*dt)
  derivative = (error - previous_error)/dt
  output = (Kp*error) + (Ki*integral) + (Kd*derivative)
  previous_error = error
  wait(dt)
  goto start
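For comparison with the positional pseudocode above, here is a MATLAB sketch of the velocity (incremental) algorithm from section 12e; the gains and the simple first-order plant driving it are hypothetical choices made only so the loop runs on its own.

% Sketch: velocity-form discrete PID (standard form); gains and plant are illustrative.
Kp = 2; Ti = 1.5; Td = 0.05; dt = 0.01;
SP = 1; PV = 0; u = 0;
e2 = 0; e1 = 0;                           % e(t_{k-2}), e(t_{k-1})
for k = 1:800
    e0 = SP - PV;                         % e(t_k)
    du = Kp*((1 + dt/Ti + Td/dt)*e0 - (1 + 2*Td/dt)*e1 + (Td/dt)*e2);
    u  = u + du;                          % only the change in output is computed
    e2 = e1;  e1 = e0;
    PV = PV + dt*(u - PV);                % hypothetical first-order plant
end
fprintf('PV after %.1f s: %.3f (setpoint %.1f)\n', 800*dt, PV, SP);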

13. PI controller

Basic block of a PI controller.

Page 36: PID Controller back up

A PI Controller (proportional-integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used.

The controller output is given by

u(t) = Kp Δ + Ki ∫ Δ dt,

where Δ is the error or deviation of actual measured value (PV) from the set-point (SP).

Δ = SP - PV.

A PI controller can be modelled easily in software such as Simulink using a "flow chart" box involving Laplace operators:

C(s) = G (1 + τs) / (τs)

where:

G = Kp = proportional gain
G/τ = Ki = integral gain

Setting a value for G is often a trade off between decreasing overshoot and increasing settling time.

The lack of derivative action may make the system more steady in the steady state in the case of noisy data. This is because derivative action is more sensitive to higher-frequency terms in the inputs.

Without derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be.

Introduction

The illustration below shows the characteristics of each of the proportional (P), integral (I), and derivative (D) controls, and how to use them to obtain a desired response. Consider the following unity feedback system:

Plant: A system to be controlled
Controller: Provides the excitation for the plant; designed to control the overall system behavior


Page 37: PID Controller back up

14. The three-term controller

The transfer function of the PID controller looks like the following:

C(s) = Kp + Ki/s + Kd s = (Kd s² + Kp s + Ki)/s

Kp = Proportional gain

KI = Integral gain

Kd = Derivative gain

First, let's take a look at how the PID controller works in a closed-loop system using the schematic shown above. The variable (e) represents the tracking error, the difference between the desired input value (R) and the actual output (Y). This error signal (e) will be sent to the PID controller, and the controller computes both the derivative and the integral of this error signal. The signal (u) just past the controller is now equal to the proportional gain (Kp) times the magnitude of the error plus the integral gain (Ki) times the integral of the error plus the derivative gain (Kd) times the derivative of the error.

This signal (u) will be sent to the plant, and the new output (Y) will be obtained. This new output (Y) will be sent back to the sensor again to find the new error signal (e). The controller takes this new error signal and computes its derivative and its integral again. This process goes on and on.

15. The characteristics of P, I, and D controllers

A proportional controller (Kp) will have the effect of reducing the rise time and will reduce, but never eliminate, the steady-state error. An integral control (Ki) will have the effect of eliminating the steady-state error, but it may make the transient response worse. A derivative control (Kd) will have the effect of increasing the stability of the system, reducing the overshoot, and improving the transient response. The effects of each of the controllers Kp, Kd, and Ki on a closed-loop system are summarized in the table shown below.

CL Response | Rise Time    | Overshoot | Settling Time | S-S Error
Kp          | Decrease     | Increase  | Small change  | Decrease
Ki          | Decrease     | Increase  | Increase      | Eliminate
Kd          | Small change | Decrease  | Decrease      | Small change


Page 38: PID Controller back up

Note that these correlations may not be exactly accurate, because Kp, Ki, and Kd are dependent on each other. In fact, changing one of these variables can change the effect of the other two. For this reason, the table should only be used as a reference when you are determining the values for Ki, Kp and Kd.

16. Modeling Tutorial

Matlab can be used to represent a physical system or a model. To begin, let's start with a review of how to represent a physical system as a set of differential equations.

16a. Train system

In this example, consider a toy train consisting of an engine and a car. Assuming that the train only travels in one direction, we want to apply control to the train so that it has a smooth start-up and stop, along with a constant-speed ride.

The mass of the engine and the car will be represented by M1 and M2, respectively. The two are held together by a spring, which has the stiffness coefficient of k. F represents the force applied by the engine, and the Greek letter, mu (which will also be represented by the letter u), represents the coefficient of rolling friction.

Free body diagram and Newton's law

The system can be represented by the following free body diagrams.

From Newton's law, we know that the sum of forces acting on a mass equals the mass times its acceleration. In this case, the forces acting on M1 are the spring, the friction and the force applied by the engine. The forces acting on M2 are the spring and the friction. In the vertical direction, the gravitational force is cancelled by the normal force applied by the ground, so that there will be no acceleration in the vertical direction. The equations of motion in the horizontal direction are the following:

M1 (d²x1/dt²) = F − k (x1 − x2) − μ g M1 (dx1/dt)
M2 (d²x2/dt²) = k (x1 − x2) − μ g M2 (dx2/dt)


Page 39: PID Controller back up

State-variable and output equations

This set of system equations can now be manipulated into state-variable form. Taking the state variables to be the positions and velocities of the two masses (x1, v1, x2, v2) and the input to be F, the state-variable equations will look like the following:

dx1/dt = v1
dv1/dt = −(k/M1) x1 − μ g v1 + (k/M1) x2 + (1/M1) F
dx2/dt = v2
dv2/dt = (k/M2) x1 − (k/M2) x2 − μ g v2

Let the output of the system be the velocity of the engine. Then the output equation will become:

1. Transfer function

To find the transfer function of the system, first take Laplace transforms of the above state-variable and output equations.

Using these equations, derive the transfer function Y(s)/F(s) in terms of constants.

Note: When finding the transfer function, zero initial conditions must be assumed.

The transfer function should look like the one shown below.

2. State-space

Another method to solve the problem is to use the state-space form. Four matrices A, B, C, and D characterize the system behavior and will be used to solve the problem. The state-space form obtained from the state-variable and output equations is shown below.


Matlab representation

Now we will show you how to enter the equations derived above into an m-file for Matlab. Since Matlab can not manipulate symbolic variables, let's assign numerical values to each of the variables. Let

M1 = 1 kg

M2 = 0.5 kg

k = 1 N/m

F= 1 N

u = 0.002 sec/m

g = 9.8 m/s^2

Create a new m-file and enter the following commands:

M1=1;M2=0.5;k=1;F=1;u=0.002;g=9.8;

Now you have one of two choices:

1) Use the transfer function, or

2) Use the state-space form to solve the problem.

If you choose to use the transfer function, add the following commands onto the end of the m-file which you have just created:


num=[M2 M2*u*g 1];den=[M1*M2 2*M1*M2*u*g M1*k+M1*M2*u*u*g*g+M2*k M1*k*u*g+M2*k*u*g];

If you choose to use the state-space form, add the following commands at the end of the m-file, instead of num and den matrices shown above:

A=[ 0 1 0 0; -k/M1 -u*g k/M1 0; 0 0 0 1; k/M2 0 -k/M2 -u*g]; B=[ 0; 1/M1; 0; 0]; C=[0 1 0 0]; D=[0];

Note: See the Matlab basics tutorial to learn more about entering matrices.

Continue solving the problem

Now you are ready to obtain the system output (with the addition of a few more commands). It should be noted that many operations can be done using either the transfer function or the state-space model. Furthermore, it is simple to convert between the two if the other form of representation is required.
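For example (a sketch, not part of the original m-file): with the definitions above already in place, the step response of the engine velocity can be obtained from either representation, and tf2ss/ss2tf convert between the two forms.

step(num,den)                       % transfer-function form
step(A,B,C,D)                       % state-space form
[A2,B2,C2,D2] = tf2ss(num,den);     % transfer function -> state-space
[num2,den2] = ss2tf(A,B,C,D);       % state-space -> transfer function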

16b. Root Locus Tutorial

Key Matlab commands used in this tutorial: cloop, rlocfind, rlocus, sgrid, step

Matlab commands from the control system toolbox are highlighted in red.

Closed-Loop Poles

The root locus of an (open-loop) transfer function H(s) is a plot of the locations (locus) of all possible closed loop poles with proportional gain k and unity feedback:

The closed-loop transfer function is:


and thus the poles of the closed loop system are values of s such that 1 + K H(s) = 0.

If we write H(s) = b(s)/a(s), then this equation has the form:

Let n = order of a(s) and m = order of b(s) [the order of a polynomial is the highest power of s that appears in it].

Consider all positive values of k. In the limit as k -> 0, the closed-loop poles approach the roots of a(s) = 0, which are the poles of H(s). In the limit as k -> infinity, the closed-loop poles approach the roots of b(s) = 0, which are the zeros of H(s).

No matter what we pick k to be, the closed-loop system must always have n poles, where n is the number of poles of H(s). The root locus must have n branches, each branch starts at a pole of H(s) and goes to a zero of H(s). If H(s) has more poles than zeros (as is often the case), m < n and we say that H(s) has zeros at infinity. In this case, the limit of H(s) as s -> infinity is zero. The number of zeros at infinity is n-m, the number of poles minus the number of zeros, and is the number of branches of the root locus that go to infinity (asymptotes).

Since the root locus is actually the locations of all possible closed loop poles, from the root locus we can select a gain such that our closed-loop system will perform the way we want. If any of the selected poles are on the right half plane, the closed-loop system will be unstable. The poles that are closest to the imaginary axis have the greatest influence on the closed-loop response, so even though the system has three or four poles, it may still act like a second or even first order system depending on the location(s) of the dominant pole(s).

Plotting the root locus of a transfer function

Consider an open loop system which has a transfer function of

To design a feedback controller for the system using the root locus method, suppose the design criteria are 5% overshoot and a 1 second rise time. Make a Matlab file called rl.m. Enter the transfer function and the command to plot the root locus:


num=[1 7];den=conv(conv([1 0],[1 5]),conv([1 15],[1 20]));
rlocus(num,den)
axis([-22 3 -15 15])

Choosing a value of K from the root locus

The plot above shows all possible closed-loop pole locations for a pure proportional controller. Obviously not all of those closed-loop poles will satisfy our design criteria. To determine what part of the locus is acceptable, we can use the command sgrid(Zeta,Wn) to plot lines of constant damping ratio and natural frequency. Its two arguments are the damping ratio (Zeta) and natural frequency (Wn) [these may be vectors if you want to look at a range of acceptable values]. In our problem, we need an overshoot less than 5% (which means a damping ratio Zeta of greater than 0.7) and a rise time of 1 second (which means a natural frequency Wn greater than 1.8). Enter in the Matlab command window:

zeta=0.7;Wn=1.8;sgrid(zeta, Wn)


On the plot above, the two white dotted lines at about a 45 degree angle indicate pole locations with Zeta = 0.7; in between these lines, the poles will have Zeta > 0.7 and outside of the lines Zeta < 0.7. The semicircle indicates pole locations with a natural frequency Wn = 1.8; inside the circle, Wn < 1.8 and outside the circle Wn > 1.8.

Going back to our problem, to make the overshoot less than 5%, the poles have to be in between the two white dotted lines, and to make the rise time shorter than 1 second, the poles have to be outside of the white dotted semicircle. So now we know that only the part of the locus outside the semicircle and in between the two lines is acceptable. All the poles in this region are in the left-half plane, so the closed-loop system will be stable.

From the plot above we see that part of the root locus lies inside the desired region. So in this case we need only a proportional controller to move the poles to the desired region. You can use the rlocfind command in Matlab to choose the desired poles on the locus:

[kd,poles] = rlocfind(num,den)

Click on the plot at the point where you want the closed-loop pole to be. You may want to select the points indicated in the plot below to satisfy the design criteria.

Note that since the root locus may have more than one branch, when you select a pole you may want to find out where the other poles are. Remember that they will affect the response too. From the plot above we see that all of the selected poles (the white "+" marks) are at reasonable positions. We can go ahead and use the chosen kd as our proportional controller.


Closed-loop response

In order to find the step response, you need to know the closed-loop transfer function. You could compute this using the rules of block diagrams, or let Matlab do it for you:

[numCL, denCL] = cloop((kd)*num, den)

The two arguments to the function cloop are the numerator and denominator of the open-loop system. You need to include the proportional gain that you have chosen. Unity feedback is assumed.

If you have a non-unity feedback situation, look at the help file for the Matlab function feedback, which can find the closed-loop transfer function with a gain in the feedback loop.
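As a sketch only (the sensor polynomials numS and denS below are made-up placeholders, and the four-argument numerator/denominator form of feedback is the one provided with the older toolbox used throughout this tutorial):

numS=[1];denS=[0.1 1];                           % hypothetical sensor H(s) = 1/(0.1s + 1)
[numCL,denCL] = feedback(kd*num,den,numS,denS);  % negative feedback is assumed by default
step(numCL,denCL)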

Check out the step response of your closed-loop system:

step(numCL,denCL)

As we expected, this response has an overshoot less than 5% and a rise time less than 1 second.

16c. State Space Tutorial

State-space equations
Control design using pole placement
Introducing the reference input
Observer design

Key Matlab commands used in this tutorial: acker, lsim, place, plot, rscale

Matlab commands from the control system toolbox are highlighted in red. Non-standard Matlab commands used in this tutorial are highlighted in green.


State-space equations

There are several different ways to describe a system of linear differential equations. The state-space representation is given by the equations dx/dt = A*x + B*u and y = C*x,

where x is an n by 1 vector representing the state (commonly position and velocity variables in mechanical systems), u is a scalar representing the input (commonly a force or torque in mechanical systems), and y is a scalar representing the output. The matrices A (n by n), B (n by 1), and C (1 by n) determine the relationships between the state and input and output variables. Note that there are n first-order differential equations. State space representation can also be used for systems with multiple inputs and outputs (MIMO), but we will only use single-input, single-output (SISO) systems in these tutorials.

To introduce the state space design method, we will use the magnetically suspended ball as an example. The current through the coils induces a magnetic force which can balance the force of gravity and cause the ball (which is made of a magnetic material) to be suspended in midair. The modeling of this system has been established in many control text books (including Automatic Control Systems by B. C. Kuo, the seventh edition). The equations for the system are given by:

where h is the vertical position of the ball, i is the current through the electromagnet, V is the applied voltage, M is the mass of the ball, g is gravity, L is the inductance, R is the resistance, and K is a coefficient that determines the magnetic force exerted on the ball. For simplicity, we will choose values M = 0.05 Kg, K = 0.0001, L = 0.01 H, R = 1 Ohm, g = 9.81 m/sec^2 . The system is at equilibrium (the ball is suspended in midair) whenever h = K i^2/Mg (at which point dh/dt = 0). We linearize the equations about the point h = 0.01 m (where the nominal current is about 7 amp) and get the state space equations:


where x is the set of state variables for the system (a 3x1 vector), u is the input voltage (delta V), and y (the output) is delta h. Enter the system matrices into an m-file.

A = [0 1 0; 980 0 -2.8; 0 0 -100];
B = [0; 0; 100];
C = [1 0 0];

One of the first things you want to do with the state equations is find the poles of the system; these are the values of s where det(sI - A) = 0, or the eigenvalues of the A matrix:

poles = eig(A)

You should get the following three poles:

poles =

   31.3050
  -31.3050
 -100.0000

One of the poles is in the right-half plane, which means that the system is unstable in open-loop.

To check out what happens to this unstable system when there is a nonzero initial condition, add the following lines to your m-file,

t = 0:0.01:2;
u = 0*t;
x0 = [0.005 0 0];
[y,x] = lsim(A,B,C,0,u,t,x0);
h = x(:,1);   % Delta-h (the first state) is the output of interest
plot(t,h)

and run the file again.


It looks like the distance between the ball and the electromagnet will go to infinity, but probably the ball hits the table or the floor first (and also probably goes out of the range where our linearization is valid).

Control design using pole placement

Let's build a controller for this system. The schematic of a full-state feedback system is the following:

Recall that the characteristic polynomial for this closed-loop system is the determinant of (sI-(A-BK)). Since the matrices A and B*K are both 3 by 3 matrices, there will be 3 poles for the system. By using full-state feedback we can place the poles anywhere we want. We could use the Matlab function place to find the control matrix, K, which will give the desired poles.

Before attempting this method, we have to decide where we want the closed-loop poles to be. Suppose the criteria for the controller were a settling time < 0.5 sec and an overshoot < 5%; then we might try to place the two dominant poles at -10 +/- 10i (at zeta = 0.7, or 45 degrees, with sigma = 10 > 9.2; since the 2% settling time is roughly 4.6/sigma, we need sigma > 4.6/0.5 = 9.2). The third pole we might place at -50 to start, and we can change it later depending on what the closed-loop behavior is. Remove the lsim command from your m-file and everything after it, then add the following lines to your m-file:


p1 = -10 + 10i;p2 = -10 - 10i;p3 = -50;

K = place(A,B,[p1 p2 p3]);

lsim(A-B*K,B,C,0,u,t,x0);

The overshoot is too large (there are also zeros in the transfer function which can increase the overshoot; you do not see the zeros in the state-space formulation). Try placing the poles further to the left to see if the transient response improves (this should also make the response faster).

p1 = -20 + 20i;p2 = -20 - 20i;p3 = -100;K = place(A,B,[p1 p2 p3]);lsim(A-B*K,B,C,0,u,t,x0);


This time the overshoot is smaller. Consult your textbook for further suggestions on choosing the desired closed-loop poles.

Compare the control effort required (K) in both cases. In general, the farther you move the poles, the more control effort it takes.

Note: If you want to place two or more poles at the same position, place will not work. You can use a function called acker which works similarly to place:

K = acker(A,B,[p1 p2 p3])

Introducing the reference input

Now we will take the control system as defined above and apply a step input (we choose a small value for the step, so we remain in the region where our linearization is valid). Replace t, u, and lsim in your m-file with the following:

t = 0:0.01:2; u = 0.001*ones(size(t));lsim(A-B*K,B,C,0,u,t)

The system does not track the step well at all; not only is the magnitude not one, but it is negative instead of positive!

Recall from the schematic above that we don't compare the output to the reference; instead we measure all the states, multiply by the gain vector K, and then subtract this result from the reference. There is no reason to expect that K*x will be equal to the desired output. To eliminate this problem, we can scale the reference input to make it equal to K*x_steadystate. This scale factor is often called Nbar; it is introduced as shown in the following schematic:


We can get Nbar from Matlab by using the function rscale (place the following line of code after K = ...).

Nbar=rscale(A,B,C,0,K)

Note that this function is not standard in Matlab. You will need to copy it to a new m-file to use it. Now, if we want to find the response of the system under state feedback with this introduction of the reference, we simply note the fact that the input is multiplied by this new factor, Nbar:

lsim(A-B*K,B*Nbar,C,0,u,t)

and now a step can be tracked reasonably well.

Observer design

When we can't measure all the states x (as is commonly the case), we can build an observer to estimate them, while measuring only the output y = C x. For the magnetic ball example, we will add three new, estimated states to the system. The schematic is as follows:


The observer is basically a copy of the plant; it has the same input and almost the same differential equation. An extra term compares the actual measured output y to the estimated output yhat; this will cause the estimated states xhat to approach the values of the actual states x. The error dynamics of the observer are given by the poles of (A-L*C).

First we need to choose the observer gain L. Since we want the dynamics of the observer to be much faster than the system itself, we need to place the poles at least five times farther to the left than the dominant poles of the system. If we want to use place, we need to put the three observer poles at different locations.

op1 = -100;op2 = -101;op3 = -102;

Because of the duality between controllability and observability, we can use the same technique used to find the control matrix, but replacing the matrix B by the matrix C and taking the transposes of each matrix (consult your text book for the derivation):

L = place(A',C',[op1 op2 op3])';

The equations in the block diagram above are given for the estimate xhat. It is conventional to write the combined equations for the system plus observer using the original state x plus the error state e = x - xhat. We use as state feedback u = -K*xhat. After a little bit of algebra (consult your textbook for more details), we arrive at the combined state and error equations with the full-state feedback and an observer:

At = [A - B*K  B*K;  zeros(size(A))  A - L*C];
Bt = [B*Nbar;  zeros(size(B))];
Ct = [C  zeros(size(C))];

To see how the response looks to a nonzero initial condition with no reference input, add the following lines into your m-file. We typically assume that the observer begins with a zero initial condition, xhat = 0. This gives us that the initial condition for the error is equal to the initial condition of the state.

lsim(At,Bt,Ct,0,zeros(size(t)),t,[x0 x0])


Responses of all the states are plotted below. Recall that lsim gives us x and e; to get xhat we need to compute x - e.
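A sketch of that computation, reusing the lsim call above (the variable name xhat and the plotting commands are illustrative):

[y,x] = lsim(At,Bt,Ct,0,zeros(size(t)),t,[x0 x0]);
xhat = x(:,1:3) - x(:,4:6);       % first three columns are x, last three are the error e
plot(t,x(:,1:3),'-',t,xhat,'--')  % actual states solid, estimates dashed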

Zoom in to see some detail:


The blue solid line is the response of the ball position, and the blue dotted line is its estimated state; the green solid line is the response of the ball speed, and the green dotted line is its estimated state; the red solid line is the response of the current, and the red dotted line is its estimated state.

We can see that the observer estimates the states quickly and tracks the states reasonably well in the steady-state.

The plot above can be obtained by using the plot command.

16d. Digital Control Tutorial

Key Matlab commands used in this tutorial: c2dm, pzmap, zgrid, dstep, stairs, rlocus

Note: Matlab commands from the control system toolbox are highlighted in red.

Introduction

The figure below shows the typical continuous feedback system that we have been considering so far in this tutorial. Almost all of the continuous controllers can be built using analog electronics.


The continuous controller, enclosed in the dashed square, can be replaced by a digital controller, shown below, that performs the same control task as the continuous controller. The basic difference between these controllers is that the digital system operates on discrete signals (or samples of the sensed signal) rather than on continuous signals.

Different types of signals in the above digital schematic can be represented by the following plots.


The purpose of this Digital Control Tutorial is to show you how to work with discrete functions either in transfer function or state-space form to design digital control systems.

Zero-order hold equivalence

In the above schematic of the digital control system, we see that the digital control system contains both discrete and continuous portions. When designing a digital control system, we need to find the discrete equivalent of the continuous portion so that we only need to deal with discrete functions.

For this technique, we will consider the following portion of the digital control system and rearrange as follows.

The clock connected to the D/A and A/D converters supplies a pulse every T seconds and each D/A and A/D sends a signal only when the pulse arrives. The purpose of having this pulse is to require that Hzoh(z) have only samples u(k) to work on and produce only samples of output y(k); thus, Hzoh(z) can be realized as a discrete function.

The philosophy of the design is the following. We want to find a discrete function Hzoh(z) so that for a piecewise constant input to the continuous system H(s), the sampled output of the continuous system equals the discrete output. Suppose the signal u(k) represents a sample of the input signal. There are techniques for taking this sample u(k) and holding it to produce a continuous signal uhat(t). The sketch below shows that the uhat(t) is held constant at u(k) over the interval kT to (k+1)T. This operation of holding uhat(t) constant over the sampling time is called zero-order hold.
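In z-transform terms, this zero-order-hold equivalent is commonly written as Hzoh(z) = (1 - z^-1) * Z{ H(s)/s }, where Z{ } denotes taking the z-transform of the sampled step response of H(s); the c2dm command introduced below performs this conversion numerically when given the 'zoh' option.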


The zero-order held signal uhat(t) goes through H2(s) and the A/D converter to produce the output y(k), which will be piecewise the same signal as if the continuous u(t) went through H(s) to produce the continuous output y(t).

Now we will redraw the schematic, placing Hzoh(z) in place of the continuous portion.

By placing Hzoh(z), we can design digital control systems dealing with only discrete functions.

Note: There are certain cases where the discrete response does not match the continuous response due to a hold circuit implemented in digital control systems. For information, see Lagging effect associated with the hold.


Conversion using c2dm

There is a Matlab function called c2dm that converts a given continuous system (either in transfer function or state-space form) to a discrete system using the zero-order hold operation explained above. The basic command for c2dm is one of the following:

[numDz,denDz] = c2dm(num,den,Ts,'zoh')
[F,G,H,J] = c2dm(A,B,C,D,Ts,'zoh')

The sampling time (Ts in sec/sample) should be smaller than 1/(30*BW), where BW is the closed-loop bandwidth frequency.

1. Transfer function

Suppose you have the following continuous transfer function (a mass-spring-damper model), X(s)/F(s) = 1/(M*s^2 + b*s + k), with

M = 1 kg

b = 10 N.s/m

k = 20 N/m

F(s) = 1

Assuming the closed-loop bandwidth frequency is greater than 1 rad/sec, we will choose the sampling time (Ts) equal to 1/100 sec. Now, create a new m-file and enter the following commands.

M=1;b=10;k=20;

num=[1];den=[M b k];

Ts=1/100;[numDz,denDz]=c2dm(num,den,Ts,'zoh')

Running this m-file in the command window should give you the following numDz and denDz matrices.

numDz =

1.0e-04 * 0 0.4837 0.4678

denDz =

1.0000 -1.9029 0.9048


From these matrices, the discrete transfer function can be written as X(z)/F(z) = (0.4837e-4*z + 0.4678e-4) / (z^2 - 1.9029*z + 0.9048).

Note: The numerator and denominator matrices will be represented by the descending powers of z. For more information on Matlab representation, please refer to Matlab representation.

Now you have the transfer function in discrete form.

2. State-Space

Suppose you have the following continuous state-space model

All constants are the same as before.

The following m-file converts the above continuous state-space to discrete state-space.

M=1;b=10;k=20;

A=[0 1; -k/M -b/M];

B=[ 0; 1/M]; C=[1 0];

D=[0]; Ts=1/100;[F,G,H,J] = c2dm (A,B,C,D,Ts,'zoh')


Create a new m-file and copy the above commands. Running this m-file in the Matlab command window should give you the following matrices.

F =

    0.9990    0.0095
   -0.1903    0.9039

G =

    0.0000
    0.0095

H =

     1     0

J =

     0

From these matrices, the discrete state-space model can be written as x(k+1) = F*x(k) + G*u(k), y(k) = H*x(k) + J*u(k).

Now you have the discrete time state-space model.

Note: For more information on the discrete state-space, please refer to Discrete State-Space.

Stability and transient response

For continuous systems, we know that certain behaviors result from different pole locations in the s-plane. For instance, a system is unstable when any pole is located to the right of the imaginary axis. For discrete systems, we can analyze the system behavior from different pole locations in the z-plane. The characteristics in the z-plane can be related to those in the s-plane by the expression z = e^(s*T), where:

T = Sampling time (sec/sample)

s = Location in the s-plane

z = Location in the z-plane


The figure below shows the mapping of lines of constant damping ratio (zeta) and natural frequency (Wn) from the s-plane to the z-plane using the expression shown above.

Notice that in the z-plane the stability boundary is no longer the imaginary axis but the unit circle |z| = 1. The system is stable when all poles are located inside the unit circle and unstable when any pole is located outside.

For analyzing the transient response from pole locations in the z-plane, the following three equations used in continuous system designs are still applicable.

where:

zeta = damping ratio
Wn = natural frequency (rad/sec)
Ts = settling time
Tr = rise time
Mp = maximum overshoot

Important: The natural frequency (Wn) in the z-plane has units of rad/sample, but when you use the equations shown above, Wn must be in units of rad/sec.

Suppose we have the following discrete transfer function


Create a new m-file and enter the following commands. Running this m-file in the command window gives you the following plot with the lines of constant damping ratio and natural frequency.

numDz=[1];denDz=[1 -0.3 0.5];

pzmap(numDz,denDz)
axis([-1 1 -1 1])
zgrid

From this plot, we see that the poles are located approximately at a natural frequency of 9pi/20T (rad/sample) and a damping ratio of 0.25. Assuming that we have a sampling time of 1/20 sec (which leads to Wn = 28.2 rad/sec) and using the three equations shown above, we can determine that this system should have a rise time of 0.06 sec, a settling time of 0.65 sec, and a maximum overshoot of 45% (0.45 more than the steady-state value). Let's obtain the step response and see if these are correct. Add the following commands to the above m-file and rerun it in the command window. You should get the following step response.

[x] = dstep (numDz,denDz,51);t = 0:0.05:2.5;stairs (t,x)
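As a numerical cross-check (a sketch using the common second-order approximations Tr ≈ 1.8/Wn, Tsettle ≈ 4.6/(zeta*Wn), and Mp = exp(-zeta*pi/sqrt(1 - zeta^2)), which are the kind of relations referred to above; the variable names here are illustrative):

zeta = 0.25; Wn = (9*pi/20)/(1/20);     % = 28.3 rad/sec for T = 1/20 sec/sample
Tr = 1.8/Wn                             % about 0.06 sec rise time
Tsettle = 4.6/(zeta*Wn)                 % about 0.65 sec settling time
Mp = exp(-zeta*pi/sqrt(1 - zeta^2))     % about 0.45, i.e. 45% overshoot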


As you can see from the plot, the rise time, the settling time, and the overshoot all came out to be what we expected. This shows that we can use the locations of the poles and the above three equations to analyze the transient response of the system.

For more analysis on the pole locations and transient response, see Transient Response.

Discrete Root-Locus

The root locus is the locus of points where the roots of the characteristic equation can be found as a single gain is varied from zero to infinity. The characteristic equation of a unity feedback system is 1 + K G(z) Hzoh(z) = 0,

where G(z) is the compensator implemented in the digital controller and Hzoh(z) is the plant transfer function in z.

The mechanics of drawing the root loci are exactly the same in the z-plane as in the s-plane. Recall that in the continuous Root Locus Tutorial we used the Matlab function sgrid to find the region of the root locus that gives an acceptable gain (K). For the discrete root-locus analysis, we use the function zgrid, which has the same characteristics as sgrid. The command zgrid(zeta, Wn) draws lines of constant damping ratio (zeta) and natural frequency (Wn).

Suppose we have the following discrete transfer function, (z - 0.3)/(z^2 - 1.6*z + 0.7),

and the requirements of having a damping ratio greater than 0.6 and a natural frequency greater than 0.4 rad/sample (these can be found from the design requirements, the sampling time (sec/sample), and the three equations shown in the previous section). The following commands draw the root locus with the lines of constant damping ratio and natural frequency. Create a new m-file and enter the following commands. Running this m-file should give you the following root-locus plot.

numDz=[1 -0.3];denDz=[1 -1.6 0.7];

rlocus(numDz,denDz)
axis([-1 1 -1 1])

zeta=0.4;Wn=0.3;zgrid (zeta,Wn)

From this plot, you should see that the system is stable because all poles are located inside the unit circle. You also see two dotted lines of constant damping ratio and natural frequency: the natural frequency is greater than 0.3 outside the constant-Wn line, and the damping ratio is greater than 0.4 inside the constant-zeta line. In this example, part of the root locus does lie in the desired region, so a gain (K) chosen from one of the loci in the desired region should give you a response that satisfies the design requirements.
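As in the continuous root locus tutorial, a sketch of how such a gain could be picked off the plot (rlocfind used exactly as before):

[K,poles] = rlocfind(numDz,denDz)   % click on a point of the locus inside the desired region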

A resistive transducer is a device that responds to a change in some physical quantity (such as light, temperature, or force) with a change in its resistance. These transducers do NOT generate electricity. Examples include:

Device                     Action                                         Where used

Light Dependent Resistor   Resistance falls with increasing light level   Light operated switches

Thermistor                 Resistance falls with increased temperature    Electronic thermometers

Strain gauge               Resistance changes with force                  Sensor in an electronic balance

Moisture detector          Resistance falls when wet                      Damp meter


 These are called passive devices.  (Active transducers do generate electricity from other energy sources, or have a power supply.)

Light Dependent Resistors

The light dependent resistor consists of a length of material (cadmium sulphide) whose resistance changes according to the light level: the brighter the light, the lower the resistance.

We can show the way the resistance varies with light level as a graph:

The first graph shows us the variation using a linear scale.   The graph on the right shows the plot as a logarithmic plot, which comes up as a straight line.  Logarithmic plots are useful for compressing scales.

LDRs are used for:

Smoke detection
Automatic lighting
Counting
Alarm systems


Resistive components can get hot when excessive current is flowing through them, and this can impair their function, or damage them.  This can be prevented by connecting a current limiting resistor in series, as shown in the picture below. 

 

Thermistors

The most common type of thermistor that we use has a resistance that falls as the temperature rises.  It is referred to as a negative temperature coefficient device.  A positive temperature coefficient device has a resistance that increases with temperature.

The graph of resistance against temperature is like this.

 

The resistance on this graph is on a logarithmic scale, as there is a large range of values. 

The LDR is most commonly used in a potential divider circuit. 


Potential Divider

Although it is simple, the potential divider is a very useful circuit.  In its simplest form it is two resistors in series with an input voltage Vs across the ends. 

An output voltage Vout is obtained from a junction between the two resistors.

The potential divider circuit looks like this:

You need to learn this equation.  It is very useful.
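The equation is the standard potential divider relation (taking Vout across R2, as described in the next paragraph):

Vout = Vs * R2 / (R1 + R2)

A quick numerical check in Matlab (the component values here are made up):

Vs = 9; R1 = 10e3; R2 = 5e3;     % assumed values: 9 V supply, 10 k and 5 k resistors
Vout = Vs*R2/(R1 + R2)           % gives 3 V, one third of the supply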

This result can be thought of as the output voltage being the same fraction of the input voltage as R2 is the fraction of the total resistance.  Look at this circuit for the next example:


Capacitive Transducers

AC instrumentation transducers

Just as devices have been made to measure certain physical quantities and repeat that information in the form of DC electrical signals (thermocouples, strain gauges, pH probes, etc.), special devices have been made that do the same with AC.

It is often necessary to be able to detect and transmit the physical position of mechanical parts via electrical signals. This is especially true in the fields of automated machine tool control and robotics. A simple and easy way to do this is with a potentiometer as shown below.

Potentiometer tap voltage indicates position of an object slaved to the shaft.

However, potentiometers have their own unique problems. For one, they rely on physical contact between the “wiper” and the resistance strip, which means they suffer the effects of physical wear over time. As potentiometers wear, their proportional output versus shaft position becomes less and less certain. You might have already experienced this effect when adjusting the volume control on an old radio: when twisting the knob, you might hear “scratching” sounds coming out of the speakers. Those noises are the result of poor wiper contact in the volume control potentiometer.

Also, this physical contact between wiper and strip creates the possibility of arcing (sparking) between the two as the wiper is moved. With most potentiometer circuits, the current is so low that wiper arcing is negligible, but it is a possibility to be considered. If the potentiometer is to be operated in an environment where combustible vapor or dust is present, this potential for arcing translates into a potential for an explosion!

Using AC instead of DC, we are able to completely avoid sliding contact between parts if we use a variable transformer instead of a potentiometer. Devices made for this purpose are called LVDT's, which stands for Linear Variable Differential Transformers. The design of an LVDT looks like this: (Figure below)


AC output of linear variable differential transformer (LVDT) indicates core position.

Obviously, this device is a transformer: it has a single primary winding powered by an external source of AC voltage, and two secondary windings connected in series-bucking fashion. It is variable because the core is free to move between the windings. It is differential because of the way the two secondary windings are connected. Being arranged to oppose each other (180° out of phase) means that the output of this device will be the difference between the voltage output of the two secondary windings. When the core is centered and both windings are outputting the same voltage, the net result at the output terminals will be zero volts. It is called linear because the core's freedom of motion is straight-line.

The AC voltage output by an LVDT indicates the position of the movable core. Zero volts means that the core is centered. The further away the core is from center position, the greater percentage of input (“excitation”) voltage will be seen at the output. The phase of the output voltage relative to the excitation voltage indicates which direction from center the core is offset.

The primary advantage of an LVDT over a potentiometer for position sensing is the absence of physical contact between the moving and stationary parts. The core does not contact the wire windings, but slides in and out within a nonconducting tube. Thus, the LVDT does not “wear” like a potentiometer, nor is there the possibility of creating an arc.

Excitation of the LVDT is typically 10 volts RMS or less, at frequencies ranging from power line to the high audio (20 kHz) range. One potential disadvantage of the LVDT is its response time, which is mostly dependent on the frequency of the AC voltage source. If very quick response times are desired, the frequency must be higher to allow whatever voltage-sensing circuits enough cycles of AC to determine voltage level as the core is moved. To illustrate the potential problem here, imagine this exaggerated scenario: an LVDT powered by a 60 Hz voltage source, with the core being moved in and out hundreds of times per second. The output of this LVDT wouldn't even look like a sine wave because the core would be moved throughout its range of motion before the AC source voltage could complete a single cycle! It would be almost impossible to determine instantaneous core position if it moves faster than the instantaneous source voltage does.

A variation on the LVDT is the RVDT, or Rotary Variable Differential Transformer. This device works on almost the same principle, except that the core revolves on a shaft instead of moving in a straight line. RVDT's can be constructed for limited motion of up to 360° (full-circle).

Continuing with this principle, we have what is known as a Synchro or Selsyn, which is a device constructed a lot like a wound-rotor polyphase AC motor or generator. The rotor is free to revolve a full 360°, just like a motor. On the rotor is a single winding connected to a source of AC voltage, much like the primary winding of an LVDT. The stator windings are usually in the form of a three-phase Y, although synchros with more than three phases have been built. (Figure below) A device with a two-phase stator is known as a resolver. A resolver produces sine and cosine outputs which indicate shaft position.

A synchro is wound with a three-phase stator winding, and a rotating field. A resolver has a two-phase stator.

Voltages induced in the stator windings from the rotor's AC excitation are not phase-shifted by 120° as in a real three-phase generator. If the rotor were energized with DC current rather than AC and the shaft spun continuously, then the voltages would be true three-phase. But this is not how a synchro is designed to be operated. Rather, this is a position-sensing device much like an RVDT, except that its output signal is much more definite. With the rotor energized by AC, the stator winding voltages will be proportional in magnitude to the angular position of the rotor, phase either 0° or 180° shifted, like a regular LVDT or RVDT. You could think of it as a transformer with one primary winding and three secondary windings, each secondary winding oriented at a unique angle. As the rotor is slowly turned, each winding in turn will line up directly with the rotor, producing full voltage, while the other windings will produce something less than full voltage.

Synchros are often used in pairs. With their rotors connected in parallel and energized by the same AC voltage source, their shafts will match position to a high degree of accuracy: (Figure below)

Synchro shafts are slaved to each other. Rotating one moves the other.

Such “transmitter/receiver” pairs have been used on ships to relay rudder position, or to relay navigational gyro position over fairly long distances. The only difference between the “transmitter” and the “receiver” is which one gets turned by an outside force. The “receiver” can just as easily be used as the “transmitter” by forcing its shaft to turn and letting the synchro on the left match position.

If the receiver's rotor is left unpowered, it will act as a position-error detector, generating an AC voltage at the rotor if the shaft is anything other than 90° or 270° shifted from the shaft position of the transmitter. The receiver rotor will no longer generate any torque and consequently will no longer automatically match position with the transmitter's: (Figure below)


AC voltmeter registers voltage if the receiver rotor is not rotated exactly 90 or 270 degrees from the transmitter rotor.

This can be thought of almost as a sort of bridge circuit that achieves balance only if the receiver shaft is brought to one of two (matching) positions with the transmitter shaft.

One rather ingenious application of the synchro is in the creation of a phase-shifting device, provided that the stator is energized by three-phase AC: (Figure below)

Full rotation of the rotor will smoothly shift the phase from 0° all the way to 360° (back to 0°).

As the synchro's rotor is turned, the rotor coil will progressively align with each stator coil, their respective magnetic fields being 120° phase-shifted from one another. In between those positions, these phase-shifted fields will mix to produce a rotor voltage somewhere between 0°, 120°, or 240° shift. The practical result is a device capable of providing an infinitely variable-phase AC voltage with the twist of a knob (attached to the rotor shaft).

A synchro or a resolver may measure linear motion if geared with a rack and pinion mechanism. A linear movement of a few inches (or cm) resulting in multiple revolutions of the synchro (resolver) generates a train of sinewaves. An Inductosyn® is a linear version of the resolver. It outputs signals like a resolver, though it bears only a slight physical resemblance to one.

The Inductosyn consists of two parts: a fixed serpentine winding having a 0.1 in or 2 mm pitch, and a movable winding known as a slider. (Figure below) The slider has a pair of windings having the same pitch as the fixed winding. The slider windings are offset by a quarter pitch so both sine and cosine waves are produced by movement. One slider winding is adequate for counting pulses, but provides no direction information. The 2-phase windings provide direction information in the phasing of the sine and cosine waves. Movement by one pitch produces a cycle of sine and cosine waves; multiple pitches produce a train of waves.


Inductosyn: (a) Fixed serpentine winding, (b) movable slider 2-phase windings.

17. Modeling Examples.

17a. Example 1: Modeling a Cruise Control System

This is a simple example of the modeling and control of a first order system. This model takes inertia and damping into account, and simple controllers are designed. This modeling example includes:

1. Physical setup and system equations
2. Design requirements
3. Matlab representation
4. Open-loop response
5. Closed-loop transfer function

Physical setup and system equations

The model of the cruise control system is relatively simple. If the inertia of the wheels is neglected, and it is assumed that friction (which is proportional to the car's speed) is what is opposing the motion of the car, then the problem is reduced to the simple mass and damper system shown below.


Using Newton's law, the modelling equation for this system becomes:

m dv/dt + b v = u        (1)

where u is the force from the engine. For this example, let's assume that

m = 1000 kg
b = 50 N.sec/m
u = 500 N

Design requirements

The next step in modelling this system is to come up with some design criteria. When the engine gives a 500 Newton force, the car will reach a maximum velocity of 10 m/s (22 mph). An automobile should be able to accelerate up to that speed in less than 5 seconds. Since this is only a cruise control system, a 10% overshoot on the velocity will not do much damage. A 2% steady-state error is also acceptable for the same reason.
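As a quick check (assuming the first-order model m dv/dt + b v = u given above), at steady state dv/dt = 0, so the steady-state speed is v = u/b = 500/50 = 10 m/s, which is where the 10 m/s maximum velocity comes from.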

Keeping the above in mind, we have proposed the following design criteria for this problem:

Rise time < 5 sec
Overshoot < 10%
Steady-state error < 2%

3. Matlab representation

3.1. Transfer Function

To find the transfer function of the above system, we need to take the Laplace transform of the modelling equation (1). When finding the transfer function, zero initial conditions must be assumed. The Laplace transform of the modelling equation is shown below.


Since our output is the velocity, let's substitute V(s) in terms of Y(s)

The transfer function of the system becomes Y(s)/U(s) = 1/(m s + b).

To solve this problem using Matlab, copy the following commands into a new m-file:

m=1000;b=50;u=500;num=[1];den=[m b];

These commands will later be used to find the open-loop response of the system to a step input. But before getting into that, let's take a look at another representation, the state-space.

3.2. State-Space

We can rewrite the first-order modelling equation (1) as the state-space model.

To use Matlab to solve this problem, create a new m-file and copy the following commands:

m = 1000;b = 50;u = 500;A = [-b/m];B = [1/m];C = [1];D = 0;

Note: It is possible to convert from the state-space representation to the transfer function or vice versa using Matlab.

Open-loop response

Now let's see how the open-loop system responds to a step input. Add the following command onto the end of the m-file written for the transfer function (the m-file with the num and den matrices) and run it in the Matlab command window:

step (u*num,den)


You should get the following plot:

To use the m-file written for the state-space (the m-file with A, B, C, D matrices), add the following command at the end of the m-file and run it in the Matlab command window:

step (A,u*B,C,D)

You should get the same plot as the one shown above.

From the plot, we see that the vehicle takes more than 100 seconds to reach the steady-state speed of 10 m/s. This does not satisfy our rise time criterion of less than 5 seconds.

Closed-loop transfer function

To solve this problem, a unity feedback controller will be added to improve the system performance. The figure shown below is the block diagram of a typical unity feedback system.

The transfer function of the plant is the transfer function derived above, Y(s)/U(s) = 1/(m s + b). The controller will be designed to satisfy all design criteria. Four different methods to design the controller are listed at the bottom of this page. You may choose PID, root locus, frequency response, or state-space.
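For reference, with C(s) denoting whichever controller is chosen and P(s) = 1/(m s + b) the plant above, the standard unity-feedback closed-loop transfer function that all four design methods work with is Y(s)/R(s) = C(s)P(s) / (1 + C(s)P(s)). (C(s) and P(s) are just notation introduced here; they do not appear in the m-files.)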

Illustrative Application:

Suppose we have a simple mass, spring, and damper problem.


The modelling equation of this system is

M d2x/dt2 + b dx/dt + k x = F        (1)

Taking the Laplace transform of the modelling equation (1) gives (M s^2 + b s + k) X(s) = F(s).

The transfer function between the displacement X(s) and the input F(s) then becomes X(s)/F(s) = 1/(M s^2 + b s + k).

Let

M = 1 kg
b = 10 N.s/m
k = 20 N/m
F(s) = 1

Plugging these values into the above transfer function gives X(s)/F(s) = 1/(s^2 + 10 s + 20).

The goal of this problem is to show you how each of Kp, Ki and Kd contributes to obtain

Fast rise time
Minimum overshoot
No steady-state error

Open-loop step response

Let's first view the open-loop step response. Create a new m-file and add in the following code:

num=1;den=[1 10 20];step(num,den)

Running this m-file in the Matlab command window should give you the plot shown below.


The DC gain of the plant transfer function is 1/20, so 0.05 is the final value of the output to a unit step input. This corresponds to a steady-state error of 0.95, quite large indeed. Furthermore, the rise time is about one second, and the settling time is about 1.5 seconds. Let's design a controller that will reduce the rise time, reduce the settling time, and eliminate the steady-state error.
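That 1/20 figure can be checked directly from the num and den vectors already in the m-file (a quick sketch; polyval simply evaluates each polynomial at s = 0):

dc = polyval(num,0)/polyval(den,0)    % returns 0.05, the final value for a unit step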

Proportional control

From the table shown above, we see that the proportional controller (Kp) reduces the rise time, increases the overshoot, and reduces the steady-state error. The closed-loop transfer function of the above system with a proportional controller is Kp / (s^2 + 10 s + (20 + Kp)):

Let the proportional gain (Kp) equal 300 and change the m-file to the following:

Kp=300;num=[Kp];den=[1 10 20+Kp];

t=0:0.01:2;step(num,den,t)

Running this m-file in the Matlab command window gives you the following plot.


Note: The Matlab function called cloop can be used to obtain a closed-loop transfer function directly from the open-loop transfer function (instead of obtaining closed-loop transfer function by hand). The following m-file uses the cloop command that should give you the identical plot as the one shown above.

num=1;den=[1 10 20];Kp=300;

[numCL,denCL]=cloop(Kp*num,den);t=0:0.01:2;step(numCL, denCL,t)

The above plot shows that the proportional controller reduced both the rise time and the steady-state error, increased the overshoot, and decreased the settling time by a small amount.

Proportional-Derivative control

From the table shown above, we see that the derivative controller (Kd) reduces both the overshoot and the settling time. The closed-loop transfer function of the given system with a PD controller is (Kd s + Kp) / (s^2 + (10 + Kd) s + (20 + Kp)):

Let Kp equal 300 as before and let Kd equal 10. Enter the following commands into an m-file and run it in the Matlab command window.

Kp=300;Kd=10;


num=[Kd Kp];den=[1 10+Kd 20+Kp];

t=0:0.01:2;step(num,den,t)

This plot shows that the derivative controller reduced both the overshoot and the settling time, and had a small effect on the rise time and the steady-state error.

Proportional-Integral control

Before going into a PID control, let's take a look at PI control. From the table, we see that an integral controller (Ki) decreases the rise time, increases both the overshoot and the settling time, and eliminates the steady-state error. For the given system, the closed-loop transfer function with a PI control is (Kp s + Ki) / (s^3 + 10 s^2 + (20 + Kp) s + Ki):

Let's reduce Kp to 30, and let Ki equal 70. Create a new m-file and enter the following commands.

Kp=30;Ki=70;num=[Kp Ki];den=[1 10 20+Kp Ki];

t=0:0.01:2;step(num,den,t)

Run this m-file in the Matlab command window, and you should get the following plot.


The proportional gain (Kp) is reduced because the integral controller also reduces the rise time and increases the overshoot, just as the proportional controller does (a doubled effect). The above response shows that the integral controller eliminated the steady-state error.

Proportional-Integral-Derivative control

Considering a PID controller, the closed-loop transfer function of the given system is (Kd s^2 + Kp s + Ki) / (s^3 + (10 + Kd) s^2 + (20 + Kp) s + Ki):

After several trial-and-error runs, the gains Kp=350, Ki=300, and Kd=50 provided the desired response. To confirm, enter the following commands into an m-file and run it in the command window. You should get the following step response.

Kp=350;Ki=300;Kd=50;

num=[Kd Kp Ki];den=[1 10+Kd 20+Kp Ki];

t=0:0.01:2;step(num,den,t)


Now we have obtained a closed-loop system with no overshoot, fast rise time, and no steady-state error.

General tips for designing a PID controller

When you are designing a PID controller for a given system, follow the steps shown below to obtain a desired response.

1. Obtain an open-loop response and determine what needs to be improved
2. Add a proportional control to improve the rise time
3. Add a derivative control to improve the overshoot
4. Add an integral control to eliminate the steady-state error
5. Adjust each of Kp, Ki, and Kd until you obtain a desired overall response. You can always refer to the table shown in this "PID Tutorial" page to find out which controller controls what characteristics.

Lastly, keep in mind that you do not need to implement all three controllers (proportional, derivative, and integral) in a single system if it is not necessary. For example, if a PI controller gives a good enough response, then you don't need to add a derivative controller to the system. Keep the controller as simple as possible.


17c. Example 3: DC Motor Speed Modeling

A DC motor has second order speed dynamics when mechanical properties such as inertia and damping as well as electrical properties such as inductance and resistance are taken into account. The controller's objective is to maintain the speed of rotation of the motor shaft with a particular step response. This electromechanical system example demonstrates slightly more complicated dynamics than does the cruise control example, requiring more sophisticated controllers.

A common actuator in control systems is the DC motor. It directly provides rotary motion and, coupled with wheels or drums and cables, can provide translational motion. The electric circuit of the armature and the free body diagram of the rotor are shown in the following figure:

Physical setup and system equations

For this example, we will assume the following values for the physical parameters. These values were derived by experiment from an actual motor under test:

moment of inertia of the rotor (J) = 0.01 kg.m^2/s^2
damping ratio of the mechanical system (b) = 0.1 Nms
electromotive force constant (K = Ke = Kt) = 0.01 Nm/Amp
electric resistance (R) = 1 ohm
electric inductance (L) = 0.5 H
input (V): source voltage
output (theta dot): rotational speed of the shaft

The rotor and shaft are assumed to be rigid. The motor torque, T, is related to the armature current, i, by a constant factor Kt, and the back emf, e, is related to the rotational velocity, as expressed by the following equations: T = Kt*i and e = Ke*(d theta/dt).


In SI units, Kt (armature constant) is equal to Ke (motor constant).

From the figure above we can write the following equations, based on Newton's law combined with Kirchhoff's law: J*(d^2 theta/dt^2) + b*(d theta/dt) = K*i and L*(di/dt) + R*i = V - K*(d theta/dt).

1. Transfer Function

Using Laplace transforms, the above modeling equations can be expressed in terms of s.

By eliminating I (s) we can get the following open-loop transfer function, where the rotational speed is the output and the voltage is the input.
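Consistent with the num and den vectors entered in the Matlab section below, this open-loop transfer function has the form

thetadot(s)/V(s) = K / ( (J s + b)(L s + R) + K^2 )

(expanding the denominator gives the coefficients J*L, J*R + L*b, and b*R + K^2 used in the den vector).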

2. State-Space

In the state-space form, the equations above can be expressed by choosing the rotational speed and electric current as the state variables and the voltage as an input. The output is chosen to be the rotational speed.

Design requirements

First, the uncompensated motor can only rotate at 0.1 rad/sec with an input voltage of 1 Volt (this will be demonstrated later when the open-loop response is simulated). Since the most basic requirement of a motor is that it should rotate at the desired speed, the steady-state error of the motor speed should be less than 1%. The other performance requirement is that the motor must accelerate to its steady-state speed as soon as it turns on. In this case, we want it to have a settling time of 2 seconds. Since a speed faster than the reference may damage the equipment, we want to have an overshoot of less than 5%.


If we simulate the reference input (r) by a unit step input, then the motor speed output should have:

Settling time less than 2 seconds
Overshoot less than 5%
Steady-state error less than 1%

Matlab representation and open-loop response

1. Transfer Function

We can represent the above transfer function in Matlab by defining the numerator and denominator matrices as follows:

Create a new m-file and enter the following commands:

J=0.01;b=0.1;K=0.01;R=1;L=0.5;
num=K;den=[(J*L) ((J*R)+(L*b)) ((b*R)+K^2)];

To see how the original open-loop system performs, add the following commands onto the end of the m-file and run it in the Matlab command window:

step(num,den,0:0.1:3)
title('Step Response for the Open Loop System')

The following plot shows the outcome:


From the plot we see that when 1 volt is applied to the system, the motor can only achieve a maximum speed of 0.1 rad/sec, ten times smaller than our desired speed. Also, it takes the motor 3 seconds to reach its steady-state speed; this does not satisfy our 2 seconds settling time criterion.

2. State-Space

We can also represent the system using the state-space equations. Try the following commands in a new m-file.

J=0.01;b=0.1;K=0.01;R=1;L=0.5;
A=[-b/J K/J; -K/L -R/L];
B=[0; 1/L];
C=[1 0];
D=0;
step(A, B, C, D)

Run this m-file in the Matlab command window, and you should get the same output as the one shown above.

17d. Example 4: Modeling DC Motor Position

Motor Position Control

The model of the position dynamics of a DC motor is third order, because measuring position is equivalent to integrating speed, which adds an order to the motor speed example. In this example, however, the motor parameters are taken from an actual DC motor under test. This motor has very small inductance, which effectively reduces the example to second order. It differs from the motor speed example in that there is a free integrator in the open loop transfer function. Also introduced in this example is the compensation for a disturbance input. This requires a free integrator in the controller, creating instability in the system which must be compensated for.


Physical Setup

Free body diagram of the rotor:

For this example, we will assume the values for the physical parameters measured from the actual DC motor under test; the numerical values are given in the Matlab section below.

System Equations

The motor torque, T, is related to the armature current, i, by a constant factor Kt, and the back emf, e, is related to the rotational velocity, as expressed by the following equations: T = Kt*i and e = Ke*(d theta/dt).

In SI units, Kt (armature constant) is equal to Ke (motor constant).

From the figure above we can write the following equations, based on Newton's law combined with Kirchhoff's law: J*(d^2 theta/dt^2) + b*(d theta/dt) = K*i and L*(di/dt) + R*i = V - K*(d theta/dt).

1. Transfer Function

Using Laplace transforms, the above equations can be expressed in terms of s.

By eliminating I(s) we can get the following transfer function, where the rotational speed is the output and the voltage is the input: thetadot(s)/V(s) = K / ((J s + b)(L s + R) + K^2).


However, during this example we will be looking at the position as the output. We can obtain the position by integrating the speed (theta dot), so we just need to divide the transfer function above by s.
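Consistent with the den vector entered below (note its trailing zero, which is the free integrator mentioned earlier), the position transfer function has the form

theta(s)/V(s) = K / ( s [ (J s + b)(L s + R) + K^2 ] ).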

2. State Space

These equations can also be represented in state-space form. If we choose motor position, motor speed, and armature current as our state variables, we can write the equations as follows:

Design requirements

To position the motor very precisely, the steady-state error of the motor position should be zero. The steady-state error due to a disturbance should be zero as well. Another performance requirement is that the motor reaches its final position very quickly. In this case, we require a settling time of 40 ms and an overshoot smaller than 16%.

If we simulate the reference input (R) by a unit step input, then the motor position output should have:

Settling time less than 40 milliseconds
Overshoot less than 16%
No steady-state error
No steady-state error due to a disturbance

Matlab representation and open-loop response

1. Transfer Function

We can put the transfer function into Matlab by defining the numerator and denominator as vectors:

Create a new m-file and enter the following commands:

J = 3.2284E-6;
b = 3.5077E-6;
K = 0.0274;
R = 4;
L = 2.75E-6;
num = K;
den = [(J*L) ((J*R)+(L*b)) ((b*R)+K^2) 0];

To see how the original open-loop system performs, add the following command onto the end of the m-file and run it in the Matlab command window:

step(num,den,0:0.001:0.2)


The following plot is obtained:

From the plot we see that when 1 volt is applied to the system, the motor position changes by 6 radians, six times greater than our desired position; for a 1-volt step input the motor should only rotate through 1 radian. Also, the motor position does not reach a steady state, which does not satisfy our design criteria.

2. State Space

Enter the state-space equations into Matlab by defining the system matrices as follows:

J = 3.2284E-6;
b = 3.5077E-6;
K = 0.0274;
R = 4;
L = 2.75E-6;

A = [0 1 0; 0 -b/J K/J; 0 -K/L -R/L];
B = [0; 0; 1/L];
C = [1 0 0];
D = [0];

The step response is obtained using the command step(A,B,C,D)

Unfortunately, Matlab will respond with:

Warning: Divide by zero
??? Index exceeds matrix dimensions.

Error in ==> /usr/local/lib/matlab/toolbox/control/step.m
On line 84 ==> dt = t(2)-t(1);


There are numerical scaling problems with this representation of the dynamic equations. To fix the problem, we scale time by tscale = 1000. Now the output time will be in milliseconds rather than in seconds. The equations are given by

tscale = 1000;
J = 3.2284E-6*tscale^2;
b = 3.5077E-6*tscale;
K = 0.0274*tscale;
R = 4*tscale;
L = 2.75E-6*tscale^2;

A = [0 1 0; 0 -b/J K/J; 0 -K/L -R/L];
B = [0; 0; 1/L];
C = [1 0 0];
D = [0];

The output appears the same as when obtained through the transfer function, but the time vector must be divided by tscale.

[y,x,t] = step(A,B,C,D);
plot(t/tscale,y)
ylabel('Amplitude')
xlabel('Time (sec)')

17e. Example 5: Modeling an Inverted Pendulum

The inverted pendulum is a classic controls demonstration where a pole is balanced vertically on a motorized cart. It is interesting because without control, the system is unstable. This is a fourth-order nonlinear system which is linearized about the vertical equilibrium. In this example, the angle of the vertical pole is the controlled variable, and the horizontal force applied by the cart is the actuator input.

Problem setup and design requirements

The cart with an inverted pendulum, shown below, is "bumped" with an impulse force, F. Determine the dynamic equations of motion for the system, and linearize them about the pendulum's angle, theta = Pi (in other words, assume that the pendulum does not move more than a few degrees away from the vertical, which is chosen to be at an angle of Pi). Find a controller to satisfy all of the design requirements given below.


For this example, the following assumptions apply:

M      mass of the cart                     0.5 kg
m      mass of the pendulum                 0.2 kg
b      friction of the cart                 0.1 N/m/sec
l      length to pendulum center of mass    0.3 m
I      inertia of the pendulum              0.006 kg*m^2
F      force applied to the cart
x      cart position coordinate
theta  pendulum angle from vertical

For the PID, root locus, and frequency response sections of this problem, we will only be interested in the control of the pendulum's position. This is because the techniques used in those sections can only be applied to a single-input, single-output (SISO) system. Therefore, none of the design criteria deal with the cart's position. For these sections we will assume that the system starts at equilibrium and experiences an impulse force of 1 N. The pendulum should return to its upright position within 5 seconds, and never move more than 0.05 radians away from the vertical.

The design requirements for this system are:

Settling time of less than 5 seconds.

Pendulum angle never more than 0.05 radians from the vertical.

However, with the state-space method we are more readily able to deal with a multi-output system. Therefore, for this section of the Inverted Pendulum example we will attempt to control both the pendulum's angle and the cart's position. To make the design more challenging we will be applying a step input to the cart. The cart should achieve its desired position within 5 seconds and have a rise time under 0.5 seconds. We will also limit the pendulum's overshoot to 20 degrees (0.35 radians), and it should also settle in under 5 seconds.


The design requirements for the Inverted Pendulum state-space example are:

Settling time for x and theta of less than 5 seconds.

Rise time for x of less than 0.5 seconds.

Overshoot of theta less than 20 degrees (0.35 radians).

Force analysis and system equations

Below are the two Free Body Diagrams of the system:

Summing the forces in the Free Body Diagram of the cart in the horizontal direction, you get the following equation of motion:

Note that you could also sum the forces in the vertical direction, but no useful information would be gained.

Summing the forces in the Free Body Diagram of the pendulum in the horizontal direction, you can get an equation for N:

Substituting this equation into the first equation, you get the first equation of motion for this system as:

(1)

To get the second equation of motion, sum the forces perpendicular to the pendulum:

To get rid of the P and N terms in the equation above, sum the moments about the centroid of the pendulum, which results in the following equation:

Combining these last two equations, you get the second dynamic equation:


(2)

Since the analysis and control design techniques used in this example apply only to linear systems, this set of equations must be linearized about theta = Pi. Assume that theta = Pi + ø, where ø represents a small angle from the vertical upward direction.

Therefore, cos(theta) ≈ -1, sin(theta) ≈ -ø, and (d(theta)/dt)^2 ≈ 0. After linearization the two equations of motion become (where u represents the input):

1. Transfer Function

To obtain the transfer function of the linearized system equations analytically, we must first take the Laplace transform of the system equations. The Laplace transforms are:

NOTE: When finding the transfer function, initial conditions are assumed to be zero.

Since we will be looking at the angle Phi as the output of interest, solve the first equation for X(s),

then substituting into the second equation:

Re-arranging, the transfer function is:

where,


From the transfer function above it can be seen that there is both a pole and a zero at the origin. These can be canceled and the transfer function becomes:
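If you would rather let Matlab do this cancellation numerically, the Control System Toolbox functions tf and minreal can be used. This is a minimal sketch, where num_raw and den_raw are hypothetical names for the uncancelled numerator and denominator coefficient vectors (each carrying the common factor of s as a trailing zero coefficient):

sys_raw = tf(num_raw,den_raw);   % transfer function with the extra pole/zero pair at s = 0
sys_min = minreal(sys_raw)       % cancels the common pole/zero pair at the origin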

2. State-Space

After a little algebra, the linearized system equations can also be represented in state-space form:

The C matrix is 2 by 4, because both the cart's position and the pendulum's angle are part of the output. For the state-space design problem we will be controlling a multi-output system, so we observe the cart's position in the first row of the output and the pendulum's angle in the second row.

Matlab representation and the open-loop response

1. Transfer Function

The transfer function found from the Laplace transforms can be set up using Matlab by inputting the numerator and denominator as vectors. Create an m-file and copy the following text to model the transfer function:

M = .5;
m = 0.2;
b = 0.1;
i = 0.006;
g = 9.8;
l = 0.3;

q = (M+m)*(i+m*l^2)-(m*l)^2; %simplifies input


num = [m*l/q 0]
den = [1 b*(i+m*l^2)/q -(M+m)*m*g*l/q -b*m*g*l/q]

Your output should be:

num = 4.5455 0

den = 1.0000 0.1818 -31.1818 -4.4545

To observe the system's velocity response to an impulse force applied to the cart add the following lines at the end of your m-file:

t = 0:0.01:5;
impulse(num,den,t)
axis([0 1 0 60])

Note: Matlab commands from the control system toolbox are highlighted in red.

You should get the following velocity response plot:

As you can see from the plot, the response is entirely unsatisfactory. It is not stable in open loop. You can change the axis to see more of the response if you need to convince yourself that the system is unstable.
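If you prefer a numeric check to reading the plot, the open-loop poles are the roots of the denominator polynomial; a small sketch that reuses the den vector defined above:

roots(den)   % one root lies in the right-half plane (roughly +5.6 with these values),
             % which confirms that the open-loop system is unstable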

2. State-Space

Below, we show how the problem would be set up using Matlab for the state-space model. If you copy the following text into an m-file (saved in Matlab's working directory) and run it, Matlab will give you the A, B, C, and D matrices for the state-space model and a plot of the response of the cart's position and pendulum angle to a step input of 0.2 m applied to the cart.

M = .5;
m = 0.2;
b = 0.1;
i = 0.006;
g = 9.8;
l = 0.3;

p = i*(M+m)+M*m*l^2;   % denominator for the A and B matrices

A = [0 1 0 0; 0 -(i+m*l^2)*b/p (m^2*g*l^2)/p 0; 0 0 0 1; 0 -(m*l*b)/p m*g*l*(M+m)/p 0]
B = [0; (i+m*l^2)/p; 0; m*l/p]
C = [1 0 0 0; 0 0 1 0]
D = [0; 0]

T = 0:0.05:10;
U = 0.2*ones(size(T));
[Y,X] = lsim(A,B,C,D,U,T);
plot(T,Y)
axis([0 2 0 100])

You should see the following output after running the m-file:

A =
         0    1.0000         0         0
         0   -0.1818    2.6727         0
         0         0         0    1.0000
         0   -0.4545   31.1818         0

B =
         0
    1.8182
         0
    4.5455

C =
     1     0     0     0
     0     0     1     0

D =
     0
     0


The blue line represents the cart's position and the green line represents the pendulum's angle. It is obvious from this plot and the one above that some sort of control will have to be designed to improve the dynamics of the system. Four example controllers are included with these tutorials: PID, root locus, frequency response, and state space. Select from below the one you would like to use.

Note: The solutions shown in the PID, root locus and frequency response examples may not yield a workable controller for the inverted pendulum problem. As stated previously, when we put this problem into the single-input, single-output framework, we ignored the x position of the cart. The pendulum can be stabilized in an inverted position if the x position is constant or if the cart moves at a constant velocity (no acceleration). Where possible in these examples, we will show what happens to the cart's position when our controller is implemented on the system. We emphasize that the purpose of these examples is to demonstrate design and analysis techniques using Matlab; not to actually control an inverted pendulum.


17f. Example 6: Modeling a Pitch Controller

The pitch angle of an airplane is controlled by adjusting the angle (and therefore the lift force) of the rear elevator. The aerodynamic forces (lift and drag) as well as the airplane's inertia are taken into account. This is a third order, nonlinear system which is linearized about the operating point. This system is also naturally unstable in that it has a free integrator.

Physical setup and system equations

The equations governing the motion of an aircraft are a very complicated set of six non-linear coupled differential equations. However, under certain assumptions, they can be decoupled and linearized into the longitudinal and lateral equations. Pitch control is a longitudinal problem, and in this example, we will design an autopilot that controls the pitch of an aircraft.

The basic coordinate axes and forces acting on an aircraft are shown in the figure below:

Assume that the aircraft is in steady cruise at constant altitude and velocity; thus, the thrust and drag cancel out and the lift and weight balance each other. Also, assume that a change in pitch angle does not change the speed of the aircraft under any circumstance (unrealistic, but it simplifies the problem a bit). Under these assumptions, the longitudinal equations of motion of an aircraft can be written as:


(1)

Refer to any aircraft-related textbook for an explanation of how to derive these equations.

For this system, the input will be the elevator deflection angle, and the output will be the pitch angle.

Design requirements

The next step is to set some design criteria. We want to design a feedback controller so that the output has an overshoot of less than 10%, a rise time of less than 2 seconds, a settling time of less than 10 seconds, and a steady-state error of less than 2%. For example, if the input is 0.2 rad (11 degrees), then the pitch angle should not exceed 0.22 rad, should reach 0.2 rad within 2 seconds, should settle to within 2% of its steady-state value within 10 seconds, and should stay within 0.196 to 0.204 rad at steady state.

Overshoot: Less than 10%
Rise time: Less than 2 seconds
Settling time: Less than 10 seconds
Steady-state error: Less than 2%

Transfer function and the state-space

Before finding the transfer function and the state-space model, let's plug in some numerical values to simplify the modelling equations (1) shown above.

(2)

These values are taken from data for one of Boeing's commercial aircraft.

1. Transfer function

To find the transfer function of the above system, we need to take the Laplace transform of the above modelling equations (2). Recall that, when finding a transfer function, zero initial conditions must be assumed. The Laplace transforms of the above equations are shown below.


Simplifying, the following transfer function is obtained.

2. State-space

Since the modelling equations (2) are already in state-variable form, we can rewrite them directly as a state-space model.

Since our output is the pitch angle, the output equation is:

Matlab representation and open-loop response

Now, we are ready to observe the system characteristics using Matlab. First, let's obtain the open-loop response of the system to a step input and determine which system characteristics need improvement. Let the input (delta e) be 0.2 rad (11 degrees). Create a new m-file and enter the following commands:

de = 0.2;
num = [1.151 0.1774];
den = [1 0.739 0.921 0];
step(de*num,den)

Running this m-file in the Matlab command window will give the following plot.


From the plot, we see that the open-loop response does not satisfy the design criteria at all. In fact the open-loop response is unstable.

Notice that the above m-file uses the numerical values from the transfer function. To use the state-space model, enter the following commands into a new m-file (instead of the one shown above) and run it in the command window.

de=0.2;

A = [-0.313 56.7 0; -0.0139 -0.426 0; 0 56.7 0];
B = [0.232; 0.0203; 0];
C = [0 0 1];
D = [0];

step(A,B*de,C,D)

You should obtain the same response as the one shown above.

Note: It is possible to convert from the state-space representation to the transfer function, or vice versa, using Matlab.
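For example, a minimal sketch using the standard conversion functions ss2tf and tf2ss (assuming the num, den, A, B, C, and D defined in the two m-files above are in the workspace):

[num_ss,den_ss] = ss2tf(A,B,C,D);   % state-space -> transfer function
[A2,B2,C2,D2]   = tf2ss(num,den);   % transfer function -> state-space
% Note: tf2ss returns one particular (controller canonical) realization, so
% A2, B2, C2 need not match the original matrices, although they describe
% the same input-output behavior.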

Closed-loop transfer function

To solve this problem, a feedback controller will be added to improve the system performance. The figure shown below is the block diagram of a typical unity feedback system.

A controller needs to be designed so that the step response satisfies all design requirements. Four different methods to design a controller are listed at the bottom of this page. You may choose: PID, Root-locus, Frequency response, or State-space.

17g. Example 7: Modeling the Ball and Beam Experiment

This is another classic controls demo. A ball is placed on a straight beam and rolls back and forth as one end of the beam is raised and lowered by a cam. The position of the ball is controlled by changing the angular position of the cam. This is a second order system, since only the inertia of the ball is taken into account, and not that of the cam or the beam, although the mass of the beam is taken into account in the fourth order state-space model. The equations are linearized by assuming small deflections of the cam and beam. This is an example of a double integrator, which needs to be stabilized.

Problem Setup

A ball is placed on a beam, as shown in the figure below, where it is allowed to roll with 1 degree of freedom along the length of the beam. A lever arm is attached to the beam at one end and a servo gear at the other. As the servo gear turns by an angle theta, the lever changes the angle of the beam by alpha. When the angle is changed from the horizontal position, gravity causes the ball to roll along the beam. A controller will be designed for this system so that the ball's position can be manipulated.

For this problem, assume that the ball rolls without slipping and friction between the beam and ball is negligible. The constants and variables for this example are defined as follows:

m      mass of the ball              0.11 kg
R      radius of the ball            0.015 m
d      lever arm offset              0.03 m
g      gravitational acceleration    9.8 m/s^2
L      length of the beam            1.0 m
J      ball's moment of inertia      9.99e-6 kg*m^2
r      ball position coordinate
alpha  beam angle coordinate
theta  servo gear angle


The design criteria for this problem are:

Settling time less than 3 seconds
Overshoot less than 5%

System Equations

The Lagrangian equation of motion for the ball is given by the following:

Linearization of this equation about the beam angle, alpha = 0, gives us the following linear approximation of the system:

The equation which relates the beam angle to the angle of the gear can be approximated as linear by the equation below:

Substituting this into the previous equation, we get:
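Reconstructing that substitution from the relations above (and consistent with the constant K defined in the Matlab code later in this example, up to the sign convention chosen for the angles):

\[
\left(\frac{J}{R^2}+m\right)\ddot{r} \;=\; -\,m\,g\,\alpha \;\approx\; -\,\frac{m\,g\,d}{L}\,\theta
\]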

1. Transfer Function

Taking the Laplace transform of the equation above, the following equation is found:

NOTE: When taking the Laplace transform to find the transfer function, initial conditions are assumed to be zero.

Rearranging, we find the transfer function from the gear angle (theta(s)) to the ball position (R(s)).


It should be noted that the above plant transfer function is a double integrator. As such it is marginally stable and will provide a challenging control problem.

2. State-Space

The linearized system equations can also be represented in state-space form. This can be done by selecting the ball's position (r) and velocity (rdot) as the state variables and the gear angle (theta) as the input. The state-space representation is shown below:

However, for our state-space example we will be using a slightly different model. The same equation for the ball still applies but instead of controlling the position through the gear angle theta, we will control alpha-doubledot. This is essentially controlling the torque of the beam. Below is the representation of this system:

Note: For this system the gear and lever arm would not be used; instead, a motor at the center of the beam applies a torque to the beam to control the ball's position.


Matlab Representation and Open-Loop Response

1. Transfer Function

The transfer function found from the Laplace transform can be implemented in Matlab by inputting the numerator and denominator as vectors. To do this we must create an m-file and copy the following text into it:

m = 0.111;
R = 0.015;
g = -9.8;
L = 1.0;
d = 0.03;
J = 9.99e-6;

K = (m*g*d)/(L*(J/R^2+m)); %simplifies input

num = [-K];
den = [1 0 0];
printsys(num,den)

The output should be:

num/den =

      0.21
   ----------
      s^2

Now, we would like to observe the ball's response to a step input of 0.25 m. To do this you will need to add the following line to your m-file:

step(0.25*num,den)

NOTE: Matlab commands from the control system toolbox are highlighted in red.

The plot below will be produced, showing the ball's position as a function of time:


From this plot it is clear that the system is unstable in open-loop causing the ball to roll right off the end of the beam. Therefore, some method of controlling the ball's position in this system is required. Three examples of controller design are listed below for the transfer function problem. You may select from PID, Root Locus, and Frequency Response.

2. State-Space

The state-space equations can be represented in Matlab with the following commands (these equations are for the torque control model).

m = 0.111;
R = 0.015;
g = -9.8;
J = 9.99e-6;

H = -m*g/(J/(R^2)+m);

A = [0 1 0 0; 0 0 H 0; 0 0 0 1; 0 0 0 0];
B = [0;0;0;1];
C = [1 0 0 0];
D = [0];

The step response to a 0.25m desired position can be viewed by running the command below:

step(A,B*.25,C,D)

The output will look like the following:

Like the plot for the transfer function this plot shows that the system is unstable and the ball will roll right off the end of the beam. Therefore, we will require some method of controlling the ball's position in this system. The State-Space example in the tutorial above shows how to implement a controller for this type of system.


Now try this problem on your own!

Bus Suspension

This example looks at the active control of the vertical motion of a bus suspension. It takes into account both the inertia of the bus and the inertia of the suspension/tires, as well as springs and dampers. An actuator is added between the suspension and the bus. This fourth order system is particularly difficult to control because of the existence of two zeros near the imaginary axis. This requires careful compensation.

State Space Tutorial

State-space equations
Control design using pole placement
Introducing the reference input
Observer design

Key Matlab commands used in this tutorial: acker, lsim, place, plot, rscale

Matlab commands from the control system toolbox are highlighted in red. Non-standard Matlab commands used in this tutorial are highlighted in green.

State-space equations

There are several different ways to describe a system of linear differential equations. The state-space representation is given by the equations:
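In the standard form assumed throughout this tutorial (with the feedthrough term D equal to zero in the examples here), these equations are:

\[
\dot{x} = A\,x + B\,u, \qquad y = C\,x + D\,u
\]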

where x is an n by 1 vector representing the state (commonly position and velocity variables in mechanical systems), u is a scalar representing the input (commonly a force or torque in mechanical systems), and y is a scalar representing the output. The matrices A (n by n), B (n by 1), and C (1 by n) determine the relationships between the state, input, and output variables. Note that there are n first-order differential equations. The state-space representation can also be used for systems with multiple inputs and outputs (MIMO), but we will only use single-input, single-output (SISO) systems in these tutorials.

To introduce the state space design method, we will use the magnetically suspended ball as an example. The current through the coils induces a magnetic force which can balance the force of gravity and cause the ball (which is made of a magnetic material) to be suspended in midair. The modeling of this system has been established in many control text books (including Automatic Control Systems by B. C. Kuo, the seventh edition). The equations for the system are given by:

where h is the vertical position of the ball, i is the current through the electromagnet, V is the applied voltage, M is the mass of the ball, g is gravity, L is the inductance, R is the resistance, and K is a coefficient that determines the magnetic force exerted on the ball. For simplicity, we will choose values M = 0.05 Kg, K = 0.0001, L = 0.01 H, R = 1 Ohm, g = 9.81 m/sec^2 . The system is at equilibrium (the ball is suspended in midair) whenever h = K i^2/Mg (at which point dh/dt = 0). We linearize the equations about the point h = 0.01 m (where the nominal current is about 7 amp) and get the state space equations:

where x is the set of state variables for the system (a 3x1 vector), u is the input voltage (delta V), and y (the output) is delta h. Enter the system matrices into an m-file:

A = [0 1 0; 980 0 -2.8; 0 0 -100];
B = [0; 0; 100];
C = [1 0 0];


One of the first things you want to do with the state equations is find the poles of the system; these are the values of s where det(sI - A) = 0, or the eigenvalues of the A matrix:

poles = eig(A)

You should get the following three poles:

poles =

   31.3050
  -31.3050
 -100.0000

One of the poles is in the right-half plane, which means that the system is unstable in open-loop.

To check out what happens to this unstable system when there is a nonzero initial condition, add the following lines to your m-file,

t = 0:0.01:2;
u = 0*t;
x0 = [0.005 0 0];
[y,x] = lsim(A,B,C,0,u,t,x0);
h = x(:,1);   % Delta-h (the first state, the ball position) is the output of interest
plot(t,h)

and run the file again.

It looks like the distance between the ball and the electromagnet will go to infinity, but probably the ball hits the table or the floor first (and also probably goes out of the range where our linearization is valid).

Control design using pole placement

Let's build a controller for this system. The schematic of a full-state feedback system is the following:


Recall that the characteristic polynomial for this closed-loop system is the determinant of (sI-(A-BK)). Since the matrices A and B*K are both 3 by 3 matrices, there will be 3 poles for the system. By using full-state feedback we can place the poles anywhere we want. We could use the Matlab function place to find the control matrix, K, which will give the desired poles.

Before attempting this method, we have to decide where we want the closed-loop poles to be. Suppose the criteria for the controller are a settling time of less than 0.5 sec and an overshoot of less than 5%. Then we might try to place the two dominant poles at -10 +/- 10i (zeta = 0.707, i.e. 45 degrees, with sigma = 10 > 4.6/0.5 = 9.2); a quick check of these numbers is sketched after the code below. The third pole we might place at -50 to start, and we can change it later depending on what the closed-loop behavior is. Remove the lsim command from your m-file and everything after it, then add the following lines to your m-file:

p1 = -10 + 10i;
p2 = -10 - 10i;
p3 = -50;

K = place(A,B,[p1 p2 p3]);

lsim(A-B*K,B,C,0,u,t,x0);
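The pole locations above follow from the usual second-order approximations for settling time and overshoot. A minimal sketch of that arithmetic (an illustration only, not part of the original m-file):

Ts_req = 0.5;                                        % required settling time (sec)
Mp_req = 0.05;                                       % required overshoot (5%)
sigma_min = 4.6/Ts_req                               % real part of dominant poles must exceed 9.2
zeta_min = -log(Mp_req)/sqrt(pi^2 + log(Mp_req)^2)   % damping ratio must exceed about 0.69
% Poles at -10 +/- 10i give sigma = 10 > 9.2 and zeta = 10/sqrt(10^2+10^2) = 0.707 > 0.69.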


The overshoot is too large (there are also zeros in the transfer function which can increase the overshoot; you do not see the zeros in the state-space formulation). Try placing the poles further to the left to see if the transient response improves (this should also make the response faster).

p1 = -20 + 20i;
p2 = -20 - 20i;
p3 = -100;
K = place(A,B,[p1 p2 p3]);
lsim(A-B*K,B,C,0,u,t,x0);

This time the overshoot is smaller. Consult your textbook for further suggestions on choosing the desired closed-loop poles.

Compare the control effort required (K) in both cases. In general, the farther you move the poles, the more control effort it takes.
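One way to make that comparison concrete (a sketch that reuses the A and B matrices already defined; the names K_slow and K_fast are just for illustration):

K_slow = place(A,B,[-10+10i -10-10i -50]);    % first set of closed-loop poles
K_fast = place(A,B,[-20+20i -20-20i -100]);   % poles moved farther to the left
[norm(K_slow) norm(K_fast)]                   % the faster design generally needs larger gains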

Note: If you want to place two or more poles at the same position, place will not work. You can use a function called acker which works similarly to place:

K = acker(A,B,[p1 p2 p3])

Introducing the reference input

Now, we will take the control system as defined above and apply a step input (we choose a small value for the step, so we remain in the region where our linearization is valid). Replace t, u, and lsim in your m-file with the following:

t = 0:0.01:2;
u = 0.001*ones(size(t));
lsim(A-B*K,B,C,0,u,t)


The system does not track the step well at all; not only is the magnitude not one, but it is negative instead of positive!

Recall from the schematic above that we do not compare the output to the reference; instead we measure all the states, multiply by the gain vector K, and then subtract this result from the reference. There is no reason to expect that K*x will be equal to the desired output. To eliminate this problem, we can scale the reference input so that the closed-loop output reaches the desired steady-state value. This scale factor is often called Nbar; it is introduced as shown in the following schematic:

We can get Nbar from Matlab by using the function rscale (place the following line of code after K = ...).

Nbar = rscale(A,B,C,0,K)

Note that rscale is not a standard Matlab function; you will need to copy it into an m-file of its own to use it. Now, if we want to find the response of the system under state feedback with this introduction of the reference, we simply note the fact that the input is multiplied by this new factor, Nbar:

lsim(A-B*K,B*Nbar,C,0,u,t)


and now a step can be tracked reasonably well.
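As an aside, for a single-input, single-output system with D = 0 (the case here), the same scale factor can also be computed directly from the closed-loop DC gain, without the non-standard rscale function. This is a sketch of that alternative, not the method used in the tutorial:

% Choose Nbar so that the DC gain from the reference r to the output y is one:
% y_ss = -C*inv(A-B*K)*B*Nbar*r_ss  =>  Nbar = -1/(C*inv(A-B*K)*B)
Nbar_direct = -1/(C*((A-B*K)\B))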

Observer design

When we can't measure all the states x (as is commonly the case), we can build an observer to estimate them, while measuring only the output y = C x. For the magnetic ball example, we will add three new, estimated states to the system. The schematic is as follows:

The observer is basically a copy of the plant; it has the same input and almost the same differential equation. An extra term compares the actual measured output y to the estimated output yhat = C*xhat; this term causes the estimated states xhat to approach the values of the actual states x. The error dynamics of the observer are given by the poles of (A - L*C).
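To see where this comes from, a brief sketch of the standard derivation (writing xhat as \hat{x}): the observer copies the plant and adds the output-correction term, and subtracting the observer equation from the plant equation gives the error dynamics directly:

\[
\dot{\hat{x}} = A\hat{x} + B u + L\,(y - C\hat{x}), \qquad
\dot{e} = \dot{x} - \dot{\hat{x}} = (A - LC)\,e, \quad e = x - \hat{x}
\]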

First we need to choose the observer gain L. Since we want the dynamics of the observer to be much faster than the system itself, we need to place the poles at least five times farther to the left than the dominant poles of the system. If we want to use place, we need to put the three observer poles at different locations.

op1 = -100;
op2 = -101;
op3 = -102;

Because of the duality between controllability and observability, we can use the same technique used to find the control matrix, but replacing the matrix B by the matrix C and taking the transposes of each matrix (consult your textbook for the derivation):

L = place(A',C',[op1 op2 op3])';

The equations in the block diagram above are given for the estimate xhat. It is conventional to write the combined equations for the system plus observer using the original state x together with the error state e = x - xhat, and we use the state feedback u = -K*xhat. After a little bit of algebra (consult your textbook for more details), we arrive at the combined state and error equations with the full-state feedback and an observer:

At = [A - B*K  B*K; zeros(size(A))  A - L*C];
Bt = [B*Nbar; zeros(size(B))];
Ct = [C  zeros(size(C))];

To see how the response looks for a nonzero initial condition with no reference input, add the following lines to your m-file. We typically assume that the observer begins with a zero initial condition, xhat = 0, which makes the initial condition for the error equal to the initial condition of the state.

lsim(At,Bt,Ct,0,zeros(size(t)),t,[x0 x0])

Responses of all the states are plotted below. Recall that lsim gives us x and e; to get xhat we need to compute x - e.
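For reference, the estimates can be recovered from the lsim output with a few lines like the following (a sketch; the indices assume the three plant states come first in At, as defined above):

[y,xc] = lsim(At,Bt,Ct,0,zeros(size(t)),t,[x0 x0]);
x_act = xc(:,1:3);            % actual plant states x
x_hat = x_act - xc(:,4:6);    % estimated states: xhat = x - e
plot(t,x_act,'-',t,x_hat,'--')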


Zoom in to see some detail:

The blue solid line is the response of the ball position (delta h) and the blue dotted line is its estimate; the green solid line is the response of the ball velocity and the green dotted line is its estimate; the red solid line is the response of the coil current (delta i) and the red dotted line is its estimate.

We can see that the observer estimates the states quickly and tracks the states reasonably well in the steady-state.

The plot above can be obtained by using the plot command.
