
ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING
Asia-Pac. J. Chem. Eng. 2008; 3: 673–679
Published online in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/apj.201

Special Theme Research Article

Kernel learning adaptive one-step-ahead predictive control for nonlinear processes

Yi Liu, Haiqing Wang* and Ping Li

State Key Laboratory of Industrial Control Technology, Institute of Industrial Process Control, Zhejiang University, Hangzhou, 310027, P. R. China

Received 23 July 2008; Accepted 24 July 2008

ABSTRACT: A kernel learning adaptive one-step-ahead predictive control (KPC) algorithm is proposed for general unknown nonlinear processes. The main structure of the KPC law is twofold. A one-step-ahead predictive model is first obtained by using the kernel learning (KL) identification framework. An analytical control law is then derived from a Taylor linearization method, resulting in an efficient computation for on-line implementation. The convergence analysis of the KPC control strategy is presented; meanwhile, a new concept of adaptive modification index is proposed to improve the tracking ability of KPC and reject unknown disturbances. This simple KPC scheme has few parameters to be chosen and a small computation scale, which makes it very suitable for real-time control. Numerical simulations compared with a well-tuned proportional-integral-derivative (PID) controller on a nonlinear chemical process show that the new KPC algorithm exhibits much better performance and more satisfactory robustness to both additive noise and unknown process disturbance. © 2008 Curtin University of Technology and John Wiley & Sons, Ltd.

KEYWORDS: nonlinear processes; kernel learning; adaptive control; predictive control

INTRODUCTION

Many chemical processes are inherently nonlinear dynamic systems. For the control of such processes, linear control techniques and classical proportional-integral-derivative (PID) controllers are sometimes found insufficient, and hence alternative nonlinear control strategies need to be explored to obtain satisfactory results. This has led to an increasing interest in controller design for unknown nonlinear dynamic processes. However, owing to the absence of a general methodology, the design of nonlinear controllers is a difficult task. For nonlinear processes, neural network (NN) based adaptive or predictive control techniques have been intensively studied in the last two decades.[1–7]

However, NN training still offers no guarantee of high convergence speed, avoidance of local minima, or prevention of overfitting; meanwhile, no general method is available to choose the number of hidden units of common NNs.

Recently, support vector machines (SVM), a powerful learning method based on statistical learning theory (SLT) and the kernel learning (KL) technique, have been gaining widespread

*Correspondence to: Haiqing Wang, State Key Laboratory of Industrial Control Technology, Institute of Industrial Process Control, Zhejiang University, Hangzhou, 310027, P. R. China. E-mail: [email protected]

attention in the field of nonlinear process modeling and control.[8–13] Some SVM model based nonlinear control algorithms have been proposed.[14–16] However, there exist some technical difficulties in these new control schemes. The control strategy proposed by Iplikci[16] requires intensive computation. The controllers designed by Zhong et al.[15] and Bao et al.[14]

may be rendered invalid when the quadratic polynomial or linear kernel functions on which their SVM models are based cannot describe the nonlinear dynamics well. For process control applications, on the other hand, it is desirable to keep the control strategy as simple as possible for real-time implementation.

In this paper, a new KL adaptive one-step-ahead predictive control (KPC) algorithm with an analytical form is presented for general nonlinear systems. After a brief review of the nonlinear generalized minimum variance (NGMV) control law in the section on the NGMV Control Law, the main structure of KPC is formulated in the section on the Proposed Control Law: KPC, which includes two technical parts. First, a one-step-ahead predictive model is obtained by the KL identification framework; second, the control law is derived based on the first-order Taylor approximation method, resulting in an analytical control law with effective computation for real-time implementation. The convergence analysis of this control strategy is presented in the section on Convergence Analysis and Corresponding Adaptive


Control Strategy; meanwhile, a new concept of adaptive modification index (AMI) is proposed to achieve good tracking performance. This simple KPC scheme has few parameters to be chosen beforehand and a small computation scale, which makes it very suitable for nonlinear real-time control. Application of the proposed KPC algorithm to a nonlinear chemical process is illustrated in the section on Simulation Results, and the conclusions are drawn in the final section.

NGMV CONTROL LAW

For simplicity, we limit our discussion to single-input–single-output (SISO) nonlinear processes. The extension of the proposed method to multi-input–multi-output (MIMO) cases, however, is straightforward and will not be discussed here. Many SISO nonlinear processes can be accurately represented by the following discrete model:

y(k+1) = f[y(k), \ldots, y(k-n_y+1), u(k), \ldots, u(k-n_u+1)]    (1)

where k is the discrete time, y(k) and u(k) represent the controlled output and the manipulated input, respectively, f(·) is a general nonlinear function, and n_y and n_u denote the process orders. Equation (1) can be rewritten compactly as

y(k+1) = f[Y(k), u(k), U(k-1)] = f[x(k)]    (2)

where Y(k) = [y(k), \ldots, y(k-n_y+1)] and U(k-1) = [u(k-1), \ldots, u(k-n_u+1)] are the vectors consisting of the past process outputs and the past process inputs, respectively.
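As an illustration of how the regressor in Eqn (2) is assembled from past data, a minimal Python sketch is given below; the function name and array conventions are illustrative assumptions, not part of the original formulation.

import numpy as np

def build_regressor(y_hist, u_hist, ny, nu):
    # x(k) = [y(k), ..., y(k - ny + 1), u(k), ..., u(k - nu + 1)],
    # where index -1 of y_hist/u_hist holds the most recent sample.
    Y = [y_hist[-1 - j] for j in range(ny)]
    U = [u_hist[-1 - j] for j in range(nu)]
    return np.array(Y + U)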

Let y_r(k) be the desired process output. Then the control law can be obtained by minimizing the one-step-ahead weighted predictive control performance index[17]

J[u(k)] = [y_r(k+1) - y(k+1)]^2 + \lambda [u(k) - u(k-1)]^2    (3)

s.t.  u_{min}(k) \le u(k) \le u_{max}(k),  \Delta u_{min}(k) \le \Delta u(k) \le \Delta u_{max}(k)    (4)

where Δu(k) = u(k) − u(k−1) is the manipulated variable increment, which is subject to Eqn (4), and λ (λ > 0) denotes the control effort weighting factor. It is difficult to obtain the optimal solution of Eqn (3) because it needs to be solved by a nonlinear optimization method. For process control, it is desirable to obtain a simple analytical control law, even though it may be sub-optimal, so that the computation requirement can be greatly reduced. We can expand Eqn (2) by the Taylor series with respect to the argument u(k) at the point u(k−1), meanwhile neglecting the higher-order terms.[2] Then we have

y(k+1) = f[x(k)] + \frac{\partial f}{\partial u(k)}\bigg|_{u(k)=u(k-1)} \Delta u(k) + O(\Delta u(k)) \approx f[x(k)] + \frac{\partial f}{\partial u(k)}\bigg|_{u(k)=u(k-1)} \Delta u(k)    (5)

where x(k) = [Y(k), u(k-1), U(k-1)] and O(Δu(k)) denotes the higher-order terms in Δu(k). Substituting Eqn (5) into Eqn (3) and then minimizing it, we can obtain the following NGMV control law[2]

u(k) = u(k-1) + \frac{\frac{\partial f}{\partial u(k)}\big|_{u(k)=u(k-1)}}{\lambda + \left(\frac{\partial f}{\partial u(k)}\big|_{u(k)=u(k-1)}\right)^2} \{y_r(k+1) - f[x(k)]\}    (6)

where \frac{\partial f}{\partial u(k)}\big|_{u(k)=u(k-1)} is the input–output sensitivity function, and f[x(k)] is the quasi-one-step-ahead predictive output.

PROPOSED CONTROL LAW: KPC

To implement the control law described earlier, \frac{\partial f}{\partial u(k)}\big|_{u(k)=u(k-1)} and f[x(k)] must be calculated online. It is therefore necessary to develop an effective method to estimate these two quantities. Several NN-based methods have been used to provide them to the controller.[2] However, as mentioned earlier, NN still has a number of weak points; furthermore, NN models are generally not parsimonious, and hence any adaptive control scheme based on them has to deal with the issue of updating a very large number of weights. On the contrary, a KL identification model can describe a nonlinear system well while exhibiting good generalization ability, especially with very few samples.[10] Hence we suggest utilizing the KL method to identify the nonlinear system. Based on SLT and KL theory, a unified KL identification model can be expressed as

y_m(k+1) = KL[x(k), \alpha(k)]    (7)

where α(k) is the KL model coefficient vector and y_m(k+1) is the model predictive output. α(k) can be obtained by using batch learning or online learning.[8,10,13] When a support vector regression


(SVR) or a least squares SVR (LSSVR) is adopted, the KL identification model can be expressed as a uniform formula

y_m(k+1) = KL[x(k), \alpha(k)] = \sum_{i=1}^{N_{SV}} \alpha_i K\langle x(i), x(k)\rangle + b    (8)

where α_i denote general Lagrange multipliers, which are linear combinations of the Lagrange multipliers; N_{SV} is the number of support vectors; \langle\cdot,\cdot\rangle denotes the dot product; K\langle x(i), x(k)\rangle is a kernel function that handles the inner product in the feature space, so the explicit form of the nonlinear mapping does not need to be known; and b is the bias term.[10,11] Then substituting the KL model into Eqn (6) yields the original KPC law

u(k) = u(k-1) + \frac{\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}}{\lambda + \left(\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}\right)^2} \{y_r(k+1) - KL[x(k)]\}    (9)

To compensate for both the Taylor approximation error and the identification error, an adaptive modification index (AMI) µ(k) is introduced into the control law in Eqn (9). Then the KPC law can be rewritten as

u(k) = u(k-1) + \frac{\mu(k)\,\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}}{\lambda + \left(\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}\right)^2} \{y_r(k+1) - KL[x(k)]\}    (10)

The proposed AMI in Eqn (10) is quite different from the adjustable parameter in Gao et al.,[2] although they seem somewhat similar. This is because the AMI is time varying, whereas the parameter proposed by Gao et al.[2] is a constant that has to be selected by simulation. Furthermore, the AMI can be obtained adaptively from the convergence analysis at every sampling instant, which maintains good tracking ability at all times, especially when the system suffers from unknown disturbances.

Moreover, to overcome the model mismatch of the KL identification and other unknown disturbances, it is necessary to utilize the latest measured output y(k) to compensate for them. Here we use a simple but useful strategy, namely the output feedback employed in model predictive control.[18] By adding the error e(k) = y(k) − y_m(k) to the quasi-one-step-ahead predictive output KL[x(k)], a corrected prediction is obtained

y_p(k+1) = KL[x(k)] + h e(k)    (11)

where h is the error correction coefficient, and in most cases h = 1. Consequently, the KPC law can be ultimately formulated as

u(k) = u(k-1) + \frac{\mu(k)\,\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}}{\lambda + \left(\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}\right)^2} E(k+1)    (12)

where E(k+1) = y_r(k+1) − e(k) − KL[x(k)] is the total error of the KL predictive model at time k.
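A minimal Python sketch of the corrected KPC move in Eqns (11)–(12), assuming the quasi prediction KL[x(k)], its sensitivity, the latest measurement y(k), and the previous model output y_m(k) are available (all names are illustrative):

def kpc_update(u_prev, y_ref_next, kl_pred, dKLdu, y_meas, y_model_prev,
               mu=1.0, lam=1e-4, h=1.0):
    e = y_meas - y_model_prev            # model error e(k) = y(k) - ym(k)
    E = y_ref_next - h * e - kl_pred     # total error E(k+1), with h = 1 in most cases
    du = mu * dKLdu / (lam + dKLdu ** 2) * E   # Eqn (12)
    return u_prev + du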

CONVERGENCE ANALYSIS AND CORRESPONDING ADAPTIVE CONTROL STRATEGY

It is well known that guaranteeing the convergence of a control law is extremely important. Thus, we investigate the proposed KPC law in detail and obtain the following theorem.

Theorem 1. There exists a suitable µ(k) such that the control algorithm given in Eqn (12) is convergent.

Proof: Let

\varepsilon(k+1) = y_r(k+1) - y(k+1)    (13)

and

\delta(k) = \frac{\mu(k)\,\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}}{\lambda + \left(\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}\right)^2} E(k+1).    (14)

Note that y(k+1) = f[x(k)] = KL[x(k)] + e(k); then, from the mean-value theorem, we have

\varepsilon(k+1) = y_r(k+1) - e(k) - KL[x(k)] - \frac{\partial KL}{\partial u(k)}\bigg|_{u(k)=\bar{u}(k-1)} \delta(k)    (15)

where \bar{u}(k-1) \in [u(k-1), u(k-1) + \delta(k)]. Substituting Eqn (14) into Eqn (15) yields

\varepsilon(k+1) = \left[1 - \frac{\mu(k)\,\frac{\partial KL}{\partial u(k)}\big|_{u(k)=\bar{u}(k-1)}\,\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}}{\lambda + \left(\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}\right)^2}\right] E(k+1)    (16)


Thus, a suitable AMI always exists: when

\mu(k) = \frac{\lambda + \left(\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}\right)^2}{\frac{\partial KL}{\partial u(k)}\big|_{u(k)=\bar{u}(k-1)}\,\frac{\partial KL}{\partial u(k)}\big|_{u(k)=u(k-1)}}    (17)

one can always make ε(k + 1) = 0. That is to say, the KPC law given in Eqn (12) is convergent.

To obtain a reliable estimate of µ(k), we design a simple and efficient recursive method. Following Eqns (9) and (10), µ(0) is set to 1; at time k, µ(k) is replaced by µ(k−1) in Eqn (14) to obtain an estimate of δ(k). For better precision, let ū(k−1) = [u(k−1) + u(k)]/2; finally, the actual AMI at time k is calculated from Eqn (17). By substituting Eqn (17) into Eqn (12), the convergent and adaptive control law at time k can be deduced:

u(k) = u(k-1) + E(k+1)\bigg/\left[\frac{\partial KL}{\partial u(k)}\bigg|_{u(k)=\bar{u}(k-1)}\right]    (18)

Figure 1 shows the whole KPC strategy. TDL denotes the common time delay, and GTDL is defined as a general time delay, through which x(k) = [Y(k), u(k−1), U(k−1)] is obtained. KPC is composed of two primary modules: a KL predictive model and an adaptive controller. The flowchart of this simple strategy is as follows: at time k, a corrected KL predictive model is obtained by adding the latest error e(k) to the quasi-one-step-ahead predictive output KL[x(k)]; then the total error E(k+1) and the AMI µ(k) are both introduced into the controller to compute the process control law u(k).
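The recursive AMI strategy described above can be sketched in Python as follows, under the assumption that a callable dKL_du(u) evaluating the model sensitivity for a given value of the u(k) slot of x(k) is available; a practical implementation would also guard against near-zero sensitivities (all names are illustrative):

def ami_step(u_prev, E_next, lam, mu_prev, dKL_du):
    # mu(0) = 1; at time k, mu(k) is replaced by mu(k-1) in Eqn (14).
    g0 = dKL_du(u_prev)                              # derivative at u(k) = u(k-1)
    delta = mu_prev * g0 / (lam + g0 ** 2) * E_next  # estimate of delta(k), Eqn (14)
    u_bar = u_prev + 0.5 * delta                     # midpoint of [u(k-1), u(k-1) + delta(k)]
    g_bar = dKL_du(u_bar)                            # derivative at the mean-value point
    mu = (lam + g0 ** 2) / (g_bar * g0)              # AMI, Eqn (17)
    u_new = u_prev + E_next / g_bar                  # adaptive control law, Eqn (18)
    return u_new, mu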

There are three main advantages in our proposedKPC law:

1. It is easy to obtain an accurate nonlinear predictive model with good generalization by the KL identification methodology.

2. The control law can be modified adaptively by the AMI µ(k) to keep high tracking performance, especially when the process is time varying or suffers from unknown disturbances.

3. A simple analytical control law makes real-timecomputation effective.

Two common kernel functions, Gaussian and polynomial, are usually utilized in KL methods.[10] When a Gaussian kernel function K(x_i, x_j) = \exp(-\|x_i - x_j\|^2/\sigma^2) is adopted, where σ is the Gaussian kernel width, the KL identification model can be formulated as

y_m(k+1) = KL[x(k)] = \sum_{i=1}^{N_{SV}} \alpha_i \exp\left(-\|x(i) - x(k)\|^2/\sigma^2\right) + b    (19)

From Eqn (19) we can obtain

\frac{\partial KL}{\partial u(k)}\bigg|_{u(k)=u(k-1)} = \frac{2}{\sigma^2}\sum_{i=1}^{N_{SV}} \alpha_i \exp\left(-\|x(i) - x(k)\|^2/\sigma^2\right)\left[x_{n_y+1}(i) - u(k-1)\right]    (20)

where x(k) = [Y(k), u(k−1), U(k−1)] and x_{n_y+1}(i) is the (n_y+1)-th element of the vector x(i). Thus, the Gaussian kernel function based KPC law is expressed below:

u(k) = u(k-1) + \frac{E(k+1)}{\frac{2}{\sigma^2}\sum_{i=1}^{N_{SV}} \alpha_i \exp\left(-\|x(i) - x(k)\|^2/\sigma^2\right)\left[x_{n_y+1}(i) - u(k-1)\right]}.    (21)

Figure 1. Flowsheet of KPC.


When a polynomial kernel function K(x_i, x_j) = (\langle x_i, x_j\rangle + \tau)^p is chosen, where the integer p ≥ 1 is the polynomial degree and τ = 1 in most cases, we can similarly deduce the polynomial kernel function based KL identification model and its corresponding KPC law as follows:

KL[x(k)] = \sum_{i=1}^{N_{SV}} \alpha_i \left(\langle x(i), x(k)\rangle + \tau\right)^p + b    (22)

\frac{\partial KL}{\partial u(k)}\bigg|_{u(k)=u(k-1)} = p\sum_{i=1}^{N_{SV}} \alpha_i \left(\langle x(i), x(k)\rangle + \tau\right)^{p-1} x_{n_y+1}(i)    (23)

u(k) = u(k-1) + \frac{E(k+1)}{p\sum_{i=1}^{N_{SV}} \alpha_i \left(\langle x(i), x(k)\rangle + \tau\right)^{p-1} x_{n_y+1}(i)}    (24)
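The polynomial-kernel sensitivity of Eqn (23) admits an equally short sketch (same illustrative array layout as in the Gaussian case above):

import numpy as np

def kl_poly_sensitivity(x, SV, alpha, p, tau, ny):
    # Eqn (23): p * sum_i alpha_i * (<x(i), x(k)> + tau)^(p-1) * x_{ny+1}(i)
    return float(p * np.sum(alpha * (SV @ x + tau) ** (p - 1) * SV[:, ny]))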

SIMULATION RESULTS

As an example to illustrate the validity of the proposed KPC algorithm, we consider a continuous stirred tank reactor (CSTR) process, which is known for its significant nonlinear behavior, exhibits multiple steady states, and poses a difficult control problem.[5]

This complex CSTR process has been studied with other control strategies, e.g. NN-based nonlinear internal model control[7] and SVM-based generalized predictive control.[16] Figure 2 is the schematic of this CSTR process, in which an exothermic irreversible first-order reaction takes place.

The concentration C_a inside the reactor is controlled by manipulating the coolant flow q_c through the jacket. From the mass and energy balances, the dynamics of this CSTR process can be described as follows:

\frac{dC_a(t)}{dt} = \frac{q}{V}\left[C_{a0}(t) - C_a(t)\right] - k_0 C_a(t) \exp\left[\frac{-E}{R T(t)}\right]    (25)

\frac{dT(t)}{dt} = \frac{q}{V}\left[T_0 - T(t)\right] - \frac{k_0 \Delta H}{\rho C_p} C_a(t) \exp\left[\frac{-E}{R T(t)}\right] + \frac{\rho_c C_{pc}}{\rho C_p V} q_c(t) \left\{1 - \exp\left[\frac{-h_a}{q_c(t)\rho_c C_{pc}}\right]\right\}\left[T_{c0} - T(t)\right]    (26)

Figure 2. Schematic of the CSTR (feed stream q, C_a0, T_0; product stream q, C_a, T; coolant stream q_c, T_c0). This figure is available in colour online at www.apjChemEng.com.

The nominal conditions for a product concentration C_a = 0.1 mol l−1 are T = 438.54 K and q_c = 103.41 l min−1. The meanings and nominal values of the other variables in the above equations can be found in Nahas et al.[5] Under the input constraint 90 l min−1 ≤ q_c ≤ 110 l min−1, the control objective is to regulate C_a by manipulating q_c.
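A minimal simulation sketch of Eqns (25)–(26) is given below. The parameter values are the benchmark values commonly quoted for this CSTR in the literature (see Nahas et al.[5]); they are listed here only as illustrative assumptions, since the paper itself does not tabulate them.

import numpy as np
from scipy.integrate import solve_ivp

# Assumed benchmark parameters for this CSTR (cf. Nahas et al.[5]); illustrative only.
PAR = dict(q=100.0, V=100.0, Ca0=1.0, T0=350.0, Tc0=350.0,
           k0=7.2e10, E_over_R=1.0e4, dH=-2.0e5,
           rho=1000.0, Cp=1.0, rho_c=1000.0, Cpc=1.0, hA=7.0e5)

def cstr_rhs(t, state, qc, p=PAR):
    # Right-hand side of Eqns (25)-(26); state = [Ca, T], qc in l/min, time in min.
    Ca, T = state
    k = p['k0'] * np.exp(-p['E_over_R'] / T)
    dCa = p['q'] / p['V'] * (p['Ca0'] - Ca) - k * Ca
    dT = (p['q'] / p['V'] * (p['T0'] - T)
          - p['dH'] * k * Ca / (p['rho'] * p['Cp'])
          + p['rho_c'] * p['Cpc'] / (p['rho'] * p['Cp'] * p['V']) * qc
          * (1.0 - np.exp(-p['hA'] / (qc * p['rho_c'] * p['Cpc'])))
          * (p['Tc0'] - T))
    return [dCa, dT]

# One 6 s (0.1 min) sampling interval at the nominal operating point.
sol = solve_ivp(cstr_rhs, (0.0, 0.1), [0.1, 438.54], args=(103.41,), rtol=1e-8)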

The sampling period of all process measurements is 6 s. For the identification procedure, a sequence of only 500 samples, which is much fewer than required by NN-based identification methods,[7] is generated to form the identification set S = \{x(i), y(i+1)\}_{i=1}^{N}, x ∈ R^n, y ∈ R, where n = n_u + n_y is the order of the input vector, which is chosen as x(k) = [y(k), y(k−1), y(k−2), u(k), u(k−1), u(k−2)] according to Lightbody and Irwin[7] and Iplikci.[16] The simulation environment is Matlab V7.1 on a CPU with 2.4 GHz main frequency and 256 MB memory. Without loss of generality, an offline LSSVR identification model is used with a polynomial kernel function; the regularization parameter γ = 10³ and polynomial degree p = 3 are chosen by a cross-validation approach. It takes only several minutes to obtain this satisfactory KL identification model. Compared with the other methods, the KL identification is much easier to implement.
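The offline LSSVR identification mentioned above amounts to solving a single linear system in the dual variables; a standard sketch with the polynomial kernel, using the reported regularization and degree as defaults and assuming τ = 1, is:

import numpy as np

def lssvr_train(X, y, gamma=1e3, p=3, tau=1.0):
    # Standard LSSVR dual problem: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y],
    # with the polynomial kernel K_ij = (<x_i, x_j> + tau)^p.
    N = X.shape[0]
    K = (X @ X.T + tau) ** p
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, coefficients alpha

def lssvr_predict(Xq, X, alpha, b, p=3, tau=1.0):
    # ym = sum_i alpha_i * (<x(i), xq> + tau)^p + b, cf. Eqn (22)
    return (Xq @ X.T + tau) ** p @ alpha + b

Note that in LSSVR every training sample acts as a support vector, so N_SV = N in Eqn (8).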

Case 1: set-point tracking

Firstly, the set-point tracking ability of KPC is investigated. To provide a suitable comparison with standard techniques, a well-tuned PID controller with parameters (K_c, T_i, T_d) = (190, 0.056, 0.827) is used.[5] As mentioned above, a larger λ implies a larger penalty on Δu(k), and vice versa. With the action of the AMI, λ can be selected over a wide range without degrading the control performance. The integral of the absolute set-point tracking error (IAE) is used to quantify the performance of both controllers. Figure 3 depicts the details of the performance comparison of both controllers; the IAE performance index is also shown in Fig. 3. When we select λ = 10⁻⁸, the IAE of


KPC is 0.039, which is obviously much smaller than that of the PID controller (IAE = 0.164). When λ = 10⁻⁴ or λ = 1, KPC adaptively obtains the same control performance owing to the effect of the AMI (in both cases IAE = 0.098). The KPC controller with different λ can track the set-point quickly and with little overshoot. Moreover, the running time of KPC within one sample is about 0.05 s, which is much less than the sampling period of this CSTR process (6 s) and comparable to that of PID (0.03 s). So we can conclude that the KPC strategy presents much better performance than the PID controller.
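The IAE index used above is simply the accumulated absolute tracking error; one possible discrete approximation is sketched below (whether the reported values are summed per sample or scaled by the sampling period is not stated in the paper):

import numpy as np

def iae(y_ref, y, dt=1.0):
    # Discrete approximation of the integral of |yr - y|; dt = 1 sums per sample.
    return float(np.sum(np.abs(np.asarray(y_ref) - np.asarray(y))) * dt)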

Case 2: noise and disturbance rejection

In order to mimic a realistic situation, the system is subjected to additive noise and unmeasured disturbance. Different magnitudes of the superimposed Gaussian noise and disturbance are discussed. As analyzed above, satisfactory performance is achieved when λ varies over a wide range; we choose λ = 10⁻⁵ in this case.

Figure 3. System response to the set-point tracking with different weighting factors (panels: C_a, q_c, and AMI versus sample number). This figure is available in colour online at www.apjChemEng.com.

Table 1. IAE comparison of KPC and PID with different noises and disturbances.

Simulation conditions   Control strategy   Low noise   Normal noise   High noise
Low disturbance         KPC                0.296       0.411          0.536
                        PID                0.446       0.504          0.541
Normal disturbance      KPC                0.469       0.585          0.683
                        PID                0.698       0.735          0.779
High disturbance        KPC                0.772       0.878          0.950
                        PID                1.001       1.058          1.116

It is shown in Fig. 4 that KPC is more robust against additive noise and unknown disturbance than PID. The detailed IAE comparison is tabulated in Table 1. All the simulation results are the averages of 20 runs.

Case 3: a ‘stair’ reference tracking

Furthermore, is the proposed controller still valid when working over a large operating range? Figure 5 shows the system response to a 'stair' reference starting at 0.085 mol/l and ending at 0.12 mol/l; each step has a duration of 10 min and an amplitude of 0.005 mol/l. The value of λ is kept the same as in Case 2 to validate its robustness.

Figure 4. System response with both noise and disturbance (C_a versus sample number; legend: reference trajectory, KPC, PID). This figure is available in colour online at www.apjChemEng.com.

Figure 5. System response to tracking a 'stair' set-point (C_a versus sample number; legend: reference trajectory, KPC, PID). This figure is available in colour online at www.apjChemEng.com.


As can be appreciated, the simple control strategy still provides excellent set-point tracking over the nonlinear operating range, with a high closed-loop bandwidth. Importantly, the performance of the closed-loop system at the lower end of the operating region is similar to that at the upper end. In contrast, the performance under the PID controller degrades noticeably when the concentration is smaller than 0.09 mol/l or larger than 0.11 mol/l.

CONCLUSIONS

This paper addresses the subject of nonlinear control and presents a new, simple analytical control strategy. The simplicity of the identification by the KL methodology and the excellent performance of the KPC controller make the strategy very attractive for nonlinear process control. The proposed KPC law can be easily implemented; in addition, the AMI assures the convergence of the algorithm. The simulations on such a severely nonlinear system as the CSTR process show that the proposed algorithm has good tracking performance and satisfactory robustness to both additive noise and unknown disturbance. Furthermore, as the KL identification model can describe nonlinear systems well and with good generalization using a small sample set, it can also be expected that KL-based controllers, not limited to those that use the KL methodology only to obtain the process model, are a promising approach for nonlinear process control.

Acknowledgements

This work was sponsored by the National Natural Science Foundation of China (Project No. 20576116), the National Key Technology R&D Program of China (Project No. 2007BAF14B02) and the Alexander von Humboldt Foundation, Germany (Dr Haiqing Wang), which are gratefully acknowledged.

REFERENCES

[1] M.S. Ahmed. IEEE Trans. Automat. Contr., 2000; 45(1), 119–124.
[2] F.R. Gao, F.L. Wang, M.Z. Li. Chem. Eng. Sci., 2000; 55, 1283–1288.
[3] K.J. Hunt, D. Sbarbaro, R. Zbikowski, P.J. Gawthrop. Automatica, 1992; 28(6), 1083–1112.
[4] C.H. Lu, C.C. Tsai. J. Process Control, 2007; 17(1), 83–92.
[5] E.P. Nahas, M.A. Henson, D.E. Seborg. Comput. Chem. Eng., 1992; 16(12), 1039–1057.
[6] K.S. Narendra, K. Parthasarathy. IEEE Trans. Neural Netw., 1990; 1(1), 4–27.
[7] G. Lightbody, G.W. Irwin. IEEE Trans. Neural Netw., 1997; 8(3), 553–567.
[8] Y. Liu, D.C. Yang, H.Q. Wang, P. Li. Modeling of fermentation processes using online kernel learning. In Proceedings of 17th IFAC World Congress, Seoul, 2008; 9679–9684.
[9] V. Vapnik. The Nature of Statistical Learning Theory, Springer-Verlag: New York, 1995.
[10] B. Scholkopf, A.J. Smola. Learning with Kernels, The MIT Press: Cambridge, MA, 2002.
[11] J.A.K. Suykens, T. van Gestel, J. de Brabanter, B. de Moor, J. Vandewalle. Least Squares Support Vector Machines, World Scientific: Singapore, 2002.
[12] H.T. Toivonen, S. Totterman, B. Akesson. Int. J. Control, 2007; 80(9), 1454–1470.
[13] H.Q. Wang, P. Li, F.R. Gao, Z.H. Song, S.X. Ding. AIChE J., 2006; 52(10), 3515–3531.
[14] Z.J. Bao, D.Y. Pi, Y.X. Sun. Chin. J. Chem. Eng., 2007; 15(5), 691–697.
[15] W.M. Zhong, G.L. He, D.Y. Pi, Y.X. Sun. Chin. J. Chem. Eng., 2005; 13(3), 373–379.
[16] S. Iplikci. Int. J. Robust Nonlinear Cont., 2006; 16, 843–862.
[17] G.C. Goodwin, K.S. Sin. Adaptive Filtering Prediction and Control, Prentice-Hall: Englewood Cliffs, NJ, 1984.
[18] S.J. Qin, T.A. Badgwell. Control Eng. Pract., 2003; 11(7), 733–764.
